url (string, 15–1.13k chars) | text (string, 100–1.04M chars) | metadata (string, 1.06k–1.1k chars)
---|---|---|
https://research-information.bris.ac.uk/en/publications/the-hidden-unstable-orbits-of-maps-with-gaps | # The hidden unstable orbits of maps with gaps
Mike R Jeffrey, Simon Webber
Research output: Contribution to journalArticle (Academic Journal)peer-review
5 Citations (Scopus)
## Abstract
Piecewise-continuous maps consist of smooth branches separated by jumps, i.e. isolated discontinuities. They appear not to be constrained by the same rules that come with being continuous or differentiable, able to exhibit period incrementing and period adding bifurcations in which branches of attractors seem to appear ‘out of nowhere’, and able to break the rule that ‘period three implies chaos’. We will show here that piecewise maps are not actually so free of the rules governing their continuous cousins, once they are recognised as containing numerous unstable orbits that can only be found by explicitly including the ‘gap’ in the map’s definition. The addition of these ‘hidden’ orbits — which possess an iterate that lies on the discontinuity — bring the theory of piecewise-continuous maps closer to continuous maps. They restore the connections between branches of stable periodic orbits that are missing if the gap is not fully accounted for, showing that stability changes must occur in discontinuous maps via stability changes not so different to smooth maps, and bringing piecewise maps back under the powerful umbrella of Sharkovskii’s theorem. Hidden orbits are also
vital for understanding what happens if the discontinuity is smoothed out to render the map continuous and/or differentiable.
Original language: English
Article number: 0473
Number of pages: 19
Journal: Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
Volume: 476
Issue: 2234
DOI: https://doi.org/10.1098/rspa.2019.0473
Publication status: Published - 12 Feb 2020
## Structured keywords
• Engineering Mathematics Research Group
## Keywords
• discontinuous
• map
• dynamics
• unstable
• bifurcation
• gap
## Fingerprint
Dive into the research topics of 'The hidden unstable orbits of maps with gaps'. Together they form a unique fingerprint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8617461919784546, "perplexity": 2688.1008294915837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00145.warc.gz"} |
https://proofwiki.org/wiki/Exponent_Combination_Laws/Power_of_Power/Proof_2 | # Exponent Combination Laws/Power of Power/Proof 2
## Theorem
Let $a \in \R_{>0}$ be a (strictly) positive real number.
Let $x, y \in \R$ be real numbers.
Let $a^x$ be defined as $a$ to the power of $x$.
Then:
$\left({a^x}\right)^y = a^{xy}$
## Proof
We will show that:
$\forall \epsilon \in \R_{>0}: \left\vert{a^{xy} - \left({a^x}\right)^y}\right\vert < \epsilon$
Without loss of generality, suppose that $x < y$.
Consider $I := \left[{x \,.\,.\, y}\right]$.
Let $I_\Q = I \cap \Q$.
Let $M = \max \left\{ {\left\vert{x}\right\vert, \left\vert{y}\right\vert} \right\}$
Fix $\epsilon \in \R_{>0}$.
$\exists \delta' \in \R_{>0} : \left\vert{a^x - a^{x'} }\right\vert < \delta' \implies \left\vert{\left({a^x}\right)^{y'} - \left({ a^{x'} }\right)^{y'} }\right\vert < \dfrac \epsilon 4$
Similarly:
$\exists \delta_1 \in \R_{>0}: \left\vert{x y - x y'}\right\vert < \delta_1 \implies \left\vert{a^{x y} - a^{x y'}}\right\vert < \dfrac \epsilon 4$
$\exists \delta_2 \in \R_{>0}: \left\vert{x y' - x' y'}\right\vert < \delta_2 \implies \left\vert{a^{x y'} - a^{x' y'}}\right\vert < \dfrac \epsilon 4$
$\exists \delta_3 \in \R_{>0}: \left\vert{x' - x}\right\vert < \delta_3 \implies \left\vert{a^{x'} - a^x}\right\vert < \delta' \implies \left\vert{\left({a^x}\right)^{y'} - \left({a^{x'}}\right)^{y'}}\right\vert < \dfrac \epsilon 4$
$\exists \delta_4 \in \R_{>0}: \left\vert{y' - y}\right\vert < \delta_4 \implies \left\vert{\left({a^x}\right)^{y'} - \left({a^x}\right)^y}\right\vert < \dfrac \epsilon 4$
Further:
$\left\vert{y - y'}\right\vert < \dfrac{\delta_1}{\left\vert{x}\right\vert} \implies \left\vert{x y - x y'}\right\vert = \left\vert{x}\right\vert \left\vert{y - y'}\right\vert < \left\vert{x}\right\vert \dfrac{\delta_1}{\left\vert{x}\right\vert} = \delta_1$, by Absolute Value Function is Completely Multiplicative.
And:
$\left\vert{x - x'}\right\vert < \dfrac{\delta_2}{M} \implies \left\vert{x y' - x' y'}\right\vert = \left\vert{y'}\right\vert \left\vert{x - x'}\right\vert \le M \left\vert{x - x'}\right\vert < M \dfrac{\delta_2}{M} = \delta_2$, by Absolute Value Function is Completely Multiplicative and Real Number Ordering is Compatible with Multiplication.
Let $\delta = \min \left\{ {\dfrac {\delta_1} {\left\vert{x}\right\vert}, \dfrac {\delta_2} M, \delta_3, \delta_4} \right\}$.
$\exists r, s \in I_\Q: \left\vert{x - r}\right\vert < \delta \land \left\vert{y - s}\right\vert < \delta$
Thus:
\begin{align*} \left\vert{a^{x y} - \left({a^x}\right)^y}\right\vert &\le \left\vert{a^{x y} - a^{x s}}\right\vert + \left\vert{a^{x s} - a^{r s}}\right\vert + \left\vert{a^{r s} - \left({a^r}\right)^s}\right\vert + \left\vert{\left({a^r}\right)^s - \left({a^x}\right)^s}\right\vert + \left\vert{\left({a^x}\right)^s - \left({a^x}\right)^y}\right\vert && \text{Triangle Inequality for Real Numbers} \\ &= \left\vert{a^{x y} - a^{x s}}\right\vert + \left\vert{a^{x s} - a^{r s}}\right\vert + \left\vert{\left({a^r}\right)^s - \left({a^x}\right)^s}\right\vert + \left\vert{\left({a^x}\right)^s - \left({a^x}\right)^y}\right\vert && \text{Product of Indices of Real Number: Rational Numbers} \\ &< \frac \epsilon 4 + \frac \epsilon 4 + \frac \epsilon 4 + \frac \epsilon 4 && \text{definition of } r \text{ and } s \\ &= \epsilon \end{align*}
Hence the result, by Real Plus Epsilon.
$\blacksquare$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.977936327457428, "perplexity": 77.23315578560387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527000.10/warc/CC-MAIN-20190721123414-20190721145414-00151.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/112441-normal-subgroup.html | # Math Help - normal subgroup
1. ## normal subgroup
If H is a subgroup of a finite group G of index [G: H] two, then H is normal in G.
2. Originally Posted by apple2009
If H is a subgroup of a finite group G of index [G: H] two, then H is normal in G.
Hint - What can you say about right and left cosets of H? And, hence about H.
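One way to get a concrete feel for the hint is a small brute-force check with $G = S_3$ and $H = A_3$ (a subgroup of index two); the helper functions below are ad hoc:

```python
from itertools import permutations

def compose(p, q):            # (p*q)(i) = p(q(i)) for permutations given as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):                  # parity via inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

G = list(permutations(range(3)))
H = [p for p in G if sign(p) == 1]                 # A3, index 2 in S3

left_cosets  = {frozenset(compose(g, h) for h in H) for g in G}
right_cosets = {frozenset(compose(h, g) for h in H) for g in G}
print(left_cosets == right_cosets)                 # True: gH = Hg for every g
```

Since there are only two left cosets ($H$ and its complement) and only two right cosets (again $H$ and its complement), the two partitions must coincide; that observation is exactly what the hint is pointing at.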
3. Originally Posted by aman_cc
Hint - What can you say about right and left cosets of H? And, hence about H.
To prove H is normal, I need to show that the left cosets equal the right cosets. Are the two left cosets H itself and the rest of G?
4. what is the two left and right coset there? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8996253609657288, "perplexity": 650.9199642174203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989826.86/warc/CC-MAIN-20150728002309-00070-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://espanol.libretexts.org/Vocacional/Vocacional/Aplicaciones_inform%C3%A1ticas_y_tecnolog%C3%ADa_de_la_informaci%C3%B3n/Aplicaciones_inform%C3%A1ticas/Elementos_esenciales_para_el_procesamiento_de_textos_(Busbee)/04%3A_Impresi%C3%B3n | Saltar al contenido principal
# 4: Impresión (Printing)
This page titled 4: Impresión is shared under a CC BY license and was authored, remixed, and/or curated by Kenneth Leroy Busbee (OpenStax CNX) . | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999711811542511, "perplexity": 2981.2653655920835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711114.3/warc/CC-MAIN-20221206192947-20221206222947-00274.warc.gz"} |
https://cs.stackexchange.com/questions/48527/what-does-logo1n-mean | # What does $\log^{O(1)}n$ mean?
What does $\log^{O(1)}n$ mean?
I am aware of big-O notation, but this notation makes no sense to me. I can't find anything about it either, because there is no way a search engine interprets this correctly.
For a bit of context, the sentence where I found it reads "[...] we call a function [efficient] if it uses $O(\log n)$ space and at most time $\log^{O(1)}n$ per item."
• I agree that one should not write things like this, unless one is very clear about what it is to mean (and tells the reader what that is) and uses it the same rules consistently. – Raphael Oct 21 '15 at 16:35
• Yes, one should instead write it as $\: (\log(n))^{O(1)} \;$. $\;\;\;\;$ – user12859 Oct 21 '15 at 17:16
• @RickyDemer That's not the point that Raphael is making. $\log^{\mathrm{blah}}n$ means exactly $(\log n)^{\mathrm{blah}}$. – David Richerby Oct 21 '15 at 19:35
• @Raphael This is standard notation in the field. Anyone in the know would know what it means. – Yuval Filmus Oct 21 '15 at 19:45
• @YuvalFilmus I think the variety of disagreeing answers is conclusive proof that your claim is false, and that one should indeed refrain from using such notation. – Raphael Oct 27 '15 at 9:07
You need to ignore for a moment the strong feeling that the "$O$" is in the wrong place and plough on with the definition regardless. $f(n) = \log^{O(1)}n$ means that there exist constants $k$ and $n_0$ such that, for all $n\geq n_0$, $f(n) \leq \log^{k\cdot 1}n = \log^k n$.
Note that $\log^k n$ means $(\log n)^k$. Functions of the form $\log^{O(1)}n$ are often called polylogarithmic and you might hear people say, "$f$ is polylog $n$."
You'll notice that it's easy to prove that $2n=O(n)$, since $2n\leq k n$ for all $n\geq 0$, where $k=2$. You might be wondering if $2\log n = \log^{O(1)}n$. The answer is yes since, for large enough $n$, $\log n\geq 2$, so $2\log n \leq \log^2n$ for large enough $n$.
On a related note, you'll often see polynomials written as $n^{O(1)}$: same idea.
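A quick numeric sanity check of the $2\log n \le \log^2 n$ claim above (a throwaway Python sketch, using natural logarithms):

```python
import math

# Once log n >= 2 (i.e. n >= e^2), 2*log n <= (log n)^2,
# so 2*log n is log^{O(1)} n with exponent k = 2 in the definition above.
for n in [10, 10**3, 10**6, 10**9]:
    print(n, 2 * math.log(n), math.log(n) ** 2, 2 * math.log(n) <= math.log(n) ** 2)
```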
• This is not supported by the common placeholder convention. – Raphael Oct 21 '15 at 16:29
• I retract my comment: you write $\leq$ in all the important places, which is sufficient. – Raphael Oct 22 '15 at 7:27
• @Raphael OK. I hadn't had time to check it yet but my feeling was you might be ordering quantifiers differently from the way I am. I'm not actually sure we're defining the same class of functions. – David Richerby Oct 22 '15 at 7:34
• I think you are defining my (2), and Tom defines $\bigcup_{c \in \mathbb{R}_{>0}} \{ \log^c n \}$. – Raphael Oct 22 '15 at 7:39
This is an abuse of notation that can be made sense of by the generally accepted placeholder convention: whenever you find a Landau term $O(f)$, replace it (in your mind, or on the paper) by an arbitrary function $g \in O(f)$.
So if you find

$\qquad f(n) = \log^{O(1)} n$

it stands for

$\qquad f(n) = \log^{g(n)} n$ for some $g \in O(1). \hspace{5cm} (1)$
Note the difference from saying "$\log$ to the power of some constant": $g = n \mapsto 1/n$ is a distinct possibility.
Warning: The author may be employing even more abuse of notation and want you to read
$\qquad f(n) \in O(\log^{g(n)} n)$ for some $g \in O(1). \hspace{4.3cm} (2)$
Note the difference between (1) and (2); while it works out to define the same set of positive-valued functions here, this does not always work. Do not move $O$ around in expressions without care!
• I think what makes it tick is that $x \mapsto \log^x(n)$ is monotonic and sufficiently surjective for each fixed $n$. Monotonic makes the position of the $O$ equivalent and gives you (2) ⇒ (1); going the other way requires $g$ to exist which could fail if $f(n)$ is outside the range of the function. If you want to point out that moving $O$ around is dangerous and doesn't cover “wild” functions, fine, but in this specific case it's ok for the kind of functions that represent costs. – Gilles 'SO- stop being evil' Oct 24 '15 at 2:24
• @Gilles I weakened the statement to a general warning. – Raphael Oct 25 '15 at 19:11
• This answer has been heavily edited, and now I am confused: do you now claim that (1) and (2) are effectively the same? – Oebele Oct 26 '15 at 9:54
• @Oebele As far as I can tell, they are not in general, but here. – Raphael Oct 26 '15 at 10:15
• But, something like $3 log^2 n$ does not match (1) but does match (2) right? or am I just being silly now? – Oebele Oct 26 '15 at 14:33
It means that the function grows at most as $\log$ to the power of some constant, e.g. $\log^2(n)$ or $\log^5(n)$ or $\log^{99999}(n)$...
• This can be used when the function growth is known to be bounded by some constant power of the $\log$, but the particular constant is unknown or left unspecified. – Yves Daoust Oct 21 '15 at 13:47
• This is not supported by the common placeholder convention. – Raphael Oct 21 '15 at 16:29
"At most $\log^{O(1)} n$" means that there is a constant $c$ such that what is being measured is $O(\log^c n)$.
In a more general context, $f(n) \in \log^{O(1)} n$ is equivalent to the statement that there exists (possibly negative) constants $a$ and $b$ such that $f(n) \in O(\log^a n)$ and $f(n) \in \Omega(\log^b n)$.
It is easy to overlook the $\Omega(\log^b n)$ lower bound. In a setting where that would matter (which would be very uncommon if you're exclusively interested in studying asymptotic growth), you shouldn't have complete confidence that the author actually meant the lower bound, and would have to rely on the context to make sure.
The literal meaning of the notation $\log^{O(1)} n$ is doing arithmetic on the family of functions, resulting in the family of all functions $\log^{g(n)} n$, where $g(n) \in O(1)$. This works in pretty much the same as how multiplying $O(g(n))$ by $h(n)$ results in $O(g(n) h(n))$, except that you get a result that isn't expressed so simply.
Since the details of the lower bound are in probably unfamiliar territory, it's worth looking at some counterexamples. Recall that any $g(n) \in O(1)$ is bounded in magnitude; that there is a constant $c$ such that for all sufficiently large $n$, $|g(n)| < c$.
When looking at asymptotic growth, usually only the upper bound $g(n) < c$ matters, since, e.g., you already know the function is positive. However, in full generality you have to pay attention to the lower bound $g(n) > -c$.
This means, contrary to more typical uses of big-oh notation, functions that decrease too rapidly can fail to be in $\log^{O(1)} n$; for example, $$\frac{1}{n} = \log^{-(\log n) / (\log \log n)} n \notin \log^{O(1)} n$$ because $$-\frac{\log n}{\log \log n} \notin O(1)$$ The exponent here grows in magnitude too rapidly to be bounded by $O(1)$.
A counterexample of a somewhat different sort is that $-1 \notin \log^{O(1)} n$.
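A quick numerical check of the identity behind the $1/n$ counterexample (a throwaway Python sketch; the identity holds for any fixed logarithm base used consistently, natural logs below):

```python
import math

# (log n)^(-log n / log log n) = exp(-log n) = 1/n,
# which is why 1/n escapes log^{O(1)} n: its exponent is unbounded in magnitude.
for n in [10.0, 1e3, 1e6]:
    exponent = -math.log(n) / math.log(math.log(n))
    print(n, math.log(n) ** exponent, 1 / n)
```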
• Can't I just take $b=0$ and make your claimed lower bound go away? – David Richerby Oct 26 '15 at 22:02
• @DavidRicherby No, $b=0$ still says that $f$ is bounded below. Hurkyl: why isn't $f(n) = 1/n$ in $\log^{O(1)} n$? – Gilles 'SO- stop being evil' Oct 27 '15 at 0:27
• @Gilles: More content added! – user5386 Oct 27 '15 at 0:57
• @Gilles OK, sure, it's bounded below by 1. Which is no bound at all for "most" applications of Landau notation in CS. – David Richerby Oct 27 '15 at 7:51
• 1) Your "move around $O$" rule does not always work, and I don't think "at most" usually has that meaning; it's just redundant. 2) Never does $O$ imply a lower bound. That's when you use $\Theta$. 3) If and how negative functions are dealt with by a given definition of $O$ (even without abuse of notation) is not universally clear. Most definitions (in analysis of algorithms) exclude them. You seem to assume a definition that bounds the absolute value, which is fine. – Raphael Oct 27 '15 at 9:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8122733235359192, "perplexity": 329.8892521906571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497309.31/warc/CC-MAIN-20200330212722-20200331002722-00501.warc.gz"} |
http://www.ams.org/mathscinet-getitem?mr=0393276 | MathSciNet bibliographic data MR393276 (52 #14086) 20K35 Dempwolff, Ulrich On extensions of an elementary abelian group of order $2\sp{5}$$2\sp{5}$ by ${\rm GL}(5,\,2)$${\rm GL}(5,\,2)$. Rend. Sem. Mat. Univ. Padova 48 (1972), 359–364 (1973). Article
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews. | {"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.991225004196167, "perplexity": 4781.943322410365}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500831565.57/warc/CC-MAIN-20140820021351-00451-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://www.ask.com/question/750-ml-equals-how-many-ounces | # How many ounces are there in 750 ml?
When converting milliliters to ounces, 750 ml is the equivalent to roughly 25.4 fluid ounces. Milliliters are part of the metric system, while ounces are part of the US and imperial systems of measurements.
The most popular bottle size for both wine and spirits is 750 ml, although they are also sold in 1 L and 1.5 L sizes. Although the bottles are measured using the metric system, drink recipes usually are created using ounces in the United States. In order to calculate how many drinks a bottle will provide, bartenders need to convert to ounces.
A standard glass of wine contains 5 ounces, so a standard 750 ml bottle provides roughly five glasses of wine. A mixed beverage, like vodka and tonic, normally contains 1.5 ounces of spirits, so a 750 ml bottle contains enough alcohol for roughly 17 drinks.
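If you want to redo the arithmetic yourself, a few lines of Python reproduce the figures quoted above (using the approximate factor of 29.5735 ml per US fluid ounce):

```python
ML_PER_FL_OZ = 29.5735            # approximate millilitres per US fluid ounce

bottle_ml = 750
ounces = bottle_ml / ML_PER_FL_OZ
print(round(ounces, 2))           # about 25.36 fluid ounces per 750 ml bottle
print(round(ounces / 5, 1))       # about 5 glasses of wine at 5 oz each
print(round(ounces / 1.5, 1))     # about 17 drinks at 1.5 oz of spirits each
```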
Q&A Related to "How many ounces are there in 750 ml?"
There are 25 ounces in 750 mL. You can determine figures like this on your own if you can remember that 30 mL is equivalent to one ounce. http://www.ask.com/web-answers/Reference/Other/how...
Converting 750 milliliter to ounces gives 25.3605 ounces. One milliliter = http://www.chacha.com/question/how-many-ounces-are...
750 mL is equivalent to approximately 25.36 ounces. It's also equal to 3.17 cups. Ask kgb_ http://www.kgbanswers.com/how-many-ounces-in-750-m...
750 milliliters is equal to 25.36052 ounces.
A measurement of 500 ML equals 16.912 ounces. This answer is figured by the transferring a ML to an ounce. An actual milliliter is one thousandth of a liter (0.002 ...
One ounce is equal to 29.57353 milliliters (ml). ...
The amount of 60 ML is equal to 2.02 US fluid ounces. Fluid ounces are used in measuring liquids when baking or cooking. Milliliters can also be converted to pints ... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8134939670562744, "perplexity": 3740.9316888238404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274866.27/warc/CC-MAIN-20140728011754-00015-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://mathhelpforum.com/geometry/108369-triangles-print.html | triangles
• October 15th 2009, 11:02 PM
thereddevils
1 Attachment(s)
triangles
ABCD is a parallelogram and X is a point on BD such that DX = 3XB. AX extended meets BC at Y, and DC is extended to Z. Prove that triangle AXB is similar to triangle ZXD.
Since ABCD is a parallelogram, angle BAX = angle XZD.
angle ABX = angle XDZ
Then how can I make use of this information, DX = 3XB, to prove it?
$DX\neq XB$, but the sides are proportional. So if the sides are proportional, can I say that these triangles are congruent?
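A quick coordinate sanity check (with an arbitrarily chosen parallelogram, so this is only an illustration, not a proof) suggests the two triangles are similar in the ratio 1 : 3 rather than congruent:

```python
import numpy as np

# Arbitrary non-degenerate parallelogram ABCD; any choice works for a sanity check.
A, B, D = np.array([0., 0.]), np.array([4., 0.]), np.array([1., 3.])
C = B + D - A
X = D + 0.75 * (B - D)              # point on the diagonal BD with DX = 3*XB

# Z = intersection of line AX with line DC extended (DC has the direction of AB).
# Solve A + s*(X - A) = D + t*(C - D) for s and t.
M = np.column_stack([X - A, -(C - D)])
s, t = np.linalg.solve(M, D - A)
Z = A + s * (X - A)

ratios = [np.linalg.norm(Z - X) / np.linalg.norm(A - X),
          np.linalg.norm(D - X) / np.linalg.norm(B - X),
          np.linalg.norm(Z - D) / np.linalg.norm(A - B)]
print(ratios)   # all equal (3.0 here): triangle ZXD ~ triangle AXB, but not congruent
```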
• October 16th 2009, 12:49 AM
red_dog
You have to prove that the triangles are similar. But this is obviously because AB ia parallel to DZ. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9469520449638367, "perplexity": 1743.0602615932744}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430455113263.14/warc/CC-MAIN-20150501043833-00074-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/367752/matrix-norm-set-2 | # Matrix Norm set #2
As a complement of the question
Matrix Norm set
and in order to complete Problem 1.4-5 from the book Numerical Linear Algebra and Optimisation by Ciarlet, I have these additional conditions:
(3) if $\|\cdot\|^{\prime}$ be any matrix norm, then there exists (at least) one subordinate matrix norm $\|\cdot\|$ satisfying $\|\cdot\| \leq \|\cdot\|^{\prime}$.
(4) the matrix norm $\|\cdot\|$ is subordinate if and only if it is minimal element of the set $\mathcal{N}$.
Then, show that there exist matrix norms $\|\cdot\|$ satisfying $\|I\| = 1$ which are nevertheless not subordinate. My definition of a matrix norm is:
\begin{eqnarray*} \|A\| & = & 0 \Leftrightarrow A=0, \mbox{ and } \|A\|\geq 0\\ \|\alpha A\| & = & |\alpha| \|A\|\\ \|A + B\| & \leq & \|A\| + \|B\|\\ \|AB\| & \leq & \|A\|\cdot\|B\| \end{eqnarray*}
for all $A,B\in M_n$ and $\alpha\in\mathbb{R}$. Besides, my definition of subordinate matrix norm is: there exists a vector norm $\|\cdot\|$, such that $$\|A\|\ =\ \sup_{v\neq 0}\frac{\|Av\|}{\|v\|}.$$
-
## Part (3)
If a matrix $\|\cdot\|'$ is given on $M_n(\mathbb{R})$, we can define a norm $|\cdot|$on $\mathbb{R}^n$ by embedding $\mathbb{R}^n$ into $M_n(\mathbb{R})$. For example, consider the following map $$L:\mathbb{R}^n \to M_n(\mathbb{R}),\quad v\mapsto (v,0,\dots,0).\tag{1}$$ That is to say, $L(v)$ is the $n\times n$ matrix whose first column is $v$ and other columns are zero. Then we can define $$|\cdot|:\mathbb{R}^n\mapsto [0,\infty),\quad v\mapsto \|L(v)\|'. \tag{2}$$ It is easy to verify that $|\cdot|$ defined in $(2)$ is a norm on $\mathbb{R}^n$. Note that for any $A\in M_n(\mathbb{R})$ and $v\in \mathbb{R}^n$, $$L(Av)=(Av,0,\dots,0)=A\cdot L(v),$$ and hence $$|Av|=\|L(Av)\|'=\|A\cdot L(v)\|'\le\|A\|'\cdot\|L(v)\|'=\|A\|'\cdot|v|.\tag{3}$$ Now let $\|\cdot\|$ be the matrix norm subordinate to $|\cdot|$. Then for every $A\in M_n(\mathbb{R})$, from $(3)$ we know that $$\|A\|=\sup_{v\ne 0}\frac{|Av|}{|v|}\le \sup_{v\ne 0}\frac{\|A\|'\cdot|v|}{|v|}=\|A\|'.$$
## Part (4)
"$\Rightarrow$" direction. Let $\|\cdot\|$ be a subordinate norm. To show $\|\cdot\|$ is minimal, let $\|\cdot\|'\le \|\cdot\|$, and it suffices to show that $\|\cdot\|'=\|\cdot\|$. According to the conclusion in part (3), there exists a subordinate norm $\|\cdot\|''$, such that $\|\cdot\|''\le \|\cdot\|'$. Therefore, $\|\cdot\|''\le \|\cdot\|$ are both subordinate norms, then as an immediate corollary of the conclusion in part (1) here, $\|\cdot\|''=\|\cdot\|$, i.e. $\|\cdot\|$ is minimal.
"$\Leftarrow$" direction. Let $\|\cdot\|$ be a minimal norm. Then by the the conclusion in part (3), there exists a subordinate norm $\|\cdot\|'$, such that $\|\cdot\|'\le \|\cdot\|$. Since $\|\cdot\|$ is minimal, these two norms must coincide, i.e. $\|\cdot\|$ is subordinate.
## The Last Part
Let $\|\cdot\|'$ be an arbitrary matrix norm with $\|I\|'=1$, and fix an arbitrary matrix $P\in M_n(\mathbb{R})$ which does not commute with some other matrix in $M_n(\mathbb{R})$. Then define $$\|\cdot\|:M_n(\mathbb{R})\mapsto [0,\infty),\quad A\mapsto \|A\|'+\|AP-PA\|'. \tag{4}$$ Claim: $\|\cdot\|$ is a matrix norm which is not subordinate to any vector norm, and $\|I\|=1$.
Proof: By definition, $\|I\|=1$, and only the condition $\|AB\|\le\|A\|\cdot\|B\|$ is nontrivial in verifying that $\|\cdot\|$ is matrix norm. Note that $$ABP-PAB=A(BP-PB)+(AP-PA)B,$$ so by $(4)$, \begin{eqnarray*} \|AB\|&= &\|AB\|'+\|ABP-PAB\|'\\ &\le &\|A\|'\cdot\|B\|'+\|A\|'\cdot\|BP-PB\|'+\|AP-PA\|'\cdot\|B\|'\\ &\le &(\|A\|'+\|AP-PA\|')\cdot(\|B\|'+\|(BP-PB)\|')\\ & = &\|A\|\cdot\|B\|, \end{eqnarray*} i.e. $\|\cdot\|$ is a matrix norm. On the one hand, by $(4)$, $\|\cdot\|\ge\|\cdot\|'$. On the other hand, there exists $A\in M_n(\mathbb{R})$, such that $A$ does not commute with $P$, so by $(4)$, $\|A\|>\|A\|'$, i.e. $\|\cdot\|\ne\|\cdot\|'$. According to part (4), subordinate matrix norms are minimal, so $\|\cdot\|$ is not subordinate.
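A small numerical sketch of this construction (my own illustration: $\|\cdot\|'$ is taken to be the spectral norm, which is subordinate and has $\|I\|' = 1$, and $P$ is a diagonal matrix with distinct entries, so it fails to commute with generic matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
P = np.diag([1.0, 2.0, 3.0])          # does not commute with generic matrices

def base_norm(A):                      # ||A||' : spectral norm, so ||I||' = 1
    return np.linalg.norm(A, 2)

def new_norm(A):                       # ||A|| := ||A||' + ||AP - PA||', as in (4)
    return base_norm(A) + base_norm(A @ P - P @ A)

I = np.eye(n)
print(new_norm(I))                     # 1.0, since I commutes with P

worst = 0.0                            # spot-check submultiplicativity on random samples
for _ in range(1000):
    A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    worst = max(worst, new_norm(A @ B) / (new_norm(A) * new_norm(B)))
print(worst)                           # should stay <= 1

A = rng.standard_normal((n, n))
print(base_norm(A), new_norm(A))       # the two norms differ on such A
```

Since the new norm agrees with $\|\cdot\|'$ on $I$ but strictly exceeds it on matrices that do not commute with $P$, it cannot be minimal, hence not subordinate, exactly as argued above.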
-
Excelent, thanks so much for the answer. – user70195 Apr 21 '13 at 13:40
@ShanaKugimiya: You are welcome. The last part of your question is very interesting to me, and I didn't know any concrete example of not subordinate matrix norm with $\|I\|=1$ before considering how to answer your question. – 23rd Apr 21 '13 at 14:40
@ShanaKugimiya: I realized that my argument in the last part can be simplified. Please see my update. – 23rd Apr 21 '13 at 18:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.995512068271637, "perplexity": 191.7369114628768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507446943.4/warc/CC-MAIN-20141017005726-00124-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://digital.auraria.edu/AA00001666/00001 | Citation
## Material Information
Title:
Least-squares finite-element solution of the neutron transport equation in diffusive regimes
Creator:
Ressel, Klaus Jürgen
Place of Publication:
Denver, CO
Publisher:
Publication Date:
Language:
English
Physical Description:
viii, 101 leaves : illustrations ; 29 cm
## Subjects
Subjects / Keywords:
Neutron transport theory ( lcsh )
Least squares ( lcsh )
Finite element method ( lcsh )
Finite element method ( fast )
Least squares ( fast )
Neutron transport theory ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )
## Notes
Bibliography:
Includes bibliographical references (leaves 83-86).
Thesis:
Submitted in partial fulfillment of the requirements for the degree, Doctor of Philosophy, Applied Mathematics
General Note:
Department of Mathematical and Statistical Sciences
Statement of Responsibility:
by Klaus Jürgen Ressel.
## Record Information
Source Institution:
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
32586019 ( OCLC )
ocm32586019
Classification:
LD1190.L622 1994d .R47 ( lcc )
Full Text
LEAST-SQUARES FINITE-ELEMENT SOLUTION OF THE
NEUTRON TRANSPORT EQUATION IN
DIFFUSIVE REGIMES
by
KLAUS JÜRGEN RESSEL
B. S. (Math), Universität zu Köln, 1985
B. S. (Physics), Universität zu Köln, 1986
M. S. (Math), Universität zu Köln, 1991
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Applied Mathematics
1994
This thesis for the Doctor of Philosophy
degree by
Klaus Jurgen Ressel
has been approved for the
by
Date
Ressel, Klaus Jurgen (Ph.D., Applied Mathematics)
Least-Squares Finite-Element Solution of the Neutron Transport Equation in Diffusive Regimes
Thesis directed by Professor Thomas A. Manteuffel
ABSTRACT
A systematic solution approach for the neutron transport equation is considered that is based on a Least-Squares variational formulation and includes theory for the existence and uniqueness of the analytical as well as for the discrete solution, bounds for the discretization error and guidance for the development of an efficient solver for the resulting discrete system. In particular, the solution of the transport equation for diffusive regimes is studied.

In these regimes the transport equation is nearly singular and its solution becomes a solution of a diffusion equation. Therefore, to guarantee an accurate discrete solution, a discretization of the transport operator is needed that is at the same time a good approximation of a diffusion operator in diffusive regimes. Only few discretizations are known that have this property. Also, a Least-Squares discretization with piecewise linear elements in space fails to be accurate in diffusive regimes, which is shown by means of an asymptotic expansion.

For this reason a scaling transformation is developed that is applied to the transport operator prior to the discretization in order to increase the weight for the important components of the solution in the Least-Squares functional. Not only for slab geometry but also for x-y-z geometry it is proven that the resulting Least-Squares bilinear form is continuous and V-elliptic with constants independent of the total cross section and the scattering cross section. For a variety of discrete spaces this leads to bounds for the discretization error that stay also valid in diffusive regimes. Thus, the Least-Squares approach in combination with the scaling transformation represents a general framework for the construction of discretizations that are accurate in diffusive regimes.

For the discretization with piecewise linear elements in space a multigrid solver in space was developed that gives V-cycle convergence rates in the order of 0.1 independent of the size of the total cross section, so that one full multigrid cycle of this algorithm computes a solution with an error in the order of the discretization error.
This abstract accurately represents the content of the candidate's thesis. I recommend its publication.
Signed
Thomas A. Manteuffel
CONTENTS
Acknowledgements
List of Notation
CHAPTER
1 INTRODUCTION AND PRELIMINARIES
1.1 Introduction and Outline
1.1.1 Opening Remarks
1.1.2 Outline
1.2 Neutron Transport Equation and Diffusion Limit
1.2.1 Neutron Transport Equation
1.2.2 Diffusion Limit
1.3 Previous Work on Numerical Solution
1.4 Least-Squares Approach
2 SLAB GEOMETRY
2.1 Problems with Direct Least-Squares Approach
2.2 Scaling Transformation
2.3 Error Bounds for Nondiffusive Regimes
2.3.1 Continuity and V-ellipticity
2.3.2 Error Bounds
2.4 Continuity and V-ellipticity with respect to a scaled norm
2.5 Error Bounds for Diffusive Regimes
3 X-Y-Z GEOMETRY
3.1 Continuity and V-ellipticity
3.2 Spherical Harmonics
3.3 Error Bounds
4 MULTIGRID SOLVER AND NUMERICAL RESULTS
4.1 $S_N$-Flux and $P_{N-1}$-Moment Equations
4.1.1 $S_N$-Flux Equations
4.1.2 Moment Equations
4.1.3 Least-Squares Discretization of the Flux and Moment Equations
4.2 Properties of the Least-Squares Discretization
4.3 Multigrid Solver
4.3.1 $S_N$ Flux Equations
4.3.2 Moment Equations
5 CONCLUSIONS
5.1 Summary of Results
5.2 Recommendations for Future Work
BIBLIOGRAPHY
APPENDIX
A FLUX STENCIL
B MOMENT STENCIL
ACKNOWLEDGEMENTS
I wish to thank, first and foremost, my advisor, Prof. Tom Manteuffel, for his guidance and support. His willingness to discuss problems in deep detail, along with his mathematical insight and creativity, always resulted in helpful answers and hints, which formed the foundation of this thesis. Secondly, I
would like to express my appreciations to Prof. Steve McCormick, who has been a consultant
to this work since its beginning and whose expertise in multilevel algorithms has proved
invaluable. I wish also to thank the remainder of my committee, Professors Jan Mandel,
Jim Morel and Tom Russell. Jan Mandel and Tom Russell taught me in their classes the
mathematical theory of Finite-Elements. From the many discussions with Jim Morel during
my stay of 9 months at Los Alamos National Laboratory I gained a lot of insight into
transport problems. I am also grateful to the Center of Nonlinear Studies at Los Alamos
National Laboratory for the financial support and the use of their facilities during this time.
Particular thanks goes to Debbie Wangerin, who guided me through the jungle of rules
imposed by the Graduate School. Moreover, I wish to express my gratitude to Dr. Gerhard
least, I would like to thank all members of the Center for Computational Mathematics of
the University of Colorado at Denver and my friends Marian Brezina, Dr. Max Lemke, Dr.
Jim Otto, Radek Tezaur and Dr. Petr Vanek for their friendship and support.
LIST OF NOTATION
For the most part, the following notational conventions are used in this thesis. Particular usage and exceptions should be clear from the context.
Scalars, Vectors and Sets
x,y,z standard space coordinates.
$z_l, z_r$ : left and right boundary of the slab.
$\theta$ : polar angle with respect to the $z$-axis.
$\mu := \cos(\theta)$.
$\sigma_t$ : total cross section.
$\sigma_s$ : scattering cross section.
$\sigma_a$ : absorption cross section. $\varepsilon := 1/\sigma_t$. $\tau := \varepsilon$ or $\varepsilon\sqrt{\sigma_a + \varepsilon^2}$ : scaling parameter.
$N$ : number of Legendre polynomials used in the truncated expansion.
$h$ : mesh size.
$r := (x, y, z)$.
$dr := dx\, dy\, dz$, incremental spatial element.
$\Omega := (\Omega_x, \Omega_y, \Omega_z) = (\cos(\varphi)\sin(\theta), \sin(\varphi)\sin(\theta), \cos(\theta))$, or in polar coordinates $\Omega := (\varphi, \theta)$. $d\Omega$ : incremental solid angle element.
$\mathbb{R}$ : real numbers.
$\mathbb{C}$ : complex numbers.
$\mathcal{R}$ : region in $\mathbb{R}^3$.
$\partial\mathcal{R}$ : boundary of $\mathcal{R}$.
$|\mathcal{R}|$ : Lebesgue measure of $\mathcal{R}$.
$\operatorname{diam}(\mathcal{R})$ : diameter of $\mathcal{R}$.
$S^1$ : unit sphere in $\mathbb{R}^3$.
$\mathcal{T}_h$ : partition of $[z_l, z_r]$ or triangulation of $\mathcal{R}$.
$D := [z_l, z_r] \times [-1, 1]$ in 1-D, $D := \mathcal{R} \times S^1$ in 3-D (computational domain).
Multi-index Notation
$\beta := (\beta_1, \beta_2, \beta_3)$.
$|\beta| := \beta_1 + \beta_2 + \beta_3$.
$D^\beta := \dfrac{\partial^{|\beta|}}{\partial x^{\beta_1}\, \partial y^{\beta_2}\, \partial z^{\beta_3}}$.
Operators
$I$ : identity.
$P := \frac{1}{2}\int_{-1}^{1} \cdot \; d\mu$ (in 1-D); $:= \frac{1}{4\pi}\int_{S^1} \cdot \; d\Omega$ (in 3-D).
$S := P + \tau(I - P)$, scaling transformation.
$S^{-1} = P + \frac{1}{\tau}(I - P)$.
$\mathcal{L}$ : unscaled transport operator in 1-D or in 3-D.
$\tilde{\mathcal{L}} := S\mathcal{L}$, scaled transport operator.
$Q := \frac{1}{\varepsilon} S\mu S = (1 - \varepsilon)(P\mu + \mu P) + \varepsilon\mu I$.
$\tilde{Q} := S\Omega S$.
$\Pi_N$ : projection onto the space of functions that have a truncated expansion into Legendre polynomials or spherical harmonics.
$\Pi_r$ : projection onto the space of functions that are piecewise polynomials of degree $r$ on a partition of the slab $[z_l, z_r]$.
$\Pi_k$ : projection onto the space of functions that are piecewise polynomials of degree $r$ on a given partition.
$-\dfrac{d}{d\mu}(1 - \mu^2)\dfrac{d}{d\mu}$ : Sturm–Liouville operator.
$\Delta_\Omega$ : Laplacian in polar coordinates on $S^1$.
$\mathcal{L}_N$ : $S_N$ flux operator.
$\mathcal{M}$ : moment operator.
Functions
$\psi(z, \mu)$, $\psi(r, \Omega)$ : angular flux.
$P_l(\mu)$ : Legendre polynomial of degree $l$.
$p_l(\mu) := \sqrt{2l + 1}\, P_l(\mu)$ : normalized Legendre polynomial of degree $l$.
$P_l^m(\mu)$ : associated Legendre polynomial of degree $l$.
$Y_l^m(\Omega)$ : spherical harmonics.
$\phi_l$, $\phi_l^m$ : moment with respect to $p_l(\mu)$ or $Y_l^m(\Omega)$.
$q$ : source term in 1-D or 3-D.
$\tilde{q} := Sq$, scaled source term.
$\tilde{v} := S^{-1}v$, for $v \in V$.
$\bar{v}$ : complex conjugate of $v \in V$.
Function Spaces
$\mathbb{P}_r(D)$ : polynomials on the domain $D$ with degree smaller than or equal to $r$.
$\mathbb{P}_r(\mathcal{T}_h)$ : piecewise polynomials of degree $\le r$ on $\mathcal{T}_h$.
$C(D)$ : space of continuous functions on $D$.
$C^\infty(D)$ : space of infinitely differentiable functions on $D$.
$L^2(D)$ : space of square Lebesgue-integrable functions on $D$.
$V$ : Hilbert space.
$V^h$ : discrete subspace of $V$.
$\tilde{V} := S^{-1}(V)$, where $S^{-1}$ is the inverse of the scaling transformation.
Bilinear Forms
$(u, v)_{\mathbb{R}^n}$ : standard Euclidean inner product of $\mathbb{R}^n$.
$\langle u, v \rangle := \int_{z_l}^{z_r}\!\int_{-1}^{1} u v \; d\mu\, dz$ (in 1-D); $:= \int_{\mathcal{R}}\!\int_{S^1} u v^* \; d\Omega\, dr$ (in 3-D).
$a(u, v) := \langle \mathcal{L}u, \mathcal{L}v \rangle$.
$\tilde{a}(u, v) := \langle \mathcal{L}Su, \mathcal{L}Sv \rangle$, double scaled form.

Norms
$\|u\| := \sqrt{\langle u, u \rangle}$.
$\|u\|_k := \big(\sum_{|\beta| \le k} \|D^\beta u\|^2\big)^{1/2}$ : norm of $H^k(G_1) \times L^2(G_2)$, where $G_1 := [z_l, z_r]$ and $G_2 := [-1, 1]$ in 1-D, and $G_1 := \mathcal{R}$ and $G_2 := S^1$ in 3-D.
$|u|_{k,0} := \big(\sum_{|\beta| = k} \|D^\beta u\|^2\big)^{1/2}$ : semi-norm of $H^k(G_1) \times L^2(G_2)$.
$\|\cdot\|_V$ : norm associated with $V$.
$\|\cdot\|_{\tilde{V}}$ : norm associated with $\tilde{V}$.
$|||u||| := \big(\|\mathcal{L}Su\|^2 + \|u\|^2\big)^{1/2}$.

Matrices
$w = (w_1, \dots, w_N)^T$.
$\mathbf{1} = (1, \dots, 1)^T$, $N$ elements.
$R = \mathbf{1}\, w^T$.
$Q = \operatorname{diag}(w_1, \dots, w_N)$.
$M = \operatorname{diag}(\mu_1, \dots, \mu_N)$.
CHAPTER 1
INTRODUCTION AND PRELIMINARIES
1.1 Introduction and Outline
1.1.1 Opening Remarks
The Least-Squares approach represents a systematic solution technique that in-
cludes theory for wellposedness of the continuous and discrete problem, error bounds for
discretization error and guidance for the development of an efficient solver for the resulting
discrete system. Furthermore, the Least-Squares approach is a general methodology that can
produce a variety of algorithms, depending on the choice of the Least-Squares functional,
the discrete space and the boundary treatment.
For many partial differential equations (PDEs), a straightforward Least-Squares
approach is ill-advised, since it requires more smoothness of the solution than a Galerkin ap-
proach and results in a squared condition number of the discrete operator. However, problems
of first order or elliptic problems with lower order terms are converted by a Least-Squares
formulation into a self-adjoint problem. Moreover, by introducing physically meaningful new
variables and transforming the original problem into a system of equations of first order, the
smoothness requirements of the solution can be reduced and squaring the condition number
of the discrete problem is avoided. This first-order system Least-Squares (FOSLS) technique
has recently been successfully applied to the solution of general convection-diffusion problems
(Cai et al. [10], [11]) and the Stokes equation (Cai et al. [12]). In combination with a mixed
finite element discretization it can even be worthwhile to apply this technique to self-adjoint
second-order elliptic problems in order to circumvent the Ladyzhenskaya-Babuska-Brezzi
consistency condition (Pehlivanov et al. [47]).
The subject of this thesis is the extension of the Least-Squares approach to the
solution of the neutron transport equation. This equation mathematically describes the mi-
gration of neutrons through a host medium and their interaction (absorption or scattering)
with the nuclei of the host medium after a collision. The fact that the neutron transport equa-
tion is a already first order integro-differential equation motivates a Least-Squares approach.
However, due to special properties of the transport equation, this is less straightforward than
it might appear. In diffusive regimes, where the probability for scattering is very high while
that for absorption is very low, the transport equation is nearly singular and its solution
is close to a solution of a diffusion equation. To guarantee an accurate discrete solution in
these regimes, a discretization of the transport operator is required that becomes a good
approximation of a diffusion operator in diffusive regimes. Up to the present, only a few
special discretizations have this property. Therefore, diffusive transport problems are hard
to solve. The principal part of this thesis is devoted to the extension of the Least-Squares
approach to transport problems in diffusive regimes.
1.1.2 Outline
This thesis is organized as follows: Chapter 1 continues with an introduction to the
neutron transport equation in Section 1.2, where also properties of this equation, especially
for diffusive regimes (diffusion limit), are discussed. Section 1.3 provides an overview of
previous work on numerical solution of neutron transport problems. Finally, in Section 1.4
the Least-Squares discretization and its associated standard Finite-Element theory are briefly
reviewed.
Chapter 2 deals with the application of the Least-Squares approach to one-dimensional
(slab geometry) neutron transport problems. In Section 2.1, we analyze why a Least-Squares
discretization directly applied to the transport equation is not accurate in diffusive regimes
when combined with simple discrete spaces that use piecewise linear basis functions in space.
To cure this problem, in Section 2.2 we introduce a scaling transformation,
which plays a key role in this thesis and is applied to the transport operator prior to the dis-
cretization. For the scaled Least-Squares functional, V-ellipticity and continuity are proved
with respect to a simple un-scaled norm in Section 2.3. Here we also obtain simple bounds
for the discretization, although they are only valid for nondiffusive regimes. This is a result
of the V-ellipticity and continuity constants' dependence on the total cross section $\sigma_t$ and the absorption cross section $\sigma_a$, which are the coefficients in the transport equation that determine how diffusive a region is (see Section 1.2). For diffusive regimes, $\sigma_t$ is very large, which causes the simple error bounds in the unscaled norm to blow up. With respect to a scaled norm, the V-ellipticity and continuity of the Least-Squares bilinear form with constants independent of $\sigma_t$ and $\sigma_a$ are established in Section 2.4, and in Section 2.5 this leads, for a variety of discrete spaces, to error bounds that stay valid in diffusive regimes. The first class of discrete spaces consists of spaces with functions
that can be expanded into the first N normalized Legendre polynomials in angle and are
piecewise polynomials in space, whereas the second class of spaces is formed by functions
that are piecewise polynomials in space as well as in angle.
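To make the scaling transformation concrete: since the angular-average operator $P$ (defined later in (1.5) and (1.7)) is a projection, $S = P + \tau(I - P)$ is invertible with $S^{-1} = P + \tau^{-1}(I - P)$. A small discrete-ordinates sketch checks this numerically (the quadrature order and the value of $\tau$ are arbitrary illustration choices, not values from the thesis):

```python
import numpy as np

# Discrete-ordinates sketch of the scaling transformation S = P + tau*(I - P),
# with P the angular-average projection on Gauss-Legendre points in mu.
n, tau = 8, 1e-3
mu, w = np.polynomial.legendre.leggauss(n)     # nodes in (-1, 1), weights sum to 2

P = 0.5 * np.outer(np.ones(n), w)              # (P psi)_i = 0.5 * sum_j w_j psi_j
I = np.eye(n)
S = P + tau * (I - P)
S_inv = P + (1.0 / tau) * (I - P)

print(np.allclose(P @ P, P))                   # P is a projection
print(np.allclose(S @ S_inv, I))               # S^{-1} = P + (1/tau)(I - P)
```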
In Chapter 3 we generalize the theory of Chapter 2 to three-dimensional problems in
x-y-z geometry. In Section 3.1 we prove the V-ellipticity and continuity of the Least-Squares
bilinear form with constants independent of $\sigma_t$ and $\sigma_a$. In Section 3.2 we discuss spherical harmonics, which are used as basis functions for the discretization in angle ($P_N$-approximation). Finally, in Section 3.3 we establish error bounds for the Least-Squares
discretization using spherical harmonics as basis functions in angle and piecewise polynomials
on a triangulation of the spatial domain as basis functions in space.
Chapter 4 presents numerical results for slab geometry. In Section 4.1 we introduce
the discrete ordinates $S_N$ flux and the $P_{N-1}$ moment equations, which are semi-discrete forms of the transport equation. All numerical results in this chapter are based on a Least-Squares discretization of these forms using piecewise linear or quadratic basis elements in space. In Section 4.2 we summarize some properties of the Least-Squares discretization. In Section 4.3 we describe the components of the full multigrid solvers that were developed for the $S_N$ flux as well as for the moment equations and implemented in C++ using a specially designed array class. We also include convergence rates for the multigrid solvers in this section.
In Chapter 5 we summarize our results and give some recommendations for future
work.
1.2 Neutron Transport Equation and Diffusion Limit
1.2.1 Neutron Transport Equation
Transport theory is the mathematical description of the migration of particles
through a host medium. Particles will move through a host medium in complicated zigzag
paths, due to repeated collisions and interactions with host particles. As a consequence of
these collisions and interactions, the particles are transported through the host medium,
which explains the name transport theory.
Transport processes can involve a variety of different types of particles such as neu-
trons, gas molecules, ions, electrons, photons, or even cars, moving through various back-
ground media. All of these processes can be described by a single unifying theory, since they
are all governed by the same type of equation, a Boltzmann transport equation (Duderstadt
and Martin [17]). The origin of this theory is the kinetic theory of gases, in which case the
transported particles and the host particles are equal to gas molecules. For this situation
Boltzmann formulated his famous nonlinear equation in 1877 (Broda [9]). Transport theory
currently plays a fundamental role in many areas of science and engineering. For instance,
the diffusion of light through stellar atmospheres (radiative transfer) and the penetration of
light through planetary atmospheres is fundamental for astrophysics. Moreover, radiation
therapy, shielding of satellite electronics, and modeling of semiconductors require transport
calculations. Further, the transport of vehicles along highways (traffic flow problems) and
the random walk of students during registration can be analyzed by transport theory.
The transport of neutrons through matter, which is considered in this thesis, is a
major part of transport theory, because of its importance for the design of nuclear reactors
and nuclear weapons. In the mathematical description of the transport of neutrons through
matter three main interactions must be taken into account. After a neutron collides with
a host nucleus, it can either get built into the nucleus (absorption), or it can be reflected,
so that its travel direction and its energy have changed (scattering), or it can cause fission,
that is, the nucleus breaks into two smaller parts and neutrons and energy are released.
Since the collision of a neutron with a nucleus and its result are uncertain, the mathematical
formulation of neutron transport is based on probability. Therefore, the quantity computed
in neutron transport is the expected density N(r, Q, E, t) of neutrons at position r moving in
direction Q with energy E at time t. Although the knowledge of the simple particle density
p(r,t), which specifies the density of neutrons independent of their travel direction and
their energy, would be sufficient for most applications, there is no equation that adequately
describes this quantity. For this reason the density is further subdivided into the density
N(r,Q,E,t) of neutrons with a specific travel direction and a certain energy. Defining
the phase space as the space spanned by the location vector and the velocity vector or
equivalently spanned by the location vector, the unit vector describing the travel direction
and the energy of a neutron, the density N can also be viewed as the phase space density.
An equation for N(r, D, E, t) may be obtained either in an abstract way from Liou-
villes theorem (Cercignani [15]), (Osborn and Yip [46]), or from a simple balance argument,
based on the neutron conservation within a small volume element of the phase space (Lewis
and Miller [34]). Both ways result in the following balance equation for the expected neutron
density N:
$$\frac{\partial N}{\partial t} = -\Omega \cdot v\nabla N - \sigma_t v N + s. \qquad (1.1)$$
Here, $v = \sqrt{2E/m}$ is the speed of a neutron with mass $m$ that has energy $E$. The probability that a neutron will collide with a nucleus while traveling along a path of length $dl$ is given by $\sigma_t\, dl$, where $\sigma_t(r, E)$ is the so-called total cross section¹. Since the neutron density
is, in most applications, much smaller than the density of the host nuclei, neutron-neutron
¹The reciprocal of the total cross section is called mean free path (mfp), since this is exactly the length
that a neutron can travel on the average before encountering a collision.
collisions can be neglected, so that crt is assumed to be independent of the neutron density
N. Otherwise, the balance equation (1.1) would be nonlinear.
The first term on the right-hand side of (1.1) represents loss of particles due to
streaming, the second term represents loss due to collisions, and s = s(r,Q, E,t) represents
both, implicit sources of neutrons due to inscattering and explicit sources. This explains
why (1.1) is a balance equation.
In the following only steady state problems are considered. Letting $\psi(r, \Omega, E, t) := vN(r, \Omega, E, t)$ be the angular flux of neutrons, the steady state form of (1.1) becomes
$$\Omega \cdot \nabla \psi(r, \Omega, E) + \sigma_t \psi(r, \Omega, E) = s(r, \Omega, E). \qquad (1.2)$$
Further, fission as a possible interaction is excluded in the following, so that, in addition
to an external source, s includes a term s_s(r, Ω, E) that describes the neutrons that are
scattered into the direction Ω and energy E from some other direction Ω′ and some other
energy E′. The scattering source term can be modeled as (Lewis and Miller [34, p. 35])

    s_s(r, Ω, E) := ∫₀^∞ ∫_{S¹} σ_s(r, Ω′→Ω, E′→E) ψ(r, Ω′, E′) dΩ′ dE′,

where S¹ denotes the unit sphere in ℝ³ and the scattering cross section σ_s(r, Ω′→Ω, E′→E)
describes the probability that a neutron will be scattered from Ω′ and E′ into Ω and E. In
most cases σ_s depends only on the angle between Ω′ and Ω, so σ_s = σ_s(r, Ω′·Ω, E′→E).
If σ_s is in addition independent of Ω′·Ω, the scattering is said to be isotropic; otherwise, it
is called anisotropic.
In most applications, it is appropriate to discretize in energy to form what are known
as the multi-group equations (Duderstadt and Martin [17, p. 407]), (Lewis and Miller [34,
p. 61]). After dividing the energy range² into subintervals [E_k, E_k + dE] and denoting by
ψ^k the flux of neutrons with energy in this interval, this discretization results in a set of
so-called single-group equations

    Ω·∇ψ^k + σ_t^k ψ^k = ∫_{S¹} σ_s^k(r, Ω′·Ω) ψ^k(r, Ω′) dΩ′ + q^k    (1.3)

for each group flux ψ^k. In addition to an external source, q^k now includes a term that
represents particles scattered into the energy interval [E_k, E_k + dE] from all other energy
groups. In the absence of fission, neutrons generally lose energy. In this case the multi-group
equations can be solved by starting with the highest energy level and solving (1.3) for the
next lower energy level in turn. The loss computed for higher energy levels then appears as
a source in lower energy levels. Therefore, solving the multi-group equations reduces to
solving multiple single-group equations.

²Since there exists an energy E* such that the number of neutrons with energy E > E* can be neglected,
the energy range can be assumed to be a bounded interval of the form [0, E*].
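The downscatter ordering just described can be written as a short driver loop. The sketch below is illustrative only: `solve_single_group` is a hypothetical single-group solver (any of the discretizations discussed later could be plugged in), and pure downscatter without fission or upscatter is assumed.

```python
import numpy as np

def solve_multigroup(sigma_t, sigma_s, q_ext, solve_single_group):
    """Downscatter-ordered solve of the multi-group equations (1.3).

    sigma_t[g]     : total cross section of group g (g = 0 is the fastest group)
    sigma_s[g, gp] : scattering cross section from group gp into group g
    q_ext[g]       : external source of group g (array over the spatial grid)
    solve_single_group(sig_t, sig_s_within, q) : hypothetical single-group solver
        returning the scalar flux of one group
    """
    G = len(sigma_t)
    phi = [None] * G
    for g in range(G):                      # highest energy group first
        q = np.array(q_ext[g], dtype=float)
        for gp in range(g):                 # loss from faster groups reappears as a source
            q = q + sigma_s[g, gp] * phi[gp]
        phi[g] = solve_single_group(sigma_t[g], sigma_s[g, g], q)
    return phi
```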
The topic of this thesis is the numerical solution of the single-group, steady-state,
isotropic neutron transport equation. The restriction to problems with isotropic scattering
is justified by the fact that, for anisotropic problems, a solution technique called multigrid-in-
angle was developed by Morel and Manteuffel [44], which depends upon solving an isotropic
problem on the coarsest level (see Section 1.3). Omitting the superscript k and including
boundary conditions, (1.3) becomes for isotropic scattering:

    [Ω·∇ + σ_t I − σ_s P] ψ(r, Ω) = q(r, Ω)    for (r, Ω) ∈ R × S¹,
    ψ(r, Ω) = g(r, Ω)    for r ∈ ∂R and n(r)·Ω < 0,    (1.4)
which is an equation for the single-group angular flux ψ(r, Ω), to be determined for all points
r = (x, y, z) in a region R ⊂ ℝ³ and for all travel directions on the unit sphere S¹. The
operator P is defined by

    Pψ(r, Ω) := (1/4π) ∫_{S¹} ψ(r, Ω′) dΩ′,    (1.5)

which is an L²-projection onto the space of all functions that are independent of the direction
angle Ω. The boundary conditions specify the inflow of particles into the region R, since
n(r) denotes the unit outgoing normal at r ∈ ∂R.
In the slab-geometry case it is assumed that ∂ψ/∂x = ∂ψ/∂y = 0 (the solution ψ is then
in particular symmetric around the z-axis), so that ψ(r, Ω) = ψ(z, μ) with μ := cos(θ),
where θ is the angle between Ω and the z-axis. Equation (1.4) then reduces to

    [μ ∂/∂z + σ_t I − σ_s P] ψ(z, μ) = q(z, μ)    for (z, μ) ∈ [z_l, z_r] × [−1, 1],
    ψ(z_l, μ) = g_l(μ)    for μ > 0,
    ψ(z_r, μ) = g_r(μ)    for μ < 0,    (1.6)

where the projection operator P is now given by

    Pψ(z, μ) := (1/2) ∫_{−1}^{1} ψ(z, μ′) dμ′.    (1.7)

The boundary conditions in (1.6) again specify the inflow of neutrons into the slab, since at
the left end z_l of the slab the inflow directions are given by μ > 0, while at the right end z_r
the inflow directions are given by μ < 0.
Figure 1.1: Computational domain for slab geometry
In neutron transport theory it is common to introduce the absorption cross section
σ_a := σ_t − σ_s. With this notation, the transport operator in (1.4) can be written as

    L := Ω·∇ + σ_t(I − P) + σ_a P,    (1.8)

and for slab geometry we have

    L := μ ∂/∂z + σ_t(I − P) + σ_a P.    (1.9)
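For later reference, the action of the slab operator (1.9) on a function given at grid points is easy to evaluate numerically. The sketch below is only an illustration (numpy is assumed; it uses simple finite differences in z and a Gauss-Legendre rule for the angular average) and is not one of the discretizations analyzed in this thesis.

```python
import numpy as np

def apply_slab_operator(psi, z, mu, w, sigma_t, sigma_a):
    """Apply the slab operator (1.9), L = mu d/dz + sigma_t (I - P) + sigma_a P,
    to angular-flux values psi[i, n] given on a tensor grid (z_i, mu_n).
    Finite differences in z, Gauss-Legendre quadrature (mu, w) for P psi."""
    dpsi_dz = np.gradient(psi, z, axis=0)         # d psi / dz at every (z_i, mu_n)
    P_psi = 0.5 * (psi @ w)[:, None]              # P psi = 1/2 int psi dmu (mu-independent)
    return mu[None, :] * dpsi_dz + sigma_t * (psi - P_psi) + sigma_a * P_psi

z = np.linspace(0.0, 1.0, 101)
mu, w = np.polynomial.legendre.leggauss(8)
psi = np.outer(np.sin(np.pi * z), np.ones_like(mu)) + 0.1 * mu[None, :]
print(apply_slab_operator(psi, z, mu, w, sigma_t=1.0, sigma_a=0.1).shape)
```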
1.2.2 Diffusion Limit
Simple approximations to the transport operator, based either on ad hoc physical
assumptions such as Fick's law³ (Lamarsh [27, pp. 125-137]) or on a P₁-approximation (Case
and Zweifel [14, Section 8.3]), which assumes that the flux has an expansion in the first
two Legendre polynomials, result in a diffusion equation. This indicates that diffusion theory
is related to transport theory. Indeed, transport theory transitions into diffusion theory in
a certain asymptotic limit, called the diffusion limit.
When σ_t → ∞ and σ_s/σ_t → 1, equations (1.4) and (1.6), respectively, become singular.
By dividing (1.4) or (1.6) by σ_t, it easily follows that the limit equation is (I − P)ψ = 0,
which is fulfilled by all functions that are independent of the direction angle Ω and μ,
respectively. Moreover, when σ_t → ∞ and σ_s/σ_t → 1 in a certain way, the limit solution
of (1.4) and (1.6), respectively, will be a solution of a diffusion equation. This was discovered
more than 20 years ago in the work by Larsen [28], Larsen and Keller [31] and Habetler and
Matkowsky [23].
To be more specific, we consider first the slab-geometry case and assume that the
slab has length 1, which can be established by a simple transformation (see Chapter 2).
Following the recently published summary of Larsen [30], we introduce a small parameter ε
and scale the cross sections and the source in the following way:

    q(z, μ) → ε q(z, μ),    σ_t → 1/ε,    σ_a → ε α,    (1.10)

where α is assumed to be O(1). The transport equation then becomes

    Lψ(z, μ) := [μ ∂/∂z + (1/ε)(I − P) + ε α P] ψ(z, μ) = ε q(z, μ).    (1.11)

In combination with the special scaling (1.10), the diffusion limit is then defined by the
limit ε → 0.
The physical meaning of scaling (1.10) is that the total cross section is large, so
the system is optically thick, whereas the absorption cross section and the external source
are small. We note that there are many scalings capable of expressing this physical
situation. However, scaling (1.10) stands out, since the diffusion equation

    −(d/dz) [1/(3σ_t)] (dφ/dz) + σ_a φ = q    (1.12)

is invariant under it in the sense that, if the substitutions (1.10) are inserted into (1.12),
then the resulting diffusion equation is independent of the scaling parameter ε.
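This invariance is easy to check symbolically. The sketch below assumes sympy is available and reads (1.12) in the standard form given above; after the substitutions (1.10) and division by ε, no ε remains.

```python
import sympy as sp

z, eps = sp.symbols('z epsilon', positive=True)
sigma_t, sigma_a, alpha, q = sp.symbols('sigma_t sigma_a alpha q', positive=True)
phi = sp.Function('phi')(z)

# residual of the diffusion equation (1.12): -(1/(3 sigma_t)) phi'' + sigma_a phi - q
residual = -sp.diff(phi, z, 2) / (3 * sigma_t) + sigma_a * phi - q

# scaling (1.10): q -> eps*q, sigma_t -> 1/eps, sigma_a -> eps*alpha
scaled = residual.subs({sigma_t: 1 / eps, sigma_a: eps * alpha, q: eps * q})

# dividing the scaled equation by eps removes every occurrence of eps
limit_form = sp.simplify(scaled / eps)
print(limit_form)                 # -phi''/3 + alpha*phi - q
assert not limit_form.has(eps)    # the scaled equation is independent of eps
```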
Since equation (1.11) is of singular-perturbation form, boundary layers can in general
be expected. Therefore, the solution is decomposed as ψ(z, μ) = ψ_I(z, μ) + ψ_B(z, μ),
where ψ_I denotes the interior solution some mean free paths away from the boundary, and
ψ_B denotes the solution near the boundaries. To determine ψ_I, the asymptotic expansion
ansatz (Friedrichs [19])

    ψ_I(z, μ) = Σ_{ν=0}^∞ ε^ν ψ_ν(z, μ)

is inserted into (1.11). By equating the coefficients of different powers of ε, it can be shown
(Larsen [30]) that the leading term ψ₀ is independent of angle, so ψ₀(z, μ) = φ(z), and that
φ(z) is a solution of the diffusion equation (1.12). To obtain the boundary-layer part ψ_B,
an asymptotic expansion of the boundary layer is performed, which is then matched with
the interior expansion. It follows (Habetler and Matkowsky [23]) that the boundary-layer
part ψ_B decays exponentially with the distance from the boundary and that its leading
term is also independent of the direction angle. Altogether, this results in the following
diffusion expansion:

    ψ(z, μ) = φ(z) + ε R(z, μ),    (1.13)

so that in the diffusion limit ε → 0, the solution ψ of the transport equation converges to
the solution φ of the diffusion equation (1.12).

³Fick's law states that the gradient of the neutron flux is proportional to the neutron current
J(r) := ∫_{S¹} Ω ψ(r, Ω) dΩ.
This result can be directly extended to the three-dimensional case (Pomraning [48]).
With scaling (1.10), the three-dimensional transport equation becomes

    Lψ(r, Ω) := [Ω·∇ + (1/ε)(I − P) + ε α P] ψ(r, Ω) = ε q(r, Ω),    (1.14)

and its solution has the diffusion expansion

    ψ(r, Ω) = φ(r) + ε R(r, Ω),    (1.15)

where φ(r) is a solution of the diffusion equation

    −∇ · [1/(3σ_t)] ∇φ + σ_a φ = q.    (1.16)
1.3 Previous Work on Numerical Solution
The number of actual neutron transport problems that can be solved in closed analytical
form is very small. Some examples are presented in Wing [51, Chapter 6] and in
Duderstadt and Martin [17, Chapter 2]. Therefore, computational methods are required for
the solution of most neutron transport problems. They fall into two broad classes: deterministic
and stochastic. Stochastic methods, of which the Monte Carlo method is a chief
example, determine the neutron distribution via random sampling of a large number of
neutrons in the system. As the number of particles in the simulation is increased, the
statistical accuracy of the resulting solution improves. Consequently, these methods are
often prohibitively expensive, especially when high accuracy is needed. In contrast,
deterministic methods involve a discretization that transforms the neutron transport
equation into a finite system of algebraic equations, which can be solved by computers.
Since in this thesis a new deterministic approach is developed, we restrict the following
overview, which is far from complete, to previous deterministic methods and begin
with the slab-geometry case. For the discretization of the angle dependence, most frequently
a discrete-ordinates (S_N) method (Carlson and Lathrop [13]) is used. This approximation
assumes that the angular dependence of the solution can be expanded in a finite number
of Legendre polynomials, and a set of discrete equations results from collocation at Gauss
quadrature points. For slab geometry, this discretization is equivalent to a P_{N−1} approximation,
which is a spectral Galerkin discretization using the first N Legendre polynomials
as basis functions (Lewis and Miller [34, Appendix D]).
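For slab geometry the Gauss collocation points and weights are available directly from numpy. The short sketch below (numpy assumed) also shows how the angular average Pψ from (1.7) is approximated by the quadrature sum, which is how the scattering term is typically evaluated in an S_N code.

```python
import numpy as np

# Gauss-Legendre nodes mu_n and weights w_n on [-1, 1]: the collocation points of
# the slab-geometry S_N (discrete ordinates) method.
mu, w = np.polynomial.legendre.leggauss(8)

# The projection P psi = 1/2 * int_{-1}^{1} psi(mu') dmu'  (equation (1.7)) is
# approximated by the quadrature sum 1/2 * sum_n w_n psi(mu_n).
psi = 1.0 + mu + mu ** 2            # a sample angular dependence
print(0.5 * np.dot(w, psi))         # exact value: 1 + 1/3
```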
Because the analytical solution of the transport equation converges in the diffusion
limit to a solution of a diffusion equation in the interior of the slab, the discretization of the
spatial dependence is much more difficult. For accuracy reasons, the
discrete solution must have the same property. Therefore, this requires a discretization of the
transport operator that becomes a good approximation of a diffusion operator in diffusive
regimes. By applying the asymptotic expansion technique introduced in Section 1.2 to
the discrete solution, Larsen, Morel, and Miller [32] analyze the behavior of various special
discretizations in the diffusion limit. In the discrete case, the mesh size h has to be considered
as a second parameter besides the parameter ε. Therefore, they define in their work the
following two different limits. If, for a fixed mesh size h, the discretization approximates
a diffusion operator in the limit ε → 0, then the discretization is said to have the correct
thick diffusion limit. On the other hand, if the mesh size h varies linearly with ε in the limit
ε → 0 and this limit results in a consistent discretization of a diffusion operator, then the
Since standard finite difference discretizations, such as upwind differencing, fail to
have a correct thick diffusion limit, special discretizations have been developed that behave
correctly in diffusive regimes. Among them are the Diamond difference scheme and the
difference schemes of Lund-Wilson and of Castor, which, according to the analysis of Larsen,
Morel, and Miller [32], give the correct thick diffusion limit for the cell average flux.
Moreover, Finite-Element discretizations have been applied for spatial discretiza-
tion of the neutron transport equation in different ways. The direct Galerkin approach to
the first-order integro-differential form (1.6) of the transport equation, as considered first
theoretically by Ukai [50] and numerically by Martin [40], does not have the correct behav-
ior in the diffusion limit, except when special discontinuous finite elements are used. The
use of discontinuous finite elements results, for example, in the Linear Discontinuous (LD)
discretization (Alcouffe et al. [2]) and the Modified Linear Discontinuous (MLD) scheme
(Larsen and Morel [33]). MLD has the additional property that with a suitable fine spatial
mesh, it can resolve boundary layers at exterior boundaries or at interior boundaries between
media with different material cross sections.
Further, a variety of Ritz variational formulations have been proposed (see Kaplan
and Davis [26] for a summary). These formulations have the self-adjoint second-order even-parity⁴ form
of the transport equation as their Euler equation and lead, independently of the choice of the
discrete Finite-Element space, to correct diffusion limit discretizations. However, the even-
parity form of the transport equation is only valid for nonvacuum regions and becomes very
tedious for anisotropic scattering or anisotropic sources (Lewis and Miller [34, p. 260]).
For the solution of the discrete system, a simple splitting iteration known as source
iteration or transport sweep has been used in the past. Because this iteration converges
slowly in diffusive regimes (its convergence factor approaches 1), the Diffusion Synthetic
Acceleration (DSA) method (Larsen [29]) was developed, which uses a diffusion approximation
to accelerate the source iteration. By spectral analysis, Faber and Manteuffel [18] have
shown why this method is successful for problems with isotropic scattering. However, for
problems with anisotropic scattering, DSA is less effective.
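To make the term concrete, the following sketch implements plain source iteration for the isotropic slab problem (1.6) with vacuum boundaries, using a first-order upwind ("step") spatial discretization and Gauss-Legendre (S_N) angles. It is an illustration only (numpy assumed), not one of the diffusion-limit-correct schemes discussed in this section; running it with σ_s/σ_t close to 1 and an optically thick slab exhibits the growth in iteration counts that DSA is designed to remove.

```python
import numpy as np

def source_iteration(sigma_t, sigma_s, q, z_len=1.0, cells=200, N=8,
                     tol=1e-8, max_iter=50000):
    """Source iteration ("transport sweep") for the isotropic slab problem (1.6)
    with vacuum boundaries and a spatially constant source q.  Space: first-order
    upwind ("step") cells; angle: Gauss-Legendre S_N.  A minimal sketch only."""
    h = z_len / cells
    mu, w = np.polynomial.legendre.leggauss(N)
    phi = np.zeros(cells)                         # scalar flux  P*psi
    psi = np.zeros((N, cells))
    for it in range(max_iter):
        src = sigma_s * phi + q                   # isotropic emission density
        for n in range(N):
            inflow = 0.0                          # vacuum boundary condition
            order = range(cells) if mu[n] > 0 else range(cells - 1, -1, -1)
            for i in order:                       # sweep in the travel direction
                out = (h * src[i] + abs(mu[n]) * inflow) / (abs(mu[n]) + sigma_t * h)
                psi[n, i] = out                   # step scheme: cell value = outflow
                inflow = out
        phi_new = 0.5 * w.dot(psi)                # phi_i = 1/2 * sum_n w_n psi_{n,i}
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new, it + 1
        phi = phi_new
    return phi, max_iter

phi, iters = source_iteration(sigma_t=1.0, sigma_s=0.5, q=1.0)
print(iters)   # iteration counts grow sharply as sigma_s/sigma_t -> 1 (diffusive regime)
```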
Moreover, multigrid methods have been employed for the solution of discrete neu-
tron transport problems. For the LD scheme, a multigrid algorithm in space was developed
by Barnett, Morel and Harris [4], which proved to be effective even for highly anisotropic
problems. For isotropic problems, this algorithm is competitive with DSA, although it uses
an expensive block smoothing. The multigrid algorithm in space of Manteuffel, McCormick,
Morel, Oliveira and Yang [36] for isotropic problems, discretized in space by the MLD scheme,
employs a special operator-induced interpolation and has been ported very efficiently to a
parallel architecture [37]. For anisotropic problems, a technique called multigrid-in-angle was
developed by Morel and Manteuffel [44]. This scheme involves a shifted transport sweep to
attenuate the error in the upper half of the moments, so that the remaining error can be
approximated by the solution of a problem discretized in angle based on only the lower half
of the moments. Recursive application of this procedure leads to an isotropic problem on
the coarsest level, which can be solved by a multigrid method in space.

⁴It can be shown [35] by a certain transformation that the even-parity form is closely related to the
Least-Squares formulation considered here.
For higher-dimensional problems, the discretization of the angle dependence also
becomes a problem. For problems with isolated sources in a strongly absorbing medium,
anomalies in the flux distribution, called ray effects (Lewis and Miller [34, p. 194]), are likely
to arise in combination with a discrete-ordinates (S_N) discretization. The S_N discretization
causes a loss of rotational invariance, since this discretization transforms the fully rotationally
invariant transport equation into a set of coupled equations that are at most invariant under
a few discrete rotations. Thus, an azimuthally uniform flux, for example, is approximated by a
set of δ-functions at discrete angles, which can be a very poor approximation if the number of
discrete angles is not sufficiently large. One potential remedy is a P_N discretization, which is
a spectral Galerkin method using spherical harmonics as basis functions. This discretization
results in a fully rotationally invariant discrete problem. However, the coupling of the discrete
equations is complicated and the treatment of boundary conditions is less straightforward.
As in the one-dimensional case, for higher dimensions the discretization in space
must have the correct behavior in the diffusion limit in order to obtain accurate discrete
solutions for diffusive regimes. The direct extension of the appropriate one-dimensional
discretizations is complicated, however. Bogers, Larsen and Adams [7] have shown that the
linear discontinuous (LD) finite element discretization on rectangles does not yield a correct
diffusion limit discretization, whereas the MLD discretization does. However, the efficient
solution of the discrete system resulting from the MLD discretization is an open problem.
Applying a similar multigrid algorithm, which was developed by Manteuffel et al. [36] for
the one-dimensional case, would require the extension of the operator induced interpolation
to higher dimensions, which is complicated. Morel et al. are in the process of developing a
method for three space dimensions based on the even-parity form of the transport equation
and using a P_N discretization in angle.
We conclude that an arsenal of highly specialized computational methods exists,
whose design is adapted to particular transport problems. However, there is a lack of a
general systematic solution approach that includes existence theory for the analytic and
discrete solutions, error bounds for the discretization error, and guidance for the development
of an efficient solver for the resulting discrete problem. Especially for higher-dimensional
problems, such an approach seems to be needed.
1.4 Least-Squares Approach
In this section, we introduce a systematic solution approach to the neutron trans-
port equation that relies on a Least-Squares self-adjoint variational formulation of (1.4), and
we summarize the associated standard Finite-Element theory. The Least-Squares approach
can be considered a systematic solution approach, since it includes theory for the existence
and uniqueness of the analytical and discrete problems, as well as bounds on the discretization
error for a whole class of Finite-Element spaces. Furthermore, this approach will guide the
development of a Multilevel Projection Method (McCormick [43]) for the efficient solution
of the resulting discrete system.
A Least-Squares Finite-Element discretization with piecewise linear basis functions
in space directly applied to (1.4) does not have the correct behavior in the diffusion limit
(see Section 2.1). For this reason, the scaling transformation [P + τ(I − P)], with a parameter
τ ∈ ℝ⁺ specified later, is applied to the transport operator prior to the discretization:

    L := [P + τ(I − P)] [Ω·∇ + σ_t I − σ_s P]
       = P(Ω·∇) + τ(I − P)(Ω·∇) + τσ_t(I − P) + (σ_t − σ_s)P    (1.17)
       = P(Ω·∇) + τ(I − P)(Ω·∇) + τσ_t(I − P) + σ_a P,

where in the last equation (σ_t − σ_s) was replaced by the absorption cross section σ_a. For
this transformed operator, the Least-Squares variational formulation of (1.4) is given by

    min_{ψ∈V} F(ψ),    with    F(ψ) := ∫_R ∫_{S¹} [Lψ(r, Ω) − q_s(r, Ω)]² dΩ dr,    (1.18)

with q_s = Sq, where S denotes the scaling transformation P + τ(I − P). The Hilbert space V
with underlying norm ‖·‖_V will be specified later.
A necessary condition for ψ ∈ V to be a minimizer of the functional F in (1.18)
is that the first variation (Gateaux derivative) of F vanishes at ψ for all admissible v ∈ V,
resulting in the problem: find ψ ∈ V such that

    a(ψ, v) := ∫_R ∫_{S¹} Lψ Lv dΩ dr = ∫_R ∫_{S¹} q_s Lv dΩ dr    ∀v ∈ V.    (1.19)

The essential part of the theory is to show that the symmetric bilinear form a(·,·) in (1.19)
is V-elliptic, i.e., there exists a constant C_e > 0 such that, for all v ∈ V,

    a(v, v) ≥ C_e ‖v‖²_V,    (1.20)

and continuous, i.e., there exists a constant C_c > 0 such that, for every u, v ∈ V,

    |a(u, v)| ≤ C_c ‖u‖_V ‖v‖_V.    (1.21)

The proof of the continuity is straightforward, but the proof of the V-ellipticity is difficult
and tricky.

Denote the standard inner product and associated norm of L²(R × S¹) by

    (u, v) := ∫_R ∫_{S¹} u v* dΩ dr,    ‖u‖ := √(u, u)    ∀u, v ∈ L²(R × S¹),

where v* is the complex conjugate of v. Using (1.20), (1.21) and the assumption that
q_s(r, Ω) ∈ L²(R × S¹), which ensures that the functional

    l(v) := ∫_R ∫_{S¹} q_s Lv dΩ dr

is bounded (|l(v)| ≤ C_c^{1/2} ‖q_s‖ ‖v‖_V), the Lax-Milgram Lemma (Ciarlet and Lions
[16, p. 29]) can be applied. It follows that problem (1.19) is well posed in the sense that its
solution exists, is unique, and depends continuously on the data q_s. The latter follows from

    C_e ‖ψ‖²_V ≤ a(ψ, ψ) = l(ψ) ≤ C_c^{1/2} ‖q_s‖ ‖ψ‖_V,

so

    ‖ψ‖_V ≤ (C_c^{1/2} / C_e) ‖q_s‖.
For the Least-Squares Finite-Element discretization of (1.19), the Hilbert space V
is replaced by a finite-dimensional subspace V_h ⊂ V, and (1.19) becomes: find ψ_h ∈ V_h
such that

    a(ψ_h, v_h) = l(v_h)    ∀v_h ∈ V_h.    (1.22)

The existence and uniqueness of a solution ψ_h ∈ V_h of the discrete problem (1.22) follows
again from the Lax-Milgram Lemma since, as a finite-dimensional space, V_h is a closed
subspace of the Hilbert space V and is, therefore, also a Hilbert space with respect to the
inner product of V restricted to V_h.

By subtracting (1.22) from (1.19), it follows immediately that the error is orthogonal
to V_h with respect to the bilinear form a(·,·):

    a(ψ − ψ_h, v_h) = 0    ∀v_h ∈ V_h.    (1.23)

The Cauchy-Schwarz inequality and (1.23) lead directly to Cea's Lemma (Brenner
and Scott [8, p. 62]):

    a(ψ − ψ_h, ψ − ψ_h) ≤ a(ψ − v_h, ψ − v_h)    ∀v_h ∈ V_h,    or
    ‖ψ − ψ_h‖_V ≤ √(C_c / C_e) min_{v_h ∈ V_h} ‖ψ − v_h‖_V,    (1.24)

with the use of the V-ellipticity (1.20) and the continuity (1.21). By (1.24), the problem of
finding an estimate of the error is therefore reduced to estimating min_{v_h ∈ V_h} ‖ψ − v_h‖_V. These
kinds of estimates are provided by approximation theory for a wide class of spaces V_h. For
example, when we consider for simplicity only a semi-discretization in space where V_h is
formed by piecewise polynomials of degree r, V = H^m(R) × L²(S¹), V_h ⊂ V, and the exact
solution is in H^{r+1}(R) × L²(S¹), it can be proved (Ciarlet and Lions [16, Theorem 16.2])
that

    min_{v_h ∈ V_h} ‖ψ − v_h‖_{m,0} ≤ C h^{r+1−m} |ψ|_{r+1,0},    (1.25)

where h is the maximum mesh size of the triangulation of R used, and

    ‖v‖_{m,0} := [ Σ_{|α|≤m} ∫_R ∫_{S¹} |D^α v|² dΩ dr ]^{1/2},
    |v|_{k+1,0} := [ Σ_{|α|=k+1} ∫_R ∫_{S¹} |D^α v|² dΩ dr ]^{1/2}

denote the standard Sobolev norm and semi-norm (Adams [1]), respectively. Here, we use
the standard multi-index notation

    D^β v := ∂^{|β|} v / (∂x^{β₁} ∂y^{β₂} ∂z^{β₃}),    |β| := Σ_{i=1}^{3} β_i,

for β := (β₁, β₂, β₃).
For V_h formed by piecewise polynomials of degree r, the combination of (1.24) and
(1.25) results in the overall error bound

    ‖ψ − ψ_h‖_V ≤ C √(C_c / C_e) h^r |ψ|_{r+1,0}.    (1.26)

The crucial point here is that we have shown V-ellipticity (1.20) and continuity
(1.21) with respect to a weighted norm with constants C_e and C_c independent of σ_t and
σ_a, so that an error bound similar to (1.26) for a discretization in space and in angle holds
independently of the size of σ_t and σ_a. Hence, the Least-Squares Finite-Element discretization
of the scaled transport operator with piecewise polynomials of degree r ≥ 1 as basis functions
yields an accurate discrete solution even in the diffusion limit.
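The approximation estimate (1.25) is easy to observe numerically in its simplest setting, piecewise linear interpolation of a smooth function of one space variable (r = 1, measured in a discrete L² norm). The sketch below (numpy assumed) illustrates only the h^{r+1} decay of the best-approximation error and makes no reference to the transport operator.

```python
import numpy as np

f = lambda z: np.sin(np.pi * z)
z_fine = np.linspace(0.0, 1.0, 20001)                 # fine grid for the discrete L2 error

for cells in (10, 20, 40, 80):
    nodes = np.linspace(0.0, 1.0, cells + 1)
    interp = np.interp(z_fine, nodes, f(nodes))       # continuous piecewise linear interpolant
    err = np.sqrt(np.mean((f(z_fine) - interp) ** 2)) # discrete L2([0,1]) error
    print(f"h = {1.0 / cells:6.4f}   error = {err:.3e}")
# Halving h reduces the error by roughly a factor of 4: the rate h^(r+1) with r = 1.
```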
CHAPTER 2
SLAB GEOMETRY
The Least-Squares approach is applied in this chapter to the one dimensional (slab
geometry) neutron transport equation (1.6). Throughout this chapter, we assume without
loss of generality the following:
1) The total cross section σ_t is constant in space, so σ_t(z) = σ_t. This can be established
   by the transformation

       z′ = [ ∫_{z_l}^{z} σ_t(s) ds ] / [ ∫_{z_l}^{z_r} σ_t(s) ds ].    (2.1)

   The transport equation then becomes

       [ μ ∂/∂z′ + σ̄_t (I − P) + σ̄_a P ] ψ = q̄,    with    σ̄_t = ∫_{z_l}^{z_r} σ_t(s) ds,

   and with the absorption cross section and source rescaled accordingly, σ̄_a(z′) = σ̄_t σ_a(z)/σ_t(z)
   and q̄(z′, μ) = σ̄_t q(z, μ)/σ_t(z). (A small numerical sketch of this transformation is given
   after the list below.)

2) The slab has length 1, so (z_r − z_l) = 1. If the transformation (2.1) was already
   applied, this is directly fulfilled; otherwise, it can be established with the simple
   transformation z″ = (z − z_l)/(z_r − z_l), which rescales σ_t, σ_a and q by the factor (z_r − z_l).

3) We impose homogeneous (vacuum) boundary conditions, so g_l(μ) = 0 and g_r(μ) = 0
   in (1.6). This can be done in the following way. Define

       ψ_b(z, μ) := { g_l(μ) for μ > 0;  g_r(μ) for μ < 0 }.

   Then, clearly, ψ_b(z, μ) ∈ H¹([z_l, z_r]) × L²([−1, 1]), so that Lψ_b is well defined, and
   we can solve the problem Lψ_0 = q − Lψ_b with homogeneous boundary conditions.
   The original solution is then given by ψ = ψ_0 + ψ_b.
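As announced in item 1), transformation (2.1) is easy to carry out numerically once σ_t is known on a grid. The sketch below assumes scipy is available and uses the trapezoidal rule for the integrals; it returns the new coordinate z′ and the constant total cross section σ̄_t.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def optical_depth_coordinate(z, sigma_t):
    """Transformation (2.1): map z in [z_l, z_r] to z' in [0, 1] so that the total
    cross section becomes the constant sigma_t_bar = int_{z_l}^{z_r} sigma_t ds.
    z       : increasing grid of spatial points
    sigma_t : values of sigma_t at those points"""
    tau = cumulative_trapezoid(sigma_t, z, initial=0.0)   # int_{z_l}^{z} sigma_t(s) ds
    return tau / tau[-1], tau[-1]

# Example: a slab of thickness 3 with a strongly absorbing inner third
z = np.linspace(0.0, 3.0, 601)
sigma_t = np.where((z > 1.0) & (z < 2.0), 10.0, 1.0)
z_prime, sigma_t_bar = optical_depth_coordinate(z, sigma_t)
print(sigma_t_bar)       # total optical thickness, approximately 12
```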
In this chapter, let D := [z_l, z_r] × [−1, 1] and let

    (u, v) := ∫_{z_l}^{z_r} ∫_{−1}^{1} u v dμ dz    and    ‖u‖ := √(u, u)

denote the standard inner product and associated norm of L²(D).
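This inner product can be approximated by a tensor quadrature rule, which is convenient for checking computations in this chapter numerically. The sketch below (numpy assumed) uses the trapezoidal rule in z and a Gauss-Legendre rule in μ; both choices are illustrative.

```python
import numpy as np

def l2_inner_product(u, v, z, mu, w):
    """Discrete L2(D) inner product (u, v) = int_z int_mu u*v dmu dz on
    D = [z_l, z_r] x [-1, 1].  u, v are arrays of shape (len(z), len(mu))."""
    g = (u * v) @ w                                     # angular integral at every z
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z)))   # trapezoidal rule in z

z = np.linspace(0.0, 1.0, 401)
mu, w = np.polynomial.legendre.leggauss(16)
u = np.outer(np.sin(np.pi * z), np.ones_like(mu))       # u(z, mu) = sin(pi*z)
print(np.sqrt(l2_inner_product(u, u, z, mu, w)))        # exact norm is 1
```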
2.1 Problems with Direct Least-Squares Approach
In the following, we give an explanation as to why a Least-Squares Finite-Element
discretization applied to (1.6) using piecewise linear basis functions in space does not, in
general, yield a correct diffusion limit discretization. We recall that the Least-Squares
variational formulation of (1.6) is given by

    min_{ψ∈V} F(ψ),    with    F(ψ) := ∫_{z_l}^{z_r} ∫_{−1}^{1} [ Lψ(z, μ) − εq(z, μ) ]² dμ dz,    (2.2)

and

    V := { v(z, μ) ∈ L²(D) : μ ∂v/∂z ∈ L²(D), v(z_l, μ) = 0 for μ > 0, v(z_r, μ) = 0 for μ < 0 }.
In (2.2) we used the parameterized form (1.11) of the transport equation, since it is better
suited for a diffusion limit analysis.
For the discretization of (2.2), the minimization of the Least-Squares functional is
restricted to a finite-dimensional subspace V_h ⊂ V. Without loss of generality, in the following
analysis we use a P₁ approximation for the discretization in angle, which assumes that
the angle dependence of the solution has an expansion in the first two Legendre polynomials.
One reason for this is that a semi-discretization only in angle by a P₁ approximation
results in a diffusion equation [14, Section 8.3]. Second, we analyze here the behavior of the
discretization in diffusive regimes, where according to (1.13) the exact solution is nearly
independent of angle; thus, a P₁ approximation allowing a linear dependence on angle is
sufficient. For the discretization in space, we use piecewise polynomials on a partition T_h
of the slab. Altogether, this results in the discrete space

    V_h := { v_h ∈ C(D) : v_h(z, μ) = φ₀(z) + μ φ₁(z), where φ₀, φ₁ ∈ P_r(T_h);
             v_h(z_l, μ) = 0 for μ > 0, v_h(z_r, μ) = 0 for μ < 0 },    (2.3)

where P_r(T_h) denotes the space of piecewise polynomials of degree ≤ r on the partition T_h
of the slab.
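Members of the space (2.3) are simple to evaluate. The sketch below assumes piecewise linear elements (r = 1), represents φ₀ and φ₁ by their nodal values, and chooses data that satisfy the boundary conditions in (2.3); it is meant only to make the structure of V_h concrete.

```python
import numpy as np

def eval_vh(z, mu, nodes, phi0_nodes, phi1_nodes):
    """Evaluate v_h(z, mu) = phi0(z) + mu * phi1(z) from (2.3), with phi0, phi1
    continuous piecewise linear (r = 1) functions given by nodal values on `nodes`."""
    phi0 = np.interp(z, nodes, phi0_nodes)
    phi1 = np.interp(z, nodes, phi1_nodes)
    return phi0[:, None] + np.outer(phi1, mu)      # array of shape (len(z), len(mu))

nodes = np.linspace(0.0, 1.0, 11)
phi0_nodes = np.sin(np.pi * nodes)                 # vanishes at both ends, so the
phi1_nodes = np.zeros_like(nodes)                  # boundary conditions in (2.3) hold
print(eval_vh(np.linspace(0.0, 1.0, 5), np.array([-0.5, 0.5]),
              nodes, phi0_nodes, phi1_nodes))
```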
By the asymptotic expansion introduced in Section 1.2, the minimizer of the Least-
Squares functional can be characterized as follows.

Theorem 2.1 (Characterization of the Least-Squares minimizer)
Let the Least-Squares functional F and the discrete space V_h be given as defined in (2.2)
and (2.3), respectively. Suppose ψ_h ∈ V_h minimizes F restricted to V_h. Suppose further
that ε < 1 and that ψ_h has the asymptotic expansion in ε given by

    ψ_h(z, μ) = η(z) + μ δ(z),    with    η(z) := Σ_{ν=0}^∞ ε^ν η_ν(z),    δ(z) := Σ_{ν=0}^∞ ε^ν δ_ν(z),    (2.4)

where η_ν, δ_ν ∈ P_r(T_h) are independent of the parameter ε for all ν. We then have:

(i) δ₀(z) = 0.

(ii) η₀′(z) = −δ₁(z).

(iii) Let
Uh := {Vo £ IPr{Th) : Vo(zl) = ^Io(zr) = 0, rjo fulfills (ii) for some i$i £ IPr(Th)} . Then for all tjq £Uh: J ^vWo + Wo dz = *T J qr)o dz. Z\ Zl Proof. We first prove (i). Using expansion (2.4) in (1.11) we have Zij>h = j [Mo] + m'o + ft + Mi] + 0(e), and, therefore, with F(1>k) = £ evFv(iph), v--2 Zr F-2(i>h) = | J 6o(z) dz, Zl Zr F-ityh) = ^ J%(z)6o(z) + S0(z)61(z)dz, (2.5) and Fv(%l>h) independent of e for v > 0. For e < 1, it is possible to bound Jr('0/!) from above independent of £ by F(i>h) since tph minimizes F and 0 £ Vh. Therefore, we must have F-2(i>h) ~ 0 and F-i(ip) = 0, since otherwise F(iph) oo in the limit £ > 0, which contradicts (2.6). In combination with (2.5), we conclude that tfoCO = 0. . To prove (ii), by virtue of (i) we can restrict the minimization of F to the space ( OO OO wh := < wh £ C(D) : wh(z,n) = ^£"^(z) + //^V<5(z); v=0 V = 1 Wh(zi,fJ.) = 0 for fi > 0, Wh(zr,fj) = 0 for /z < 0 where tjv(z), 8v(z) £ IPr(Fh) are independent of £ for all v. A necessary condition for £ Wh to minimize F is that the first variation of F vanishes at rph for all admissible Wh £ Wh, that is, (Zi>h,Zwh) = (eq, Cwhj VwhEWh and V e > 0. (2.7) 15 For wh G Wh we have Cwh= \p-io++e[iiT)[+n26[->r 1182 +ar]o\ +s2 [ni2 + fi26'2 + /i63 + arn] + 0(e3). Therefore, (2.7) is equivalent to ZT 1 J Js $$vWo + Mo) + (?0<5i + dfidz + eli + e212 + 0(e3) zi -1 2r 1 H2qS[ + aqrjo dfidz + 0(e3), zx -1 (2.8) where zr 1 hJ J H2 (TlWi + Zl -1 Zr 1 h J J fJ-2 (V0V2 + M2) + (7o53 + M3) + (Mo + V^i) + (^3f?o + ^3<5i) Zl -1 +rjWi + V1&2 + od>[T)o + M'i + M2 + aMi] +H46[6[ + a2rjoTfo dfidz. Since (2.8) holds for all e > 0 and for all Wh G Wh, in particular for Wh = ij>k, it follows that ZT 1 Zr 0 = J J f!2 (rjon'o + 2r?D6i + Mi) dfidz = | J (rj'0 + 61) dz. Zl -1 Zl Thus, Finally, we prove (iii). Because of (ii), we can restrict minimization of F to the space Wh := {wh G Wh : rf'0(z) = 61(2)}. The choice w;t 6 Wh in (2.7) will zero out the 0(1) and 0(e) term in (2.8). Comparing the 0(e2) term on the left-hand and right-hand sides gives J g ^ivWi + Mi) + (M2 + 8262) + a (Mo + 77o<5i)] + r Mi + 2a2rjoTfo dz 0 dz (2.9) 16 for all qv 6 Wh with v > 0 and for all 6 G Wh with v > 1. From the choice 6[ = 0, ?j0 = 0, T)[ = rj[ and S2 = S2, we conclude that ZT J (jfi+Sz) dz = 0 => rj[ = -S2. (2.10) ZJ Substituting (2.10) into (2.9) results in J (^i^o + rjo^Cj + ^i6i + Z^VoVo dz = J ^qS[ + 2aqrjo dz. z\ Z\ Choosing <5i = 0, then integration by parts leads to %r Zr j a6irj'0 + 2a2T)Q-qQ dz = 2a J qr]o dz, Zl Z\ which with (ii) and after division by 2a becomes zr z r J \vWo + Wo dz = J qr)o dz. (2.11) Zl Z\ Because of the choice wj, G Wh, equation (2.11) holds for all 770 G Uh. Q One major implication of Theorem 2.1 is that, when tj(z) and <5(z) are continuous piecewise linear functions, (ii) can only be fulfilled if rjo is a linear function. Otherwise, Si = t?0 is a step function, which would not be continuous. Taking the boundary conditions into account, it follows that rjo = 0. Therefore, Uh = {0}, so that (iii) is a vacant statement and does not contradict the fact that rjo = 0 is a solution. Consequently, in the diffusion limit e 0 the discrete minimizer i>h converges to iph = 0, independent of the choice of the right hand side q. This shows that the Least-Squares Finite-Element discretization of (1.6) with linear basis elements in space does not give a correct diffusion limit approximation, except in the case q = 0. For a different way of proving this result, we refer to (Manteuffel and Ressel [38]). 
On the other hand, for piecewise polynomial basis functions of degree r > 1, con- dition (ii) does not restrict rjo to a linear function. Therefore, Uh contains also nontrivial functions, so that (iii) implies that rjo is a Galerkin approximation of the diffusion equation + = q. Thus, the Least-Squares Finite-Element discretization with piecewise poly- nomials of degree r > 1 yields a correct discrete diffusion limit solution. However, numerical results for a discretization in space by piecewise quadratic basis functions show that applying a scaling transformation (introduced in the next section) prior to the discretization enhances the accuracy. 2.2 Scaling Transformation In this section, we introduce a scaling transformation that is applied to the transport operator prior to the Least-Squares discretization. This scaling transformation plays a key role in this thesis, since it guarantees the accuracy of the Least-Squares discretization in 17 diffusive regimes even for simple Finite-Element spaces, such as spaces using continuous piecewise linear elements in space (see Section 2.5). To motivate the scaling transformation we introduce the moment representation of the flux. Let Pi(fj) denote the 1-th Legendre polynomial. The normalized Legendre polynomials pi(fi) := y/2l + 1 Pi(p) form an orthonormal basis of L2{[ 1,1]): l \ JPk(p)pi(p)dfi = 6ki, (2.12) -l where Ski denotes Kronecker delta, i.e, Ski 1 for k = l and Ski = 0 otherwise. Assuming that il>{z,n) 6 L2([1,1]) for all z 6 [z;, zr], then ij) has the following expansion (moment representation) in angle: CO HZP) = Mz)pi(p)< (2-13) 1=0 where the Fourier coefficients i{z), which are called moments in neutron transport theory, are given by l j'l>{z,p)pi(p)dP- (2.14) -l We see directly that the projection operator P, defined in (1.7), is a projection onto zeor moments (Pip = o), the operator Pfi is a projection onto first moments (Ppip = and the operator (IP) is a projection onto moment one and all other higher moments. With the concept of the moment representation, the diffusion expansion (1.13) leads to the implication that, in diffusive regimes, only moment zero and one are the important components of the solution. Because of Ceas Lemma (1.24), the solution of the Least-Squares discretization can be viewed as the best approximation to the exact solution in the discrete space Vh with respect to the norm \/a(-, ) := < £,£ >. However, the different terms in the operator £, as defined in (1.11), are unbalanced (there are 0( j), 0(1) and 0(e) terms), so that different components of the approximation error are weighted differently in \/a(-, ). The leading term of £ is ~(I P), which means that the error in the higher moments is weighted in this norm very strongly in diffusive regimes (very small e), even though this part is not important according to the diffusion expansion (1.13). On the contrary, the error in moment zero, which is the important part in diffusive regimes, is hardly measured in the norm y/d(-, ), since it is weighted by e. The basic idea is, therefore, to scale equation (1.11), thus changing the weighting in the norm used in the Least-Squares discretization to determine the best approximation to the exact solution in the discrete space. Define for r 6 JR+ the following scaling transformation and its inverse: S~P + t(I-P), S-1 =P + ^(I-P). 
(2.15) After applying the scaling transformation S from the left and dividing by e, equation (1.11) becomes 1 ~ 1 (9?/} 7* Cil> := -SCi> = -Sii-£ + -^(I-P)4> + aPi> = qs, (2.16) 18 where q, Sq and l ^ dip \ dip t dip = -Pn-g- + J P)^. e dz £ dz £ dz Clearly, choosing r = 0(e) will increase the weight for moment zero and reduce the weights for the higher moments. Equation (2.16) can be balanced further by a scaling transformation from the right. Let the domain of operator C in (2.16) be the Hilbert space V. Then we define the space V by V := S'-1 V, so that ^ _ v = S_1v for all £V and Sv = v for all v E V. \ J Scaling (2.16) also from the right results in ^ 1 f)tb 7^ CSS~liP = CSiP = -SiiS^f- + -r{I P)i> + aPiP = qs, £ OZ £* (2.18) where -SfiS = -(r t2)(Ph + fiP) + e £ For r = 0(e) we have f E 0(1), so that in (2.18) the derivative of moment zero and one and the moments themselves are weighted equally. Moreover, we point out that the double-scaled operator CS can be bounded independent of £. In the Least-Squares context, the additional scaling from the right can be avoided, since min {CSiP qs, CSip qs) <=> min {Cip qs,Cip qs), (2-19) which will simplify the boundary conditions and so also the computations. Further, for slab geometry, because of transformation (2.1) we may assume without loss of generality that at and parameter £ are constant in space. However, for higher dimensional problems, this cannot be established, so that £ = e(r). For inhomogeneous material, e(r) is in general discontinuous, so that the scaling parameter r, which was chosen to be 0(e), would be discontinuous. To perform the scaling would then require to prescribe jump conditions in the scaled solution v across material interfaces. Therefore, we use the additional scaling from the right only as motivation for the choice of the scaling transformation and as a tool in the theory in Section 2.4, where we exploit the nice form of the double scaled operator (2.18). For another way of motivating the scaling by way of the moment equations, we refer the reader to Manteuffel and Ressel [38]. As outlined in Section 1.4, a necessary condition for ip E V to be a minimizer of the Least-Squares functional (2.19) is that the first variation vanishes at ip, which results in the problem: find ip EV such that a(ip,v) := (Cip,Cv) = (qs,Cv) Vu G V. (2.20) For a discretization of problem (2.20), the bilinear form a(-, ) is restricted to a finite di- mensional subspace Vh C V. In the remaining of this chapter, we analyze the error of this discretization for various subspaces Vh. 19 2.3 Error Bounds for Nondiffusive Regimes In this section we establish bounds in an unsealed norm for the discretization error of the Least-Squares discretization. However, in this norm it is not possible to prove V- ellipticity and continuity of the bilinear form (2.20) with constants independent of parameters e and a. In diffusive regimes, where e is very small, these bounds blow up and are therefore useless. Nevertheless, the bounds for diffusive regimes that are derived in Section 2.5 are only valid for e < l/\/3, so that the bounds in this section can be used to cover the range £ > 1/v^- As outlined in Section 1.4, the first step on the way to bounding the error is to prove V'-ellipticity and continuity of the bilinear form a(-,-) in some norm. From the view of standard elliptic boundary value problems, the choice V = ff1([z/) zr]) x L2([ 1,1]) (Adams [1]) with the norm l?,o //(£) +v3ilMlz seems natural. 
However, it is easy to see that the bilinear form a(-, ) cannot be bounded from below in this norm. Let Vk := \/2 sin(&7rz) B(/x) with B(ji) := /~3~ 6+li Y 26 S f£tzJL V 26 6 for (i £ [6,0] for fi £ [0, <5] otherwise Then, for all k £ IN, we have Vk £ i?1([z/,zr]) x L2([1,1]) and ||fc||i,o = (kit)2 + 1. Some simple calculations show that , . 1 + r2 (kir)262 2r2 + 2s4a2 a[V, V) < ----5----------- + --------7----- Choosing 6 = then the bilinear form a(-, ) is bounded for all k while ^lim [|vjb||i,o = - Thus, there is no lower bound for a(-, ) in the norm 11-1]! 0. The next obvious choice is Ml|2:= dv lYz + V := {u £ C(D), v(zi,fi) = 0 for (i > 0, v(zr,fi) = 0 for fj, < 0}. (2.21) Closure here is with respect to the norm |j|-[[[, so that V is a Hilbert space. In the following, we bound the Least-Squares discretization error in norm (2.21) for various Finite-Element spaces. 2.3.1 Continuity and V-ellipticity Before we establish continuity and V-ellipticity of the bilinear form a(-, ), we sum- marize some simple properties of our operators. 20 Lemma 2.2 (Properties of P, S and fi-^) For all u,v G V, we have: (i) (Pu,v) = {u,Pv)i and {(I P)u, v) = {u, (I P)v); P2 P\ and (I P)2 = (I P). Thus P and (J P) are orthogonal projections; (ii) {Pu,v) = (Pu,Pv)\ and {(I P)u,v) = ((/ P)u,(I P)v}\ (0 IHI < -Sv e for scaling parameter r = e and e < 1. (iv) ||H|2> - \W -PMl) ; (v) (y,v)>0; l Proof. (i): Zf 1 1 (Pu, v) = f f Pu v dfidz f Pu f v d\idz Z\ -1 Z\ -1 Zf zr 1 = f 2Pu Pv dz f Pv f u d\idz Z l Zl -1 z r 1 = f f u- Pv d^dz = (u, Pv), Zl -1 and the second identity follows directly from the first. From the definition of P, it is obvious that P2 = P and, therefore, (I P)2 = (I P). (ii) : follows immediately from (i). (iii) : IHI2 = ||Pv||2 + ||(I P)u||2 < fHI^II2 + Wi1 ~ -PHI2 = Il'^SuH2, since e <1. (iv) : Zr 1 = l \\Pv\\2 + 2 J J n2Pv (I P)v dfi dz + MI P)v Zl -1 21 The mixed term can be bounded by the Holder inequality as follows: zr 1 J J n2Pv (I-P)vdfidz Zi -1 Zr 1 < J \Pv\ Jn[ji\(I P)v] dfidz *i -i Therefore |fi(I P)V>|2 d/i dz 1/2 IIHf > l \\Pv\\2 ~ ||P*|| ||/i(J P)*|| + MI P)v IM-IK/-p)|| 2 (v): Applying integration by parts with respect to z, we get Zr 1 1 Zr 1 J J /i^ -v dfidz = J (J, [v2(zr,n) v2(zi,n)] dji-J Jn^-vdfidz- Z\ 1 1 Z\ 1 Taking into account the boundary conditions for v E V, it follows that l fiJzV'} = \ J fI[v2(Zr>fJ')-v2(zhlJ')] dV -1 = ^|/fiv2(zr,fi) dfi- J^/iv2(zi,ii) dfij > 0. It now easily follows from the Cauchy-Schwarz inequality that the bilinear form a(-, ) is continuous in the norm |||'|||, since for any u,v G V Ku>)l = ||(£u,£v)| < l|£ll ll^ll < 1 + T du dz , T + £ a ii i + M 1 + T dv + t + sza HI < Ce lllulll IIIHII. Here Cc '= step. (ir)2 _j_ ^zP_gPj I ^ ancj we used the discrete Holder inequality in the last 22 We prove now V-ellipticity of the bilinear form a(-, ) when a ^ 0. Lemma 2.3 (V-ellipticity for a 0) Suppose a^O and let r = e^fa. Then there exists Ce > 0 such that, for all v G V, a(v,v) > Ce ||M||2, where Ce := min {^, a, a2}. Proof. We have a(v, v) = (jCVyjCrV) 1 dv 2 , dv e2 + a V~PT, + -t\\(I-P)vtf + a>$$Pv\\ (2.22) The second mixed term can be written using (ii) of Lemma 2.2 as () = ) - p) According to (v) of Lemma 2.2, the first term here is always positive and the second term cancels with the first mixed term in (2.22), so that a(v,v) = 1 dv 2 . dv e* + a + ^\$$I-P)v\\2 + a2\\Pv\\ > Ce IIM with Ce := min { 77, a, a2}, which proves the lemma. 
For the more difficult task of establishing the V-ellipticity of the bilinear form a(-, ) when a = 0, we need the following Poincare-Friedrichs inequality. Lemma 2.4 (Poincare-Friedrichs Inequality) For any v G V, we have \\flV < Proof. We have f dv(s,n) r torf- Zl < f dv(s, fi) ~Jtai-0 . Z (2.23) 23 < < z 1 zi z r I 9v(s,n) ds dv(s,n) ds ds for fj, > 0 ds for fx < 0 Zt / 2>r < J dz < (zr ~ Zi)1/2 | J |/i dv dz 1/2 dz Taking into account that assumption (zr z/) = 1 implies IIHI2 < we obtain the lemma. We are now in a position to establish Lemma 2.5 (V'-ellipticity for a = 0) Suppose that a = 0 and 0 < e < 1, and let dv T = yf' 2 + 52 ^ = 7 + V1 ~ P)/*£ + 72 (I-P> + Pv- Then there exists Ce > 0 such that, for any v G V, a(v>v) > Ce [|MI|2 i where Ce = ^54. Proof. Recall that 1 dv -Pp-z~ £ dz £ Because of (i) in Lemma 2.2, we have a(v,v) = {£v,£ v) 1 / dv dv\ r2 / dv 2r2 / f)v \ <(' P>> V ~ P + IF {(P~ P)^ V ~ P>) Analyzing the mixed term and using (i) of Lemma 2.2, we see that [v-^TzV- F}") = ({I~ p)pf?) = (''I' ) (P"S'p 24 The first term is always positive according to (v) of Lemma 2.2. Consider the following arithmetic-geometric inequality: for any rj E 1R+ and for any a, be 1R, 2ab < -qa2 + We can thus bound the second term according to P^pv)< dv P^z ll-Pwll 2 dv ^Tz Therefore, the bilinear form a(-, ) can be bounded from below by a(v, v) > dv 2 T2 , a dv ^z H2 2 U-F>S +£ll (I ~ PM2 ~ ^\\Pv\\2 . Defining 6 := [|Pu||2 l|2 dv I T;=Jtei /Sir' SO that q ( T)_ Wit the above bound thus simplifies to a(v,v) > Ci dv % + ^2 |M|2 with Ci = ^2 {j--Jt,7 + t2^1~7^) ft ?(£>-*>-£), (2.24) By proper choice of 77, we now need only establish that Ci and C2 are positive. Unfortunately, for large enough 6, C2 will be negative, so we will need in this case to readjust the terms in (2.24), which we do by way of the Poincare-Friedrichs inequality. Case 1: S > y§: From the Poincare-Friedrichs inequality (2.23) and (iii) of Lemma 2.2, we conclude that dv Tz >\\H\2>[^\\Pv\\-MI-P)v\ Since fi E [1,1], then clearly ||/i(J P)v[| < ||(/ P)u||. Therefore, ||P|| ||ai(J PHI > ^ ||P|| ||(I P)VII > 0, where the last inequality follows from the assumption 8 > y| > | since (2.25) i||Pu||2>||(J-PH|2^35>(l-^) >-i 25 From (2.25), we get -J= \\Pv\\ MI pm) 2 > ||Pt,|| II(I P)vI It then follows that IHJlf Thus, We now use (2.26) to rewrite (2.24) as a(v, v) >[Ci-j >|C1 + 5? dv 1 dz dv "Tz + 2e2 dv Tz + c2 ihi + l^.+ C2)|M|2 V26e2 Choosing 7] = -j- and using the fact that 6 < 1 results in T^ 1 / +Ci = 26s2 and = ?(?(1-)+,(s-b))^^> c-£*-?(i(1-tJ'(1+S)) + tJ >yi since t 1/^2 + f§, so r2 (l + <1. Poo/s O* X / i^ Plinrvoinrr m ---- 0/1 Case 2: <5 < 1|: Choosing 77 = 24s, for Cl and C2 in (2.24) we obtain that and ' r2 / 25 \ r2 / 12 25\ r2 C2 e4 ^24 j e4 V1 13 24) 26s4 > Ci = \ (7 (1 24t2) + r2 (1 7)) > ^ > - since 24r2 = 7^2 < 1 for £ < y/ff (2.26) 26 Thus, altogether we have a(v, v) > Ct 2 with which completes the proof. From continuity and V-ellipticity of the bilinear form a(-, ) it follows directly from the Lax-Milgram Lemma (see Section 1.4) that problem (2.20) and all of its discretizations are well posed. The next step is to obtain discretization error bounds for a variety of discrete subspaces Vh, which is done in the next subsection. 2.3.2 Error Bounds As outlined in the introduction, continuity and P-ellipticity of the bilinear form a(-, ) lead directly to Ceas Lemma (1.24). 
Therefore, bounding the discretization error ||\ij) VViIII is reduced to the problem of bounding min |||?/> /i|||, which is a problem of vK&Vh approximation theory and depends on the choice of the finite-dimensional space Vh. Here we consider two main classes of discrete spaces Vh. The first consists of spaces with functions that can be expanded into the first N normalized Legendre polynomials with respect to the direction angle p and are piecewise polynomials of degree r in z on a partition Th of the slab [z\, zr\. This choice of the finite dimensional space Vh corresponds to discretization by a spectral method in angle p and a Finite-Element discretization in space. In transport theory the spectral discretization in angle with the first N Legendre Polynomials as basis functions is also called a -P/v-i discretization in angle. For any f(z) £ Hm([zi,zr]), with 1 < m < r+ 1, let HZf{z) denote the interpolant of f(z) by piecewise polynomials of degree r > 1 on a partition of [z\, zr\. It then can be shown (Johnson [25, p. 91]) that \\f(z) nj(z)\\LH[zitZr]) < Chm Wf^\{z)\\L^[zuzA) [f(z)-nzf(z)] < Ch! £2([*.,*r]) 1 /(m)(z) (2.27) where h is the maximum cell-width of the partition and the constant C is independent of h and /. Further, for any g(z, fi) £ Hm{[zi,zr]) x H2([ 1,1]) we define JV--1 n;v£f(z,M) := XI with Mz) r=o l \ j 9(z,p)pi(p)dv -i (2.28) as the truncated expansion of g into the first N normalized Legendre Polynomials pi(fi) (see (2.12) in Section 2.2). Note that the normalized Legendre-polynomials form an orthogonal basis of L2([ 1,1]) and are the eigenfunctions of the Sturm-Liouville operator (Gottlieb and Orszag [21, p. 37]) £sPi(n) ~ A. \(i dP'(p) dp _ ' dp = 1(1+ l)pi(p). (2.29) 27 Then the error of the truncated expansion can be bounded as follows Lemma 2.6 (Truncated expansion into Legendre polynomials) For r > 0, let g(z,[i) 6 Hr([zj,zT]) x H2([ 1,1]) and let 11// be defined as in (2.28). Then have: IIIM < (ii) <_______ - '('+1) (iii) For any m Vro < r and Vz G [zi, zr]; dmg C dmg dzm 11jv dzm ~ N s dzm (2.30) with C independent of g and N. Proof. (i): II// is orthogonal projection with respect to the inner product of L2([1,1]), so ||n/n7||i2([_li;l]) < IMIL2([_u]), and hence ||IIjvff|| < ||ff||. (ii): By. definition (2.29) and integration by parts, we have jJrHz) =lf ~ 21(1 + 1) / Cs dz P,(M) dp ~ ^21(1 Therefore, < /(/+!) Cs + 1) dmg Cs dmg dzm £2([-Ul) (2.31) Sz (iii): Since the Legendre Polynomials are an orthogonal basis, from (2.31) we obtain dmg dmg - II// dz dzn = 2£ |^m)WI2< i2([-l.l]) l=N Cs dmg dzr £2([-Ml) 1^2 For / > 1 we have j < so that the sum can be bounded by OO o - | . OO ^ TO ^ y I 4 4 Therefore with C = yj§, which proves the lemma. dmg <9mff ^ C dmg <9zm N dzm ~ N Ls dzm 28 Theorem 2.7 (Finite-Element in space, spectral discretization in angle) Suppose that 7ft is a partition of the slab [27, zr\ with maximum mesh size h. Let V be given as defined in (2.21). Define {JV1 vh e c(Dy, vh = J2 (*) e JPtVh) for / = o,..., n -1 1=0 vh(z,,n) = 0 for n > 0, Vh(zr,fj,) = 0 for n < 0 where 2Pr(7/l) denotes the space of piecewise polynomials of degree < r on the partition 7ft. Suppose 1 < m < r + 1 and let ij> G Vn (Hm([zi, zT]) x H2([-1,1])) be the solution of (2.20) and iph £ Vh be the solution of (2.20) restricted to Vh. Then VIII \\Csi>\\h0 + C2h m1 dmip dz" Proof. From Ceas Lemma (1.24), we have WH-MW < min Vh£Vh HIV1 vh < |||V> Hivn^lll Now note that |||?;||| < Q. 
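The normalization of the p_l from Section 2.2 and the truncated expansion Π_N of (2.28) can be checked numerically. The sketch below (scipy assumed) only illustrates how the truncation error decays with N for a smooth function of the angle; the rate for less regular functions is what Lemma 2.6 quantifies.

```python
import numpy as np
from scipy.special import eval_legendre

def p(l, mu):
    """Normalized Legendre polynomial p_l = sqrt(2l+1) P_l, so that
    1/2 int_{-1}^{1} p_k p_l dmu = delta_kl (cf. (2.12))."""
    return np.sqrt(2 * l + 1) * eval_legendre(l, mu)

def truncated_expansion(g, mu, w, N):
    """Pi_N g = sum_{l=0}^{N-1} phi_l p_l with moments phi_l = 1/2 int g p_l dmu."""
    out = np.zeros_like(g)
    for l in range(N):
        pl = p(l, mu)
        out += 0.5 * np.dot(w, g * pl) * pl
    return out

mu, w = np.polynomial.legendre.leggauss(32)
g = np.exp(mu)                                     # a smooth test function of angle
for N in (2, 4, 8):
    err = np.sqrt(0.5 * np.dot(w, (g - truncated_expansion(g, mu, w, N)) ** 2))
    print(N, err)                                  # the truncation error decays rapidly with N
```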
Therefore, by (i) of Lemma2.6, (2.27) and (2.30), we conclude III n^n2v>||| < I]i> nJvV,llij0 + - n^V,)lli,0 which proves the theorem. The second main class of finite-dimensional spaces considered here are formed by functions that are piecewise polynomials in space z as well as in angle fi. This choice corresponds to a Finite-Element discretization in both space and angle with rectangular elements. Suppose that 7ft is a partition of the computational domain D = [27,27.] x [1,1] into rectangles T = [zj,zj+1] x [//, fJ,v+i] of maximum diameter h. To be able to handle the boundary conditions properly, we assume in addition that (27,0) and (zr, 0) are nodes of the triangulatiori 7ft. By 7ft we define the discrete space: dmip dzm yh vheC(D); *ft|T= 0 (2.32) Vh{zi,n) = 0 for n> 0, Vh(zr,fi) = 0 for fi < 0 29 For all v £ V, let H/,u £ Vh denote an interpolant1 of v with respect to the partition 7*. It can be proved (Ciarlet [16, Theorem 16.2]) that, for v £ V n7Tr^"1(£)) the following bound for the interpolation error holds: 11^ ~ < Chr+1~m \v\r+1, (2.33) where 0 < m < k + 1 and | \h-+'l(d) is the semi-norm of Hk+1(D) (Adams [1]). Combining Cea's Lemma and (2.33), we get Theorem 2.8 (Finite-Elements in space and angle) Let Vh, h be given as defined above. Suppose ip £ V D Hk+1(D) is the solution of (2.20) and let iph £ Vh be the solution of (2.20) restricted to Vh defined in (2.32). Then we have: \U-M\ < \f^chr\ip\Hr+1(D). Proof. By Ceas Lemma, we need only to bound |||V< II/1i/,|||- Note that, for all v £ V H Hr+1(D), IIMII < Hwllj 0 < ||v||ffl(1J). Thus, using (2.33) with m = 1, it follows that |||-0 Hft'0111 < chr \i>\Hr+^D), which proves the theorem. We point out that the error bounds in Theorem 2.7 and Theorem 2.8 depend on the ratio In the V-ellipticity bounds in Lemma 2.3 and Lemma 2.5, the scaling parameter r was chosen to be 0(e). Therefore, when e is small, Ce is 0(a) when a: / 0, while Ce is 0(1) when a = 0. In addition, for r = O(e), the continuity constant Cc is O(j-), so we have = O(j-), which blows up for diffusive regimes, where e is very small. However, numerical results show that the Least-Squares discretization of the scaled transport equation stays accurate in diffusive regimes. Thus, we conclude that the bounds, derived in this section, are not sharp enough to reflect the accuracy of the Least-Squares discretization in diffusive regimes. In order to obtain error bounds that do not blow up in diffusive regimes, it is essential to prove continuity and F-ellipticity of the bilinear form a(-, ) with constants independent of parameters e and a. This is done in the next section with respect to a scaled norm. 2.4 Continuity and V-ellipticity with respect to a scaled norm In this section, which is the central part of this thesis, we prove continuity and V-ellipticity of the form a(-, ) in (2.20) with constants independent of parameters e and a. This is the foundation for the bounds in Section 2.5 of the Least-Squares discretization error that do not blow up for diffusive regimes. Throughout this section, we assume that t = e and a < 1. In order to obtain continuity and V-ellipticity with constants independent of e and a, we use a scaled norm. To motivate its choice, we look at the double-scaled (from left and right) 1For r > 2, there axe many different interpolants, depending on the choice of the support abscissas and support ordinates on the rectangle, which are not specified here. For an overview of commonly used interpolants for rectangles, we refer the reader to Ciarlet [16, p. 
129]. 30 transport operator (2.18). Let V denote the domain of the single-scaled (only from the left) transport operator (2.16) and V = S 1V the domain of the double-scaled transport operator (2.18). Defining Q := jSfiS = (l-e)(Pfi + fiP) + £fil, (2-34) we see that the norm 2^ + IHI2 (2.35) for v £ V would be a natural choice for bounding the double scaled bilinear form a(u,v) := (£Su,£Sv) . (2.36) However, because of the reasons mentioned in Section 2.2, it is desired to use the single- scaled transport operator for the computations. Therefore, using the relation v = 5_1 V, we derive from (2.35) the following norm for v £ V: II = jSfiSS- -i dv dz ii 1 dv -Sfi-K- £ OZ 2 + = 1 dv -Sfi-K- £ OZ 2 + + lls-M Pv+ -(I-P)v £ (I-P)v + \\Pv\\2 =: \v We define (2.37) V := {v £ C(D); v(zi,fj,) = 0 for fi > 0; v(zrifi) = 0 for (i < 0}, (2.38) where closure is with respect to the norm |] ||y, so that V is a Hilbert space with respect to the inner product {u,v)v := /^sfi^,^Sfi^\ + /^(I-P)u,^(I-P)v\ + (Pu,Pv). At a first view, the norm ll-llv also seems to be useless for error bounds when e > 0. However, suppose we could show that the error of discretization is bounded by \W ~h\\v ,qs,h,N), with lim qst h, N) - 0. iV oo, h+ 0 Then, in particular, every part of the norm ||^i i>h\\y is bounded by C, so that ||(7 P)(ip iph)\\ < eC, for example, which means that the error in the higher moments is decreasing with e. First, we prove continuity of the bilinear from a(-, ) in the norm H'lly. From the Cauchy-Schwarz inequality, it follows directly that, for any u,v £ V, |a(u,u)| = |(£u,£u)| < ||£u|| H-Ci'H . 31 Employing the discrete Holder inequality and using the assumption a < 1, we obtain IIjCuII < Isc- e *dz + -(I-P)u £ + PM 1 du 2 1, -Sfi-K- e dz + + ll-Pu; \ 1/2 I2 =V3 < V3 Thus, for all u, v 6 V, |a(u,v)| < Cc ||||v |M|V with Cc = 3. To prove V-ellipticity of the bilinear from a(-, ), we exploit the convenient form of the double-scaled transport operator and prove first that the double-scaled bilinear from a(-, ) in (2.36) is V-elliptic. The V-ellipticity of the bilinear from a(-, ) then follows easily as in Corollary 2.12. In order to prove V-ellipticity of the bilinear from a(-, ), we need the following lemmas. Lemma 2.9 For all u, v 6 V, v £ V and e. < 1, we have (i) ^ll^ll e||(J p)v\\ < IMI; (ii) {QTz'*) -0; (iii) (Poincare-Friedrichs inequality) IIHI < Proof. dv 1 dv dv < ^Yz < -SfiS £ dz QY (2.39) (i): Since \\fi(I P)w|| < ||(I P)v[| and ZT 1 Zl -1 l J J /x2 (Pv)2 dfidz = J (Pv)2 J /i2 dfidz = J (Pv) 1 Zl dz z r 1 \j J (Pv)2 dtidz=\\\Pv\\2, Zl -1 then iihi = iia*^+- p)]v ii > ii^ii c\w p)* i: >^npii-eii(/-p)% (ii): From (i) in Lemma 2.2, it follows that (Su,v) = (u,Sv) Vu, v E V. Therefore, using (2.17) leads to | (f?s) = (H5!?') =; (^st) = l ('£) where the last inequality follows from (v) of Lemma 2.2. 32 (iii): From the Poincare-Friedrichs inequality (2.23) proved in Lemma 2.4, we have ||^|| < llpfell- Using (iii) of Lemma 2.2 and the relation (2.17) results in dv 1 dv 1 dv dv "T* < -SV7T e dz -SfiS=- e az The following technical lemma is tedious but simplifies the proof of the major result, Theorem 2.11. Lemma 2.10 Suppose 0 < e < Then, for any 6 £ [0,1], there exists 6 > 0 such that H(b, 5) := | y^6 -(1+ £)&)' + 46(1-5)+ (l J) b + 6 j < 0.988, where s := j^1/2 e\/3(l 6)^2j In particular, for 5 < 0.875, we can choose 6 = 0. Proof. For 6 < 0.875, we choose 6=0 and get 77(0,6) = i |\/46 362 + 6} < 77(0,0.875) < 0.986, since 77(0,6) is monotone increasing for 6 £ [0,1]. 
For 6 > 0.875, using the assumption e-y/3 < 1, we have s = [61/2 eV3(l 6)1/2]2 > [61/2 (1 6)1/2]2 = 1 2^6(1 6) =: (3. Suppose we restrict the choice of 6 to 6 < |6. It then follows that < (- (i+1) <) < (- (*+1) o s (* C1+f) 0 since s < 1. From (2.40), we conclude that (s-(1+l)t)"s (i-(i + f)t)2. Therefore, (2.40) H(b, 6) < U J (s (i + £) &) + 46(1 6) + (l f'] + > =: 77(6,6). Simple calculus shows that 36 _ (3-/?) 36(1-6) 3 (3 + /?) (3 + /3)V P ~4 33 minimizes H and that b* > 0 if 6 > 0.875. After tedious but straightforward manipulation, we have that H(b*, 6) attains its maximum at 5* 0.893 and that H(b*, 6*) < 0.988. We are now ready to prove the central result of this section. Theorem 2.11 (F-ellipticity of a(-, )) Let a(-, ) and || ||~ be given as in (2.36) and (2.35). Suppose that 0 < a < 1, 0 < £ < ^=. Then there exists a constant Ce > 0 such that, for all a(v, v) dv Q-q^ + aPv + (I P)v >Ce + M2)=Ce (2.41) where Ce = 0.012, which is independent of e and a. Proof. We have d(v, v) = dv Q + aPv + (I P)v oz + a2||PS||2 + |](L-i>||2 +2 (£)+:2 (^,-p)") + a2\\Pvf + \\(I-P)v\\ +2a ( Qyz, v\ + 2(1 a) /q||, (I P)v (2.42) For the last term, we may write for any d G [0,1], by using (ii) of Lemma 2.2 and (ii) of Lemma 2.9: d (q^, (I P)v} + (1 -d) (Q^z, (I P)v} = d(Qtd)'d (Qt *>) + (1 d) (Qt V ~ P*) > -d (pQyz, Pv} + (1 d) ({I P)Q^z, (I P)v} . Substituting this into (2.42) and bounding the fourth term in (2.42) by (ii) of Lemma 2.9, 34 we get Z(v,v) > ||g|f|| +a*\\Pv\\* + \\(I-P)v\\* -2(1 a)d\(PQ§,Pv) | 2(1 a)(l i) |((J P)Qf, (I P)*>| which can be reduced by setting a = 0 to a(v,v) > ||Qf||2 + ||(7_P)l;||2 -2d | (PQ§, Pv) | 2(1 d) |((7 P)Q§, (I P)v) |. Defining <:=!£¥, so that (1 ) = IIMC, (2-43) 12 npqj?ii solhat fl T)_ii(f-p)osir Htf llelfir ' and using Cauchy-Schwarz inequality, we conclude from (2.43) that a(v, v) > Q dv dz + (1 5)||t)||2 2dV6y/j -2(i-d)V(T^)V(i^) tdv Tz (2.44) To maximize the lower bound in (2.44), we divide the region (5,7) E [0,1] x [0,1] into two triangles and choose d as follows: :-{i for 5 + 7 < 1 for 5 + 7 > 1 ' Next, we consider these two cases separately. Case 1: 5 + 7 < l,d = 1: For any 77 > 0 and any u,v and any norm || ||, the arithmetic-geometric mean inequality 2IMHMI < 7lMI2 +;HMI2 holds. We thus have a(v, v) > (1 777) 2 + 1 (2.45) It remains to choose 77 such that the terms (1 777) and (1 5 ^) in (2.45) can be bounded by a positive constant from below for all possible 7,5 with 7 + 5 < 1. For 5 < 0.5, we choose 77 so that (1 77?) = (l 5 0 , 35 which yields 77 = 6 + y/6*~+467 Applying Lemma 2.10, since 7 + 8 < 1 and therefore, 7 < 1 <5, then we have 71 = \{6 + V^+467} < \ + y/P + 45(1 5)} = H{ 0,8) < 0.988. Thus, from (2.45), the V-ellipticity of a(-, ) directly follows with Ce > 0.012. On the other hand, if 8 > 0.5, the second term in (2.45) can become negative. To keep it positive, we then rewrite (2.45) for any 6 6 [0,1] as follows a(v,v) > (1 b yr]) Q +b Q + (1 6 - ^dv 2 _dv Q9! + 6 % Since 8 > 0.5 and e < l/\/3 we have 0 < ^^=V6 ey/1 6^. 
We can now use the Poincare- Friedrichs inequality (2.39) of Lemma 2.9 and inequality (i) of Lemma 2.9 to bound the second term by o _ 2 V6-ey/0^S) ||i;|p dv 2 r > imi2 > which results in a(v, H) > (1 b 777) + (- - b 5 8+-s----- 3 77 where Again, we choose 77 so that s := (v^-ev/Ml-^))2- (1 6 777) = (\ 8 + ^ (2.46) (2.47) which yields ___________________ - (6 + |s 8) + J(b+ fs 6)2 + 467 V = 27 ' Next, to attain a positive constant in the lower bound in (2.46), we need to show that for all possible 8,7 with 8 + 7 < 1, 8 > 0.5, a positive b can be selected so that 1 Gi{b, 8,7) 6 + 777 6 + - 6^ + 467 + fb + 8 - } < 1. Since Gi(6,8,7) < Gi(6,8,1 6) for 8 + 7 < 1, it then is sufficient to prove V6e [0.5,1] 36 >0 : Gi(6,8,1 8) < C* < 1. But this follows immediately from Lemma 2.10 with G* = 0.988 since Gi(6,8,16) = H(6,6). The V-ellipticity of a(-, ) then follows directly from (2.46) with Ge > 1 0.988 = 0.012. 36 Case 2: 5 + 7 > 1: Setting d = 0 in (2.44) and preceding as in case 1 results in 2 d(v,v)> (1-(1-7)77) dv ' dz +1-5- 1-6 For 6 < 0.5, we choose so that S+^+4(l-5)(l-7) V 2(1-7) Using Lemma 2.10, since 6 + 7 > 1, and therefore, 5 > 1 7, we then have 1 (2.48) (1 (1 7)77) = ^1 6 - - 7 > 1, and therefore, <5 > 1 7, w< (1 -7)V = \ |\A2 + 4(1 5)(1 7) + 5} < | {y<52 + 4(l-5)5 + 5} = H(0,6) < 0.988. The Vr-ellipticity of a(-, ) then follows with Ce = 1 0.988 = 0.012. On the other hand, for 6 > 0.5, we introduce, as in case 1, a parameter b [0,1] and use the Poincare-Friedrichs inequality (2.39) to conclude from (2.48) that a{v,v) > (1 b (1 7)77) S+-s- 1-6 f] (2.49) with s as defined in (2.47). Again, we first choose 77, so that (1 6 (1 7)77) = (l 6 + Is - which yields O + Is s) + \/(&+!s-<5) +4(1 <5)(1 7) V~ 2(1-7) In order to attain a positive constant in the lower bound (2.49), we need to show that, for all possible 6,7 with 6 + 7 > 1, 6 > 0.5, a positive 6 can be selected so that, G2(b,6,j) := 6+ (1-7)77 1 2 b+-s-5 2 + 4(l-)(l-7) + i + -3* < C* < 1. Since G^b, 6,7) < G2(b, 6,15) = H(b, 6) for 5+7 > 1, this follows directly from Lemma 2.10 with C* 0.988. Finally, from (2.49) with Ce := 1 C* = 0.012 the U-ellipticity of a(-, ) follows, which proves the theorem. From the U-ellipticity of the bilinear form a(-, ) the U-ellipticity of the bilinear form a(-, ) can be proved as follows. Corollary 2.12 ( U-ellipticity of a(-, ) ) Let a(-, ) and H-Hy be given as in (2.20) and (2.37). Assume that 0 < a < 1, 0<£< Then there exists a constant Ce > 0 such that, for all dGU, a(v,v) > Ce \\v\\2v, (2.50) 37 where C& = 0.012, which is independent of a and e. Proof. By the definition of the norm ||-||K in (2.37) and the relation (2.17) we have = ||i)||p. Therefore, using (2.41) in Theorem 2.11 we obtain for any v 6 V a(v, v) £v £v dfidz £Sv £Sv dfidz = a(v,v) > Ce KM 2 V > which proves the corollary 2.5 Error Bounds for Diffusive Regimes Using continuity and V-ellipticity of the bilinear from a(-, ) in the norm ||-||v with constants independent of a and e, in the following section we establish discretization error bounds that do not blow up in the limit e 0. We use the same discrete spaces introduced in Section 2.3.2. We first consider discrete spaces with functions that can be expanded into the first N normalized Legendre polynomials with respect to the direction angle fi (Pn-i discretiza- tion in angle) and are piecewise polynomials of degree < r in z on a partition 7), of the slab. 
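Before deriving the error bounds for these spaces, the role of the scaling in the norm can be made concrete with a small numerical illustration. The sketch below is not part of the thesis; the test function, the value of ε, and the quadrature order are arbitrary choices. It applies S = P + ε(I − P) to a function of the direction cosine μ and shows that ‖Sv‖ is dominated by the isotropic part Pv, which is exactly the weighting the scaled Least-Squares norm builds in.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Minimal sketch (not from the thesis): effect of the scaling
# S = P + eps*(I - P) on a function of the direction cosine mu in [-1, 1].
# Here P v = (1/2) * integral of v d(mu), i.e. the angular average.

eps = 1.0e-2                       # assumed value of the small parameter
mu, w = leggauss(16)               # Gauss-Legendre nodes/weights on [-1, 1]
w = 0.5 * w                        # normalize so that sum(w) = 1

def v(mu):
    return 1.0 + 0.7 * mu + 0.3 * mu**2   # arbitrary smooth test function

Pv = np.sum(w * v(mu))             # isotropic part (a scalar)
Sv = Pv + eps * (v(mu) - Pv)       # (S v)(mu) = P v + eps * (I - P) v

norm = lambda g: np.sqrt(np.sum(w * g**2))
print("|| P v ||     =", abs(Pv))
print("|| (I-P) v || =", norm(v(mu) - Pv))
print("|| S v ||     =", norm(Sv))  # dominated by the isotropic part when eps is small
```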
To combine Ceas Lemma (1.24) with the interpolation error bounds in (2.27) and (2.30) to obtain an error bound for this class of discrete spaces, we need the following lemma. Lemma 2.13 (Bound for commutator [IIjv£ Til at]) Let £ be the transport operator as defined in (2.16), £s the Sturm-Liouville operator as defined in (2.29) and II/y the projection operator as defined in (2.28). Suppose IV > 2, m > 0, and v £ V fi Hm+3(D). Then, there exists a constant C > 0 independent of e and a such that [BjvT TII/v] dmv dzm c dm+1v - N2 Lsdzm+1 (2.51) Proof. Recall that and that has the moment expansion (see (2.13) in Section 2.2) dmv dzm = £ Am\z)pi{p) with = \ ( 9 ;=0 Ti Note that and ftin /i , v v n n p = <^nm p n jf " dzm 0 dzm N-1 = n* 4, = <£im) = Am)pi = p 1=0 38 Therefore, [njv^-^Djv] = UN fi-^- fiUN OZ Oz Using the relation (see Chapter 4) P Pi (p) = k-1 p(_ i (fi) + hi pl+1 (fi), with h - l + 1 v/4(/+ 1)2-1 and p~i(p) = 0, we see that d dmv UHIMTzd^ =IIiT Pi+i Lf=o (=0 AT JV-2 1=0 1=0 On the other hand, d dmv dz dzm =/-x;V+v 1=0 = ^2 <{>\m+1)bi-iPi-i 1=0 N-1 + £#! 1=0 (m+l) , Thus, in combination with (2.52) we have dmv [n^-£ £Hjv] = (j)^+^ 6jv_i pn-i ~ ^n-i Pn = bN-i (4>p?+1'*pn-i - Now we notice that, for any integers k, l > 0, / (m+l) (2.52) (2.53) ZT 1 Zr J (<^;m+1)(z)) j p\{p) dpdz = 2 j (<^m+1)(z)) dz Zi -1 z I t) (m+l) < W +1)]2 Cs 3m+1v 3zm+! where the last inequality follows from (ii) of Lemma 2.6. Therefore, (2.53) can be bounded 39 as follows: [IIjv£ £II^r] dmv dz" N + V4AT2 1 \N(N 1) N(N + 1) £s dm+1v dzm+1 JV2- 1 £s dm+1v dzm+1 < 1 8 1 £s dm+1ip dz*+' y/3 3 N2 since N^_1 < O x 2, < |, which is valid for N > 2. We are now ready to prove the following error bound. Theorem 2.14 (Finite-Element in space, spectral discretization in angle) Suppose that Th is a partition of the slab [zj, zr\ with maximum mesh size h. Let V be given as defined in (2.38). Define Nl V ft ___________ Vh e cm vh=Y, WnOO; (*) e iPr(Th) for / = o,..., n -1; [=0 Vh(zi,fi) = 0 for // > 0 Vh(zr,fi) = 0 for /i < 0 > , where TPr{Th) denotes the space of piecewise polynomials of degree < r on the partition Th- Suppose 0 < e < 0 < a.< 1 and 1 < m < r + 1. Let ip E VflHm+3(D) be the solution of (2.20) with right-hand side qs E Hm+2(D). Let iph G Vh be the solution of (2.20) restricted to Vh. Then \\lp ihWv 2 + l(I-P)(lP-lPh) +\m-Tph)\\2 1/2 < 1 vc,; Ciu r ,ca C3hm dmg, dzm hm N2 Cs dm+1ip dzm+l > with Ci,C2, C3, C4 independent of a and e. In particular, (2.54) Proof. d(ip iph) dz Using Ceas Lemma (1.24) < eeh, \\(I P)(tp iph)\\ < eeh, < eft, \\P(ip iph)\\ < eft. and the U-ellipticity of the bilinear form a(-, ), from 40 Corollary 2.12 we conclude that HV-V/illv < ~^rVaii> iph) < -7= min \/a{ip vh,ip vh) VOe vOe vheVh < ~^r^/a(^ ~ ^z^-Nip, ip TlzllNip) = -J=z \\£(ip n^njvVOII (2.55) < -j= {\\a> crniPW + \\cnNij) rn.n^Vll}, since 11*11^ £ Vh. In order to bound the first term in (2.55), we use (2.51) of Lemma 2.13 and (2.30) of Lemma 2.6 to get WW jcnNii>\\ < \\£ip n^'vil + \\nNCip £nNip\\ < ||g. -nw3s|| + r dip (2.56) jCi r C2 n H£s9sII 7V2 r dip Cs dz To bound the second term in (2.55), observe that PHzv = IIzPv for any v £ V, since P operates only on fi. Therefore, denoting T -(I P) + aP, we have \\£U.Nip £n.zUNip\\ < C^SfiJlNip- ^ShUzILniP + \\TKNip THzUNip\\ d_ dz -S(iuNip nz-SfiiLNip S + \\TUniP n.THivV-ll. 
Applying (2.27) to bound the last two terms leads to \\£uNip jCnziiNip\\ Qm+l 2 dzm+1 e S/jJIpjip + Chn dm dzm TUniP < Chm Iljv dmip dzm CHn dmip dzm where we used the V-ellipticity of the bilinear form a(-, ) in the last step. We proceed further by using (2.51) of Lemma 2.13 and the fact that Iljv is an orthogonal projection ((i) of Lemma 2.6) to get \\£uNip £uziiniP\\ njv£ dmip dzm + , dmip - n^£] dmqs dzm , C4 hm + VCIN* dm+1ip Ls dzm+' (2.57) 41 Finally, substituting (2.56) and (2.57) into (2.55) results in (2.54). Remark 2.15 (Interpretation of error bound) In the following, we interpret the error bound (2.54) more closely. For diffusive regimes (e < 1), the exact solution of the continuous problem has the diffusion expansion2 (see (1.13) in Section 1.2.2) = while the the right-hand side in this case is assumed to have the form 2 = 2o (z) + efiqi(z) + 0(e2). Therefore, it follows that JZs4> = 0(e) and £,sqs = 0(e2), since qs = Sq = Pq + e(I P)q. Taking this into account, for the error e/, in (2.54) we get eft = yB:(0(£,)+£0W)+A(ai' dmqs dzn + Caj^O(s) Thus, the error in the zeroth moment \\P(ip Vft)|| is bounded by 0(hm)+0(e) and the error in the higher moments ||(I P)(ip ^>ft)|| is bounded by 0(ehm) + 0(e2). In particular, for diffusive regimes, where e is very small, convergence of the discrete solution is also assured by the above bound for small N, which is a reasonable choice in this case, since the exact solution is nearly independent of fx. Moreover, the bound in Theorem 2.14 directly gives the optimal order of conver- gence for the spatial discretization without the use of Nitsches trick (Johnson [25, p. 97]). For example, for piecewise linear elements, which means r = 1, we can choose m = 2 and get an 0{h2) error bound in the L2-norm under the regularity assumptions on ip required in the theorem. On the other hand, if e is close to so that e > hm and s > 1/N, then an error bound can be obtained more easily, since Mlv < i + 4 dv dz + i+4 Ml 2 1,0 Therefore, for any v G VTl (Hm+1([zi, zr] x H2([ 1,1]), Ceals Lemma (1.24) and the bounds in (2.27) and (2.30) for the interpolation error lead directly to fC~ f 1\1/2 \H~M\v + IM-n*n^lli.o i \ 1/2 / /-Y 1 + ?) hr||^||i,o + ^ m1 dmip (2.58) dzm 1,0/ However, we point out that this bound will blow up in the diffusion limit e 0 for fixed N and h. 2The assumption that the exact solution has a diffusion expansion would have simplified the proof of Theorem 2.14; however, we preferred to present here the more general version. 42 For the case e > ^=, which is not covered by Theorem 2.14, the error bounds in Section 2.3.2 can be used instead. The second main class of finite-dimensional spaces considered here are formed by functions that are piecewise polynomials in space z as well as in angle fi. This choice corresponds to a Finite-Element discretization in both space and angle. Because Section 2.3.2 contains error bounds for this class of discrete spaces that are valid in non-diffusive regimes, we concentrate in the following only on error bounds for diffusive regimes. Therefore, we assume in the following theorem, which combines Ceas Lemma and (2.33), that the exact solution has a diffusion expansion, which simplifies the proof. Theorem 2.16 (Finite-Elements in space and angle) Suppose that 0 < a < 1 and 0 < e < . Let T/,bea partition of the computational domain D = [zi, zr] x [1,1] into rectangles T= \zj,Zj+1] x [/i,/i+i] of maximum diameter h. 
To be able to handle the boundary conditions properly, we assume in addition that (zj, 0) and (zr, 0) are nodes of the triangulation. Let V be given as in (2.38) and define W1 := | vh E C(D); vh\T = £ Cp,y z? f V T E Th l 0^/2,7 Vh(zi,ft) = o for /I > 0, Vh (zr /i) = 0 for \i< 0 Suppose ip E V C\Hr+1(D) is the solution of (2.20) and let iph G Vh be the solution of (2.20) restricted to Vh. Further, suppose that the diffusion expansion ip(z, p) = is valid. Then we have \C< \\ip- i>h\\v < \r?rChr (|Vlff'-+i(c) + Hr\h-+i(d)) , (2.59) with C independent of e, a, and h. Proof. From Ceas Lemma (1.24), we have U-M\v <^feM-nhiP\\v < Ipfl^-UhTP) + {I P)ti{ip -uhip) + _(/ PM Jihip) 1/2 + m n^voif (2.60) <4 -Pn^-{rp Hhip) e az + (/ p)ii-^(ip n h*p) + ~(i p)(v> n^) + ITO-n^)|| . 43 By (2.33) we now bound any of the above four terms separately and use the fact that lift is as an interpolation operator linear, so that = Ilft0 + ellhR- Since (z) in the diffusion expansion of if> is independent of angle //, we conclude that Ilft^z) is also independent of fi. Therefore, Pfi= 0 = so that ip/^-HftVO 1 d (2.61) < ||^ii Ilft^Rd^!^ where the last inequality follows from (2.33). Using (2.33) for the second term in (2.60) results in (/-P)/Z^(V<-IIftf) 5: IH HfcVllfri(D) < C hr IVI Hr+1(_D)- (2.62) Because and Ilft^ are independent of angle, we have (I P) = 0 = (I P)TLh- Therefore, the third term in (2.60) can be bounded by 1 (I-P)W- IlftVO -(i-p)e{R-nhR) <||^-nft^|| < Chr + 1\R\Hr+l(D) < Chr\^>R\Hr+l^D^ (2.63) since h < (zr z{) = 1. Similarly, a bound for the last term in (2.60) is given by \m n^VOII < U n^ll < Chr+1 < Chr WW^y (2.64) Inserting (2.61), (2.62), (2.63), and (2.64) into (2.60) results in (2.59), which proves the theorem. Remark 2.17 (V-ellipticity constant Ce) Both error bounds (2.54) an Ce- According to Theorem 2.11 and Corollary 2.12, Ce = 0.012, so 1/Ce = 83.3, which is fairly large. However, we would like to point out that we simplified the proof of Theorem 2.11 by considering only the worst case a = 0. Without setting a = 0, (2.46) would change to i(u,u) > + (1-6) ||u|| 2cf(l a)VV7 -2(1 d)(l a)y(T35)^(rZ7j Q dv 8z HI. HI (2.65) which clearly shows that the V'-ellipticity constant Ce increases with a. To judge the quanti- tative behavior, we computed Ce for certain values of a using (2.65). The results are plotted in Figure 2.1. Already for a = 0.3, 1/Ce drops down to 7.04. 44 Ce vs. alpha Figure 2.1: V-ellipticity constant Ce and its reciprocal as a function of the absorption parameter a. 45 CHAPTER 3 X-Y-Z GEOMETRY In this chapter, we generalize the scaling transformation and the error bounds for the Least-Squares Finite-Element discretization, from one-dimensional slab geometry to three dimensions. Since the main focus of this chapter is on diffusive transport problems, in the following we use the parameterized form (1.14) of the transport operator C. In addition, we assume that the total cross section Ut is constant in space ( parameter e is constant on the computational domain D := 1Z x S1, where 1Z C IR3 is a region with sufficiently smooth boundary, for example, of class C1,1 (Grisvard [22, p. 5]), and S1 denotes the unit sphere. Further, we suppose throughout this chapter that a < 1. Moreover, in the following we restrict our attention without loss of generality to problems with vacuum boundary conditions (so g(r,Q) = 0 in (1.4)). 
Problems with inhomogeneous boundary conditions can easily be transformed to problems with homogeneous boundary conditions and different right-hand sides (Oden and Carey [45, p. 27]). As in the one-dimensional case, a scaling transformation is applied to the transport operator prior to the Least-Squares discretization to ensure accuracy of the discrete solution in diffusive regions. In the three-dimensional case, the scaling transformation and its inverse are given by S := P + e(I P)] S'1 :=P + j(I-P). (3.1) They have the same form as in the one-dimensional case with the only difference that r = e and that the L2-orthogonal projection P onto the space of functions that are independent of direction vector Q is. now defined by Pil> := J ip(r,Q) dn. (3.2) s1 After applying the scaling transformation S from the left to the transport operator C in (1.14) and dividing by e, the transport equation becomes W := -SXV = YV> + -(/ P)i> + aPif> = qs, e e e (3.3) with qs := Sq. Throughout this chapter, we denote the standard inner product and the associated norm of L2(1Z x S1) by u v* dQ, dr; R s1 |]ti|| := y/(u, u) Vu,v G L2(TZ x S1), where v* is the complex conjugate of v. Further, for u,v E C(D), we define the following inner product (u,v)v II TZ S1 -50- -Vu -50 Vv + £ £ (I-P)u.(I-P)v + Pu Pv dQdr, its associated norm IMIv := y/(u>u)v = 1 2 1 -SQ Vu £ + -(I -P)u + IM2 1/2 (3.4) (3.5) and the space V := {v E C(D); u(r, 0) = 0 for r E dlZ, and fi n{r_) < 0}, (3-6) where the closure is with respect to the norm ||-||y, so that it is a Hilbert space. As mentioned in the introduction, the Least-Squares variational formulation of (3.3) is given by (1.19). Our first goal is to show that the bilinear form a(-,-), defined in (1.19) is continuous and V-elliptic with constants independent of parameter e and a. From these results will follow not only the well posedness of problem (1.19), as outlined in the introduction, but also the accuracy of the Least-Squares Finite-Element discretization applied to it in the diffusion limit. 3.1 Continuity and V-ellipticity As in Chapter 2, we conclude from the Cauchy-Schwarz inequality that the bilinear form a(-, ) is continuous, since |a(u, u)| = |(Cu, Cv)\ < ||£u|| ||£v||. We now use the discrete Holder inequality to get ||£u|| < < Vs SQ Vu £ + (I-P)u + HPul 1 2 l, Vu £ + -£(I-P)u + Il-Pul \ 1/2 I2 =vS! IV I since a < 1 by assumption. Thus, for any u, v E V, K,tOI < Ce Hiy |M|V, (3.7) with Cc = 3. For the more difficult part of the proof of the V-ellipticity of a(-, ), we proceed in the same way as in the one-dimensional case. We first scale (3.3) in addition from the right by S to get £S = vV+ {I P) + ocPi> = qs, 47 (3.8) where -0 := S 1'0. Define the new space V and associated norm by where V S-'V, ||| :=||Q-Yt>f + ||v| Q := -SQS = (1 e) (PQ + QP) + eQI. Shortly, we will prove that the bilinear form a(u, v) := J J CSu LSv dtldr V u, v £ V n s1 (3.9) is V-elliptic. The K-ellipticity of a(-, ) in (1.19) will then follow in the same way as in Corollary 2.12. Before we do this, we first establish the following lemmas, which are generalizations of Lemma 2.2, Lemma 2.4, Lemma 2.9, and Lemma 2.10. Lemma 3.1 For all u, v £ V, and e < 1, we have (i) (Pu,v) = (u,Pv); and {(I P)u,v) = {u,(I P)v); P2 = P; and (7 P)2 = (I P). Thus P and (I P) are orthogonal projections. (ii) (Pu, v) = (Pu, Pv); and ((7 P)u, v) = ((7 P)u, (7 P)v); (iii) ||t)|| < -5t e (iv) ||P||-e||(7-P)w|| < |H|; (v) (fi Vv, v) > 0; (vi) (Q Vv, u) > 0. Proof. 
The proofs of (i), (ii), (iii), (iv) follow analogously to that in Lemma 2.2 and Lemma 2.9. To prove (v), we apply the fundamental Greens formula (Ciarlet [16, p.34]) to get (Q Vu, v) : n s1 //S2' Vv v dQdr We therefore have J J v Q Vv dQdr + J J v2 Q n dQds. n s1 on s1 (Q Vv, v) = i J J v2 Q-n d£lds. 8KS1 48 Splitting the boundary dll x S1 into the parts T+ := {(r, Q) G dll x S1; 0. n(r) >0} and r~ := {(r, fi) G dll x S1; Q-n(r) < 0}, the boundary integral becomes //2 97IS1 H n d£lds vZQ. 21 dfids + v2Q n dQds ?;2Q n dllds > 0, since v(r,Q) = 0 for (r, £2) G F and v G V. Thus, altogether we obtain (£2 Vu, u) > 0. To prove (vi), we observe from (i) that S is self-adjoint with respect to ). Con- sequently, we have (Q Vv, v) \ 1 1 SQS Vv,v) = (Q VSv, Sv) = (£2 Vv, v) > 0, where the last inequality follows from (v). Lemma 3.2 (Poincare-Friedrichs Inequality) Suppose e < 1 and let 11 C M3 be a bounded domain. Then, for any v G V, we have ||v|| < diam(7£) [|£2-Vt7|| < diam(7£) ||Q-Vu||. (3.10) Proof. For ri}rk G 11 let [Li, Lk] := {n : r = 2Ii + (1 s)rk, for 0 < s < 1} denote the line segment between and rk. Let arbitrary r Ell and £2 G S'1 be given. We define *1 := min{f G IR [r, r -(- f£2] C 11} ^2 := max {2 G IR : [r, r + t£2] C 11} n := r + tiQ,] 7*2 := r + t2Q. Then it is easy to see that £2 -2l(2Li) < 0. Taking into account the boundary conditions for v G V, we therefore have that 9(7^, £2) = 0, hence, r r v(r, £2) = Jd/ds = j 0 Vv ds, where ds denotes the arc-length differential along the line {r + tU,t G 1R}. Therefore, we conclude from Holders inequality that L L2 / U Ke,2)I < / |£2 Vu| ds < J [£2 Vu| ds < diam^)1/2 I j |£2 Vu|2 ds ni 211 Vi \ 1/2 / 49 Applying Fubinis theorem, it follows that 2 J J kfcil)|2 dQdr Li n s1 < diam(77)2 J J |fi-Vu|2 dtldr. 71 S1 R. S1 From the relation v = Sv and (iii) of Lemma 3.1, we thus have -SQ.S Vv e = diam(77) ||Q Vv 988, ||u[| < diam(77) ||fi Vu|| = diam(77) [|fi VSu|| < diam(77) which proves the lemma. Lemma 3.3 Suppose 0 < £ < 1. Then, for any 6 £ [0,1], there exists 6 > 0 such that H(b, S) := \ | ^ (5 (1 + |)6) 2 + 45(1 <5) + (l 0 6 + 5 J < 0. where s := jtf1/2 e(l 6)1(,2J In particular, for 5 < 0.875, we can choose 6 = 0. Proof. The only difference to the proof of Lemma 2.10 is that now s = j^1/2 e(l <5)1 ^2j instead of s jV^2 eV3(l 6)1/,2J Therefore, when 6 > 0.875, we use the assump- tion e < 1 to get s > 1 2yjd(l d) =: (3. Everything else is analogous to the proof of Lemma 2.10. We are now in a position to state the central result of this section. Theorem 3.4 (17-ellipticity of (-, )) Let a(-, ) and ||-||~ be given as in (3.9) and (3.8). Suppose that 0 < a < 1, 0 < £ < 1 and that the diameter diam(77) of the domain1 77 is 1. Then there exists a constant Ce> 0 such that, for all v £ V, 12 a(v,v) Vv + aPv + (/ P)u (3.11) Ce (||Q Vti||2 + ||n||2) = Ce \\v\\2~, where Ce = 0.012, which is independent of e and a. Proof. In the proof of Theorem 2.11, we replace Q§% by Q Vv and for P and a(-, ) use the definitions of this chapter. Then the proof of Theorem 3.4 follows exactly els the proof of Theorem 2.11, except that the Poincare-Friedrichs inequality (3.10) of Lemma 3.2 and (iv) of Lemma 3.1 are now used to get ||Q-Vu||2 > [V6 eVI <5]2 ||u||2 1This can be established by a simple transformation of the space coordinates r. 50 Therefore, s in (2.47) is replaced by s = bound the functions Gi and G2. 
[Vtf e-s/1 6 and Lemma 3.3 is applied to From the V-ellipticity of the bilinear form a(-, ), the V-ellipticity of the bilinear form a(-, ) follows immediately as in Chapter 2. We summarize this result in the following corollary. Corollary 3.5 ( V-ellipticity of a(-, ) ) Let a(-, ) and ||-||F be given as defined in (1.19) and (3.5). Suppose that 0 and that diam(P) = 1. Then there exists a constant Ge > 0 such that, for all v G V, a(v,v) > Ce ||v||y , (3.12) where Ce = 0.012, which is independent of a and e. 3.2 Spherical Harmonics Since a truncated expansion into spherical harmonics (Pjv-approximation) is used throughout this chapter for the the disretization in angle, we introduce here the spherical harmonics and summarize important properties that are needed for the error bounds. Recall that the associated Legendre polynomials are defined for / > 0 and m = 0,..., l by (Margenau and Murphy [39, p. 106]) jm (3.13) where P;(/i) is the (unnormalized) Legendre polynomial of degree l. By the formula of Rodrigues (Arfken [3, p. .554]) for the Legendre Polynomials given by this definition becomes 1 j/+m = (3.14) Expression (3.14) can be used to extend the definition of P,m(fi) to negative integer values of m. It follows that P,m(/r) and Pl~Tn(fi) are related by pfmoo = (3-i5) The associated Legendre Polynomials satisfy the following recurrence relations (Ar- fken [3, p. 560]): pr() = [(Z + mjP^/x) + (Z m + ljPfoOO] , (3.16) y/T^fiPTO*) = 27^1 [^(^-^i-f1^)]. (3.17) y/l-fPTijl) = ^-[(Z + mKZ + m-ljP^Oi) (Z m + 1)(Z m + 2)PI^1(/i)] . (3.18) 51 These recurrence relations, although derived in (Arfken [3]) only for positive integers m, remain valid for negative values of m. This can be easily checked by substituting in the relation (3.15) into the left and right parts of the recurrence relations. Further, the associated Legendre polynomials satisfy the orthogonality relation -1 (3.19) Based on the associated Legendre Polynomials the spherical harmonics are de- fined by (Arfken [3, p. 571]) Y,m(6, p) := (-l)m Ci,m P,m(cos(fl)) eimv, (3.20) where . (2l+l)(l-m)\ \l----n X,---- (/ + m)! Here, 9 denotes the polar angle with respect to the z-axis, while p denotes the azimuthal angle about the z-axis is. The spherical harmonics form an orthonormal basis of L2(51): In particular, 2tr 7r ^: J J Y(9, p) YlYl'*(9, p) sin(0) d9 dp SU' 8mm>, o o which, by letting L! := (6, p) and dQ := sm(ff)dedv | can be written as J-Ylm(a)Y,r'*(tt)dn = 6w6mm., (3.21) where Y'*(Q) denotes the complex conjugate of Y (H) From (3.15) it follows directly that Y,-m(Q) = (-l)mY,m*({2). (3.22) By the definitions of 6 and p, we have = (f2x, Qy, fiz) = (cos(v?) sin(0), sin(^) sin(0), cos(0)) = (cos(^)a/1 p2, sin(y>)\/l - (3.23) with fi := cos(0). The spherical harmonics satisfy the following recurrence relations. Lemma 3.6 (Recurrence relations for spherical harmonics) For all l > 0 and we have, nxY,m = Pl^Y^t1 l,mC -Pl,-mY£? + Qy Y,m = i ((-l)A,m^r + + A.-mFTr1 - QzY = J^mY! + TJ^mY] j, (3.24) 52 where &I,m 7l,m (/ + m + 2)(1 + m + 1) 4(2Z + l)(2/ + 3) Pl,m (l m)(l m 1) 4(2/ 1)(2Z + 1) (/ m)(l + m) (2/-l)(2/ + l) Vl,m (l + m + 1)(1 m + 1) (2/ + 3)(2/+ 1) Proof. To prove the first recurrence relation, we replace cos(^) by ei so that Using for the first term recurrence relation (3.17) and for the second term relation (3.18), after simple but tedious calculations we obtain the first recurrence relation. The second recurrence relation follows in a way similar to the first. For the third relation, recurrence relation (3.16) is used. 
Since the spherical harmonics form an orthonormal basis of i2(S'1), every v £ Hr(TV) x i2(51) has an expansion of the form oo l C (r,Q)=S E hm(r)Yrm, with ^,m(r)= / w(r, ft)Y,m*01) dSl. (3.25) /=0 m=l qi For any v £ Hr(TZ) x iT2 (S'1), we define jv-i ; r ^Nv(r,n):= 53 53 l>m(r)Yr(Q), with ^,m(r) = j v(r, Q)Y;m*(i2) dSl (3.26) /=0 m= l S1 as the truncated expansion of v into spherical harmonics. To bound the error of the truncated expansion, in the following lemma we use the fact that the spherical harmonics are the eigenfunctions of the Laplacian operator on the unit sphere, so AnV,m(fi) sin 9 39 (Sm6dd __L_____ + sin2 9 dip2 y,m(fl) (3.27) = -l(l + l)Y,m( for / > 0 and m = / + 1,..., 0,..., l. Lemma 3.7 (Truncated expansion into spherical harmonics) Let /3 be any multi-index and recall that D@v := gxfi1dJyfild2?3 Suppose that v(r,Q) £ H^(H) X if2(51) and, for JV > 2, let IIjv be defined as in (3.26). Then \\AnDev\\ for l > 0; -/ < m < l, (3.28) ||v njwll < ^||Anu||, (3.29) 53 with C independent of v and N. Proof. By the definition of 4>i,m and (3.27), we have 2 1 [/(/ + i)F (lDfv 1 [/(/+1)]2 2 where we applied integration by parts in the last step and took into account the fact that the boundary terms vanish. Since ||T(m||L2(5i) = 1) then the Holder inequality implies that ||^m||<^||A.vD'4- Further, since the spherical harmonics form an orthonormal basis, we have CO l CO l A ||w-hhI2 = E X / \ ]=Nm=-li l=Nm=-l Using (3.28) now results in lu i - ||v njvv|| <-||An|| X JX^ [/(/+!)] = X[/(/ + l)]2' CO f 21+1 Since < js and < jf for N > 2, then we can bound the sum in the following way: V 2/ + 1 f. 2 f 2 (^-1): < N2 so that ||v njvv|| < ||Anv|| 3.3 Error Bounds In this section, we use continuity and V-ellipticity of the bilinear form a(-, ) and Ceas Lemma to bound the error of a Least-Squares Finite-Element discretization applied to problem (1.19). The discretizations considered here are based on finite-dimensional spaces 54 with functions that have a truncated expansion into spherical harmonics with respect to direction angle Q and are piecewise polynomials of degree Iona triangulation of the region % into tetrahedrons. This choice of finite-dimensional spaces corresponds to a discretization by a spectral method in direction angle and a Finite-Element discretization in space. In transport theory, the spectral discretization in angle using spherical harmonics as basis functions is also called a Pjv-approximation. Let Th be a triangulation of 1Z into tetrahedrons T of maximum diameter h. For any u(r, Q) £ Hr+1(Tl) X L2{S1), let Il/jt; denote the interpolant of v by piecewise polynomials of degree r on the triangulation Th- Then, similar to (2.27), it can be shown (Ciarlet [16]) that, for 0 < m < 1, II nHL,0 < C hr+1~m Mr+1,0, (3.30) where ||-||m 0 denotes the standard norm of Hm(TZ) x L2 (S1) and | |r+i,o denotes the standard semi-norm of Hr+1(H) x L2(5'1) (see Section 1.4). In order to combine Ceas Lemma with (2.29) and (3.30) to obtain a discretization error bound, we need the following lemmas. Lemma 3.8 (Bound for commutator [!!//£ £II/v]) Suppose N >2 and let the operator An be defined as in (3.27). Let v £ H1(It) x H2(S1). Then there exists C > 0 independent of a and e such that ||[IIjv£ £IIjv] v|| < |Anv|i,o. (3.31) Proof. By expansion (3.25), it easily follows that HpfPv = PUpjv and IljvPil Vu = Pfi VII/vu. Therefore, [Hjv£ £11*] = IM- V-fi- VBjv. 
(3.32) Now using the recurrence relation (3.24) in Lemma 3.6, we get OO / dv ox E E ^(/v^-zv^-r1 1=0 ml N 1 1=0 m=l N-2 1 f=0 m=l Similarly, we have 1=0 m=l 55 so that m=N N-l Ed^N 1 ,m / \rm1 ym+l\ dx (.-N-l.-mljv OCN-l,mYN ) . (3.33) m=JV+1 Similarly we get i(n',0'S_t* £}**) = m= JV+1 3j/ and N dv d nNnz -r qz ^-iijvv = Y dN,m vm 9N-l,r, m=N dz "Tat, Vm ST' vm (m7jy_i / , ^ ^AT-l,mJJV m=-JV+l Denoting by |77.| the L2-measure of 1Z it is clear that |]Y;m[| < \J\1Z\. Therefore, by (3.28) in Lemma 3.7, we can bound (3.33) as follows: dv 9 a dv a ox < Asit- ox 2y/W N(N + 1) N Y', @N,m + m= N 2 VW\ (N-1)N N-1 E m=JV+1 The sums can be bounded by f _ V l(N-m)(N-m-1) A (IV m) lV(2Ar + 1) 2-/ w-m 2^ \/ 4(2A^ 1)(2JV +1) ~ 9f'9Ar-1'\ 9f9/V-1'l - m=-N m=N 2(2AT 1) 2(27V 1) IV m=N and K yk1 l(N + m-l)(N + m) y^1 N + m (2JV 1)JV 4(2JV 1)(2JV H-1) ij+1 2(2* 1, 2(2JV-1) so that IljV fij; Clx "T oa: In the same way, it follows that di) d Djvfiy a--DyIl/yU dy dy < < *VW\ N dv An-r- ox N An dv dy 56 and dv d dv Hiv Clz- uz Unv dz dz < VW\ N N(N + TT X 7Nm + ' m=-N We now continue by bounding the sums in the following way: m* N M 0 N-1 ____ ^7Ar,m 2N 1 + 2N 1 ^ ^N2 -JV m=l s2j^i+2?n(w-1)w=w and so that V'1 - V'1 l(N + m)(N-m) ^ A . X, w i,rn X Y (2iV + 1)(2JV 1) ^ TJV> )VJ-1 m=-JV+l V ^ ^ A > m=N m=N+1 ----IlivU 0Z dz < 3^ ~ AT An <9u 3z which proves the lemma. Lemma 3.9 Let V and ||-||^ be given as defined in (3.8). Then for all v E V fl (H1( 1Z) x L2(51)): 1% < CIRIi.o. with C independent of e and a. Proof. By definition, it follows that 11% < ([(1 <0 {\m V|| + ||n -Pv.o11} + e\u- u||]2 + ||u||2) 1/2 Notice that lia-vfi|| = dv dv dv ^ dv dv dv < fbr-S- d x + + dz < v^3 ||u||1)0 since |fij;| <1, |f2y| < 1, and |fi0| < 1. Similarly we have ||£2 PVu|| < nxp dv dx + n P1 UyPdy + n*pl and < a/3 ||u||10 , ||Pfi-Vu||<||Q-Vu||||1)0. VN-l,m (3.34) 57 From these bounds, (3.34) follows immediately with C = ^[3%/3]2 + lj = \/28. Now we are in a position to establish the following error bound. Theorem 3.10 (Finite Element in space, Pm in angle ) Let 7^ be a triangulation of 1Z into tetrahedrons of maximal diameter h. Suppose 0 < a < 1, 0 < e < ^=, and diam(77) < 1. Let V be given as defined in (3.6) and let Vh be defined by Vh:=LhV: ft(r,Q) = f IPr(Th)\ , l 1=0 m=-l J where fPr(7ft) denotes the space of piecewise polynomials of degree < r on the triangulation 7ft. Let il> eVn (Hr+1(Tl) x H2(S1)) be the solution of (1.19) with qs e L2{11) x H^S1) and let tph 6 Vh be the solution of (1.19) restricted to Vh. Further assume that ip has the diffusion expansion ip(r,Q) = Q). Then U Mv (llA^s|| + |Antf|i,o) fc, +\ 7T C2 hr (|<£|r+i,o + |^ir|r+i,o), V '-'e with Ci and C2 independent of a and e. Proof. By Ceas Lemma, we have (3.35) H-M\v <\P£ (\w n^v-llv + ||n^ nftnjvVll) V Og (3.36) The first term can be bounded by using the V-ellipticity of a(-, ), (3.29) of Lemma 3.7 and (3.31) of Lemma 3.8 in the following way: W-nNnv < -4= ll^(^-nJvV,)ll < Vcl 1 -4= (\\C1> - + IIPjvjC £11*] tf[|) (3.37) V^e < c c, We U l|An9i|1 + NMh To bound the second term in (3.36), we use that, by definition, ||i||v = \\t>\\y and that S~1EhUNip = TlhS-'ENip = Ilftl!*^ Therefore, \\ENip -EhEN^\\v = E-M-ip- lift II* V < C n*v> nftiLvV 1,0 58 where the last inequality follows from (3.34) in Lemma 3.9. We now use (3.30) and the diffusion expansion of ip to get nNip nhn^ < chr n^ = or V 1 £ n / 1,0 r+1,0/ since Plljv =: IIjvP. 
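The N^(-2) factor in this bound comes from the spectral truncation estimate (3.29) of Lemma 3.7. Its effect can be observed numerically in a simplified, azimuthally symmetric setting, where the spherical-harmonic expansion reduces to a Legendre expansion in μ. The sketch below is not from the thesis; the test function, quadrature order, and normalization convention are chosen only for illustration.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Sketch (not from the thesis): spectral decay of the truncation error of Pi_N
# in the azimuthally symmetric case, where the expansion in spherical harmonics
# reduces to a Legendre expansion in mu.

mu, w = leggauss(64)                          # high-order quadrature on [-1, 1]

def f(mu):
    return np.exp(mu) * np.cos(2.0 * mu)      # arbitrary smooth function of mu

def p(l, mu):
    # Legendre polynomial P_l, scaled to be orthonormal in L2(-1, 1)
    return legval(mu, [0.0] * l + [1.0]) * np.sqrt((2 * l + 1) / 2.0)

def truncation_error(N):
    coeffs = [np.sum(w * f(mu) * p(l, mu)) for l in range(N)]
    fN = sum(c * p(l, mu) for l, c in enumerate(coeffs))
    return np.sqrt(np.sum(w * (f(mu) - fN) ** 2))

for N in (2, 4, 8, 12):
    print("N =", N, " truncation error =", truncation_error(N))
```

For a smooth function the error decays much faster than 1/N^2, so the commutator and truncation terms in the bound are negligible already for moderate N.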
Remark 3.11 (Nondiffusive regimes) In Theorem 3.10, it is assumed that the analytical solution has a diffusion expansion in order to get an error bound in (3.38) with a constant that is independent of parameter e. For regimes where the diffusion expansion is not valid, j is of moderate size, so that there is no need for an error bound that is independent of e. Therefore, in this case, (3.38) can simply be bounded by so that the overall bound becomes lr+1,0 ' 59 CHAPTER 4 MULTIGRID SOLVER AND NUMERICAL RESULTS According to the theory derived in our earlier chapters, the Least-Squares approach yields accurate discrete solutions, even for diffusive regimes. In this chapter we confirm this very efficiently by a full multigrid solver. The following tests are restricted to the one-dimensional transport problem (1.6). For the discretization in angle, a Pjv-approximation is used, which is a spectral method using the first N Legendre polynomials as basis functions. For the discretization in space, slab (((j>f(z) G JPr(Th) with r = 1 or r = 2). Defining m := dim(IPr(7)l)) 1 and letting {rjk(z), k = 0,1,..., m} be a Finite-Element basis of IPr(T/t) (in the case r = 1 for example, r]k(z) are the usual hat functions), then the discrete space is given by for all / = 0,..., N 1 and all k = 0,..., m. For the computations in this chapter, we simplify the discrete problem (4.2) further by using a Gauss quadrature formula to approximate all integrations over angle fi, resulting in a Least-Squares discretization of either the 5V-flux equations or the moment equations. These two semi-discrete forms of the transport problem are introduced in the next section. result by numerical tests and demonstrate that the resulting discrete system can be solved we employ a Finite-Element discretization with linear or quadratic basis functions. To be more precise, we recall that the analytical solution has the moment expansion For the discretization, we truncate the sum N-1 h(z,fj.) = £>?(*) Mm) (4.2) 4.1 Sjy-Flux and P/v-i-Moment Equations 4.1.1 S'jv-Flux Equations Let Recall that, for a function v E VN we can use a Gauss quadrature formula with N support abscissas {fi\,..., and weights {wi,..., wjy} to write (4.3) since this quadrature with N support abscissas is exact for polynomials of degree < 2N 1 (Stoer and Bulirsch [49, p. 153]). In the 5jv-discretization, the flux is only computed for the discrete set of angles {fix,..., Hn}, so that the unknowns are given by the vector fp := / VfoMi) \ \ ^(z,hn) By collocating at the Gauss points and approximating the operator P by the sum in (4.3), the following SV-flux equations for ip can be derived from the transport equation (1.6): Hip M + CTt{lN ~ R) + VaR where q := ( 1 ^ , I := , := \ q(z,m) / l 1 ) =q, ( u l \ &N (4.4) M :=diag(/ii,...,//jv), i?:=WT. Further, for v E VN we note that the scaling transformation S = P + t(I P) in this notation becomes Sv = Snv := [R + t(Ijv R)] u, with In denoting the N x N identity matrix. Therefore, the scaled Sjv-flux equations are given by ILip := StfILip = 3 SnM + T(Tt(lN ~ R) + VaR 02 i> = q.> (4.5) with q := Sjv In., 0 V>(z() = 9i{v-i) \ 9i(v%) / and Q,Ii ip{zr) - / griUK+x) \ V 9r{m) J (4.6) respectively, where In denotes the ^ X y identity matrix. 
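The quantities entering the S_N flux form (4.4)-(4.5) are simple to assemble, and the algebraic identities that the scaling relies on can be checked directly. The sketch below is not the thesis implementation (which was written in C++); the number of discrete angles N and the value of ε are placeholders.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Sketch (not the thesis implementation): matrices of the S_N flux form and
# the identities used by the scaling S_N = R + eps*(I_N - R).

N = 8
mu, w = leggauss(N)                 # Gauss points/weights on [-1, 1]
w = 0.5 * w                         # normalize so that w @ ones(N) = 1
one = np.ones(N)
I = np.eye(N)

M = np.diag(mu)                     # M = diag(mu_1, ..., mu_N)
R = np.outer(one, w)                # R = 1 w^T, the discrete angular average

eps = 1.0e-3
S    = R + eps * (I - R)            # scaling S_N
Sinv = R + (1.0 / eps) * (I - R)    # its inverse, since R is a projection

assert np.allclose(R @ R, R)                      # R^2 = R
assert np.allclose(R @ (I - R), np.zeros((N, N))) # R (I_N - R) = 0
assert np.allclose(S @ Sinv, I)                   # S_N^{-1} = R + (1/eps)(I_N - R)
print("projection and scaling identities hold for N =", N)
```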
61 4.1.2 Moment Equations In order to derive the moment equations from the flux equations, we note that, for V> G VN, N-l tP=^2Mz)pi(v)- (4-7) 1=0 The moments are given by 1 } N Mz) = o / (z>P)Pi{p)dP = VHz>N)Pi(Pi) (4-8) -'i where in the last step we again used the Gauss quadrature formula. Defining <£(z) := ((^0(z),..., ^jv_.!(2:))t and the matrices T and II as PDij := Pi-i(Pj), == diag(wi,...,uN), then the respective relationships (4.7) and (4.8) between the flux ij> and the moments can be written in matrix vector notation as l = TQ, and V>=Tt£. (4.9) In the following lemma, we summarize simple properties of the matrices R and T that we need for the derivation of the moment equations. Lemma 4.1 (Properties of R and T) We have: (i) wTl 1, so R2 = R and R(In R) = 0; (ii) Rt fl = QR; (iii) TQTJ = IN and TJTi2 = IN] (iv) Tfil = Tw = (1,0,..., 0)T and utTt = (1,0,..., 0); (v) Leting :*-* 3 the n r o bo 0 &0 0 h 0 T£2MTt = 0 h 0 h 0 B NxN (vi) r i 0 .. 0 1 0 0 .. . 0 TQRTt _ 0 0 .. 0 NxN 62 Proof. N . 1 (i) : wTl = E = 2 / 1 dp = 1. Therefore, R2 = luTluT = l(wTl)wT = luT = R. i=i -i (ii) : RtQ ~ wl_T£l = u>uiT = filwT = QR. N 1 (iii) : (TOTt) = £ Pi-i(Pk)ukPj-i{Pk) = \ f pi-i(p) pj-i(p) dp, = 6i:j. ,J *=i -l Therefore, TQTt = I, so Tr nonsingular => 3C such that TtC = I => TQTJC = TQ => C = TO => TtTO = I. N N 1 (iv) : (TO),. = J2 Pi-i(Vj)uj = E pi-i(Vj)ujPo(pj) = \ f pi-i(p)p0(p) dp = 6a. j=i i=i -i (v) : The unnormalized Legendre polynomials Pi(p) satisfy the recursion (Arfken [3, p. 540]) ppM = + ^^fl+iO*)- (4-10) Since the normalized Legendre polynomials are given by pi(p) := y/21 + 1 Pi(p), from (4.10) we have PPi(p) = bi-ipi^i(p) + bipl+i(p,) (4.11) with bi-i := \y-J2_1 Using (4.11), we then have (MTT)i,j = ViPj-ifai) = h3-iPi-z(Pi) + bj-iPj(Pi)- Therefore, N . [TQMTr]. = Y^Pi-i(Pk)uk (bj-2Pj-2(Pk) + bj-ipj(pk)) k=\ 1 1 = bj-Jpi-i{p)pj-2{p) dp + bj-v^ Jpi-i(p)pj(p) dp -1 -1 bj2^i j'1 T bj 15* j y -(_ 1. (vi): mRTr = (TO 1) (wtTt) = Tu utTt = (1,0,..., 0)T(1,0,..., 0), where we used (iv) in the last step. Multiplying the unsealed flux operator IL by TT from the right and by TO from 63 the left and using Lemma 4.1 gives TQILTt = TQMTt ^ + ( ' 1 0 . . 0 ' \ ' 1 0 . . 0 ' Bk+' In - 0 0 . . 0 + CTfl 0 0 . . 0 0 o , 0. ) _ 0 0 . 0. (4.12) ~Btz + =: JM, where 1M is the unsealed moment operator. Therefore, multiplying (4.5) by TO from the left and using (4.9) results in the following scaled moment equations: TOZLV> = TnsNTTmmTr = (T£lRTr + tTOTt tTQRT7) / 1 T \ 0 + tIn - 0 V 0 0 ) JMj>_ (4.13) IMh =: TMj>_ = TO<^ Again using relation (4.9), it follows from (4.6) that the boundary conditions for the moment equations are given by la, 0 2 TT{Zl) = ( 9l(j*l) \ \ gi(fJ-K) / 0 ,Ia\TTl(zr) = / gr(x+1) V 9i(vn) (4.14) We conclude that the SV-flux equations and the P/v-moment equations are equiv- alent semi-discrete forms of the transport equation. The difference between these two sets of equations is that the non-derivative part in the flux equations is fully coupled, while the derivative part is decoupled. For the moment equations, the reverse is true. 4.1.3 Least-Squares Discretization of the Flux and Moment Equa- tions After deriving the £V-flux and moment equations we return to the discrete P/v- problem (4.2). 
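Before the formal proof below, the identities of Lemma 4.1 are easy to confirm numerically. The following sketch is not from the thesis (N and the quadrature order are arbitrary); it assembles R, M, Ω and T from Gauss points, normalized weights, and normalized Legendre polynomials p_l(μ) = sqrt(2l+1) P_l(μ), and checks properties (i), (iii), (v) and (vi).

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Sketch (not from the thesis): numerical check of the identities in Lemma 4.1.

N = 8
mu, w = leggauss(N)
w = 0.5 * w                                  # normalize so that w @ ones = 1
one = np.ones(N)

R = np.outer(one, w)                         # R = 1 w^T
M = np.diag(mu)
Omega = np.diag(w)                           # Omega = diag(w_1, ..., w_N)
T = np.array([np.sqrt(2 * l + 1) * legval(mu, [0.0] * l + [1.0])
              for l in range(N)])            # row l holds p_l at the Gauss points

I = np.eye(N)
b = np.array([(l + 1) / np.sqrt(4.0 * (l + 1) ** 2 - 1.0) for l in range(N - 1)])
B = np.diag(b, 1) + np.diag(b, -1)           # tridiagonal matrix of part (v)
E11 = np.zeros((N, N)); E11[0, 0] = 1.0      # e_1 e_1^T, part (vi)

assert np.allclose(R @ R, R)                 # (i)
assert np.allclose(T @ Omega @ T.T, I)       # (iii)
assert np.allclose(T @ Omega @ M @ T.T, B)   # (v)
assert np.allclose(T @ Omega @ R @ T.T, E11) # (vi)
print("Lemma 4.1 identities verified numerically for N =", N)
```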
Using a Gauss quadrature formula with weights {wi,...,wjv} and points 64 {fii,..., fix} to approximate the integration over angle n results in Zv N / pi)] (fij) [£bki,(z, fi)] (Hj)dz 1 i=1 ZI N I i=i (4.15) We note that on the left-hand side, the approximation of the integration over angle jx by the Gauss quadrature formula is exact as long as / < N 1: since then Cbk,i is a polynomial in fj, of degree l + 1, while Lijih is a polynomial in fi of degree N, then the product is a polynomial in fi of degree < 2N 1, for which the Gauss quadrature formula with N support abscissas is exact. Therefore, only in the equations for i, k = 0,..., m} must we introduce an error on the left-hand side by approximating the integration over angle. On the other hand, the same argument shows that the right-hand side is represented exactly by the Gauss quadrature formula as long as qs(z,fi) has an expansion into the first N 2 Legendre polynomials. With the notation introduced in Section 4.1.1 we have ( [Liph{z,ii)\ (m) V [Abh(z,n)]{nN) = and [Cbkti(ztfi)](fxi) > [£bki,(z,n)}(/iN) JLr}k(z)tJ+1, where tj+l denotes the (/+l)-nth column of the matrixTT defined in Section 4.1.2. Denoting by (,')jftN the standard Euclidean inner product of 1RN, (4.15) then becomes Z r j (MLh,ILT)k(z)ti+1)mN dz Z\ ZT = J dz Zl (4.16) for all k G {0,1,..., m} and / G {0,1,..., N 1}. Since the columns of TJ span 1RN, then we can substitute {t^,.. by the canonical basis {e^ ..., ew} of IRN and we recognize that (4.16) is a Least-Squares discretization of the SW-flux equations using the discrete space ' / Vh{z,fl i) > < N m jl Hft = >v-h = J2Y,vik,ik(z)z-j \ vh(z,fxN) y O II il II (4.17) This is the space of iV-vector functions whose components are piecewise linear (for r = 1) or piecewise quadratic (for r = 2) polynomials on the partition Tk of the slab. 65 Using (4.9) and (iii) of Lemma 4.1, we can rewrite (4.16) as J (p£TTlh,TTTn£TTTntf+1r)k(z)}iRN dz *1 Zr = J(^,TTmmTTmtJ+1r,k(z))MN dz Zi for all k G {0,..., m} and / G {0,..TV 1}, which by (4.13) and (iii) of Lemma 4.1 is equivalent to 2r j (Mlh,Mem{z))^ dz (4.18) Zr = J (TQ.qs,Meir)k{z))^ dz. Hi This is a Least-Squares discretization of the moment equations using the discrete space ' f &%(z) > m jl o 1* d) i-h . for l = 0,..., N 1 ^ N-l(z) J k=0 J All computations in the following sections are based either on discrete problem (4.16) or (4.18). 4.2 Properties of the Least-Squares Discretization In this section, we use the results of numerical experiments to observe properties of the Least-Squares discretization. The results plotted in Figure 4.1 and Figure 4.2 demonstrate the accuracy of the Least-Squares discretization in combination with the scaling transformation for diffusive regimes. The test problem we chose here is the same one used by Larsen et al. in [32]. The exact solution of the corresponding diffusion equation is (z) = 3/2z2 + 15z, which is plotted in solid in Figure 4.1 and Figure 4.2. The scalar flux 0 '= Piph of the solution iph of the Least-Squares discretization of the scaled transport equation using piecewise linear elements in space is shown by the crosses. For the problem in Figure 4.1, where the absorption cross section is zero, we used r = 1/of = e2 as the scaling parameter, which gives a higher accuracy than the scaling with r = e. An explanation of this result is given in the analysis presented in (Manteuffel and Ressel [38]). For the test problem in Figure 4.2, where cra ^ 0, the scaling parameter was chosen to be r = \/(Ta/crt = e^/a. 
Taking into account the fact that the mesh size of 1.25 is order of magnitudes larger than l/ both cases the results are very accurate. Further, we mention that the Least-Squares discretization without the scaling trans- formation results in the zero solution for both cases as indicated by the asterisks in Figure 4.1 and Figure 4.2. This outcome confirms the asymptotic analysis in Theorem 2.1, according to which the scalar flux of the Least-Squares discretization of the unsealed transport equation with piecewise linear basis elements in space is a straight line connecting the values at the boundary in diffusive regimes. Moreover, the asymptotic analysis in Theorem 2.1 asserts that the Least-Squares discretization of the unsealed transport equation using piecewise polynomials of degree > 2 in space has the correct diffusion limit. This too is supported by the observed maximum errors for a Least-Squares discretization of the unsealed transport equation with piecewise quadratic elements in space, which we list in Table 4.1. However, using the scaling transformation in combination with the Least-Squares discretization with quadratic elements in space achieves dramatically better accuracy in the discrete solution. For piecewise linear elements in space, the error bound in Theorem 2.14 indicates an 0(h2) behavior of the Least-Squares discretization error for a sufficient smooth solutions. To analyze the order of the Least-Squares discretization numerically, we used a problem with smooth exact solution sin(7rz). We then computed the discrete L2-error of the Least-Squares discretization with linear elements in space for a sequence of grids that were created from the coarsest grid by halving the mesh size from one to another grid. Table 4.2 depicts the ratio of these errors for each two consecutive grids. The value of approximately 4 of this quotient confirms numerically an 0(h2) behavior of the discretization error for linear elements. The solution of the transport equation is physically a density distribution and should therefore always be positive. The Least-Squares discretization has the drawback that it does not in general guarantee a positive solution. This is shown by the example in Figure 4.3, where the exact solution of the corresponding diffusion equation is again plotted as a solid line and the discrete Least-Squares solution is depicted by the crosses. Of course, this boundary layer can be resolved by refinement of the mesh, as shown in Figure 4.4. However, in the region [2,10], the solution is nearly constant, so that a refinement makes sense only in the region around the boundary layer. Therefore, the aim is to use adaptive refinement, which can be combined very naturally with a full multigrid solver (McCormick [42]). One easy criterion for determining the area of further refinement would then be to check where the solution is negative. Of course, this has to be combined with more sophisticated criteria that compare the solution of consecutive grids, for example. Besides having the correct diffusion limit, a discretization for transport problems must satisfy the extra condition to resolve, with a suitable fine spatial mesh, interior bound- ary layers between media with different material cross sections. To test numerically if the Least-Squares discretization meets these extra conditions, we used the test problem from (Larsen and Morel [33]), which is given in Figure 4.5. 
The solid solution plotted in Fig- ure 4.5 is computed by a Least-Squares discretization using 50 cells in both [0,1] and [1,11]. This solution approximates the exact solution plotted in (Larsen and Morel [33]) fairly well. We see further that the boundary layer is not resolved fully when the mesh spacing for the Least-Squares discretization is too coarse (crosses in Figure 4.5). In addition, the Least- Squares solution itself indicates an error by becoming negative. Again adaptive refinement would be an appropriate remedy. 67 Scalar Flux dib /i^- + 100(J-P)V> = 0.01 oz -0(0, //) = 0 for fi > 0 -0(10, //) = 0 for fj, < 0 (e = 0.01, a = 0.0) Solution of corresponding diffusion equation: cf>(x) = 3/2 x2 + 15s. Figure 4.1: Scalar Flux of exact (solid) and Least-Squares solution with scaling transfor- mation (crosses) and without (asteriks). 68 Scalar Flux (e = 0.01, a = 1.0) Solution of corresponding diffusion equation: = 3/2 x'2 + 15*. Figure 4.2: Scalar Flux of exact (solid) and Least-Squares solution with scaling transfor- mation (crosses) and without (asteriks). 69 Table 4.1: Comparison of maximum error for scaled and unsealed Least-Squares discretiza- tion with piecewise quadratic elements in space. OL 0.0 OL 1.0 l/h scaled unsealed scaled unsealed 4 1.8-10-4 5.2-10-2 1.2-10-4 3.2-102 8 1.2-10-5 1.2-10-2 7.7-10-6 7.7-10-3 16 ^3 1^ O 1 3 1 O i-H CO 4.8-10-7 1.9-103 32 4.8-10-8 1 o T( 00 3.0-10-8 4.6-10-4 64 3.0-10-9 1.8-10-4 1.8-109 1.1-10-4 128 1.8-1010 3.8-105 1.1-1010 2.2-10-5 256 1.3-10-11 6.3-10-6 7.9-10-12 3.8-106 Test problem: [ftjl + vtil-P) + (z,n) =q for (z, y) 6 [0,1] x [1,1] < -0(0, /z) = 0 for fi > 0 > , V^IjZ-O = 0 for fj, < 0 where = 1000.0, (ra = q := (in cos(irz) + Exact Solution: = sin(Trz), Number of Moments: N = 2. 70 Table 4.2: Order of Least-Squares discretization for linear elements in space. £ = 1.0 £ = 0.001 a = 0.0 a = 1.0 a = 0.0 ||e2fc||3 ll^lh o= 1.0 IlCah [|2 II gfc. II a IMI2 2h kid limits IMh C2h l5ll IKIh 4 8 16 32 64 128 256 512 1024 2048 4096 8192 8.5-10-2 2.1 -102 5:4-10-3 1.3- 10-3 3.4- 10-4 8.5- 10-5 2.1-10-5 5.3- 10"6 1.3- 10-6 3.3- 10-7 8.2-10-8 2,0-10- 4.0 3.9 4.1 3.8 4.0 4.0 3.9 4.0 3.9 4.0 4.1 2.610 6.7- 10' 1.7- 10' 4.2-10' 1.0-10 2.6-10' 6.6-10 1.6-10 4.1-10' 1.0-10 2.5- 10 6.5- 10 3.9 3.9 4.0 4.2 3.8 3.9 4.1 3.9 4.1 4.0 3.8 1.5-10-2 3.8-10-3 9.7- 10-4 2.4- 10-4 6.1-10-5 1.5- 105 3.8- 10-6 9.5- 10-7 2.4- 10-7 6;o-io-8 1.4- 10- 3.5- 10-9 3.9 3.9 4.0 3.9 4.0 3.9 4.0 3.9 4.0 4.2 4.0 1.2-10-2 2.9- 10-3 7.5- 10-4 1.8-10-4 4.7- 10-5 1.2-10-5 2.9- 10-6 7.3-10-7 1.8- 10-7 4.6- 10-8 l.l-lO-8 2.8-10-9 4.1 3.9 4.1 3.8 3.9 4.1 3.9 4.0 3.9 4.2 3.9 Test problem: [a*If + V>(0,//) q for (z,n) G [0,1] x [-1,1] 0 for fi > 0 0 for n < 0 where at = 1, cra = q := fiir cos(wz) + Exact Solution: ij>(z,n) = sin(7r;z), Number of Moments: N = 4. 71 Scalar Flux (e = 0.01,. a = 10.0) Solution of corresponding diffusion equation: = 1^e20v^0.+ (X + e2ojo_i) 6~VZ- Mesh size: h ^, Moments: N = 4. Figure 4.3: Example of violation of the positivity property by the Least-Squares discretiza- tion 72 Scalar Flux (e.= 0.01, a = 10.0) Solution of corresponding diffusion equation: Mesh size: h = , Moments: N = 4. Figure 4.4: Refinement resolves boundary layer. 73 Scalar Flux ( dib + a-tip Pcrsip = 0.0 = 1 for fi > 0 . ip{ll,/j,) = 0 for // < 0 , 2 0 < z < 1 , r\_/ 0 0 < z < 1 100 1 < z < 11 an Number of Moments: N = 16. The solid-line solution is computed by the Least-Squares discretization using 50 cells in both [0,1] and in [1,11]. 
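The refinement study reported in Table 4.2 below can be summarized by an observed order of accuracy: halving h should divide an O(h^2) error by about four, so the base-2 logarithm of the ratio of consecutive errors should be about two. A small post-processing sketch follows (not from the thesis; the error values are illustrative placeholders, not the tabulated entries).

```python
import numpy as np

# Sketch: observed order of accuracy from discrete L2 errors on grids obtained
# by repeatedly halving h.  The numbers below are made-up placeholder values.

errors = np.array([1.0e-2, 2.6e-3, 6.4e-4, 1.6e-4, 4.1e-5])
ratios = errors[:-1] / errors[1:]          # e_{2h} / e_h, roughly 4 for O(h^2)
orders = np.log2(ratios)                   # observed order, roughly 2
print("ratios:", np.round(ratios, 2))
print("orders:", np.round(orders, 2))
```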
Figure 4.5: Behavior of the Least-Squares discretization in the case of interior boundary layers. 74 4.3 Multigrid Solver In this section we describe the multigrid solvers, that were developed for solving the problems resulting from a Least-Squares discretization of Sjy-flux (4.16) and moment (4.18) equations with piecewise linear elements in space. We refer the reader who is not familiar with multigrid methods to (Briggs [6]) for an introduction and to (Hackbusch [24]) and (McCormick [41]) for more advanced topics. Essential for the efficiency of a multigrid solver is the proper choice of its compo- nents, mainly the intergrid transfer operators, coarse grid problems, and relaxation schemes. The choice of the first two components is naturally given by the Least-Squares variational formulation: the sequence of discrete spaces V\ C V2 C C Vj = Vh determines the coarse grid problems since they are just the restriction of the variational problem to these discrete subspaces; the prolongation operator, which is a mapping from a coarse grid to the next finer grid in the grid sequence, is formed directly by composing the isomorphisms between the discrete spaces and their corresponding coordinate spaces with the injection mapping between 14_i and 14 (Bramble [5]), (McCormick [43]); and the restriction operators, which are mappings from a finer grid to the next coarser grid, are just the adjoints of the prolon- gation operators. Therefore, the only multigrid components that need to be chosen here are the sequence of discrete spaces and the relaxation. No relaxation scheme is currently in use for transport problems that smooths the error in angle and in space simultaneously. Thus, instaed of devising a multigrid scheme that coarsens simultaneously in space and in angle, we consider first applying the multilevel-in- angle technique of (Morel and Manteuffel [44]), which is based on a shifted source relaxation scheme. After reducing the degrees of freedom in angle, a multigrid method in space is used to solve the remaining discrete problem. Thus, here we consider only the development of a multigrid solver in space. For the discrete subspaces, we Use the Finite-Element spaces with linear basis ele- ments on increasingly finer partitions (halving the cells) of the slab. 4.3.1 Sn Flux Equations The stencil that results from a Least-Squares discretization of the Sn flux equations (4.16) with these Finite-Element spaces is given in Appendix A and shows full coupling in angle. This suggests the use of a line relaxation in angle, which updates all angles for a given spatial point simultaneously. The matrix that must be inverted for each spatial point for this scheme is of the form (see Appendix A) := (aifi + a2QM + az&M2) + (ci MwuTM + c2ljujJ) . The first part is diagonal, and the second has the rank 2 factorization (ci MujjjT M + c2uxjJ) = Thus, A{ can be cheaply inverted by the Sherman-Morrison formula (Golub and Van Loan [20, p. 51]). Our computational tests showed essentially no differences in the error reduction and smoothing properties of this line relaxation scheme for various different orderings of the spa- tial points. To save computational, we thus use this line relaxation scheme in a red-black fashion, since then the residual after one relaxation sweep is zero at the black points and 75 not need not to be computed for the restriction to the next coarser grid. This scheme is also more amenable to advanced computer aechitecture efficiency. 
The convergence factors for this multigrid algorithm 1 listed in Table 4.3, are com- puted in the following way. A problem with zero source term and and whose exact solution equal is zero is used in combination with a randomly generated initial iterate. Then 30 multi- grid cycles are performed and the convergence rate is computed from the geometric average of the per-cycle reduction factors of the last 20 cycles. We thus reduce the influence of the initial iterate on convergence and observe what tends to be the worst-case factors. Here we study the (1, l)-V-cycle, which uses one relaxation before and one after coarse grid correc- tion. Observed factors for coefficient a. Factors for (2,l)-V-cycles are also included. Such factors are sufficient to get a solution with an error on the order of the discretization error by one full multigrid cycle, as demonstrated by the results in Table 4.4. The additional V cycle on the finest level 10, performed subsequent to the full multigrid cycle, is reducing the error only by a small amount. Thus, we can conclude, that the error after the full multigrid cycle is completed is already on the order of the discretization error. 4.3.2 Moment Equations The Stencil for the Least-Squares discretization of the moment equation (4.18) is given in Appendix B. In the interior of the computational domain, it is a 15-point stencil that connects the neighboring spatial points and the two higher and two lower moments. At the spatial boundary, however, the stencil couples all moments. Therefore, we use a line moment relaxation, that updates all moments simultaneously for a given spatial point all moments simultaneously. Since the efficiency of the smoothing again is observed to be independent from the relaxation ordering, as in the Sn flux case, we use a red-black ordering of the lines. The convergence factors for this multigrid algorithm, are listed in Table 4.5. For very large values of at, this multigrid solver is more stable with regard to roundoff errors than the multigrid solver for the Sn flux equations. Even for values of 106, we get (1, l)-V-cycle convergence factors of order 0.1. Again, these convergence factors are sufficient to get a solution with an error on the order of the discretization error by one single full multigrid cycle, as demonstrated in Table 4.6. 1This algorithm was implemented in C++ and a special array class was designed for this purpose. 76 Table 4.3: Multigrid convergence factors for solving the flux equations. (l,l)-V-cycle O'* a = 1.0 a = 0.5 a = 0.25 a = 0.1 a 0.0 Id)0- 0.088 0.085 0.087 0.118 0.169 101 0.082 0.083 0.083 0.110 0.136 102 0.052 0.052 0.053 0.106 0.130 103 0.088 0.091 0.088 0.105 0.130 104 0.091 0.091 0.091 0.105 0.130 105 0.092 0.092 0.092 0.105 0.130 106 0.090 0.092 0.092 0.102 0.133 (2,l)-V-cycle 10 0.053 0.050 0.053 0.105 0.155 101 0.047 0.047 0.047 0.082 0.104 102 0.019 0.024 0.024 0.077 0.097 103 0.020 0.021 0.021 0.076 0.096 104 0.020 0.022 0.022 0.076 0.096 105 0.020 0.011 0.023 0.076 0.096 10s 0.019 0.023 0.018 0.077 0.099 Test problem: \p-jh + V>(0,/z) = 0 for (z,fi) 6 [0,1] x [1,1] = 0 for n > 0 = 0 for fi < 0 where aa = Exact Solution: iJ>(z,(j,) = 0. Initial Iterate: randomly generated grid function. Mesh size: h = Number of Moments: N = 8. 77 Table 4.4: Full Multigrid (l,l)-V-Cycle convergence factors for solving the SW-flux equa- tions. Test problem: + ip(0,n) = q for £ [0,1] x [1,1] = 0 for fj, > 0 = 0 for (i < 0 where cra = q := /iircos(Trz) + aa sin(7rz), Exact Solution: ip(z, /j.) 
= sin(7rz), Number of Moments: N = 4. 78 Table 4.5: Multigrid convergence factors for solving the moment equations. (1,1 -V-cycle a 1.0 a = 0.5 a = 0.25 a = 0.1 o o II S To0- 0.052 0.086 0.083 0.118 0.169 101 0.091 0.092 0.091 0.117 0.136 102 0.056 0.056 0.071 0.106 0.131 103 0.092 0.093 0.092 0.105 0.127 104 0.095 0.094 0.094 0.106 0.129 105 0.095 0.094 0.093 0.107 0.130 106 0.095 0.092 0.092 0.107 0.130 107 0.095 0.092 0.092 0.107 0.130 10 0.095 0.092 0.092 0.107 0.130 109 0.095 0.094 0.092 0.107 0.130 1010 0.095 0.094 0.092 0.106 0.130 (2,1' -V-cycle 0-( a= 1.0 a = 0.5 or = 0.25 a = 0.1 a 0.0 0.074 0.051 0.054 0.105 0.155 101 0.055 0.055 0.055 0.082 0.104 102 0.025 0.025 0.039 0.077 0.097 10 0.023 0.026 0.042 0.076 0.096 104 0.023 0.023 0.042 0.076 0.096 105 0.023 0.023 0.042 0.076 0.096 106 0.023 0.023 0.042 0.076 0.095 107 0.023 0.023 0.042 0.076 0.095 10 0.023 0.023 0.042 0.076 0.095 109 0.023 0.023 0.042 0.076 0.095 1010 0.023 0.023 0.042 0.076 0.095 Test problem: [l*-jfc + V>(0,/r) = 0 for (z, fi) 6 [0,1] x [-1,1] = 0 for fi > 0 = 0 for n < 0 where cra = ^. Exact Solution: i>(z, fi) = 0. Initial Iterate: randomly generated grid function. Mesh size: h = Number of Moments: N = 8. 79 Table 4.6: Full Multigrid (l,l)-V-Cycle convergence factors for solving the moment equa- tions. Test problem: [Vlh + (z, fl) V(O^) 0(1. A*) = q for (z, fi) G [0,1] x [1,1] = 0 for fi > 0 = 0 for n < 0 where cra = q := fnrcos(irz) + cra sin(7r.z), Exact Solution: ip(z, fi) = sin(7rz), Number of Moments: N = 4. 80 CHAPTER 5 CONCLUSIONS 5.1 Summary of Results In this thesis, we have studied a systematic Least-Squares approach to the neutron transport equatio. The Least-Squares formulation converts the first-order transport problem into a self-adjoint variational form, which makes it accessible to the standard Finite-Element theory. Essential for this theory is the V-ellipticity and the continuity of the variational form, which leads directly to the existence and uniqueness of the analytic and discrete solutions and to bounds for the discretization error for a variety of different discrete spaces. Moreover, the variational formulation guides in a natural way the development of a multigrid solver for the resulting discrete problem. However, due to special properties of the transport equation, the Least-Squares approach is less straightforward than it first appears. In this thesis, we focused on neutron transport problems in diffusive regimes. In these regimes, the transport equation is singularly perturbed and its solution tends to a solution of a diffusion equation. Therefore, to guarantee an accurate discrete solution, a discretization for the transport operator is needed, that becomes a good approximation of the diffusion operator in diffusive regimes. Only a few conventional discretization schemes are known to have this property. By an asymptotic expansion, we show in Theorem 2.1 for slab geometry that a Least-Squares discretization with piecewise linear elements in space fails to be accurate in diffusive regimes. The choice of linear elements in space will for any right-hand side always result in a straight line connecting the prescribed values at the boundary for the principal part of the solution, which is independent of direction angle fi. Numerical tests confirm this behavior. 
On the other hand, we prove in Theorem 2.1 that, if piecewise polynomials of degree ≥ 2 are used, then the principal part of the discrete Least-Squares solution becomes a Galerkin approximation to the correct diffusion equation in diffusive regimes. This means that the Least-Squares discretization will be accurate in this case. Numerical tests with piecewise quadratic elements again confirm this result.

Because of Céa's Lemma, the Least-Squares discretization can be viewed as the best approximation to the exact solution in the discrete space with respect to the Least-Squares norm ||L·||, where L denotes the transport operator. In diffusive regimes, the different terms in the transport operator become totally unbalanced, which means that different parts of the solution are weighted very differently by the Least-Squares norm. With P denoting the L2-orthogonal projection onto the space of functions that are independent of the direction angle, it is clear that the Least-Squares norm in diffusive regimes hardly measures the component Pψ of the solution ψ, although this is the main component in these regimes. The idea is therefore to scale the transport operator prior to the Least-Squares discretization, with the effect of changing the weighting in the Least-Squares norm. Clearly, the scaling from the left by S = P + τ(I - P) with τ = O(1/σt) increases the weight given to the solution component Pψ. Numerical tests show that a Least-Squares discretization of the scaled transport equation, even for piecewise linear elements in space, yields an accurate solution in diffusive regimes. Moreover, they show for piecewise quadratic elements that the scaling transformation dramatically increases accuracy.

The major part of this thesis is devoted to proving that the Least-Squares discretization in combination with the scaling transformation S always gives accurate discrete solutions for a variety of simple Finite-Element spaces, even in diffusive regimes. As mentioned above, essential for bounding the error are the V-ellipticity and the continuity of the Least-Squares form with respect to some norm. It is easy to show that the scaled Least-Squares form cannot be bounded from below by a standard Sobolev norm. Therefore, the first obvious choice in the one-dimensional case is the norm (||μ ∂v/∂z||^2 + ||v||^2)^(1/2). With respect to this norm, we prove V-ellipticity and continuity of the scaled Least-Squares bilinear form and derive error bounds for discrete spaces that use piecewise polynomials in space and piecewise polynomials or Legendre polynomials in angle as basis functions. However, since the V-ellipticity and continuity constants for this norm depend on σt and σa, these bounds blow up in diffusive regimes. To prove V-ellipticity and continuity with constants independent of σt and σa, we use a scaled norm. Based on this, we obtain discretization error bounds for the same discrete spaces mentioned above, with constants independent of σt and σa, so that these bounds stay valid also in diffusive regimes. This result is generalized to three-dimensional x-y-z geometry for discrete spaces that use piecewise polynomials as basis functions in space and spherical harmonics as basis functions in angle. We conclude that the Least-Squares approach in combination with the scaling transformation represents a general framework for finding discretizations for the transport equation that are accurate in diffusive regimes.
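To make the scaling concrete, the following sketch applies S = P + τ(I - P), and its inverse P + (1/τ)(I - P), to a discrete-ordinates sample of the angular flux at a single spatial point. It is only a minimal illustration under stated assumptions: the ordinates and weights (a midpoint rule on [-1,1]) and the value σt = 10^4 are placeholders, and P is realized as the weighted angular average; the thesis implementation instead acts on the full space-angle discretization and uses Gauss quadrature.

```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// P psi: the angular average (1/2) * sum_j w_j psi_j, replicated over all
// ordinates.  P is a projection whenever the weights sum to 2, the length
// of the interval [-1, 1].
std::vector<double> applyP(const std::vector<double>& psi,
                           const std::vector<double>& w) {
    double avg = 0.0;
    for (std::size_t j = 0; j < psi.size(); ++j) avg += w[j] * psi[j];
    avg *= 0.5;
    return std::vector<double>(psi.size(), avg);
}

// S(tau) psi = P psi + tau * (psi - P psi); tau = 1 gives the identity,
// and replacing tau by 1/tau gives the inverse transformation.
std::vector<double> applyS(const std::vector<double>& psi,
                           const std::vector<double>& w, double tau) {
    std::vector<double> Ppsi = applyP(psi, w);
    std::vector<double> out(psi.size());
    for (std::size_t j = 0; j < psi.size(); ++j)
        out[j] = Ppsi[j] + tau * (psi[j] - Ppsi[j]);
    return out;
}

int main() {
    const int N = 8;                   // number of ordinates (placeholder)
    const double sigma_t = 1.0e4;      // a diffusive regime
    const double tau = 1.0 / sigma_t;  // tau = O(1/sigma_t)

    // Midpoint ordinates and equal weights on [-1, 1] (placeholder quadrature).
    std::vector<double> mu(N), w(N, 2.0 / N), psi(N);
    for (int j = 0; j < N; ++j) {
        mu[j] = -1.0 + (j + 0.5) * 2.0 / N;
        psi[j] = 1.0 + 0.1 * mu[j];    // nearly isotropic flux sample
    }

    std::vector<double> Spsi = applyS(psi, w, tau);        // scaled flux
    std::vector<double> back = applyS(Spsi, w, 1.0 / tau); // apply the inverse

    double err = 0.0;
    for (int j = 0; j < N; ++j) err += std::fabs(back[j] - psi[j]);
    std::cout << "anisotropic part damped by tau = " << tau << "\n"
              << "sum |S^{-1} S psi - psi| = " << err << " (round-off)\n";
    return 0;
}
```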
Further, it naturally guides the development of an efficient multigrid solver for the resulting discrete system. This is demonstrated in this thesis for slab geometry and piecewise linear elements. The multigrid solver developed for this discrete problem has convergence factors on the order of 0.1, so that one full multigrid cycle of this algorithm computes a solution with an error on the order of the discretization error.

5.2 Recommendations for Future Work

Our numerical results show that, when simple discrete spaces in space are used, refinement is needed in order to resolve boundary layers. Therefore, the aim for the future would be to combine the full multigrid solver with adaptive refinement. On the other hand, with the V-ellipticity and the continuity given, it seems fairly straightforward to establish error bounds for more complicated discrete spaces that can better resolve boundary layers, including those of exponential or hierarchical type. Furthermore, generalization of the scaling technique to anisotropic transport problems suggests itself.

BIBLIOGRAPHY

[1] R.A. Adams, Sobolev Spaces, Academic Press, 1975.
[2] R.E. Alcouffe, E.W. Larsen, W.F. Miller and B.R. Wienke, Computational Efficiency of Numerical Methods for the Multigroup, Discrete Ordinates Neutron Transport Equations: The Slab Geometry Case, Nuclear Science and Engineering 71, pp. 111-127, 1979.
[3] G.B. Arfken, Mathematical Methods for Physicists, second edition, Academic Press, New York, 1971.
[4] A. Barnett, J.E. Morel and D.R. Harris, A Multigrid Acceleration Method for the One-Dimensional Sn Equations with Anisotropic Scattering, Nuclear Science and Engineering 102, pp. 1-21, 1989.
[5] J.H. Bramble, Multigrid Methods, Pitman Research Notes in Mathematics Series 294, Longman Scientific and Technical, Essex, 1993.
[6] W.L. Briggs, A Multigrid Tutorial, SIAM, Philadelphia, 1987.
[7] C. Börgers, E.W. Larsen and M.L. Adams, The Asymptotic Diffusion Limit of a Linear Discontinuous Discretization of a Two-Dimensional Linear Transport Equation, Journal of Computational Physics 98, pp. 285-300, 1992.
[8] S.C. Brenner and L.R. Scott, The Mathematical Theory of Finite Element Methods, Texts in Applied Mathematics, Springer-Verlag, New York, 1994.
[9] E. Broda, Ludwig Boltzmann. Mensch. Physiker. Philosoph., Franz Deuticke Verlagsgesellschaft m.b.H., Wien, 1986.
[10] Z. Cai, R. Lazarov, T.A. Manteuffel and S.F. McCormick, First-Order System Least Squares for Partial Differential Equations: Part I, SIAM J. Numer. Anal., Vol. 31, 1994.
[11] Z. Cai, T.A. Manteuffel and S.F. McCormick, First-Order System Least Squares for Partial Differential Equations: Part II, submitted to SIAM J. Numer. Anal., March 1994.
[12] Z. Cai, T.A. Manteuffel and S.F. McCormick, First-Order System Least-Squares for the Stokes Equation, submitted to SIAM J. Numer. Anal., June 1994.
[13] B.G. Carlson and K.D. Lathrop, Transport Theory - The Method of Discrete Ordinates, in Computing Methods in Reactor Physics (H. Greenspan, C.N. Kelber, and D. Okrent, eds.), Gordon and Breach, New York, p. 166, 1968.
[14] K.M. Case and P.F. Zweifel, Linear Transport Theory, Addison-Wesley Publishing Company, Reading, Massachusetts, 1967.
[15] C. Cercignani, The Boltzmann Equation and Its Applications, Applied Mathematical Sciences, Vol. 67, Springer-Verlag, New York, 1988.
[16] P.G. Ciarlet and J.L. Lions, Handbook of Numerical Analysis, Vol. II, Finite Element Methods, Elsevier Science Publishers B.V., North-Holland, Amsterdam, 1991.
[17] J.J.
Duderstadt and W.R Martin, Transport Theory, John Wiley & Sons, New York, 1978. [18] V. Faber AND T.A. Manteuffel, Neutron Transport from the Viewpoint of Linear Algebra, Transport Theory, Invariant Imbedding and Integral Equations, (Nelson, Faber, Manteuffel, Seth, and White, eds.), Lecture Notes in Pure and Applied Mathematics, 115, pp. 37-61, Marcel-Decker, April 1989. [19] K.O. Friedrichs, Asymptotic Phenomena in Mathematical Physics, Bull. Am. Math. Soc., 61, pp. 485-504, 1955. [20] G.H. Golub and C.F. Van Loan, Matrix Computations, second edition, The Johns Hopkins University Press, Baltimore, 1989. [21] D. Gottlieb and S.A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications, Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, 1977. [22] P. Grisvard, Elliptic Problems in Nonsmooth Domains, Pitman Advanced Publishing Program, Boston, 1985. [23] G.J. Habetler and B.J. Matkowsky, Uniform Asymptotic Expansion in Transport Theory with Small Free Paths, and the Diffusion Approximation, Journal of Mathemat- ical Physics 16, No. 4, pp. 846-854, April 1975. [24] W. Hackbusch, Multi-Grid Methods and Applications, Springer, Berlin, 1985. [25] C. Johnson, Numerical Solution of Partial Differential Equations by the Finite Element Method, Cambridge University Press, Cambridge, 1990. [26] S. Kaplan and J.A. Davis, Canonical and Involutory Transformations of the Varia- tional Problems of Transport Theory, Nucl. Sci. Eng., 28, pp. 166-176, 1967. [27] J.R. Lamarsh, Introduction to Nuclear Reactor Theory, Addison-Wesley Publishing Company, Inc., Reading, Massachusetts, 1965. 84 [28] E.W. Larsen, Diffusion Theory as an Asymptotic Limit of Transport Theory for Nearly Critical Systems with Small Mean Free Path, Annals of Nuclear Energy, Vol. 7, pp. 249- 255. [29] E.W. Larsen, Diffusion-Synthetic Acceleration Method for Discrete Ordinates Prob- lems, Transport Theory and Statistical Physics, 13, pp. 107-126, 1984. [30] E.W. Larsen, The Asymptotic Diffusion Limit of Discretized Transport Problems, Nuclear Science and Engineering 112, pp. 336-346, 1992. [31] E.W. Larsen and J.B. Keller, Asymptotic Solution of Neutron Transport Problems for Small Mean Free Paths, J. Math. Phys., Vol. 15, No. 1, pp. 75-81, January 1974. [32] E.W. Larsen, J.E. Morel, and W.F. Miller, Asymptotic Solutions of Numerical Transport Problems in Optically Thick, Diffusive Regimes, J. Comp. Phys., 69, pp. 283-324, 1987. [33] E.W. Larsen and J.E. Morel, Asymptotic Solutions of Numerical Transport Prob- lems in Optically Thick Diffusive Regimes II, J. Comp. Phys. 83, (1989), p. 212. [34] E.E. Lewis and W.F. Miller, Computational Methods of Neutron Transport, John Wiley & Sons, New York, 1984. [35] T.A. Manteuffel, unpublished personal notes on even-parity. [36] T.A. Manteuffel, S.F. McCormick, J.E. Morel, S. Oliveira and G. Yang, A Fast Multigrid Solver for Isotropic Transport Problems, submitted to SIAM J. Sci. Comp., to appear. [37] T.A Manteuffel, S.F. McCormick, J.E. Morel, S. Oliveira and G. Yang, A parallel Version of a Multigrid Algorithm for Isotropic Transport Equations, submitted to SIAM J. Sci. and Stat. Comp. 15, No 2, pp. 474-493, March 1994. [38] T.A. Manteuffel and K.J. Ressel, Multilevel Methods for Transport Equations in Diffusive Regimes, Proceedings of the Copper Mountain Conference on Multigrid Methods, April 5-9, 1993. [39] H. Margenau AND G.M. Murphy, The Mathematics of Physics and Chemistry, sec- ond edition, D. Van Nostrand Company, Inc., Princeton, 1968. [40] W.R. 
Martin, The Application of the Finite Element Method to the Neutron Transport Equation, Ph.D. Thesis, Nuclear Engineering Department, The University of Michigan, Ann Arbor, Michigan, 1976. [41] S.F. McCormick, Multigrid Methods, Frontiers in Applied Mathematics 3, SIAM, Philadelphia, 1987. 85 [42] S.F. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, Frontiers in Applied Mathematics, SIAM, Philadelphia, 1989. [43] S.F. McCormick, Multilevel Projection Methods for Partial Differential Equations, SIAM, Philadelphia, 1992. [44] J.E. Morel and T.A. Manteuffel, An Angular Multigrid Acceleration Technique for the Sn Equations with Highly Forward-Peaked Scattering, Nuclear Science and En- gineering, 107, pp. 330-342, 1991. [45] J.T. Oden and G.F. Carey, Finite Elements, Mathematical Aspects, Volume IV, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1983. [46] R.K. Osborn, S. Yip, The Foundations of Neutron Transport Theory, Gordon and Breach, Science Publishers, Inc., New York, 1966. [47] A.I. Pehlivanov, G.F. Carey and R.D. Lazarov, Least-Squares Mixed Finite Elements for Second-order Elliptic Problems, SIAM J. Numer. Anal., Vol. 31, No. 5, pp 1368-1377, October 1994. [48] G.C. Pomraning, Diffusive Limits for Linear Transport Equations, Nuclear Science and Engineering 112, pp. 239-255, 1992. [49] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, second edition, Texts in applied mathematics, Springer Verlag, New York, 1993. [50] S. Ukai, Solution of Multi-dimensional Neutron Transport Equations by Finite Element Methods, Journal of Nuclear Science and Technology, 9(6), pp. 366-373, 1972. [51] G.M.Wing, An Introduction to Transport Theory, John Wiley and Sons, Inc., New York, 1962. 86 APPENDIX A FLUX STENCIL In this section, we derive the stencil for the Least-Squares discretization of the Sn flux equations (4.16) for piecewise linear elements. We assume that the slab is partitioned into zi zq < zi << zm = zr and denote by h* := z*, Zk-i the cell width of cell k. We are then looking for a discrete solution in the form m N m h(*)=EE^ = Y.Mz)> fc=0 j = l Jfe=0 where := ..., ^fc,jv)T and with Vizi+1) 0 elsewhere Zh-z hk ~j 2r,*jW II l = (0, ,o)T. Plugging (A.l) into (4.16) gives then (A.1) £ fc-1 Zk-1 k-1Zk-l for all (i,j) e {(i, i) : 1 < Z < rn, 1 < i < ivj U j(0, j) : 1 < j < j} U |(m, j) : y < j < ivj . Since j has support only in cell i and cell i + 1, then (A.2) is equivalent to zi *i+1 J (aiL>1Lnr,ij)]Rrd* + J {QIL>IL!h,i+ij)Kd* (*i) (*2) Z% zi+l f (qq ,ILt] \ dz + [ (dq ,TLr\1 \ dz J \ ** -r,hJ/lRN J \ is A*+WjRW (A.3) Zi-1 (*3) (*4) In the following, we consider the terms (*1) to (*4) separately. To (*1): Applying the substitution z = z z,-_i, we have Zi j (^,%r..)^dz = Zi-1 dz where h hi and i := -r,J Z := h** with ipl := V'fc_1 and i/,r := }P_k Then it follows that 11% j = \ T ~ T)] Afe.j + ^ [r<7t(J lwT) + cralwT] ze;- 1L = ^ [lwT + r(J lwT)] M ^ ^ [r7t(J lwT) + o-alwT] - z) + rz 88 so that J(niLt,ILrLri.)]RNdz o h [lwT + t(I lwT)] M (ifrr - , i [luj + t(I lwT)] Me^J +y ^ jr [r +Y [i^T + r(J ^-T)] M (^r ^j) X ['r<7<(/ i^T) + ^lw7] e_j ^ +y |j<7t(J lwT) + <7alwT] ipj, ^ [rat(I lwT) + +y [Tr, ^ [r = l ( lM^T + ^{1 -hLX)]M (r - , e;)^ +\ (Q KMIwt + rVtM(J lwT)] (3^ + V-J ^ ^ (fi [^aIwT + rVt(I lwT)] M (jfr. 3^) ,£, +/, ^SJ KliST + rV?(7 laT)] (=t + ==) .S,)^ Consider all possible j and recall that h = h{ and that T are the corresponding values in cell i which we denote by cra i, crt i, respectively r,-, and that 3^ = ^Pi_l,'Pr = 3^. 
We then 89 get the following contribution from (*1): 1 \(t? 1) Mum7M t?Q.M2] fli [W 0-o.i) (MuuJ wwtM)] + J - Ti*i,d ^T] j 4-1 + [(i T?) MuujJM + rfQ,M2] [(Ti(TtiQM] +T t7"2*?,.'0 + (ah Tiali) f i- To (*2); Applying the substitution z = z Zi, we have 2i+l -2* fc+l J (QIL&>1L?hj)KNdz> where now (/i := hj+i) and v> h z hi := * Therefore, ILrij . = [lwT + r(7 lwT)] Mej + ^ [r<7t(7 lu/r) + cralwT] (h z) e_j K, = ^ [lwT + r(7 lwT)] M ij^j + ^ [r 90 so that J(tlJL^lL^^dz o h [lwT + t(I lwT)] M ^ ^ ^ [lwT + t(I lwT)] Me^ +y [r +y [JmT + T'U lwT)] M (r V>() i [r +y -i^-T) + o'alw1"] t_v ^ [r<7t(J- lwT) + o-alwT] +y [r = x (Q [m^T + t2m(/-^ M (t- -&) .s#)^ [ + ^ (fi KiwT + T2r - ,5#')^ +4 (si KlMT + *VU luT)] (t + &) , Consider all possible j and recall that h = h8-+1 and that (cra,at,T) are the corresponding values in cell i + 1, denoted by (ca,i+i, the following contribution from (*2): 91 (tt- K1 T?+i) MuultM + r?+1QM2] + ^ [('r/+l<7*,i+l O'a.i+i) (MywT + uwJM) 27f+1o-M+1ftM] + ^Yi W+lO'M+l^ + K,i+1 T?+1Or2<+1) WWT] | £. + (T- [ft+i -1)M<^TM 7i2+1fiM2] [(r2+1trtii+i o-a.i+i) (Mwut wwtM)] +^r1 N+i^+i0 + Ki+1 ^+1^+1) ^T] } £+! To (*3): We have where ^(z) = Sjv£(z) = IwT£(z) + r(7 lwT)g(z), / \ (*) := : \ q(z,Vn) ) Therefore, Zi / (nt.-Oa.u)*, * Zt-l *i = j (filwT£, [hjT + r(I lwT)] Me^)^ dz Zi-1 + j (filwTg, [Tat(I lwT) + Zi- 1 + J (nT(I-lwT)q,[lur +T(I-lwT)]MejT]rti)JRNdz Zi + J 1wt)£, [r 92 Full Text PAGE 1 LEAST-SQUARES FINITE-ELEMENT SOLUTION OF THE NEUTRON TRANSPORT EQUATION IN DIFFUSIVE REGIMES by KLAUS JURGEN RESSEL B. S. (Math), Universitat zu Koln, 1985 B. S. (Physics), Universitat zu Koln, 1986 M. S. (Math), Universitiit zu Koln, 1991 A thesis submitted to the Faculty of the Graduate School of the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Doctor of Philosophy Applied Mathematics 1994 PAGE 2 This thesis for the Doctor of Philosophy degree by Klaus J iirgen Ressel has been approved for the Graduate School by Thomas A. Manteuffel / I j_f 7 Date PAGE 3 Ressel, Klaus Jiirgen (Ph.D., Applied Mathematics) Least-Squares Finite-Element Solution of the Neutron Transport Equation in Diffusive Regimes Thesis directed by Professor Thomas A. Manteuffel ABSTRACT A systematic solution approach for the neutron transport equation is considered that is based on a Least-Squares variational formulation and includes theory for the existence and uniqueness of the analytical as well as for the discrete solution, bounds for the discretization error and guidance for the development of an efficient solver for the resulting discrete system. In particular, the solution of the transport equation for diffusive regimes is studied. In these regimes the transport equation is nearly singular and its solution becomes a solution of a diffusion equation. Therefore, to guarantee an accurate discrete solution, a discretization of the transport operator is needed that is at the same time a good approximation of a diffusion operator in diffusive regimes. Only few discretizations are known that have this property. Also, a Least-Squares discretization with piecewise linear elements in space fails to be accurate in diffusive regimes, which is shown by means of an asymptotic expansion. For this reason a scaling transformation is developed that is applied to the transport operator prior to the discretization in order to increase the weight for the important components of the solution in the Least-Squares functional. 
Not only for slab geometry but also for x-y-z geornetry it is proven that the resulting Least-Squares bilinear form is contin uous and V -elliptic with constants independent of the total cross section and the scattering cross section. For a variety of discrete spaces this leads to bounds for the discretization error that stay also valid in diffusive regimes. Thus, Least-Squares approach in cornbination with the scaling transformation represents a general framework for the construction of discretizations that are accurate in diffusive regimes. For the discretization with piecewise linear elements in space a multigrid solver in space was developed that gives V-cycle convergence rates in the order of 0.1 independent of the size of the total cross section, so that one full multigrid cycle of this algorithm computes a solution with an error in the order of the discretization error. This abstract accurately represents the COiltent/ publication. PAGE 4 Acknowledgements List of Notation . CHAPTER CONTENTS 1 INTRODUCTION AND PRELIMINARIES 1.1 Introduction and Outline 1.1.1 Opening Remarks .. 1.1.2 Outline 1.2 Neutron Transport Equation and Diffusion Limit 1.2.1 Neutron Transport Equation 1.2.2 Diffusion Limit 1.3 Previous Work on Numerical Solution 1.4 Least-Squares Approach 2 SLAB GEOMETRY .. 2.1 Problems with Direct Least-Squares Approach 2.2 Scaling Transformation ... 2.3 Error Bounds for Nondiffusive Regimes 2.3.1 Continuity and V-ellipticity 2.3.2 Error Bounds ...... 2.4 Continuity and V -ellipticity with respect to a scaled norm 2.5 Error Bounds for Diffusive Regimes 3 X-Y-Z GEOMETRY 3.1 Continuity and V -ellipticity 3.2 Spherical Harmonics 3.3 Error Bounds 4 MULTIGRID SOLVER AND NUMERICAL RESULTS 4.1 SN-Flux and PN-1-ZVloment Equations 4.1.1 SN-Flux Equations 4.1.2 Moment Equations .... 4.1.3 Least-Squares Discretization of the Flux and Moment Equations 4.2 Properties of the Least-Squares Discretization 4.3 Multigrid Solver 4.3.1 SN Flux Equations 4.3.2 Moment Equations 5 CONCLUSIONS 5.1 Summary of Results 5.2 Recommendations for Future Work BIBLIOGRAPHY APPENDIX A FLUX STENCIL B MOMENT STENCIL v Vl 1 1 1 2 2 6 7 9 13 14 17 20 20 27 30 38 46 47 51 54 60 60 60 62 64 66 75 75 76 81 81 82 83 87 98 PAGE 5 ACKNOWLEDGEMENTS I wish to thank, first and foremost, my advisor, Prof. Torn Manteufl'el, for his academic and financial support, which made this thesis possible. His ready availability to discuss problems in deep detail, along w.ith his mathematical insight and creativity resulted always in helpful answers and hints, which formed the foundation of this thesis. Secondly, I would like to express my appreciations to Prof. Steve McCormick, who has been a consultant to this work since its beginning and whose expertise in multilevel algorithrns has proved invaluable. I wish also to thank the remainder of my committee, Professors Jan Tvlandel, Jim Morel and Tom Russell. Jan Mandel and Tom Russell taught me in their claBses the mathematical theory of Finite-Elements. From the many discussions with Jim Morel during my stay of 9 months at Los Alamos National Laboratory I gained a lot of insight into transport problems. I am also grateful to the Center of Nonlinear Studies at Los Alan1os National Laboratory for the financial support and the use of their facilities during this tirne. Particular thanks goes to Debbie Wangerin, who guided me through the jungle of rules imposed by the Graduate School. Moreover, I wish to express my gratitude to Dr. 
Gerhard Starke, who proofread most parts of this thesis and rnade useful comments. Last, but not least, I would like to thank all members of the Center for Cornputational Mathematics of the University of Colorado at Denver and rny friends Marian Brezina, Dr. Max Lemke, Dr. Jim Otto, Radek Tezaur and Dr. Petr Vanek for their friendship and support. v PAGE 6 LIST OF NOTATION For the most part, the following notational conventions are used in this thesis. Particular usage and exceptions should be clear from the context. Scalars, Vectors and Sets :c, y, z standard space coordinates. zr, Zr left and right boundary of the slab. fJ polar angle with respect to z-axis. rp azimuthal. angle about z-axis. Jl = cos(O). crt total cross section. fTs scattering cross section. (fa = (J't-crs, absorption cross section. s := l/rrt, in parameterization for diffusion limit. a CJ a = ca, in parameterization for diffusion limit. r := s or cV"Ct + c2 scaling pararneter. J-lj Gauss Quadrature points. Wj Gauss Quadrature weights. N number of Legendre Polynomials used in truncated expansion. h mesh size. r. := (x,y,z). dr := dxdydz, incremental spatial element. !} := (n" rly, n,) = (cos( PAGE 7 Multi-index Notation (J lfJI := (fJr, (J,, (J3). := fJr + fJ2 + (J3. .EhEf31EJyf:!28;)i3. Operators I p II, s L\.n IL IM Identity. := f, dfJ (in 1 D), := J8 .an (in 3D). := P + T(IP), Scaling Transformation. = P P). unsealed Transport operator in 1 D or in 3 D. :=sf., scaled Transport operator. := !:SpS = (1c)(Pp + pP) + spl. := 1sns. -Projection onto space of functions1 that have truncated expansion into Legendre polynomials or spherical harmonics. Projection onto space of functions that are piecewise polynomials of degree r on a partition of the slab [ zz 1 Zr]. Projection onto space of functions1 that are piecewise polynomials of degree 1' on a given partition. := d: [ (1 p2 ) d:], Sturm-Liouville operator. Laplacian in polar coordinates on S1 SN Flux operator. Moment operator. Functions 1/!(z, p), 1/!(r.,fl.) Pr(P) PI (I') p,m(p) Y,m(fl.) PAGE 8 Function Spaces IP, (D) polynomials on the domain D with degree smaller or equal to r. IV,.(T,,) piecewise polynomials of degree:::; ron 7J,. C0(D) space of continuous functions on D. c=(D) space of infinitely differentiable functions on D. L2(D) space of square Lebesgue integrable functions on D. V Hilbert space. vh discrete subspace of v. V := s-1(V), where s-1 is the inverse of the scaling transformation. Bilinear Forms {Y., !2.) IfF' standard Euclidean inner product of mn' z,. 1 (u,u) := J J uv dl'dx, (in 1 D). Zl -1 := J J uv' dfJ dr, (in 3 D). := (Cu, Cv). a( u, v) ii( il, ii) := (CSu, CSv), double scaled form. Norms \\u\\ 1\u\\k,o := norm of Hk(Gl) X L2(G,), wbere G1 := [z,,zr] and G2 := [-1, 1] in lD and G1 := 1(. and G, :=51 in 3D. \u\k,O := C[lfl=k \\Dfu\\2 )112 semi-norm of Hk(G,) x L'(G2). \1\\v Norm associated with V. \\ \v Norm associated with V. 11\v\11 = + 1\v\1')112 Matriees "'-:=(w1,:WN)T 1 := (1, ... 1)T, N elements. R := 1' "'T fJ := diag(w1, ... WN ). M :=diag(I'1::1'N). Vlll PAGE 9 CHAPTER 1 INTRODUCTION AND PRELIMINARIES 1.1 Introduction and Outline 1.1.1 Opening Remarks The Least-Squares approach represents a systematic solution technique that includes theory for wellposedness of the continuous and discrete problem, error bounds for discretization error and guidance for the development of an efficient solver for the resulting discrete system. 
Furthermore1 the Least-Squares approach is a general methodology that can produce a variety of algorithms, depending on the choice of the Lea..-::t-Squares functional, the discrete space and the boundary treatment. For many partial differential equations (PDEs), a straightforward Least-Squares approach is ill-advised, since it requires more smoothness of the solution than a Galer kin ap proach and results in a squared condition number of the discrete operator. l:lowever, problems of first order or elliptic problems with lower order terms are converted by a Least-Squares formulation into a self-adjoint problem. Moreover) by introducing physically meaningful new variables and transfOrming the original proble1n into a system of equations of first order, the s1noothness requirements of the solution can be reduced and squaring the condition number of the discrete problem is avoided. This first-order system Least-Squares (FOSLS) technique has recently been successfully applied to the solution of general convection-diffusion problems (Cai et a!. [10], [11]) and the Stokes equation (Cai eta!. [12]). In combination with a mixed finite element discretization it can even be worthwhile to apply this technique to self-adjoint second-order elliptic problems in order to circumvent the Ladyzhenskaya-BabuSka-Brezzi consistency condition (Pehlivanov et a!. [47]). The subject of this thesis is the extension of the Least-Squares approach to the solution of the neutron transport equation. This equation mathematically describes the migration of neutrons through a host medium and their interaction (absorption or scattering) with the nuclei of the host medium after a collision. The fact that the neutron transport equation is a already first order integra-differential equation rnotivates a Least-Squares approach. However, due to special properties of the transport equation, this is less straightforward than it might appear. In diffusive regimes, where the probability for scattering is very high while that for absorption is very low, the transport equation is nearly singular and its solution is close to a solution of a diffusion equation. To guarantee an accurate discrete solution in these regimes, a discretization of the transport operator is required that becomes a good approxirnation of a diffusion operator in diffusive regimes. Up to the present, only a few special discretizations have this property. Therefore, diffusive transport problems are hard to solve. The principal part of this thesis is devoted to the extension of the Least-Squares approach to transport problems in diffusive regimes. 1.1.2 Outline This thesis is organized as follows: Chapter 1 continues with an introduction to the neutron transport equation in Section 1.2, where also properties of this equation, especially PAGE 10 for diffusive regimes (diffusion limit), are discussed. Section 1.3 provides an overview of previous work on numerical solution of neutron transport problerns. Finally, in Section 1.4 the Least-Squares discretization and its associated standard Finite-Element theory are briefly reviewed. Chapter 2 deals with the application of the Least-Squares approach to one-dirnensional (slab geometry) neutron transport problems. In Section 2.1, we analyze why a Least-Squares discretization directly applied to the transport equation is not accurate in diffusive regimes when combined with simple discrete spaces that use piecewise linear basis functions in space. 
To cure this problem, in Section 2.2 we intorduce a scaling transformation is introduced, which plays a key role in this thesis and is applied to the transport operator prior to the dis cretization. For the scaled Least-Squares functional, V -ellipticity and continuity are proved with respect to a sin1ple un-sealed norm in Section 2.3. Here we also obtain simple bounds for the discretization, although they are only valid for nondi:ffusive regimes. 'This is a result of the V-ellipticity and continuity constants dependence on the total cross section CJt and the absorption cross section CJ a, which are the coefficients in the transport equation that deter Inine how diffusive a region is (see Section 1.2). For diffusive regimes, O't is very large, which causes the simple error bounds in the unsealed norm to blow up. With respect to a scaled norm, the V -ellipticity and continuity of the Least-Squares bilinear form with constants in dependent of Ut and CJ a is proved in Section 2.4, which is the central part of this thesis. For a variety of discrete spaces, the V -ellipticity and continuity with constants independent of Ut and era are the foundation for the discretization error bounds in Section 2.5, which remain valid in diffusive regimes. The first class of discrete spaces consists of spaces with functions that can be expanded into the first N normalized Legendre polynomials in angle and are piecewise polynomials in space, whereas the second class of spaces is for.med by functions that are piecewise polynomials in space as well as in angle. In Chapter 3 we generalize the theory of Chapter 2 to three-dimensional problems in x-y-z geometry. In Section 3.1 we prove the V -ellipticity and continuity of the Least-Squares bilinear form with constants independent of O't and O'a Section 3.2 gives an introduction on spherical harmonics, which are used as basis functions for the discretization in angle (PN-approximation). Finally, in Section 3.3 we establish error bounds for the Least-Squares discretization using spherical harmonics as basis functions in angle and piecewise polynornials on a triangulation of the spatial domain as basis functions in space. Chapter 4 presents numerical results for slab geometry. In Section 4.1 we introduce the discrete ordinates SN flux and the PN moment equations, which are serni-discrete forms of the transport equation. All numerical results in this chapter are based on a Least-Squares discretization of these forms using piecewise linear or quadratic basis elements in space. In Section 4.2 we summarize some properties of the Least-Squares discretization. In Section 4.3 we describe the components of full multigrid solvers, that were developed for the SN flux as well as for the moment equations and implemented in C++ under use of a special designed array class. We also include convergence rates for the multigrid solvers in this section. In Chapter 5 we summarize our results and give some recommendations for future work. 1.2 Neutron Transport Equation and Diffusion Limit 1.2.1 Neutron Transport Equation Transport theory is the mathematical description of the migration of particles through a host mediurn. Particles will move through a host medium in con1plicated zigzag 2 PAGE 11 paths! due to repeated collisions and interactions with host particles. As a consequence of these collisions and interactions! the particles are transported through the host medimnj which explains the name transport theory. 
Transport processes can involve a variety of different types of particles such as neu trons, gas rrwlecules, ions, electrons, photons, or even cars, moving through various back ground media. All of these processes can be described by a single unifying theory, since they are all governed by the same type of equation, a Boltzmann transport equation (Duderstadt and Martin [17]). The origin of this theory is the kinetic theory of gases, in which case the transported particles and the host particles are equal to gas molecules. For this situation Boltzmann formulated his famous nonlinear equation in 1877 (Broda [9]). Transport theory currently plays a fundamental role in many areas of science and engineering. For instance, the diffusion of light through stellar atmospheres (radiative transfer) and the penetration of light through planetary atmospheres is fundamental for astrophysics. Moreover, radiation therapy, shielding of satellite electronics, and modeling of semiconductors require transport calculations. Further, the transport of vehicles along highways (traffic flow problems) and the random walk of students during registration can be analyzed by transport theory. The transport of neutrons through matter, which is considered in this thesis, is a major part of transport theory, because of its importance for the design of nuclear reactors and nuclear weapons. In the mathematical description of the transport of neutrons through Inatter three main interactions must be taken into account. After a neutron collides with a host nucleus, it can either get built into the nucleus (absorption), or it can be reflected, so that its travel direction and its energy have changed (scattering), or it can cause fission, that is, the nucleus breaks into two smaller parts and neutrons and energy are released. Since th.e collision of a neutron with a nucleus and its result are uncertain, the mathematical formulation of neutron transport is based on probability. Therefore! the quantity computed in neutron transport is the expected density _iV(r:j 0., E, t) of neutrons at position r: moving in direct;ion l1 with energy E at time t. Although the knowledge of the simple particle density P(L t), which specifies the density of neutrons independent. of their travel direction and their energy, would be sufficient for most applications! there is no equation that adequately describes this quantity. For this reason the density is further subdivided into the density N(.r:, E, t) of neutrons with a specific travel direction and a certain energy. Defining the phase space as the space spanned by the location vector and the velocity vector or equivalently spanned by the location vector, the unit vector describing the travel direction and the energy of a neutron, the density N can also be viewed as t.he phase space density. An equation for N(r:, E, t) may be obtained either in an abstract way from Liou ville's theorem (Cercignani [15]), (Osborn and Yip [46]), or from a simple balance argument, based on the neutron conservation within a small volume element of the phase space (Lewis and Miller [34]). Both ways result in the following balance equation for the expected neutron density N: &N -= -11 v'VN-rr,vN + s. iJt --(1.1) Here, v = [jji_ is the speed of a neutron with mass mN that has energy E. 
The probability ym:;; that a neutron will collide with a nucleus while traveling along a path of length dl is given by O't dl, where CTt(.I:, E) is the so called total cross section1 Since the neutron density is, in most applications, much smaller than the density of the host nuclei, neutron-neutron 1The reciprocal of the total cross section is called mean free pat.h (mfp), since this is exactly the length that a neutron can travel on the average before encountering a collision. 3 PAGE 12 collisions can be neglected, so that O't is assumed to be independent of the neutron density N. Otherwise, the balance equation (1.1) would be nonlinear. The first term on the rigbt-hand side of (1.1) represents loss of particles due to streaming, the second term represents loss due to collisions, and s ::::: s(r, ?., E\ t) represents both, implicit sources of neutrons due to inscattering and explicit sources. This explains why (1.1) is a balance equation. In the following only steady state problems are considered. Letting 1/;(r_, D., E, t) := vN(r_,D., E, t) be the angular flux of neutrons, the steady state form of (1.1) becomes (1.2) Further, fission as a possible interaction is excluded in the following, so that, in addition to an external source, s includes a term s8 (r, Q, E) that describes the neutrons that are scattered into the direction Q and energy E from some other direction Q' and some other energy E'. The scattering source term can be modeled as (Lewis and Miller [34, p.35]) 00 s, (r_, l., E) := J J o-,(r_, D.' __,D., E' __, E)1/;(r_, D.', E') di:l' dE', 0 S' where S1 denotes the unit sphere in ill3 and the scattering cross section O's is the probability that a neutron will be scattered from D.' -+D. and E'-+ B. In most ca..-,es 0'8 depends only on the angle between D.' and il, so O's::::: O's(r,il' il, E'-+ E). If O's is also independent of' Q, the scattering is said to be isotropic; otherwise, it is called anisotropic. In most applications, it is appropriate to discretize in energy to form what is known as the multi-group-equations (Duderstadt and Martin [17, p. 407]) (l,ewis and Miller [34, p. 61]). After dividing the energy range 2 into subintervals [Ek, E' +dE] and denoting by 1f} the Hux of neutrons with energy in this interval, this discretization results in a set of so called single-group-equations n. \I?J} + "'' E* can be neglected, the energy range can be assumed to be a bounded interval of the form [0, E*]. 4 PAGE 13 which is an equation for the single-group angular flux 1/J(r_, m, to be determined for all points r_ = (x, y, z) in a region n c IR3 and for all travel directions l.l_ on the unit sphere S1 The operator P is defined by P 0 for 11 < 0 P,P(z,p) := j 1/;(z,p') dp'. -1 l (1.6) (1. 7) The boundary conditions in (1.6) specify again the inflow of neutrons into the slab, since at the left end z, of the slab the inflow directions are given by I' > 0, while at the right end z,. the inflow directions are given by J1 < 0. 1 -I I' f Figure 1.1: Computational domain for slab geometry In neutron transport theory it is cornmon to introduce the absorption cross section a a:= O't-a8 Then the transport operator in three-dimensions becomes L:=l.l_ +!7t(I-P)+e7aP, (18) and for slab geometry we have a [.:=I' az + !7t(I-P) + e7aP. (1.9) 5 PAGE 14 1.2.2 Diffusion Limit Simple approximations to the transport operator 1 based either on ad hoc physical assumptions such as Fick's law 3 (Lamarsh [27, pp. 
125-137]) or on a P1-approximation (Case and Zweiffel [14, Section 8.3]), which assumes that the flux is has an expansion into the first two Legendre Polynomials, result in a diffusion equation. This indicates that diffusion theory is related to transport theory. Indeed, transport theory transitions into diffusion theory in a certain asymptotic limit, called the diffusion limit. When O't oo and;;:1, equations (1.4) and (1.6) respectively becomes singular. By dividing (1.4) or (1.6) by r71 it easily follows that the limit equation is (IP),P = 0, which is fulfilled for all functions that are independent of direction angle 0. and fl, respectively. Moreover, when cr1-+ oo and 1 in a certain way, the limit solution of (1.4) and (1.6)1 respectively, will be a solution of a diffusion equation. This was discovered more than 20 years ago in the work by Larsen [28], Larsen and Keller [31] and Habet!er and Matkowsky [23]. To be more specific, we consider first the slab geometry case and assume that the slab has length 1, which can be established by a simple transformation (see Chapter 2). Following the recent published summary of Larsen [30], we introduce a small parameter c and scale the cross sections and the source in the following way: q(z,fl) 1 (Tt --7 -j where a: is assumed to be 0(1). Then the transport equation becomes [ i} l l /;(z,p) := f1 az P) + wP 1/J(z,fi) = Eq(z,fl). (1.10) (1.11) In combination with the special scaling (1.10), the diffusion limit is then defined by the limit c-+ 0. The physical meaning of scaling (1.10) is that the total cross section is large, so the syste.m is optically thick, whereas the absorption cross section and the external source arc srnalL We note that there are many scalings, which are capable to express this physical situation. However scaling (1.10) stands out since the diffusion equation d 1 d(z) ---+ O'a(z) = Pq(z,p) dz 30'1 dz (1.12) is invariant under it in the sense that, if the substitutions (1.10) are inserted into (1.12), then the resulting diffusion equation is independent of the scaling parameter c. Since equation (Lll) is of singular perturbation form, general boundary layers can be expected. Therefore, the solution is decomposed as 1/J(z,fl) = 1/JI(Z,fl) + 1/Jn(z,fl), where '!/JJ denotes the interior solution some mean free paths away from the boundary, and j;B denotes the solution near the boundaries. To determine 1.h, the following asymptotic expansion ansatz (Friedrichs [19]) 00 PAGE 15 is inserted into (1.11). By equating the coefficients of different powers of, it can be shown (Larsen [30]) that 1/;0 is independent of angle, so 1j;0(z, p) = 1j;0(z), and that 1j;0 (z) is a solution of the diffusion equation (1.12). To obtain the boundary layer part "IPB, an asymptotic expansion of the boundary layer is perforrned) which is then rnatched with the interior expansion. It follows (Habetler and Matkowsky [23]) that the boundary layer part ,p8 decays exponentially with the distance from the boundary and that its leading tern1 is also independent of the direction angle. Altogether, this results in the following diffusion expansion 1/;(z, p) = q)(z) + s PAGE 16 Because of the property that the analytical solution of the transport equation IS converging in the diffusion limit to a solution of a diffusion equation in the interior of the slab, the discretization of the spatial dependence is much more difficult. For accuracy reasons, the discrete solution must have the same property. 
Therefore, this requires a discretization of the transport operator that becomes a good approximation of a diffusion operator in diffusive reg1mes. By applying the asymptotic expansion technique introduced in Section 1.2, to the discrete solution, Larsen, Morel, and Miller [32) analyze the behavior of various special discretizations in the diffusion limit. In the discrete case, the mesh size h has to be considered as a second parameter besides the parameter c. Therefore, they define in their work the following two different limits. If, for a fixed mesh size h, the discretization approximates a diffusion operator in the lirnit c -r 0, then the discretization is said to have the correct thick diffusion limit. On the other hand, if the mesh size h varies linearly with c in the limit c -r 0 and this limit results in a consistent discretization of a diffusion operator, then the discretization is said to have the correct intermediate diffusion limit. Since standard finite difference discretizations, such as upwind differencing, fail to have a correct thick diffusion limit, special discretizations have been developed that behave correctly in diffusive regimes. Among them are the Diamond difference scheme and the difference schemes of LundWilson and of Castor) which, according to the analysis of Larsen, Morel, and Miller [32], give the correct thiek diffusion limit for the cell average flux. Moreover) Finite-Element disc.retizations have been applied for spatial discretiza tion of the neutron transport equation in different ways. The direct Galerkin approach to the first order integro-differential form (1.6) of the transport equation, as considered first theoretically by Ukai [50] and numerieally by Martin [40], does not have the correct behav ior in the difl'usion limit, except when special discontinuous finite elements are used. 'l'he use of discontinuous finite elements results, for exarnple, in the Linear Discontinuous (LD) discretization (Alcouffe et al. [2]) and the Modified Linear Discontinuous (MLD) scheme (Larsen and Morel [33]). MLD has the additional property that with a suitable fine spatial mesh, it can resolve boundary layers at exterior boundaries or at interior boundaries between media with different material cross sections. Further, a variety of Ritz variational formulations have been proposed (see Kaplan and Davis [26] for a summary). CI'hey have the self-adjoint second-order even-parity4 form of the transport equation as its Euler equation and lead, independent of the choice of the discrete Finite-Element space, to correct diffusion limit discretizations. However, the evenparity form of the transport equation is only valid for nonvacuurn regions and becomes very tedious for anisotropic scattering or anisotropic sources (Lewis and Miller [34, p. 260]). For the solution of the discrete system, a simple splitting iteration known as source iteration or transport sweep has been used in the past. Because of the slow convergence of this iteration for diffusive regimes (convergence 10( -$$),the Diffusion Synthetic a, Acceleration (DSA) method (Larsen [29]) was developed, which uses a diffusion approxima tion to accelerate the source iteration. By spectral analysis, Faber and Manteuffel [18] have shown why this method is successful for problems with isotropic scattering. However, for problems with anisotropic scattering, DSA is less effective. Tvloreover, rnultigrid methods have been employed for the solution of discrete neutron transport problerns. 
For the LD scheme, a multigrid algorithm in space was developed by Barnett, Morel and Harris [4] which proved to be effective even for highly anisotropic 4It can be shown [35] by a certain transformation that the even-parity form is closely related to the Least-Squares formulation considered here. 8 PAGE 17 problems. For isotropic problems, this algorithm is competitive with DSA, although it uses an expensive block-smoothing. The multigrid algorithm in space of Manteuffel, McCorrnick, Morel, Oliveira and Yang [36] for isotropic problems, discretized in space by the MLD scheme, employs a special operator induced interpolation and has been ported very efficiently to a parallel architecture [37]. For anisotropic problems a technique called multigrid-in-angle was developed by Morel and Manteufl'el [44]. This scheme involves a shifted transport sweep to attenuate the error in the upper half of the moments, so that the remaining error can be approximated by the solution of a problem discretized in angle based on only the lower half of the rnoments. Recursive application of this procedure leads to an isotropic problem on the coarsest level, which can be solved by a multigrid method in space. For higher dimensional problems, the discretization of the angle dependence also becomes a problem. For problems with isolated sources in a strongly absorbing medium, anomalies in the flux distribution, called ray effects (Lewis and Miller [34, p. 194]), are likely to arise in combination with a discrete ordinates (SN) discretization. The SN discretization causes a loss of rotational in variance, since this discretization transforms the fully rotational invariant transport equation into a set of coupled equations that are at most invariant under few discrete rotations. Thus: an azimuthally uniform flux, for example, is approximated by a set of 8-functions at discrete angles, which can be very poor if the number of discrete angles is not sufficiently large. One potential remedy is a PN discretization) which is a spectral Galerkin method using spherical harmonics as basis functions. This discretization results in a fully rotational invariant discrete problem. However) the coupling of the discrete equations is complicated and the treatment of boundary conditions is less straight forward. As in the one-dimensional case) for higher dimensions the discretization in space must have the correct behavior in the diffusion limit in order to obtain accurate discrete solutions for diffusive regimes. The direct extension of the appropriate one-dimensional disc.retizations is complicated, however. BOgers, Larsen and Ada1ns [7] have shown that the linear discontinuous (LD) finite element discretization on rectangles does not yield a correct diffusion limit discretization, whereas the MLD discretization does. However, the efficient solution of the discrete system resulting from the MLD discretization is an open problem. Applying a similar multigrid algorithm, which was developed by Manteuffel et a!. [36] for the one-dimensional case, would require the extension of the operator induced interpolation to higher dimensions, which is complicated. Morel et al. are in the process of developing a method for three space dimensions based on the even-parity form of the transport equation and using a PN discretization in angle. We conclude that an arsenal of highly specialized computational methods exists) whose design is adapted for particular transport problems. 
However, there is lack of a general systematic solution approach that includes existence theory of the analytic and discrete solution, error bounds for the error of discretization and guidance for the development of an efficient solver of the resulting discrete problem. Especially for higher dimensional problems, such an approach seems to be needed. 1.4 Least-Squares Approach In this section) we introduce a systematic solution approach to the neutron transport equation that relies on a Least-Squares self-adjoint variatio.nal formulation of (1.4), and we summarize the associated standard FiniteElement theory. The Least-Squares approach can be considered as a systematic solution approach: since it includes theory for the existence and uniqueness of the analytical and discrete problem: as well as bounds of the discretization error for a whole class of Finite-Element spaces. Furthermore, this approach will guide the 9 PAGE 18 development of a Multilevel Projection Method (McCormick [43]) for the efficient solution of the resulting discrete system. A Least-Squares Finite-Element discretization with piecewise linear basis functions in space directly applied to (1.4) does not have the correct behavior in the diffusion limit (see Section 2.1). For this reason, the scaling-transformation [P + r(l-P)], with parameter r E IR+ specified later, is applied to the transport operator prior to the discretization: C [P + r(I-P)J[l.J. \7 + o-,1-o-,P] = P(l.J. \7) + r(IP)(l.J. \7) + ro-1(1-P) + (o-to-,)P P(l.J. \7) + r(IP)(l.J. \7) + ro-,(1-P) +o-aF, (1.17) where in the last equation ( O'"t-cr s) was substituted by the absorption scattering cross section era. In this transforrned operator, the Least-Squares variational formulation of (1.4) is given by min F(w), EV with F(,P) := J J [/;(:c,l.J.)-q,(z:,l.J.)]2 drldr, (118) n S' with q, =Sq. The Hilbert space V with underlying norm II llv will be specified later. A necessary condition for 1/; E V to be a minimizer of the functional F in (1.18) is that the first variation (Gateaux derivative) ofF vanishes at 1j; for all admissible v E 'V, resulting in the problem: find lj; E V such that a(,P,v):= j jcwCvdrldr = j jq,CvdOdr VvEV. (1.19) n s1 n s1 The essential part of the theory is to show that the symmetric bilinear form a(,) in (1.19) is V-elliptic, i.e., there exists a constant Ce > 0 such that, for all v E 'V, a(v,v)?: C, (120) and continuous, i.e., there exists a constant Cc > 0 such that, for every u, v E V, la(u,v)l :": C, llullv llvllv(121) The proof of the continuity is straightforward, but the proof of the V-ellipticity is difficult and tricky. Denote the standard inner product and associated norm of L2('R x 5'1 ) by (u, v) := ./ J u v' dOdr; n S' I \fu, v E L2(1?. x 51), where v' is the complex conjugate of v. Using (1.20), (1.21) and the assumption that q,(z:,l.J.) E L2(R x 51), which ensures that the functional l: V ([); l(v) := J J q, Cv drldr n s' is bounded (!l(v)l :": c;1'llq,llllvllv ), then the Lax-Milgram Lemma (Ciarlet and Lions [16, p. 29]) can be applied. 
It follows that problem (1.19) is well posed in the sense that its solution exists, is unique and depends continuously on the data q8 The latter follows from 10 PAGE 19 so For the Least-Squares Finite-Element discretization of (1.19), the Hilbert space V is replaced by a finite-dimensional subspace V" c V, and (1.19) becomes: find >!Jh E Vh such that (1.22) The existence and uniqueness of a solution 1/Jh E Vh of the discrete problem (1.22) follows again from the Lemma of Lax-Milgram since, as a finite-dimensional space, Vh is a closed subspace of the Hilbert space V and is, therefore, also a Hilbert space with respect to the inner product of v restricted to v". By subtracting ( 1.22) from ( 1.19), it follows immediately that the error is orthogonal to vh with respect to the bilinear form a(-, l (1.23) The Cauchy-Schwarz inequality and (1.23) lead directly to Cea's Lemma (Brenner and Scott [8, p.62]): a( 1/J ,Ph, 1/J 1/Jh) :<; a( 1/J -vh, ,P -vh) I lib->!Jhllv :<; (C;c' min 111/J-vhllv, v c: vhEVh (1.24) with the use of the V-ellipticity (1.20) and the continuity (1.21). By (1.24), the problem of finding an estimate oftbe error is therefore reduced to estimating min II-1/JhllvThese vhEVh kinds of estimates are pl'ovided by approximation theory for a wide class of spaces Vh. For example, when we consider for simplicity only a semi-discretization in space where Vh is formed by piecewise polynomials of degree r, v = Hm(n) X 2(S1 ), vh c v, and the exact solution is in JfC+1(n) x L2(S1 ), it can be proved (Ciarlet and Lions [16, Theorem 16.2]) that (1.25) where h is the maxirnurn mesh size of the triangulation of n used and llvllm,O := [ L j j ID"vl'dlldr] 112 lo:l::;m R 51 denote the standard Sobolev norm and semi-norm (Adams [1]), respectively. Here, we use the standard multi-index notation 3 IPI = LiJ; i::::::l for ;3 := (;J1, ;3,, ;Js). 11 PAGE 20 For Vh forn1ed by piecewise polynomials of degree 1"1 the combination of (1.24) and (1.25) results in the overall error bound (1.26) The crucial point here is that we have shown V-ellipticity (1.20) and continuity (1.21) with respect to a weighted norm with constants Ce and Cc independent of crt and Ua1 so that an error bound similar to (1.26) for a discretization in space and in angle holds independent of the size of crt and cr a I-Ience1 the Least-Squares Finite-Element discretization of the scaled transport operator with piecewise polynomials of degree r 1 as basis functions yields an accurate discrete solution even in the diffusion limit. 12 PAGE 21 CHAPTER 2 SLAB GEOMETRY The Least-Squares approach is applied in this chapter to the one dimensional (slab geometry) neutron transport equation (1.6). Throughout this chapter, we assume without loss of generality the following: 1) The total scattering cross section is constant in space, so O"t(z) :.:::: CTt. This can be established by the transformation fO't(s)ds z' = "----. (2.1) O't(s) ds ,, The transport equation then becomes [11 + O';(I-P) + P] 1/;(z', 11) = q' (z', 11), with z, z, ' 0'; = J O't(s) ds, = ::i:i J O't(s)ds, '(' ) q(z,/1)1 ()d q z ,/1 = O't(z) O't s s. ,, Zt ,, 2) The slab has length 1, so (z,z,) = 1. If the transformation (2.1) was already applied, this is directly fulfilled; otherwise, this can be established with the simple transformation z" = -( ) This changes O"t, 17a, and q to Zr-Zl 3) We impose homogeneous (vacuum) boundary conditions, so m(l') =: 0 and g,(l') =: 0 in (1.6). This can be done in the following way. 
Define { g1(11) for 11?:: 0 1/!&(Z,/1) := g,(!') for 11 < 0 Then, clearly, 1/J,(z,/1) E H1([z1,z,]) x L2([-1, 1]), so that L1j;, is well defined and we can solve the problem l1fo = q -l?j;b with homogeneous boundary conditions. The original solution is then given by PAGE 22 2.1 Problems with Direct Least-Squares Approach In the following, we give an explanation as to why a Least-Squares Finite-Element discretization applied to (1.6) using piecewise linear basis functions in space does not, in general, yield a correct diffusion limit discretization. We recall that the Least-Squares vari ational formulation of (1.6) is given by and mm F(,P); with '" 1 F(1/J) := j j [l1/J(z,p) -Eq(z,p)]' dpdx, (2.2) 2! -1 V := { v(z, p) E L2(D): E I}(D), v(zl,!J) = 0 for I'> 0, v(zr, p) = 0 for I'< 0}. In (2.2) we used the parameterized form (Lll) of the transport equation, since it is better suited for a diffusion limit analysis. For the discretization of (2.2), the rninimization of the Least-Squares functional is restricted to a finite-dimensional subspace Vh c V. Without loss of generality, in the follow ing analysis for the discretization in angle we use a P1 approxi1nation) which assumes that the angle dependence of the solution has an expansion into the first two Legendre Polyno mials. One reason for this is that a semi discretization only in angle by a P1 approximation results in a diffusion equation [14) Section 8.3). Second) the behavior of the discretization in diffusive regimes, where according to (1.3) the exact solution is nearly independent of angle, is analyzed here; thus, a P1 approximation allowing a linear dependence in angle is sufficient. For the discretization in space, we use piecewise polynomials on a partition Th of the slab. Altogether, this results in the discrete space Vh := {vh EC0(D) :vh(z,p)=o(z)+tJ(x), where o,1 ElP,('TJ,); vh(z,,p) = 0 for I'> 0, vh(z,p) = 0 for I'< 0}, (2.3) where IPr(Th) denotes the space of piecewise polynomials of degree ::; ron the partition Tj1 of the slab. By the asymptotic expansion introduced in Section 1.2, the minimizer of the Least Squares functional can be characterized as follows. Theorem 2.1 (Characterization of Least-Squares minhnizer) Let the Least-Squares functional P and the discrete space Vh be given as defined in (2.2) and (2.3) respectively. Suppose V'h E vh minimizes F restricted to vh. Suppose further that E :S 1 and that 1/Jh has the asymptotic expansion in E given by 1/Jh(z, p) = + tJHz), with 00 =I; E"ryv(z); Hz):= I;c"bv(z), /)=::0 v=O where Tj11, 611 E IPr(Th) are independent of parameter c for all v. We then have: (i) b0(z) =e 0. (ii) %(z) = -b1(z). 14 (2.4) PAGE 23 (iii) Let U" := {rJo E IP,(Th): 7/o(z,) = 7/o(z,) = O,ryo fulfills (ii) for some 51 E .IP,(Th)}. Then for all 7/0 E U h: 1,,.1_,' d j'' d :J'lo'lo + aryoryo z = qryo z. zr zr Proof. We first prove (i). Using expansion (2.4) in (1.11) we have and, therefore, F(V;h) = L F,(,Ph), v::::-2 with (2.5) I, J j %(z)bo(z) + bo(z)b,(z) dz, ,, and F,(V;h) independent of for v 0. For :S 1, it is possible to bound F(,Ph) from above independent of E by (2.6) since lPh minimizes F and 0 E Vh Therefore, we must have F_z(lPh) = 0 and F_1(,P) = 0, since otherwise F( 1/Jh) ___,. oo in the limits -r 0, which contradicts (2.6). In combination with (2.5), we conclude that So(z) = 0. To prove (ii), by virtue of (i) we can restrict the minimization ofF to the space where ry,(z), 6v(z) E IP,(T,,) are independent of E for all v. 
A necessary condition for 1/;h E Wh to minimize F is that the first variation ofF vanishes at 1/Jh for all admissible 'Wh E Wh, that is, (2.7) 15 PAGE 24 For wh E Wh we have Therefore, (2.7) is equivalent to z,, 1 ./ ./ 1-'2 [ + 61 + + 61 <51)] df.'dz + cJ1 + s2 I, + O(s3 ) Zl -1 (2.8) '" 1 = s2 ./ ./ f.'2qb; + aqrJo df.'dz + O(c3), ZJ -1 where ,, 1 I,../ ./ f-'2 [liJ6ry; + l,ry;) + (%<1z + 6,6,) + (i)iryb +iii 0 and for all wh E Wh, in particular for Wh = ,p,, it follows that Thus, z,, 1 0= ./ Zl -1 _, 7 r10 = -u1. dz. ,, Finally, we prove (iii). Because of (ii), we can restrict minimization ofF to the space Wh := {wh E Wh: = -b,(z)}. The choice wh E Wh in (2.7) will zero out the 0(1) and O(s) term in (2.8). Comparing the O(s2 ) term on the left-hand and right-hand sides gives ./,, 2 [(-' -6 ') (-' 6 7 6 ) (7 6' l] 2'' 6' 2 ,_ d 3 + + ry, 2 + u2 2 + CY "1 rJo + TJo 1 + 5"' 1 + a z ,, (2.9) = l q6i + dz 16 PAGE 25 for all ryv E Wh with v 2: 0 and for all Dv E Wh with v > 1. From the choice 6j c= 0, ry0 c= 0, = and 62 = 82, we conclude that ,,, j (if; + 6, )' dz = 0 ==? if; = -6,. (2.10) Substituting (2.10) into (2.9) results in 2 6' 3q 1 + 2aqr}o dz. Choosing 61 = 0, then integration by parts leads to z,, Zr j + 2fr2iforyo dz = 2a j qryo dz, ,, which with (ii) and after division by 2a becomes Jz, 1 d j'' d 3ry0ry0 + aryor}o z = qryo z. (2.11) Because of the choice WhEW", equation (2.11) holds for all ryo E Uh D One major irnplication of Theorem 2.1 is that, when f}v(z) and 6v(z) are continuous piecewise linear functions, (ii) can only be fulfilled if ifo is a linear function. Otherwise, 81 = -TiS is a step function, which would not be continuous. Taking the boundary conditions into account, it fOllows that Tfo:::::: 0. Therefore, U" = {0}, so that (iii) is a vacant statement and does not contradict the fact that fio = 0 is a solution. Consequently) in the diffusion limit c.-+ 0 the discrete minimizer 'h converges to 'h = 0, independent. of the choice of the right hand side q. This shows that the Least-Squares Finite-Element discretization of (1.6) with linear basis elements in space does not give a correct diffusion limit approximation, except in the case q = 0. For a different way of proving this result, we refer to (Manteuffel and Ressel [38]). On the other hand, for piecewise polynomial basis functions of degree 1 > 1, con dition (ii) does not restrict ifo to a linear function. Therefme, Uh contains also nontrivial functions, so that (iii) implies that ?fo is a Galer kin approximation of the clif[usion equation -&4/' +a= q. Thus, the Least-Squares Finite-Element discretization with piecewise poly nomials of degree 1 > 1 yields a correct discrete difrusion limit solution. However, numerical results for a discretization in space by piecewise quadratic basis functions show that applying a scaling transfOrmation (introduced in the next section) prior to the discretization enhances the accuracy. 2.2 Scaling Transformation In this section, we introduce a scaling transformation that is applied to the transport operator prior to the Least-Squares discretization. This scaling transformation plays a key role in this thesis, since it guarantees the accuracy of the Least-Squares discretization in 17 PAGE 26 diffusive regimes even for simple Finite-Element spaces, such as spaces using continuous piecewise linear elements in space (see Section 2.5). To motivate the scaling transformation we introduce the moment reprac;entation of the flux. Let P,(J1) denote the 1-th Legendre polynomial. 
The normalized Legendre polynomials Pl(Jl.) := J2Y+IP,(Jl.) form an orthonormal basis of L2([-l, 1]): 1 J Pk(Jl.)Pl(Jl.)dJi. = 6kl: -] (2.12) where 8kl denotes Kronecker delta, i.e, 6k1 :::= 1 for k :::= I and 8kl = 0 otherwise. Assuming that 1/!(z, Jl.) E L2([-1, 1]) for all z E [z,, z,], then 1/! has the following expansion (moment representation) in angle: 1/!(z,Jl.) = L . However, the different terms in the operator l, as defined in (1.11), are unbalanced (there are ), 0(1) and O(c) terms), so that different components of the approximation error are weighted differently in The leading terrn of lis P), which means that the error in the higher moments is weighted in this norm very strongly in diffusive regimes (very small c::), even though this part is not important according to the diffusion expansion (1.13). On the contrary, the error in moment zero, which is the important part in diffusive regimes, is hardly measured in the norm since it is weighted by c:. The basic idea is, therefore, to scale equation (1.11), thus changing the weighting in the norm used in the Least-Squares discretization to determine the best approximation to the exact solution in the discrete space. Define for r E m+ the following scaling transformation and its inverse: S := P + r(IP), 1 s-1 = P +-(IP). r (2 15) After applying the scaling transformationS from the left and dividing by s, equation (1.11) becomes 1 1 o PAGE 27 where q, := 5q and 1 a,P 1 Ehf; ,. a,P -51'-= -PI'-+ -(T-P)l'[)z az c [)z Clearly, choosing r = O(c) will increase the weight for moment zero and reduce the weights for the higher mornents. Equation (2.16) can be balanced further by a scaling transformation from the right. Let the domain of operator in (2.16) be the Hilbert space V. Then we define the space V by V:=5-1V, so that (2.17) v = 5-1 v for all v E V and 5v = v for all v E V. Scaling (2.16) also from the rigbt results in 1 a{; r2 .. .. -17/J = ,P = -5!'8-;:;-+-,(I-P)1p + rxP,P = q, [ uz [ (2.18) where 1 l ,-2 -8118 = -(r-r2)(PI' + 11P) + -11I. E E E For r = O(c) we have;' E 0(1), so that in (2.18) tbe derivative of moment zero and one and the moments themselves are weighted equally. Moreover, we point out that the double-scaled operator ,8 can be bounded independent of s. In the Least-Squares context, the additional scaling from the right can be avoided, smce ( ;[;-q,, ;[;-q,) ,PEV (2.19) = min (/J-q, ,/J-q,), >tEV which will simplify the boundary conditions and so also the computations. Further, for slab geometry, because of transformation (2.1) we may assume witlwut loss of generality that CTt and parameters are constant in space. However, fOr higher di1uensional problems, this cannot beAestablished, so that s :::: s(r). For inhomogeneous rnaterial, s(r) is in general discontinuous, so that the scaling parameter r, which was chosen to be O(s), would be discontinuous. To perform the scaling would then require to prescribe jump conditions in the scaled solution V across material interfaces. Therefore, we use the additional scaling from the right only as motivation for the choice of the scaling transformation and as a tool in the theory in Section 2.4, where we exploit the nice form of the double scaled operator (2.18). For another way of motivating the scaling by way of the moment equations, we refer the reader to Manteufl'el and Ressel [38]. 
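To make the effect of the scaling concrete, the following short Python sketch may help (it is purely illustrative: the quadrature-based projection and the sample angular profile are choices made here, not part of the computations reported in Chapter 4). It applies S = P + r(I - P) to a mildly anisotropic function of mu and confirms that moment zero is preserved while all higher moments are damped by the factor r.

    import numpy as np
    from numpy.polynomial.legendre import leggauss

    def P(v, w):
        # L2([-1,1]) projection onto angle-independent functions: (1/2) * integral of v dmu
        return 0.5 * np.sum(w * v)

    def apply_S(v, w, r):
        Pv = P(v, w)
        return Pv + r * (v - Pv)          # S v = P v + r (I - P) v

    mu, w = leggauss(16)                  # Gauss-Legendre nodes/weights on [-1, 1]
    v = 1.0 + 0.3 * mu + 0.1 * mu**2      # mildly anisotropic angular flux (illustrative)
    r = 1.0e-3                            # r = O(eps), the diffusive case
    Sv = apply_S(v, w, r)
    print(P(Sv, w) - P(v, w))             # ~0: moment zero is preserved
    print(np.max(np.abs(Sv - P(Sv, w))) / np.max(np.abs(v - P(v, w))))   # ~r: higher moments damped by r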
As outlined in Section 1.4, a necessary condition for '1/J E V to be a minirnizer of the Least-Squares functional (2.19) is that the first variation vanishes at 1j;, which results in the problem: find ,P E V such tbat a( ,P, v) := (,P,v) = (q, ,[.v) \1 v E V. (2.20) For a discretization of problem (2.20), the bilinear form a(-,) is restricted to a finite dimensional subspace Vh C V. In the remaining of this chapter, we analyze the error of this discretization for various subspaces yh. 19 PAGE 28 2.3 Error Bounds for Nondiffusive Regimes In this section we establish bounds in an unsealed norm for the discretization error of the Least-Squares discretization. However, in this norm it is not possible to prove V ellipticity and continuity of the bilinear form (2.20) with constants independent of parameters and a. In diffusive regimes, where is very small, these bounds blow up and are therefore useless. Nevertheless, the bounds for diffusive regimes that are derived in Section 2.5 are only valid for < 1/v'3, so that the bounds in this section can be used to cover the range [ 2: 1/v'3. As outlined in Section 1.4, the first step on the way to bounding the error is to prove V-ellipticity and continuity of the bilinear form aC, ) in some norm. From the view of standard elliptic boundary value problems, the choice V = H1([z,, z,]) x L2([-1, 1]) (Adams [1]) with the norm 2 ;'';1 (8v)2 2 llvlh_o := [}z + v dpdz Z! -1 seems natural. However) it is easy to see that the bilinear form a(,) cannot be bounded from below in this norm. Let Vk := V2 sin(brz) B(p) with for I' E [-8, 0] for p E [0, 8] otherwise Then, for all k E IN, we have Vk E H1([z,,z,]) X L2([-1, 1]) and llvklh.o = (k7r)2 + l. Some simple calculations show that a(v, v) <; Choosing 8 = -k1 then the bilinear form a(-,) is bounded for all k while lim llvklko = oo. 7r k-HXJ Thus, there is no lower bound for a(-,) in the norm 111 0 The next obvious choice is II [} 112 2 v 2 lllvlll := I' [}z + llvll (2.21) Closure here is with respect to the norm 111, so that Vis a Hilbert space. In the following, we bound the Least-Squares discretization error in norm (2.21) for various Finite-Element spaces. 2.3.1 Continuity and V-ellipticity Before we establish continuity and V-ellipticity of the bilinear form a(,), we summarize some simple properties of our operators. 20 PAGE 29 Lemma 2.2 (Properties of P, Sand pffz) For all 'UJ v E V 1 we have: (i) (Pu, v) = (u, Pv); and ((IP)u, v) = (u, (IP)v); P2 = P; and (IP)2 = (IP). Thus P and (I P) are orthogonal projections; (ii) (Pu, v) = (Pu, Pv); and ((IP)u, v) = ((IP)u, (IP)1;); (iii) II vii:<: II for scaling parameter r = 6 and f :<: 1. (iv) IIPvll' ::0: ( IIPvii-IIP(I-P)vll) 2 ; (v) v) ::0: 0; v z Proof. (i): '' 1 Zr 1 (Pu, v) = J J Pu v dpdz J Pu I v dpdz Zi -1 Z! -1 z ,. z ,. 1 I 2Pu Pv dz = I Pv I u dpdz Zl Z/ -1 '' 1 = I I u Pv dpdz = (u, Pv), Zt -1 and the second identity follows directly from the first. From the definition of P, it is obvious that P2 = P and, therefore, (IPj2 = (IP). (ii): follows immediately from (i). (iii): llvll2 = IIPvW + II(I-P)vll' :<: ;\11Pvll2 +II (IP)vll' = II;Svll', since 6 :<: 1. (iv): ,, l llpvll' =.I .I p2 [Pv +(IP)v]2 dp dz Z/ -1 1 2 j'' /1 2 2 = 3IIPvll + 2 p Pv (IP)v dp dz +lip( IP)vll Zj -] 21 PAGE 30 The mixed term can be bounded by the Holder inequality as follows: ,_ 1 ,_ 1 j j 112Pv (IP)vdpdz :S j IPvl j f1 [fli(I-P)v] d11 dz Zl -1 ZJ -1 :S JI U 1Pvl2 dz) 112 U j lf1(I-P)l/'12 dp dz) 112 1 :S y3 IIPvll llfl(I-P)vll Therefore 2 1 2 2 . 
2 llflvll ?: jj11Pvll y3 IIPvllllfl(l-P)vll + llfl(I-P)vll = IIPvll-llfl(I-P)vll) 2 (v): Applying integration by parts with respect to z, we get z,.l 1 Zrl j j f1 v dpdz = j f1 [v2(ze: f!)-v2(z,, 11)] dp/ j f! v dpdz. Zj -1 -1 Zl -1 Taking into account the boundary conditions for v E V 1 it follows that 1 (fl = j f1 [v2(z, fl)-v2(z,, p)) dp -1 0 It now easily follows from the Cauchy-Schwarz inequality that the bilinear form aC, )is continuous in the norm 111, since for any u, u E V ia(u, v)l = II(.Cu,[v)l :S II.Cuiiii.Cvll Here C, := [ ( 2 + ( )'], and we used the discrete Holder inequality in the last step. 22 PAGE 31 We prove now V-ellipticity of the bilinear form a(-,) wben a =F 0. Lemma 2.3 (V-ellipticity for "'=F 0) Suppose o =F 0 and let T = .,ffr. Tben there exists C, > 0 such that, for all v E V, a(v, v) 2: C, lllvlll2 whereCe:=min{f;,a, a2}, Proof. We have a(v,v) (J:.v, J:.v) 1 II iJv II' II iJv II' c' Pp.iJz +a (I-P)p.az + "' II (IP)vll2 + "'2IIPvll' (2.22) +P11-, Pv +-(IP)p.-, (IP)v 2o \ iJv ) 2"' \ iJv ) [ 8z s az The second mixed term can be written using (ii.) of Lernrna 2.2 as According to (v) of Lemma 2.2, the first term here is always positive and the second tern1 cancels with the first mixed term in (2.22), so that a(v, v) = :,IIPp. r +ct II (Ir + "' II(!-P)vll' + a'IIPvll' > C, lllvlll', with Ce := min o:2}, which proves the lemma. 0 For the more difficult task of establishing the V -ellipticity of the bilinear form a(, ) when a= 0, we need the following PoincarC-Friedrichs inequality. Lemma 2.4 (Poincare-EHedrichs Inequality) For any v E V, we have Proof. We have iJv(s, p.) f1 iJs 23 (2.23) for f1 2: 0 = for f1 :S 0 PAGE 32 Jz I" &v(&ss, I') I ds r for I' 2: 0 ,, [!'v( z, p.) I < I ds for I':; 0 z Taking into account that assumption (zr-zi) = 1 implies we obtain the lemma. D We are now in a position to establish Lemma 2.5 (V-ellipticity for a= D) Suppose that a-0 and 0 :S s :S 1, and let Then there exists Ce > 0 such that, for any v E V, a(v, v) 2: C, [llv[[[2 where Ce = Proof. Recall that 1 &v r &v r Lv = E Pf' &z + E(I-P)p &z + (IP)v + rxPv. Because of (i) in Lemma 2.2, we have a(v,v) = (Lv,Lv) r2 . 2r2 I &v ) + s4 ((IP)v, (1 -P)v) + $$IP)p az, (IP)v Analyzing the mixed term and using (i) of Lernma 2.2, we see that I av ) 1 av ) 1 av ) 1 av ) \(IP)p0z, (IP)v = \(IP)t' &z, v = \'' EJz, v -\ Pf' EJz, Pv 24 PAGE 33 The first term is always positive according to (v) of Lemma 2.2. Consider the following arithmetic-geometric inequality: for any r; Em+ and for any a, bE ill, 2ab < 7Ja2 +!C. We can thus bound the second term according to Therefore, the bilinear form a(-, ) can be bounded from below by Defming T2 T2 +-.11(1-P)vll2 -siiPvll' E E 7] 0 -IIPvll2 .llvll' IIPI'8 II' ,._ az II' so that so that ( 1 _b)= II (IP)vll2 II vii' (1 ) = liU--r the above bound thus simplifies to a(v,v) + C, llvll', with cl = 2. (-r-72 rn+r2(1--r)), ,;2 c c, = 2. (r' (16)-72 o) c:2 s2 cr; (2.24) By proper choice of q, we now need only establish that C1 and C2 are positive. Unfortunately, for large enough 6, C2 will be negative, so we will need in this case to readjust the terms in (2.24), which we do by way of the PoincareFriedrichs inequality. Case 1: 6 > Jj: From the PoincareFriedrichs inequality (2.23) and (iii) of Lemma 2.2, we conclude that II' 2: lll'vll2 2: IIPvii-IW-P)vll) 2 Since I' E [-1, 1], tben clearly Ill'( IP)vll :'::II (IP)vll Therefore, 1 1 y'SIIPvll-Ill'( IP)rll 2: y'SIIPvll-II( IP)vll > 0, where the last inequality follows from the assumption 0 ?:: g > i since 1 2 .. 
2 1 3 3IIPvll 2: 11(1 P)vll <= 3b 2 (16) <= 6 2 4 25 (2.25) PAGE 34 From (2.25), we get ( 1 )' ( 1 )2 0)11Pvii-IIJ.l(l-P)vll :?: 0)11Pvll-II( I-P)vll It then follows that II' :?: ( IIPviHI(IP)vll)' = v'0=TJ)' llvll' 1 12 1 2 ( 1!;)2 :?: 0) /fl13 II vii Thus, II 8vll' 1 2 J.l (jz :?: 13 llvll We now use (2.26) to rewrite (2.24) as a(v, v) :?: ( C,-;;, ) II' + II' + C, llvll2 :?: ( C,+ ;;, ) II' + + C,) llvll2 Choosing 17 ::::::: 5 } and using the fact that 6 ::=; 1 results in 1 ( r2 2 ( 1 1 ) ) r2 = c' (1 -li) + 7 26 52 :?: 52r2 > O, and c,-;;, = 1 (1 (1 _7 (1+ !;)) + r;) > r;, since r = 1/}2+ ;;, so r2 (1+ 5 2 ) < 1. Case 2: 6 < H= Choosing TJ = 24s, for cl and c, in (2.24) we obtain that and cl = 12 (1(1-24r2 ) +r2(1-1)):?: 2 > 0, E since 24r2 ::::::: < 1 for s < fi. v 22 26 (2.26) PAGE 35 Thus, altogether we have a(v, v) ::0: C, lllvlll2 with =-mm --1-->-mln --C r2 { 1 1 1 } 1 { 1 1 } e 521 21 1 262 54 52 1 26 2 1 which completes the proof. D From continuity and V -ellipticity of the bilinear form a(-, ) it follows directly from the Lax-Milgram Lemma (see Section 1.4) that problem (2.20) and all of its discretizations are well posed. The next step is to obtain discretization error bounds for a variety of discrete subspaces vh' which is done in the next subsection. 2.3.2 Error Bounds As outlined in the introduction, continuity and V-ellipticity of the bilinear form a(-,) lead directly to Cea's Lemma (1.24). Therefore, bounding the discretization error 1111/> -1/>hlll is reduced to the problem of bounding min 1111/>-vhlll, which is a problem of approximation theory and depends on the choice of the finite-dimensional space Vh. Here we consider two main classes of discrete spaces vh. The first consists of spaces with functions that can be expanded into the first normalized Legendre polynomials with respect to the direction angle f-1 and are piecewise polynomials of degree r in zona partition Th of the slab [z1, Zr]. This choice of the finite dimensional space V h corresponds to discretization by a spectral method in angle jJ and a Finite-Element discretization in space. In transport theory the spectral discretization in angle with the first N Legendre Polynomials as basis functions is also called a PN-1 discretization in angle. For any f(z) E Hm([z1, zr]), with 1 <; m <; r+ 1, let IT,f(z) denote the interpolant of f(z) by piecewise polynomials of degree r 2 1 on a partition of [z,, z,]. It then can be shown (Jobnson [25, p. 91]) that llf(z) -II,f(z)llv([' PAGE 36 Then the error of the truncated expansion can be bounded as follows Lemma 2.6 (Truncated expansion into Legendre polynomials) For r?: 0, let g(z, p) E H"([zr, z,]) x H2([-1, 1]) and let llN be defined as in (2.28). Then have: (i) IIIINgll :::; llgll; (ii) llim)(z)ll :'0 r(l!l) \lm:'O rand \lz E [z,,z,]; (iii) For any m r, we have c II amg II < N .Cs azm (2.30) with C independent of g and N. Proof. (i): liN is orthogonal projection with respect to the inner product of 2([-1, 1]), so IIIINgll'([-1,1]) :'0 llgl!p([-1,1]) and hence lll1Ngll :'0 llgll (ii): By definition (2.29) and integration by parts, we have 1 j1 am9 1 II Ergll = 21(1 + 1) .Cs azm PI(!') dp :'0 V'il(l + 1) .Cs azm L'([-1 1]). -] (2.31) Therefore, < 1 ll.c amg II "'1 I(/+ 1) 8 azm (iii): Since the Legendre Polynomials are an orthogonal basis, from (2.31) we obtain ___-II _g = 2 I: 1(m)(zW < .C ___ .Z::: . 
llam am II' oo II am II' oo 1 i)zm N azm L'([-1,1]) I=N I -S iJzm L'([-1,1]) l=N [/(/ + 1)]2 For l 2: 1 we have f so that the sum can be bounded by 00 1 I: [/(/ + 1 )]' I=N 00 00 1 00 1 ;1 4 4 :'0 4 I: (I+ 1)4 = 4 I: /4 :'0 4 /4dl = 3Ns :'0 3N2' I=N I=N+l N Therefore with C = .ji, which proves the lernrna. 0 28 PAGE 37 Theorem 2.7 (Finite-Element in space, spectral discretization in angle) Suppose that Th is a partition of the slab [zr, Zr] with maximum mesh size h. _Let V be given as defined in (2.21). Define N-1 Vh = L E JP,(T,,) for I= 0, ... ,N -I where 1P r (Th) denotes the space of piecewise polynomials of degree :::; r on the partition Th. Suppose 1 :S m :S r + 1 and let 1/; E V n (W"([z,, z,]) x H2([ -1, 1])) be the solution of (2.20) and 1/Jh E Vh be the solution of (2.20) restricted to Vh Then Proof. From Cea's Lemma (1.24), we have Now note that lllvlll :S llvll1 0 Therefore, by (i) ofLemma2.6, (2.27) and (2.30), we conclude < cl 1' 'II + c hm-l N 1 soc 1,0 2 which proves the theorem. 0 The second n1ain class of finite-dimensional spaces considered here are formed by functions that are piecewise polynomials in space z as well as in angle J-.L This choice corresponds to a FiniteElement discretization in both space and angle with rectangular elernents. Suppose that Th is a partition of the computational domain D = [z,, z,] x [ -1, 1] into rectangles T = [z;, Z;+J] X [l'v, l'v+!] of maximum diameter h. To be able to handle the boundary conditions properly, we assume in addition that (.z:1, 0) and ( Zr, 0) are nodes of the triangulation 'TJ,. By Th we deJine the discrete space: vhiT= L 'iTETh o:;f3,r'!:.r 29 (2.32) PAGE 38 For all v E V, let IIJ,v E V" denote an interpolant1 of v with respect to the partition T,. It can be proved (Ciarlet [16, Theorem 16.2]) that, for v E V n H"+1(D) the following bound for the interpolation error holds: where 0 :'0 rn :'0 k + 1 and llw+'(D) is the semi-norm of Hk+1(D) (Adams [1]). Combining Cea's Lemma and (2.33), we get Theorem 2.8 (Finite-Elements in space and angle) Let V", T,, h be given as defined above. Suppose 1/J E V n H"+1(D) is the solution of (2.20) and let 1j;, E 11" be the solution of (2.20) restricted to V" defmed in (2.32). Then we have: Proof. By Cea's Lernrna, we need only to bound llhD-lh1f;lllNote that, for all v E v n w+1(D), lllvlll :'0 :'0 llviiH'(D)' Thus, using (2.33) with m = 1, it follows that 1111/J-II,1/JIII :'0 111/J-II,1/JIIH'(DJ :'0 C h" 11/Jiw+>(Dl which proves the theorem. D We point out that the error bounds in Theorem 2.7 and Theorem 2.8 depend on the ratio In the V -ellipticity bounds in Lemma 2.3 and Lemma 2.5, the scaling parameter Twas chosen to be O(s). Therefore, when is small, C, is O(a) when a f 0, while C, is 0(1) when o = 0. In addition, for T = 0(), the continuity constant C, is 0(-;), so we which blows up for diffusive regimes, where is very srnall. However, numerical results show that the Least-Squares discretization of the scaled transport equation stays accurate in diffusive regimes. Thus, we conclude that the bounds, derived in this section, are not sharp enough to reflect the aceuracy of the Least-Squares discretization in diffusive regimes. In order to obtain error bounds that do not blow up in diffusive regirnes, it is essential to prove continuity and V -ellipticity of the bilinear form aC )with constants independent of parameters and a. This is done in the next section with respect to a scaled norm. 
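As a concrete illustration of the truncation estimate of Lemma 2.6 that enters Theorem 2.7, the following sketch (illustrative only; the smooth test profile exp(mu) and the quadrature resolution are arbitrary choices made here) computes the L2([-1,1]) error of the truncated expansion into the first N normalized Legendre polynomials; for a smooth profile the error decays much faster than the generic O(1/N) bound.

    import numpy as np
    from numpy.polynomial.legendre import leggauss, legval

    mu, w = leggauss(64)                  # accurate quadrature on [-1, 1]
    g = np.exp(mu)                        # smooth function of angle (illustrative)

    def Pi_N(g, N):
        # truncated expansion into the first N normalized Legendre polynomials p_l = sqrt((2l+1)/2) P_l
        out = np.zeros_like(g)
        for l in range(N):
            c = np.zeros(l + 1); c[l] = 1.0
            p_l = np.sqrt((2 * l + 1) / 2.0) * legval(mu, c)
            out += np.sum(w * g * p_l) * p_l      # coefficient (g, p_l) times p_l
        return out

    for N in (2, 4, 6, 8):
        err = np.sqrt(np.sum(w * (g - Pi_N(g, N)) ** 2))
        print(N, err)                     # rapid decay for smooth g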
2.4 Continuity and V-ellipticity with respect to a scaled norm In this section, which is the central part of this thesis, we prove continuity and 11-ellipticity of the form a(-,) in (2.20) with constants independent of parameters f and a. This is the foundation for the bounds in Section 2.5 of the Least-Squares discretization error that do not blow up for diffusive regimes. Throughout this section, we assume that r = f and a:::; 1. In order to obtain continuity and V -ellipticity with constants independent of c and a, we use a scaled norm. To motivate its choice, we look at the double-scaled (from left and right) r 2: 2, there are many different interpolants, depending on the choice of Lhe support absci!lsas and support ordinates on the rectangle, which are not specified here. For an overview of counnonly used lnterpolants for rectangles, we refer the reader to Ciar1eL [16, p. 129]. 30 PAGE 39 transport operator (2.18). Let V denote the domain of the single-scaled (only from the left) transport operator (2.16) and v = s-'v the domain of the double-scaled transport operator (2.18). Defining we see that the norrn 1 Q := -SpS = (1c) (Pp + pP) + Epl, [ 2 v 2 II [) llvllv := Q [)z + llvll for V E V would be a natural choice for bounding the double scaled bilinear form a(u, v) := (Su, SV). (2.34) (2.35) (2.36) However, because of the reasons mentioned in Section 2.2, it is desired to use the single scaled transport operator for the computations. Therefore, u."ling the relation V = s-1 V, we derive from (2.35) the following norm for v E V: llvllt = + 11"112 = \\ 1\' + IIS-1vll2 (2.37) = + IIPv+ P)vll' = II r + II P)vll 2 + IIPvll' = llvllv. We define v-v(z,,p)=Oforp>O; v(z,,p)=Ofor!i. PAGE 40 Employing the discrete HOlder inequality and using the assumption o :::; 1, we obtain !!Cull II + II P)ull +!!Pull V3 + PJul!' + IIPull'r' =V311ullv Thus, for all u, v E V, la(u, v)l C, llullv llvllv with C, = 3. To prove V-ellipticity of the bilinear from a(.,), we exploit the convenient form of the double-scaled transport operator and prove first that the double-scaled bilinear from a(-,) in (2.36) is If-elliptic. The V-ellipticity of the bilinear from a(,) then follows easily as in Corollary 2.12. In order to prove V-ellipticity of the bilinear from a(-,), we need the following lemmas. Lemma 2.9 For all u)v E V, V E V and c: :S 1, we have (i) rll(l-P)i!ll llflvll; (ii) ( ?_ 0; (iii) (Poineare-Friedriehs inequality) Proof. (i): Since Ill"( IP)vll II(I-P)vll and then ,, 1 Zr 1 Zr j j p2 (Pv)2 dpdz = j (Pv)2 / p2 dpdz = j (Pv)2 dz ZJ -1 ZJ -1 Zl ,, 1 = J // (Pi!)2 dpdz = jiiPi!ll', Zj -1 ll11v11 = ll!t[P + r(I-P)Jvll ?_ lli"Pvl!ciii"(I-P)vll ?_ }ei!Pi!llcii(I-P)i!ll (2.39) (ii): From (i) in Lemma 2.2, it follows that (Su, v) = (u, Sv) Vu, v E V. Therefore, using (2.17) leads to IQ!Jv l1s si!v 1 I sav 1 I i!v ) o \ 8z'v = \;; f-1 8z'v = ;;\J-L 8z'Sv = ;;\f-la;,v where the last inequality follows from (v) of Lemma 2.2. 32 PAGE 41 (iii): From the PoincareFriedrichs inequality (2.23) proved in Lemma 2.4, we have llpvll :S II Using (iii) of Lemma 2.2 and the relation (2.17) results in 0 The following technical lemma is tedious but simplifies the proof of the major result, Theorem 2.11. Lemma 2.10 Suppose 0 :S E :S Js Then, for any bE [0, 1], there exists b 2 0 such that H(b,o) = { J(o(1 + jJ&)' + 41i(1-li) + (1-i) &H} < 0988, where s := [o1i2 -cv'3(1-li)1i2 ] 2 In particular, for li < 0.875, we can choose b = 0. Proof. 
For /j < 0.875, we choose b = 0 and get H(O, li) = { v/4.5-3.52 + o} :S H(O, 0.875) < 0.986, since H(O, 6) is monotone increasing forb E [0, 1]. For 6 > 0.875) using the assumption c:v'3 < 1, we have Suppose we restrict the choice of b to b :S It then follows that since s :S 1. From (2.40), we conclude that Therefore, Simple calculus shows that 3.5 b* := (3 + !3) (3-fJ) < (3 + (i) v --p 4 33 (2.40) PAGE 42 rr1inirnizes ii and that b* > 0 if 8 > 0.875. After tedious but straightforward rnanipulation1 we have that Ti(b*, 6) attains its maximum at 6*"' 0.893 and that H(b*, 6*) :S 0.988. D We are now ready to prove the central result of this section. Theorem 2.11 (V-ellipticity of a(-,)) Let a(-,) and II llv be given as in (2.36) and (2.35). Suppose that 0 :Sa :S 1, 0 :Sf< :To Then there exists a constant Ce > 0 such that, for all V E V, a(ii, ii) = + aPii +(I-P)iill' ?: c, + 11"11') = c, 11"11&. where C, = 0.012, which is independent of s and a. Proof. We have a(ii, ii) + aPv +(I-P)iill' + a'IIPvll' + IIU-P)iill' +2a \ + 2\ (I-P)v) = + a2IIPvll2 + II(I-P)vll2 +2a: \ + 2(1-a)\ (I-P)ii). (2.41) (2.42) For the last term, we may write for any d E [0, 1], by using (ii) of Lemma 2.2 and (ii) of Lemma 2.9: > -d \ PQ Pv) + (1-d)\ (I-P)Q P)v). Substituting this into (2.42) and bounding the fourth term in (2.42) by (ii) of Lemma 2.9, 34 PAGE 43 we get a(v, v) > IIQ 112 + a'IIPilll' +II (I-P)vll2 -2(1 -")d 1 (PQ Pv) 1 2(1 -")(I d) 1 ((I -P)Q g:, (I-P)v) 1 which can be reduced by setting a = 0 to Defming a(v, v) ::: IIQ g: I!'+ 11(1-P)vll' -2d )(PQg:, Pv))-2(1d) )((I-(1-P)v)). 0 -IIPilll2 llilll' IIPQ*II' '!-)IQB'jl2 1 Dz I so that so that ( 1 o) = II (I-P)illl' llvll' 1--II (I-P)Q*II' ( r)-))Qg:ll' and using Cauchy-Schwarz inequality, we conclude from (2.43) that a(v, v) ::: IIQ (1-6)11"11'-2dVt.,f'i IIQ llllilll -2(1-r) (2.43) (2.44) To maximize the lower bound in (2.44), we divide the region (6, r) E [0, 1] x [0, 1] into two triangles and choose d as follows: d { 1 for o + /' :S 1 0 for o + /' ?: 1 Next) we consider these two cases separately. Case 1: o + /' :S 1, d = 1: For any rJ > 0 and any u, v and any norm II II, the arithmetic-geometric mean inequality holds. We thus have 1 2llullllvll < ryllull' + -llvll2 ry II Er II' ( o) a(v, v)::: (1-rry) IQ + 1-o-I Iilii' (2.45) It remains to choose 1) such that the terms (1-rrJ) and ( 1-oin (2.45) can be bounded by a positive constant from below for all possible/', o with/'+ o :S 1. Foro < 0.5, we choose 1) so that (1-r1J)= 35 PAGE 44 which yields b + vo2 +40, ,, = 2-y Applying Lemma 2.10, since -y + o :S 1 and therefore, -y :S 1o, then we have Thus, from (2.45), the V-ellipticity of a(-,) directly follows with c, 2: 0.012. On the other hand, if 8 2: 0.5, the second term in (2.45) can become negative. To keep it positive, we then rewrite (2.45) for any bE [0, 1] as follows Since o 2:0.5 and E < 1/\1'3 we have 0 :S (j,v'b-cv'l=6). 
We can now use the Poineare Friedrichs inequality (2.39) of Lemma 2.9 and inequality (i) of Lemma 2.9 to bound the second term by which results in (2.46) where s = (Vi-c!3J0=bl) 2 (2.47) Again, we choose rJ so that which yields -(b + s 8) + J (b + s 6) 2 + 40, 2-y Next, to attain a positive constant in the lower bound in .(2.46), we need to show that for all possible 6, 'I with 8 + -y :S 1, b 2: 0.5, a positive b can be selected so that Since G1(b,l5,-y) :S G1(b,6,1-15) for 8+-y :S 1, it then is sufficient to prove V6E[0.5,1]3b>0: G1(b,6,1-li) PAGE 45 Case 2: 6 + 7 2: 1: Setting d = 0 in (2.44) and preceding as in case 1 results in a(11,11)2:(1-(1-7)'7) +(1-6-1 ;6 ) 111111 (2.48) For D :::; 0.5, we choose 6 + J6' + 4(16)(1-7 ) rJ= 2(1-7) so that (1-(17)'7) = ( 1b-1 ; 8). Using Lemma 2.10, since 6 + 1 '2: 1, and therefore, 6 '2: 1-/, we then have (17)'7 = { J62 + 4(16)(1-7 ) + 6} s { J62 + 4(16)6 + 15} = H(o, 15) s o 988. The V-ellipticity of a(-,) then follows with C, = 1-0.988 = 0.012. On the other hand, for 8 2: 0.5, we introduce, as in case 1, a parameter b E [0, 1] and use the PoincareFriedrichs inequality (2.39) to conclude from (2.48) that a(11,11) 2: (1-b-(1-7)'7) + (1-8+js-1 ;8 ) 111111, (2.49) with s as defined in (2.47). Again, we fi.rst choose ry, so that (1-b-(17)'7) = (18 + 1 b$$ 3 1) which yields (b+ .5) + J (b+ &s-8) 2 + 4(16)(1-7) 1)= 2(1-7) In order to attain a positive constant in the lower bound (2.49), we need to show that, for all possible 6, 7 with 8 + 7 2: 1, 6 2: 0.5, a positive b can be selected so that, G,(b, o, 7) := b + (17)'7 = q J (b+ js-0 r + 4(1.5)(1-1) + (b +6-H} < C* <1 Since G,(b, .5, 'I) S G,(b, .5, 1-b) = H(b, 8) for 8+7 2: 1, this follows directly from Lemma 2.10 with C* = 0.988. Finally, from (2.49) with c, := 1C* = 0.012 the V-ellipticity of a(,) follows, which proves the theorem. 0 From the V-ellipticity of the bilinear form a(-,) the V-ellipticity of the bilinear form a(-,) can be proved as follows. Corollary 2.12 ( V -ellipticity of a(-,) ) Let a(,) and llllv be given as in (2.20) and (2.37). Assume that 0 Sa S 1, 0 Sf< Js Then there exists a constant Ce > 0 such that, for all v E V, a(v, v) 2: C, (2.50) 37 PAGE 46 where Ce = 0.012, which is independent of a and c. Proof. By the definition ofthe norm llllv in (2.37) and the relation (2.17) we have llvllv = llvllvTherefore, using (2.41) in Theorem 2.11 we obtain for any v E V Zr 1 Zr 1 a(v,v) j j .Cv .Cv dfldz = j j .CSv .CSv dfldz ZJ -1 Zj -1 which proves the corollary D 2.5 Error Bounds for Diffusive Regimes Using continuity and V-ellipticity of the bilinear from a(,) in the norm llllv with constants independent of ex and c) in the following section we establish discretization error bounds that do not blow up in the limit c _,. 0. we use the same discrete spaces introduced in Section 2.3.2. We first consider discrete spaces with functions that can be expanded into the first N normalized Legendre polynomials with respect to the direction angle fl (PN-1 discretiza tion in angle) and are piecewise polynomials of degree ::; r in zona partition Th of the slab. To combine Cea's Lernrna (1.24) with the interpolation error bounds in (2.27) and (2.30) to obtain an error bound for this class of discrete spaces, we need the following le1nma. Lemma 2.13 (Bound for commutator [IIN.C-.CIIN]) Let .C be the transport operator as defined in (2.16), .Cs the Sturm-Liouville operator as defined in (2.29) and liN the projection operator as defined in (2.28). Suppose N 2: 2, m 2: 0, and v E V n Hm+3(D). 
Then, there exists a constant C > 0 independent of c and a such that Proof. Recall that 1 a a 1 .C =-PI"-+ (IP)l"+-(IP) +a P, c Dz Dz E and that has the moment expansion (see (2.13) in Section 2.2) Note that and D"'v ) ( ) azm -L 'f'l .z Pr J.L 1:::::0 with I ) }' amv(z,fl) ( ). d 'PI z -2 iJzm PI I" I" -I 38 (2.51) PAGE 47 Therefore) i) i) [liN [-[ IIN] =liN I" {)z-I" IIN (jz" Using the relation (see Chapter 4) with we see that bl := l + 1 d ( ) 0 an P-1 fJ = y'4(1 + 1)'1 a amv IIN __ r &z &zm N "' _,c(m+I) b = L '+'/ l-1PI-1 1::::::0 N-2 + I: fm+1 ) b1PI+1 /::::::0 On the other hand, N-1 =I" I: )m+1 ) PI 1::::::0 N-1 N-1 "' = L., '+'I b1-1PI-1 + I: fm+l) b, PI+! /::::::0 1::::::0 Thus, in combination with (2.52) we have b (AC(m+l) _,c(m+1) ) = N-1 '+'N PN-1 -'+'N-1 PN Now we notice that) for any integers k) l ?: 0) z,. l Z.r llfm+1 lPkll' = / (fm+Jl(z)r / 2/ (fm+ll(z))" dz ;q -l Zt = llr/>fm+l)ll' :'::: 1 [1(1 + l)]' II 0m+lv II' [s iJzm+1 (2.52) (2.53) where the last inequality follows from (ii) of Lemma 2.6. Therefore, (2.53) can be bounded 39 PAGE 48 as follows: smce < 8 1 3 N2 {::} 1 1 8 1 <--,13 3 N2 II [jffi+l 0 vh(z,.,p) = 0 for I'< 0}, where lPr(Th) denotes the space of piecewise polynomials of degree::; ron the partition Th. Suppose 0 :S E < Ja. 0 :Sa :S 1 and 1 :S rn :S r+ I. Let 1j; E VnHm+3(D) be the solution of (2.20) with right-hand side q, E Hm+2(D). Let 1/Jh E V" be the solution of (2.20) restricted to Vh Then < (II 8(1/; 1/Jh) 112 +II P)( 1j; -1/Jhf + IIP(' 8(1/; 1/Jh)) II :S seh, II (IP) (fl8(1j; 1/Jh)) II :S eh, (2.54) Proof. Using Cea's Lemma (1 .24) and the V-ellipticity of the bilinear form a(-,), from 40 PAGE 49 Corollary 2.12 we conclude that since fi,IIN'JI E Vh In order to bound the first term in (2.55), we use (2.51) of Lemma 2.13 and (2.30) of Lemma 2.6 to get IIL PAGE 50 Finally, substituting (2.56) and (2.57) into (2.55) results in (2.54). D Remark 2.15 (Interpretation of error bound) In the following, we interpret the error bound (2.54) more closely. For diffusive regirnes (c < 1), the exact solution of the continuous problem has the diffusion expansion2 (see (1.13) in Section 1.2.2) 1/J(z, f.!)= ;(z) + E;R(z, f.!), while the the right-hand side in this case is assumed to have the form q = qo(z) + EMI(z) + O(s2). Therefore, it follows that Csl/J = O(s) and Csq, = O(t:2), since q_, = 8q = Pq + r(I-P)q. Taking this into account, for the error eh in (2.54) we get 1 (C1 2 C2 ) 1 ( '"llfJ'"q,ll hm l) e" = VG; NO(s ) + N' O(s) + C, C3h f!zm + C4 N' O(s Thus, the error in the zeroth rnoment [[P( 1j; -1/!h)[[ is bounded by 0( h'" )+O(s) and the error in the higher moments [[(IP)( -1/Jh)[[ is bounded by O(sh'") + O(c2). In partieular, for diffusive regimes, where c is very small, convergence of the discrete solution is also assured by the above bound for small N, which is a reasonable choice in this case, since the exact solution is nearly independent of ft. Moreover, the bound in 'rheorern 2.14 directly gives the optimal order of conver gence lor the spatial discretization without the use of Nitsche's trick (Johnson [25, p. 97]). For example, for piecewise linear elements, which means r = 1, we can choose m = 2 and get an O(h2 ) error bound in the L2-norrn under the regularity assumptions on 1./; required in the theorem. 
On the other hand, if c: is close to )3, so that E > hm and E > 1/ N, then an error bound can be obtained more easily, since Therefore, for any v E V n (Hm+1([zr, Zr] x H2([ -1, 1]), Cea's Lemma (1.24) and the bounds in (2.27) and (2.30) for the interpolation error lead directly to rc: ( 1 \ 1/2 ( c II 8'"1/J II ) :S V Cc l + c2) )r [[Cs PAGE 51 l''or the case c > Js, which is not covered by Theorem 2 .14, the error bounds in Section 2.3.2 can be used instead. D The second main class of finite-dimensional spaces considered here are formed by functions that are piecewise polynomials in space z as well as in angle fl. This choice corresponds to a Finite-Element discretization in both space and angle. Because Section 2.3.2 contains error bounds for this class of discrete spaces that are valid in non-diffusive regimes, we concentrate in the following only on error bounds for diffusive regimes. TherefOre, we assume in the following theorern, which combines Cea's Lemma and (2.33), that the exact solution ha..-; a diffusion expansion, which sim.plifies the proof. Theorem 2.16 (Finite-Elements in space and angle) Suppose that 0 :S a :S 1 and 0 :S c :S }3. Let Th be a partition of the computational domain D = [z,,z,] x [-1,1] into rectangles T= [z;,z;+J] X [!'v,!'v+t] of maximum diameter h. To be able to handle the boundary conditions properly, we assume in addition that ( z1, 0) and (zc, 0) are nodes of the triangulation. Let V be given as in (2.38) and define := { v, E C0(D); vhiT= :z.::: 'ITETh o:::;fJ,r:::;1 vh(z,,rt)=O for rt>O, vh(z,,rt)=O for rtR(z,rt) is valid. Then we have 111/J-q,,liv :S [ C h' (11/Jiw+(D) + 1Riw+(D)), with C independent of c, a, and h. Proof. From Cea's Lemma (1.24), we have 111/J -1/Jhllv :S [ 111/J -llh'-1lhli>)11). 43 (2 59) (2.60) PAGE 52 By (2.33) we now bound any of the above four terms separately and use the fact that IIh is as an interpolation operator linear, so that llhV' = IIh + sfihR Since (z) in the diffusion expansion of 1/J is independent of angle fl, we conclude that IIh(z) is also independent of I' Therefore, Pp'Jt; = 0 = PI'%Jlh, so that (2.61) where the last inequality follows from (2.3:l). Using (2.33) for the sec.ond term in (2.60) results in (2.62) Because and IIh are independent of angle, we have (IP) = 0 = (IP)Uh Therefore, the third term in (2.60) can be bounded by (2.63) since h < (z,-z,) = 1. Similarly, a bound for the last term in (2.60) is given by (2.64) Inserting (2.61), (2.62), (2.63), and (2.64) into (2.60) results in (2.59), whicb proves tbe theorem. 0 Remark 2.17 (V-ellipticity constant C,) Both error bounds (2.54) and (2.59) depend on the reciprocal of the V-ellipticity constant C,. According to Theorem 2.11 and Corollary 2.12, C, = 0.012, so 1/C, = 83.3, which is fairly large. However, we would like to point out that we simplifted the proof of Theorem 2.11 by considering only the worst case a= 0. Without setting a= 0, (2.46) would change to a(v, v) ;;: \\Q (1o) 11"112d(1a)Jbvr \\Q -2(1-d)(l(2.6.5) which clearly shows that the V -ellipticity constant Ce. increases with a. 'To judge the quantitative behavior, we computed Ce for certain values of a using (2.65). The results are plotted in Figure 2.1. Already for a= 0.3, 1/C, drops down to 7.04. D 44 PAGE 53 Ce vs. alpha i 0.8 0.6 0.4 0.2 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1 /Ce vs. alpha 100 8 60 40 20 0 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Figure 2.1: V-ellipticity constant C'e and its reciprocal as a function of the absorption parameter a. 
45 PAGE 54 CHAPTER 3 X-Y-Z GEOMETRY In this chapter, we generalize the scaling transfOrmation and the error bounds for the Least-Squares Finite-'Elernent discretization, from one-dimensional slab geometry to three dimensions. Since the main focus of this chapter is on diffusive transport problems, in the following we use the parameterized form (1.14) of the transport operator l. In addition, we assume that the total cross section O't is constant in space (O't(r.) = O't), so that the parameter t: is constant on the computational domain D := 'R x 81 where R. C JR:l is a region with sufficiently smooth boundary, for example, of class Cl,l (Grisvard [22, p, 5]), and 51 denotes the unit sphere. Further, we suppose throughout this chapter that C ::; l. Moreover, in the following we restrict our attention without loss of generality to problerns with vacuurn boundary conditions (so g(z:,l) = 0 in (L4)). Problems with inhomogeneous boundary conditions can easily be transformed to problems with homogeneous boundary conditions and different right-hand sides (Oden and Carey [45, p, 27]). As in the one-dimensional case) a scaling transformation is applied to the transport operator prior to the Least-Squares discretization to ensure accuraey of the discrete solution in diffusive regions. In the three-dimensional case) the scaling transformation and its inverse are given by S := P + c(l -P); s-1 := P P). [ (3.1) They have the same form as in the one-dimensional case with the only difference that r = c and that the L2-orthogonal projection P onto the space of fUnctions that are independent of direction vector D. is now defmed by N = j PAGE 55 inner product (u, v)v := Jjl 1 1. 1 ;:SU V'u -'Sn V'v + -(IP)u -(IP)v v [--[ [ "R s< (3.4) + Pu Pv d(ldr, its associated norm llullv = V(u,u)v = V'ull' P)ull" + IIPuli') 112 (3.5) and the space V := { 1; E C00(D); v(r., Q) = 0 for r E 8R, and Q rr(r) < 0}, (3.6) where the closure is with respect to the norm llllv, so that it is a Hilbert space. As mentioned in the introduction) the Least-Squares variational formulation of (3.3) is given by (1.19). Our first goal is to show that the bilinear form a(,), defined in (1.19) is continuous and V-elliptic with constants independent of pararneter E and a. From these results will follow not only the well posedness of problem (1.19), as outlined in the introduction, but also the accuracy of the Least-Squares Finite-Element discretization applied to it in the diflusion limit. 3.1 Continuity and V -ellipticity As in Chapter 2, we conclude from the Cauchy-Schwarz inequality that the bilinear form a(,) is continuous, since ia(u, v)l = I(.Cll, .Cv)l <::; II.Cuiiii.CvllWe now use the discrete Holder inequality to get II.Cull <::; V'ull P)ull + IIPull <::; V3 (II V'ull' + II P)ull' + 11Pull2 ) 112 v'3 llullv, since a: ::; 1 by a,.<;sumption. Thus, for any u, v E V, la(u,v)l <::; C, llullv llvllv, (3.7) with C, = 3. For the more difficult part of the proof of the V-ellipticity of a(,), we proceed in the same way as in the one-dimensional case. We first scale (3.3) in addition from the right by S to get .cs$ = !sos. v;j; +(IP)$+ "'P$ = q, 47
where -;j; := s-1'1/J. Define the new space V and associated norm by where v := s-1v, 1 Q = -SflS = (1s) (Pfl + flP) + Efl!. -[ Shortly, we will prove that the bilinear form a(u, v) := j j J:Su csv df.ldr 11 il,:,; E v n S' (3.8) (3.9) is V-elliptic. The V-ellipticity of a(.,) in (1.19) will then follow in the same way as in Corollary 2.12. Before we do this, we first establish the following lemmas, which are generalizations of Lemma 2.2, Lemma 2.4, Lemma 2.9, and Lermna 2 .10. Le1nrna 3.1 For all u,v E V, VE V and::; 1, we have (i) (Pu, v) = (u, Pv); and ((IP)u, v) = (u, (I-P)v); p2 = P; and (IP)2 =(IP). Thus P and (I P) are orthogonal projections. (ii) (Pu, v) = (Pu, Pv); and ((IP)u, v) = ((IP)u, (IP)v); (iii) llvll II ; (iv) IIPvll-ell( IP)vll < llvll; (v) (fl 'Vv, v) 2: 0; (vi) (9.. 'Vv, v) 2: o. Proof. The proofs of (i), (ii), (iii), (iv) follow analogously to that in Lemma 2.2 and Lemma 2.9. To prove (v), we apply the fundamental Green's formula (Ciarlet [16, p.34]) to get (fl v) = j j fl 'Vv v df.ldr n S' = -j j v fl \7 v dfldr + j ju' fl 'l dllds. n sl an Sl We therefore have (fl 'Vv, v) = j j v2 fl 'l df.lds ans1 48
Splitting the boundary an X 51 into the parts r+ := { (r:, m E an X 51 ; g. rr(r:) 2: 0} and r-:= { (r:,!}) E an x 51 ; il 11(1:) < 0}, the boundary integral becomes j / v2l.l = j j v2il 'l dllds + /.I v2il 'l dOds an 51 r+ r= j j v2il 'l d(lds 2: 0, r+ since v(r:,!}) = 0 for (r:,!}) E r-and v E V. Thus, altogether we obtain (il v) 2:0 To prove (vi), we observe from (i) that Sis with respect to(,). Con sequently, we have (Q 'Vv,v) = Gsns-vv,v) = vsv,Sv) = 2: o, where the last inequality follows from (v). D Lemma 3.2 (PoincareFriedrichs Inequality) Suppose E ::; 1 and let n c IR3 be a bounded domain. Then, for any v E v' we have llvll ::; diarn(n) 1151 'Vvll ::; diam(R) IIQ 'Vvll-(3.10) Proof. For L, Lk E 1? let denote the line segment between r..i and r.k. Let arbitrary r.. E R and Q E 51 be given. We define t, min{t E lR: [r:,r:+tQJ C n} t2 max{tElR: [r:,r:+tl.l]cn} r1 r. + ftil; r2 ::::: r.. + Then it is easy to see that .Q_ rr(r:1 ) < 0. Taking into account the boundary conditions for v E V, we therefore have that v(.r:1 ) = 0, hence, av jc v(r n) = ds = n _,_ Os ds, where ds denotes the arc-length differential along the line {r: +til, t E lR}. Therefore, we conclude from HOlder's inequality that ( ) 1/2 lv(r:,il)l :0: J 151 'Vvl ds :0:7151 'Vvl ds :0: diam(n)112 J lil 'Vvl2 ds 49
Applying Fubini's theorem, it follows that r., J J lv(r, ml' diJdr :S diam(R) j j jIll Vvl2 diJdrds n S' !:.1 R Sl :S diam(R-)2 j jIll Vvl2 dndr. R 51 From the relation v = SV and (iii) of Lemrna 3.1, we thus have II vii :S diam(R) I Ill Vvll = diam(R) I Ill \7 Siill :S diam(R) II Vvll = diam(R) IIQ Viill, which proves the lemma. 0 Lemma 3.3 Suppose 0 :Sf :S 1. Then, for any o E [0, !], there exists b?. 0 such that where s := [o1i2 -s(l 15)112 ] 2 In particular, for .5 < 0.875, we can choose b = 0. ]' Proof. The only difference to the proof of Lemma 2.10 is that now s = [o1i2 -c(l-8)112 instead of s = [8112 -svf:l(1-8)112 ]2 Therefore, when 8 > 0.875, we use the assumption :S 1 to get s ?. 1-2Jd(1-d) =: (3. Everything else is analogous to the proof of Lemma 2.10. D We are now in a position to state the central result of this section. Theorem 3.4 (V-ellipticity of a(-,)) Let a(-,) and llllv be given as in (3.9) and (3.8). Suppose that 0 :S a :S 1, 0 :S f :S 1 and that the diameter diam(R) of the domain1 R is 1. Then there exists a constant C, > 0 such that, for all v E V, a(v, v) = 119. + aPv +(IP)vll' (3.ll) where C'e = 0.012, which is independent of E and 0'. Proof. ln the proof of Theorem 2.11, we replace Q Q_ Vii and for Panda(-,) use the definitions of this chapter. Then the proof of Theorem 3.4 follows exactly as the proof of Theorem 2.11, except that the PoincareFriedrichs inequality (3.10) of Lemma 3.2 and (iv) of Lernma 3.1 are now used to get 1 This can be established by a simple transformation of the space coordinates r_. 50
Therefore, s in (2.47) is replaced by s = [v'b-cVf=8]' and Lemma 3.3 is applied to bound the functions Gr and G,. D From the V-ellipticity of the bilinear form a(,), the V-ellipticity of the bilinear form aC, )follows immediately as in Chapter 2. 'We summarize this result in the following corollary. Corollary 3.5 ( V-ellipticity of a(-,)) Let a(-,) and llllv be given as defined in (1.19) and (3.5). Suppose that 0 :S <> :S 1, 0 :S E :S 1 and that diam(R) = 1. Then there exists a constant C, > 0 such that, for al111 E V, (3.12) where Ce = 0.012, which is independent of a and s. D 3.2 Spherical Harmonics Since a truncated expansion into spherical harmonics (.PN-a.pproximation) is used throughout this chapter for the the disretization in angle, we introduce here the spherical harmonics and summarize important properties that are needed for the error bounds. Recall that the associated Legendre polynomials are defined for l 2 0 and m = 0, ... I by (Margenau and Murphy [39, p. 106]) (3.13) where Pr(f.l) is the (unnormalized) Legendre polynomial of degree I. By the formula of Rodrigues (Arfken [3, p. 554]) for the Legendre Polynomials given by 1 d1 ( 2 )' P,(p) = 211! dp1 I' 1 this definition becomes (3.14) Expression (3.14) can be used to extend the definition of P,m(p) to negative integer values of m. It follows that P1m(f.l) and P1-'"(p) are related by -m( ) ( )'"(Im)! pm( ) P1 I' = -1 (/ )I 1 1'+m. (3.15) The associated Legendre Polynomials satisfy the following recurrence relations ( Ar fken [3, p. 560]): 1 21 + 1 [(I+ m)P{".1(p) +(1-m+ 1)Pf+1(p)], (3.16) 1 (pm+l( ) pm+l( )] 21 + 1 1+1 I' -1-1 I' (3.17) 1 21 + 1 [(I+ m)(l + m-1)P("_j1(p) -(I-m + 1)(1m + 2)P,'_;:j1(p)]. (3.18) 51
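A quick numerical spot-check of the first of these recurrences may be useful (a sketch only; it uses SciPy's lpmv for the associated Legendre functions, and the Condon-Shortley phase convention of that routine does not affect the identity, since every term carries the same order m).

    import numpy as np
    from scipy.special import lpmv

    # check (3.16): mu * P_l^m(mu) = [ (l+m) P_{l-1}^m(mu) + (l-m+1) P_{l+1}^m(mu) ] / (2l+1)
    mu = np.linspace(-1.0, 1.0, 9)
    for l in range(1, 6):
        for m in range(0, l):             # orders m <= l-1, so every term is well defined
            lhs = mu * lpmv(m, l, mu)
            rhs = ((l + m) * lpmv(m, l - 1, mu) + (l - m + 1) * lpmv(m, l + 1, mu)) / (2 * l + 1)
            assert np.allclose(lhs, rhs), (l, m)
    print("recurrence (3.16) verified for l = 1,...,5")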
These recurrence relations) although derived in (Arfken [3]) only for positive integers m) remain valid for negative values of m. This can be easily checked by substituting in the relation (3.15) into the left and right parts of the recurrence relations. Further, the associated Legendre polynomials satisfy the orthogonality relation 1 / P,m(l') P;:' (fl.) dp = -1 2 (i+m)'0 + 1 (1-m)' lk. (3.19) Based on the associated Legendre Polynomials the spherical harmonics are de fined by (Arfken [3, p. 571]) where (21 + 1)(1m)' (I+ m)1 (3.20) Here, 0 denotes the polar angle with .respect to the z-axis, while r.p denotes the azirnuthal angle about the z-axis is. The spherical harmonics form an orthonormal basis of L2(S1): In particular, z, // Y{"(O,"'*(O,
where Ct:[,rn /l,m (l+m+2)(1+m+l) 4(21 + 1 )(21 + 3) (1-m)(l+m) (211)(21 + 1)' f3z,m 1]/,m (1m)(l-m-1) 4(2/-1)(21 + 1) (/+m+l)(l-m+l) (2/ + 3)(2/ + 1) = Cl,m Using for the first term recurrence relation (3.17) and for the second term relation (3.18), after simple but tedious calculations we obtain the first recurrenee relation. The second recurrence relation follows in a way similar to the first. For the third relation, recurrence relation (3.16) is used. 0 Since the spherical harmonics form an orthonormal basis of L2(S1 ), every v E H'(R.) x L2(S1 ) has an expansion of the form oo I J v(z:,Q) =I: I: l,m(z:lYnQ), with l,m(Z:) = v(z:Jl)Yj"''(Q)dO. (3.25) 1=0 m=-1 51 For any v E H'(R.) x H2(S1 ), we define N-1 I fiNv(z:,Q) := I: I: ,m(z:)Y1 m(ll), l=O m=-l with l,m(z:) = ./ v(z:,Q)Y,m'(ll)dO 5' (3.26) as the truncated expansion of v into spherical harmonics. To bound the error of the truncated expansion, in the following lemma we use the fact that the spherical harmonics are the eigenfunctions of the Laplacian operator on the unit sphere, so (3.27) = -1(1+ 1)Y;"'(Q) for I:?: 0 and m = -I, -I+ 1, ... 0, ... I. Lemma 3.7 (Truncated expansion into spherical harmonics) Let (3 be any multi-index and recall that DDv := Suppose that v(z:,ll) E HIPI(R.) x H2(S1 ) and, for N:?: 2, let fiN be defined as in (3.26). Then IID8,mll < l) IID-nDPvll for I:?: 0; -1 m (3.28) c N IID-nvll, (3.29) 53
with C independent of v and N. Proof. By the definition of
with functions that have a truncated expansion into spherical harmonics with respect to direction angle D. and are piecewise polynomials of degree k on a triangulation of the region 1?... into tetrahedrons. This choice of finite-dimensional spaces corresponds to a discretization by a spectral method in direction angle ?. and a Finite-Element discretization in space. In transport theory, the spectral discretization in angle using spherical harmonics as basis functions is also called a PN-approximation. Let Th be a triangulation ofR. into tetrahedrons T of maximum diarneter h. For any v(c:, mE H"+1(R) X 2(51), let lhv denote the interpolant of v by piecewise polynomials of degree ron the triangulation Th Then, similar to (2.27), it can be shown (Ciarlet [16]) that, for 0 :<:; rn :<:; 1, (3.30) where llllm 0 denotes the standard norm of Hm(n) xL2 (51 ) and I lr+!,O denotes the standard semi-norm 'of Hr+1(R) x 2(51 ) (see Section 1.4). In order to combine Cea's Lemma with (2.29) and (3.30) to obtain a discretization error bound, we need the following lemmas. Lemma 3.8 (Bound for commutator [IIN.C-.CITN]) Suppose N 2: 2 and let the operator b.n be defined as in (3.27). Let v E H1(R) x H2(51). Then there exists C > 0 independent of a and E such that Proof. By expansion (3.25), it easily follows that IINPv = PIINv and IINP!J. \lv P!J. \1 !I NV. Therefore, (3.32) Now using the recurrence relation (3.24) in Lemma 3.6, we get = !IN [.;:;-. iJ
so that N 8 '\' (f3 ym+l 6 ym-1) L 8x N,m N-1 t N,-m N-1 m=-N N-1 I: 8
and We now continue by bounding the sums in the following way: and N .L "(N,m -N N-l N 2 "'. VN' 2 :S 2N -1 + 2N -1 L-m m=l N 2 < + (N -l)N = N -2N -1 2N-1 N-1 N-1 .L 'IN-l,m = .L N m=-N+l m=-N+l (N + m)(Nm) < _L 'YN,m :S N, (2N + 1)(2N-1) -m=-N so that :z liN vii which proves the lemma. D Lemma 3.9 3/[RT II A ov II < --un-1. N oz Let V and llllv be given a.s defined in (3.8). Then for all v E V n (H1(R.) x L2(S1)): with C independent of and a. Proof. By definition, it follows that ( 2 ')1/2 lliillv :S [(1r) {IIPQ 'Vvll + 110. P'Viill} + c 110. iilll +I Iii II Notice that since lllxl :S 1, lily I :S 1, and Ill, I :S 1. Similarly we have and IIPQ 'Viii I :S 110. 'Villi :S Vslliill,,o 57 (3 34)
From these bounds, (3.34) follows immediately with C = ( [3V3] 2 + 1 )'12 = y'28. D Now we are in a position to establish the following error bound. Theorem 3.10 (Finite Element in space, PN in angle ) Let Th be a triangulation of R. into tetrahedrons of maximal diameter h. Suppose 0 ::=; a :::; 1, 0 :S E :S J3 and diam(R) :S 1. Let V be given as defined in (3.6) and let V" be defined by where IP r (Th) denotes the space of piecewise polynomials of degree _:::; ron the triangulation Th. Let 1/J E V n (W+l(R) x H2(S1 )) be the solution of (1.19) with q, E U(R) x H2(S1 ) and let 1/Jh E V" be the solution of (1.19) restricted to yh Further assume that 1/J has the diffusion expansion 1/J(r_, ll) = cf(r_) + cn(r_, ll). Then .;c; c, I I I I J 111/J -1/Jhllv :S C, N (I D.nq, I+ D.n1/J 1,0 +if G'2 h' (lcflc+l,O + lc/>nlc+l,O), with cl and c2 independent of Ct' and[. Proof. By Cea's Lemma, we have 111/J -1/J"IIv :S ffj 111/J-liNflh1/JIIv :0: ffj (II= JihiT-;-1/J. Therefore, llliN1/J-n,rrN
where the last inequality follows from (3.34) in Lemma 3.9. We now use (3.30) and the diffUsion expansion of 1/; to get (3 38) Remark 3.11 (Nondiffusive regimes) In Theorem 3.10, it is assumed that the analytical solution has a diffusion expansion in order to get an error bound in (3.38) with a constant that is independent of parameter c. For regimes where the diffusion expansion is not valid, is of rnoderate size, so that there is no need for an error bound that is independent of s. Therefore, in this ca..'Se, (3.38) can simply be bounded by :S Chr ( 1 + j,Pjr+l,O > so that the overall bound becomes 0 59
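As a closing sanity check on the angular machinery used in this chapter, the following sketch verifies the orthonormality of the spherical harmonics numerically. It is illustrative only: it assumes SciPy's sph_harm with its (order, degree, azimuth, polar) argument convention (recent SciPy versions deprecate this routine in favour of sph_harm_y) and a simple product quadrature on the unit sphere.

    import numpy as np
    from numpy.polynomial.legendre import leggauss
    from scipy.special import sph_harm

    x, wx = leggauss(16)                          # Gauss-Legendre in cos(polar angle)
    polar = np.arccos(x)
    az = np.linspace(0.0, 2.0 * np.pi, 33)[:-1]   # uniform rule in the azimuthal angle
    waz = 2.0 * np.pi / az.size

    def inner(l, m, k, n):
        # integral over the unit sphere of Y_l^m times conj(Y_k^n)
        acc = 0.0 + 0.0j
        for th, wt in zip(polar, wx):
            acc += wt * waz * np.sum(sph_harm(m, l, az, th) * np.conj(sph_harm(n, k, az, th)))
        return acc

    print(abs(inner(2, 1, 2, 1)))     # ~1
    print(abs(inner(2, 1, 3, 1)))     # ~0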
CHAPTER 4 MULTIGRID SOLVER AND NUMERICAL RESULTS According to the theory derived in our earlier chapters, the Least-Squares approach yields accurate discrete solutions, even for diffusive regimes. In this chapter we confirm this result by numerical tests and demonstrate that the resulting discrete system can be solved very efficiently by a full multigrid solver. The following tests are restricted to the one-dimensional transport problem (1.6). For the discretization in angle, a PH-approximation is used, which is a spectral method using the first N Legendre polynomials as basis functions. For the discretization in space, we employ a Finite-Element discretization with linear or quadratic basis functions. To be more precise, we recall that the analytical solution has the rnoment expansion 00 1/J(z, I')= L
Recall that, for a function v E VN we can use a Gauss quadrature formula with N support abscissas {,ttl, ... f-LN} and weights {w1, ... w N} to write 1 N Pv = j v(z,J1)dJ1 = Lw; v(z,J1;), -1 J=l ( 4.3) since this quadrature with N support abscissas is exact for polynomials of degree ::; 2N -1 (Stoer and Bulirsch [49, p. 153]). In the SN-cliscretization, the flux is only cornputed for the discrete set of angles {t-tl, ... t-tN }, so that the unknowns are given by the vector By collocating at the Gauss points and approximating the operator P by the sum in ( 4.3), the following SN-flux equations for 1 can be derived from tbe transport equation (1.6): where q .-( q(z,J1t) ) ( 1 ) q(z,l'N) l.= 1 M := diag(l't, ... J1N ), R := lw T Further, for v E V N we note that the scaling transformation S notation becomes ( 4.4) P + r(I-P) in this with IN denoting the N x N identity matrix. Therefore, the scaled SN-ftux equations are given by (4.5) with 'L := SN'l_ The boundary conditions for (4.4) and (4.5) are given by and ( 4.6) respectively, where I!::!.. denotes the !;f x f identity matrix. 61
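The following sketch illustrates the two ingredients just introduced (it is not part of the original computations; the variable names are chosen here, and the quadrature weights are normalized so that they sum to one, matching sum_j w_j = integral 1 dmu = 1 as used below in the proof of Lemma 4.1): the Gauss rule reproduces the projection P exactly for polynomials of degree at most 2N-1, and the rank-one matrix R = 1 w^T is a projection.

    import numpy as np
    from numpy.polynomial.legendre import leggauss

    N = 8
    mu, w = leggauss(N)
    w = w / w.sum()                        # normalize the weights: sum w_j = 1

    # P applied to a polynomial of degree 5 <= 2N - 1 is reproduced exactly:
    v = 3.0 * mu**5 - mu**2 + 0.5
    print(w @ v, -1.0/3.0 + 0.5)           # both equal 1/6

    # R = 1 w^T is a projection: R^2 = R (cf. Lemma 4.1)
    R = np.outer(np.ones(N), w)
    print(np.allclose(R @ R, R))           # True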
4.1.2 Moment Equations
In order to derive the moment equations from the flux equations, we note that, for $\psi \in V_N$, $\psi(z,\mu) = \sum_{l=0}^{N-1} \phi_l(z)\, p_l(\mu)$.
Proof. N 1 (i): ""T l = I: w; = I 1 dfJ = 1. Therefore, R2 = lw T lw T = l("" Tl)"" T = 1w T = R. j::::::l -1 (ii): RTO=wlT'l=wwT=D1wT=OR. (iii): (iv ): ( v ): N 1 (Tm'T)1 .=I: Pi-1(Jlk)w,p;-1(Jlk) Pi-1(JJ)P;-1(JJ)dJJ = 6;; ,J k::::::t -1 Therefore, TOTT = I, so TT nonsingular =;. 3C such that TT C = I =;. T(lTT C = TD =;. C =TO=;. TTT(l =I. N N 1 (7'"")1 =I: Pi-1(JJ;)w; =I: Pi-1(/;j)w;Po(JJ;) = [ Pi-1(JJ)po(JJ)dJJ = 6;1. j::::l j:;;;;l -1 The unnormalized Legendre polynomials P,(JJ) satisfy the recursion (Arfken [3, p. 540]) (4.10) Since the normalized Legendre polynomials are given by pz(tt) := .J2l+TP,(JJ), from (4.10) we have fJPl(JJ) = bz-1P1-1(11) + blPI+1(J1) with bz-1 := .;4/,_1 Using (4.11), we then have Therefore1 k::::::l 1 1 (4.11) = j Pi-r(JJ) P;-AJJ) dr + b;-1 / Pi-1(!)P;(JJ) dfJ -1 -1 (vi): T(JRTT = (1'rll) ("" TTT) = T"" ""TTT = (1, 0, ... 0) T (1, 0, ... 0), where we used (iv) in the last step. D Multiplying the unsealed flux operator iL by TT from the right and by TO from 63
the left and using Lemma 4.1 gives TOfiTT = TOMTT :z + cr, (nliNTT-TORTT) + craTORTT 0 : ll [ : 0 : l 0 0 +"a 0 0 ( 4.12) era l 8 "' j =B-+ -. IM, az "' where IM is the unsealed moment operator. Therefore, multiplying ( 4.5) by TO fi-om the left and using ( 4.9) results in the following scaled moment equations: Trl1L1/:_ = TrlSNTTnlfiTT '!!_ = (TOR:fT + rT(lTT-rTOR7T) .IM '!!_ (4.13) Again using relation ( 4.9), it follows from ( 4.6) that the boundary conditions for the moment equations are given by (4.14) We conclude that the SN-flux equations and the PN-moment equations are equivalent serni-discrete forms of the transport equation. The difference between these two sets of equations is that the non-derivative part in the flux equations is fully coupled, while the derivative part is decoupled. For the moment equations, the reverse is true. 4.1.3 Least-Squares Discretization of the Flux and Moment Equations After deriving the SN-ftux and moment equations we return to the discrete PH problem (4.2). Using a Gauss quadrature formula with weights {w1, .. ,wN} and points 64
{Jt1 1 1 J.lN} to approximate the integration over angle J.l results in Zr N ./ f;; Wj [,h,(z, !')] (Jlj) [bk,i(z, l')](l'j) dz ,, ( 4.15) z, N = ./ L_wjq,(z, l'j) [bk,l(Z,!L)](!lj) dz. Zl J .::::;1 We note that on the left-hand side1 the approximation of the integration over angle p by the Gauss quadrature formula is exact as long as l < N1: since then .Cbk,l is a polynomial in I' of degree I+ I, while /;h is a polynomial in I" of degree N, then the product is a polynomial in I' of degree S 2N I, for which the Gauss quadrature formula with N support abscissas is exact. Therefore1 only in the equations for { bk ,N -11 k = 01 1 m} must we introduce an error on the left-hand side by approximating the integration over angle. On the other hand, the same argument shows that the right-hand side is represented exactly by the Gauss quadrature formula as long as q, (z, I') haB an expansion into the first N -2 Legendre polynomials. With the notation introduced in Section 4.1.1 we have ( [/;;,(z,p)] (!'r) ) [/Jh(z, !')] (J"N) and where t41 denotes the (I+ I )-nth column of the matrix TT defined in Section 4.1.2. Denoting by(, )JRN the standard Euclidean inner product of IRN, (4.15) then becomes ,, ./ \rliL1j;_h,ILryk(z)t4r)JRN dz (4.16) z, = .// f:lq ILryk(z)tf+r) dz \ -s JRN for all k E {0, I, ... m} and IE {0, I, ... N-1}. Since the columns of TT span IRN, then we can substitute {:t.J, ... 1 t"J,;.} by the canonical basis {1 ... fiN} of IRN and we recognize that ( 4.16) is a Least-Squares discretization of the 8N-flux. equations using the discrete space (4.17) This is the space of N-vector functions whose components are piecewise linear (for r = 1) or piecewise quadratic (for r = 2) polynomials on the partition Th of the slab. 65
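The rewriting that follows rests on the discrete orthogonality T Omega T^T = I of Lemma 4.1. A short numerical confirmation may be helpful (illustrative only; T is assembled here from the normalized Legendre polynomials (2.12) evaluated at the Gauss points, with Omega the diagonal matrix of the standard Gauss weights):

    import numpy as np
    from numpy.polynomial.legendre import leggauss, legval

    N = 8
    mu, w = leggauss(N)                                 # standard Gauss-Legendre rule on [-1, 1]

    T = np.zeros((N, N))
    for i in range(N):
        c = np.zeros(i + 1); c[i] = 1.0
        T[i, :] = np.sqrt((2 * i + 1) / 2.0) * legval(mu, c)   # p_i(mu_j), normalized as in (2.12)
    Omega = np.diag(w)
    print(np.allclose(T @ Omega @ T.T, np.eye(N)))      # True: T Omega T^T = I (cf. Lemma 4.1)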
Using (4.9) and (iii) of Lemma 4.1, we can rewrite (4.16) as a Least-Squares discretization of the moment equations (4.18), in which the moments are expanded as

    Φ_l(z) = Σ_{k=0}^{m} Φ_{k,l} η_k(z),   for l = 0, ..., N-1.                           (4.19)

All computations in the following sections are based either on discrete problem (4.16) or (4.18).

4.2 Properties of the Least-Squares Discretization

In this section, we use the results of numerical experiments to observe properties of the Least-Squares discretization. The results plotted in Figure 4.1 and Figure 4.2 demonstrate the accuracy of the Least-Squares discretization in combination with the scaling transformation for diffusive regimes. The test problem we chose here is the same one used by Larsen et al. in [32]. The exact solution of the corresponding diffusion equation is φ(z) = -3/2 z² + 15z, which is plotted as a solid line in Figure 4.1 and Figure 4.2. The scalar flux φ_0 := Pψ_h of the solution ψ_h of the Least-Squares discretization of the scaled transport equation using piecewise linear elements in space is shown by the crosses. For the problem in Figure 4.1, where the absorption cross section is zero, we used τ = 1/σ_t² = c² as the scaling parameter, which gives a higher accuracy than the scaling with τ = c. An explanation of this result is given in the analysis presented in (Manteuffel and Ressel [38]).
For the test problem in Figure 4.2, where the absorption cross section is nonzero, the scaled solution is likewise accurate, whereas the principal part of the unscaled Least-Squares solution with piecewise linear basis elements in space is a straight line connecting the values at the boundary in diffusive regimes. Moreover, the asymptotic analysis in Theorem 2.1 asserts that the Least-Squares discretization of the unscaled transport equation using piecewise polynomials of degree ≥ 2 in space has the correct diffusion limit. This too is supported by the observed maximum errors for a Least-Squares discretization of the unscaled transport equation with piecewise quadratic elements in space, which we list in Table 4.1. However, using the scaling transformation in combination with the Least-Squares discretization with quadratic elements in space achieves dramatically better accuracy in the discrete solution.

For piecewise linear elements in space, the error bound in Theorem 2.14 indicates an O(h²) behavior of the Least-Squares discretization error for sufficiently smooth solutions. To analyze the order of the Least-Squares discretization numerically, we used a problem with smooth exact solution sin(πz). We then computed the discrete L²-error of the Least-Squares discretization with linear elements in space for a sequence of grids that were created from the coarsest grid by halving the mesh size from one grid to the next. Table 4.2 depicts the ratio of these errors for each two consecutive grids. The value of approximately 4 of this quotient confirms numerically an O(h²) behavior of the discretization error for linear elements.

The solution of the transport equation is physically a density distribution and should therefore always be positive. The Least-Squares discretization has the drawback that it does not in general guarantee a positive solution. This is shown by the example in Figure 4.3, where the exact solution of the corresponding diffusion equation is again plotted as a solid line and the discrete Least-Squares solution is depicted by the crosses. Of course, this boundary layer can be resolved by refinement of the mesh, as shown in Figure 4.4. However, in the region [2, 10] the solution is nearly constant, so that a refinement makes sense only in the region around the boundary layer. Therefore, the aim is to use adaptive refinement, which can be combined very naturally with a full multigrid solver (McCormick [42]). One easy criterion for determining the area of further refinement would then be to check where the solution is negative. Of course, this has to be combined with more sophisticated criteria that compare the solutions on consecutive grids, for example.

Besides having the correct diffusion limit, a discretization for transport problems must satisfy the extra condition to resolve, with a suitably fine spatial mesh, interior boundary layers between media with different material cross sections. To test numerically whether the Least-Squares discretization meets these extra conditions, we used the test problem from (Larsen and Morel [33]), which is given in Figure 4.5. The solid solution plotted in Figure 4.5 is computed by a Least-Squares discretization using 50 cells in both [0, 1] and [1, 11]. This solution approximates the exact solution plotted in (Larsen and Morel [33]) fairly well. We see further that the boundary layer is not resolved fully when the mesh spacing for the Least-Squares discretization is too coarse (crosses in Figure 4.5). In addition, the Least-Squares solution itself indicates an error by becoming negative. Again, adaptive refinement would be an appropriate remedy.
Figure 4.1: Scalar flux of the solution φ(z) = -3/2 z² + 15z of the corresponding diffusion equation (solid) and of the scaled Least-Squares solution with piecewise linear elements (crosses). Test problem on the slab [0, 10] with σ_t = 100 and no absorption (c = 0.01, α = 0.0), vacuum condition ψ(10, μ) = 0 for μ < 0.

Figure 4.2: Scalar flux of exact (solid) and Least-Squares solution with scaling transformation (crosses) and without (asterisks). Test problem on the slab [0, 10] with σ_t = 100 and σ_s = 99.99 (c = 0.01, α = 1.0), vacuum boundary conditions ψ(0, μ) = 0 for μ > 0 and ψ(10, μ) = 0 for μ < 0. Solution of the corresponding diffusion equation: φ(z) = -3/2 z² + 15z.
Table 4.1: Comparison of maximum error for scaled and unscaled Least-Squares discretization with piecewise quadratic elements in space.

    1/h     α = 0.0: scaled   unscaled      α = 1.0: scaled   unscaled
      4        1.8·10⁻⁴        5.2·10⁻²        1.2·10⁻⁴        3.2·10⁻²
      8        1.2·10⁻⁵        1.2·10⁻²        7.7·10⁻⁶        7.7·10⁻³
     16        7.7·10⁻⁷        3.2·10⁻³        4.8·10⁻⁷        1.9·10⁻³
     32        4.8·10⁻⁸        7.8·10⁻⁴        3.0·10⁻⁸        4.6·10⁻⁴
     64        3.0·10⁻⁹        1.8·10⁻⁴        1.8·10⁻⁹        1.1·10⁻⁴
    128        1.8·10⁻¹⁰       3.8·10⁻⁵        1.1·10⁻¹⁰       2.2·10⁻⁵
    256        1.3·10⁻¹¹       6.3·10⁻⁶        7.9·10⁻¹²       3.8·10⁻⁶

    Test problem: [μ ∂/∂z + σ_t(I - P) + σ_a P] ψ(z, μ) = q for (z, μ) ∈ [0, 1] × [-1, 1], ψ(0, μ) = 0 for μ > 0, ψ(1, μ) = 0 for μ < 0, where σ_t = 1000.0, σ_a = α/σ_t, q := μπ cos(πz) + σ_a sin(πz). Exact solution: ψ(z, μ) = sin(πz). Number of moments: N = 2.

Table 4.2: Order of the Least-Squares discretization for linear elements in space. For the test problem below, the discrete L²-error ‖e_h‖₂ and the ratio of the errors on consecutive grids (mesh size halved from grid to grid) are listed for 1/h = 4, 8, ..., 8192, for ε = 1.0 and ε = 0.001 and for α = 0.0 and α = 1.0; the ratios are all close to 4.

    Test problem: [μ ∂/∂z + σ_t(I - P) + σ_a P] ψ(z, μ) = q for (z, μ) ∈ [0, 1] × [-1, 1], ψ(0, μ) = 0 for μ > 0, ψ(1, μ) = 0 for μ < 0, where σ_t = 1/ε, σ_a = α/σ_t, q := μπ cos(πz) + σ_a sin(πz). Exact solution: ψ(z, μ) = sin(πz). Number of moments: N = 4.
Figure 4.3: Example of violation of the positivity property by the Least-Squares discretization. Test problem on the slab [0, 10]: μ ∂ψ/∂z + 100ψ - 99.9Pψ = 0, ψ(0, μ) = 1 for μ > 0, ψ(10, μ) = 0 for μ < 0 (c = 0.01, α = 10.0); the solution of the corresponding diffusion equation is shown as a solid line; number of moments N = 4.

Figure 4.4: The same test problem as in Figure 4.3, computed on a mesh refined near the boundary layer; the layer is now resolved.

Figure 4.5: Scalar flux for the interface test problem of Larsen and Morel [33] on [0, 11], with different material cross sections in [0, 1] and [1, 11] and vacuum condition ψ(11, μ) = 0 for μ < 0. Shown are the Least-Squares solution with 50 cells in each of [0, 1] and [1, 11] (solid) and the solution on a coarser mesh (crosses), which does not fully resolve the boundary layer and becomes slightly negative.
4.3 Multigrid Solver

In this section we describe the multigrid solvers that were developed for solving the problems resulting from a Least-Squares discretization of the S_N-flux (4.16) and moment (4.18) equations with piecewise linear elements in space. We refer the reader who is not familiar with multigrid methods to (Briggs [6]) for an introduction and to (Hackbusch [24]) and (McCormick [41]) for more advanced topics.

Essential for the efficiency of a multigrid solver is the proper choice of its components, mainly the intergrid transfer operators, coarse grid problems, and relaxation schemes. The choice of the first two components is naturally given by the Least-Squares variational formulation: the sequence of discrete spaces V_1 ⊂ V_2 ⊂ ... ⊂ V_L = V^h determines the coarse grid problems, since they are just the restriction of the variational problem to these discrete subspaces; the prolongation operator, which is a mapping from a coarse grid to the next finer grid in the grid sequence, is formed directly by composing the isomorphisms between the discrete spaces and their corresponding coordinate spaces with the injection mapping between V_{k-1} and V_k (Bramble [5]), (McCormick [43]); and the restriction operators, which are mappings from a finer grid to the next coarser grid, are just the adjoints of the prolongation operators. Therefore, the only multigrid components that need to be chosen here are the sequence of discrete spaces and the relaxation. No relaxation scheme is currently in use for transport problems that smooths the error in angle and in space simultaneously. Thus, instead of devising a multigrid scheme that coarsens simultaneously in space and in angle, we consider first applying the multilevel-in-angle technique of (Morel and Manteuffel [44]), which is based on a shifted source relaxation scheme. After reducing the degrees of freedom in angle, a multigrid method in space is used to solve the remaining discrete problem. Thus, here we consider only the development of a multigrid solver in space. For the discrete subspaces, we use the Finite-Element spaces with linear basis elements on increasingly finer partitions (halving the cells) of the slab.

4.3.1 S_N Flux Equations

The stencil that results from a Least-Squares discretization of the S_N flux equations (4.16) with these Finite-Element spaces is given in Appendix A and shows full coupling in angle. This suggests the use of a line relaxation in angle, which updates all angles for a given spatial point simultaneously. The matrix A_i that must be inverted for each spatial point in this scheme is the sum of a diagonal matrix and a matrix with a rank-2 factorization (see Appendix A). Thus, A_i can be cheaply inverted by the Sherman-Morrison formula (Golub and Van Loan [20, p. 51]). Our computational tests showed essentially no differences in the error reduction and smoothing properties of this line relaxation scheme for various different orderings of the spatial points. To save computational work, we thus use this line relaxation scheme in a red-black fashion, since then the residual after one relaxation sweep is zero at the black points and need not be computed for the restriction to the next coarser grid. This scheme is also more amenable to advanced computer architecture efficiency.

The convergence factors for this multigrid algorithm¹, listed in Table 4.3, are computed in the following way. A problem with zero source term and whose exact solution equals zero is used in combination with a randomly generated initial iterate. Then 30 multigrid cycles are performed and the convergence rate is computed from the geometric average of the per-cycle reduction factors of the last 20 cycles. We thus reduce the influence of the initial iterate on convergence and observe what tends to be the worst-case factors. Here we study the (1,1)-V-cycle, which uses one relaxation before and one after coarse grid correction. Observed factors for σ_t ≤ 10⁶ are on the order of 0.1 for all values of the absorption coefficient α. Factors for (2,1)-V-cycles are also included. Such factors are sufficient to get a solution with an error on the order of the discretization error by one full multigrid cycle, as demonstrated by the results in Table 4.4. The additional V-cycle on the finest level 10, performed subsequent to the full multigrid cycle, reduces the error only by a small amount. Thus, we can conclude that the error after the full multigrid cycle is completed is already on the order of the discretization error.

4.3.2 Moment Equations

The stencil for the Least-Squares discretization of the moment equations (4.18) is given in Appendix B. In the interior of the computational domain, it is a 15-point stencil that connects the neighboring spatial points and the two higher and two lower moments. At the spatial boundary, however, the stencil couples all moments. Therefore, we use a line moment relaxation that updates all moments simultaneously for a given spatial point. Since the efficiency of the smoothing is again observed to be independent of the relaxation ordering, as in the S_N flux case, we use a red-black ordering of the lines. The convergence factors for this multigrid algorithm are listed in Table 4.5. For very large values of σ_t, this multigrid solver is more stable with regard to roundoff errors than the multigrid solver for the S_N flux equations. Even for values of σ_t ≥ 10⁶ we get (1,1)-V-cycle convergence factors of order 0.1. Again, these convergence factors are sufficient to get a solution with an error on the order of the discretization error by one single full multigrid cycle, as demonstrated in Table 4.6.

¹This algorithm was implemented in C++ and a special array class was designed for this purpose.
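For readers unfamiliar with the Sherman-Morrison formula used in the line relaxation of Section 4.3.1, the following small sketch (not the thesis implementation, which was written in C++; the variable names and the restriction to a rank-one update are ours) shows how a system with a matrix of the form diag(d) + u v^T can be solved with two diagonal solves and a scalar correction; the rank-two matrices arising in the actual relaxation are handled by applying the same update twice (the Sherman-Morrison-Woodbury formula):

    import numpy as np

    def solve_diag_plus_rank1(d, u, v, b):
        # Solve (diag(d) + u v^T) x = b by the Sherman-Morrison formula.
        y = b / d                                   # diag(d) y = b
        z = u / d                                   # diag(d) z = u
        return y - z * (np.dot(v, y) / (1.0 + np.dot(v, z)))

    # consistency check against a dense solve
    rng = np.random.default_rng(0)
    d = rng.uniform(1.0, 2.0, 8)
    u, v, b = rng.standard_normal((3, 8))
    x = solve_diag_plus_rank1(d, u, v, b)
    print(np.allclose((np.diag(d) + np.outer(u, v)) @ x, b))   # True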
Table 4.3: Multigrid convergence factors for solving the flux equations.

    (1,1)-V-cycle
    σ_t       α = 1.0   α = 0.5   α = 0.25   α = 0.1   α = 0.0
    10⁰        0.088     0.085     0.087      0.118     0.169
    10¹        0.082     0.083     0.083      0.110     0.136
    10²        0.052     0.052     0.053      0.106     0.130
    10³        0.088     0.091     0.088      0.105     0.130
    10⁴        0.091     0.091     0.091      0.105     0.130
    10⁵        0.092     0.092     0.092      0.105     0.130
    10⁶        0.090     0.092     0.092      0.102     0.133

    (2,1)-V-cycle
    σ_t       α = 1.0   α = 0.5   α = 0.25   α = 0.1   α = 0.0
    10⁰        0.053     0.050     0.053      0.105     0.155
    10¹        0.047     0.047     0.047      0.082     0.104
    10²        0.019     0.024     0.024      0.077     0.097
    10³        0.020     0.021     0.021      0.076     0.096
    10⁴        0.020     0.022     0.022      0.076     0.096
    10⁵        0.020     0.011     0.023      0.076     0.096
    10⁶        0.019     0.023     0.018      0.077     0.099

    Test problem: [μ ∂/∂z + σ_t(I - P) + σ_a P] ψ(z, μ) = 0 for (z, μ) ∈ [0, 1] × [-1, 1], ψ(0, μ) = 0 for μ > 0, ψ(1, μ) = 0 for μ < 0, with σ_a = α/σ_t. Exact solution: ψ(z, μ) = 0. Initial iterate: randomly generated grid function. Number of moments: N = 8.

Table 4.4: Full multigrid (1,1)-V-cycle convergence for solving the S_N-flux equations. For σ_t = 1.0 and σ_t = 1000.0 and for α = 1.0, 0.5 and 0.0, the discrete error ‖e_h‖₂ after one full multigrid cycle is listed on levels 0 through 10 (1/h = 2 up to 2048), together with the error reduction from level to level, which is approximately a factor of 4; a final row gives the error after one additional V-cycle on the finest level. Test problem: [μ ∂/∂z + σ_t(I - P) + σ_a P] ψ = q on [0, 1] × [-1, 1] with vacuum boundary conditions, σ_a = α/σ_t, q := μπ cos(πz) + σ_a sin(πz); exact solution ψ(z, μ) = sin(πz); number of moments N = 4.

Table 4.5: Multigrid convergence factors for solving the moment equations, in the same format as Table 4.3, for σ_t = 10⁰ up to 10¹⁰ and α = 1.0, 0.5, 0.25, 0.1, 0.0. The (1,1)-V-cycle factors remain of order 0.1 and the (2,1)-V-cycle factors are smaller still, for all listed values of σ_t and α. Test problem as for Table 4.3 with N = 8 moments.

Table 4.6: Full multigrid (1,1)-V-cycle convergence for solving the moment equations, in the same format as Table 4.4 (σ_t = 1.0 and 1000.0; α = 1.0, 0.5, 0.0; levels 0 through 10): the error is again reduced by approximately a factor of 4 per level, and an additional V-cycle on the finest level reduces it only slightly. Test problem as for Table 4.4 with N = 4 moments.
CHAPTER 5

CONCLUSIONS

5.1 Summary of Results

In this thesis, we have studied a systematic Least-Squares approach to the neutron transport equation. The Least-Squares formulation converts the first-order transport problem into a self-adjoint variational form, which makes it accessible to the standard Finite-Element theory. Essential for this theory is the V-ellipticity and the continuity of the variational form, which leads directly to the existence and uniqueness of the analytic and discrete solutions and to bounds for the discretization error for a variety of different discrete spaces. Moreover, the variational formulation guides in a natural way the development of a multigrid solver for the resulting discrete problem. However, due to special properties of the transport equation, the Least-Squares approach is less straightforward than it first appears.

In this thesis, we focused on neutron transport problems in diffusive regimes. In these regimes, the transport equation is singularly perturbed and its solution tends to a solution of a diffusion equation. Therefore, to guarantee an accurate discrete solution, a discretization for the transport operator is needed that becomes a good approximation of the diffusion operator in diffusive regimes. Only a few conventional discretization schemes are known to have this property. By an asymptotic expansion, we show in Theorem 2.1 for slab geometry that a Least-Squares discretization with piecewise linear elements in space fails to be accurate in diffusive regimes. The choice of linear elements in space will, for any right-hand side, always result in a straight line connecting the prescribed values at the boundary for the principal part of the solution, which is independent of direction angle μ. Numerical tests confirm this behavior. On the other hand, we prove in Theorem 2.1 that, if piecewise polynomials of degree ≥ 2 are used, then the principal part of the discrete Least-Squares solution becomes a Galerkin approximation to the correct diffusion equation in diffusive regimes. This means that the Least-Squares discretization will be accurate in this case. Numerical tests with piecewise quadratic elements again confirm this result.

Because of Cea's Lemma, the Least-Squares discretization can be viewed as the best approximation to the exact solution in the discrete space with respect to the Least-Squares norm ‖𝓛 · ‖, where 𝓛 denotes the transport operator. In diffusive regimes, the different terms in the transport operator become totally unbalanced, which means that different parts of the solution are weighted much differently by the Least-Squares norm. With P denoting the L²-orthogonal projection onto the space of functions that are independent of direction angle, it is clear that the Least-Squares norm in diffusive regimes hardly measures the component Pψ of the solution ψ, although this is the main component in these regimes. The idea is therefore to scale the transport operator prior to the Least-Squares discretization, with the effect of changing the weighting in the Least-Squares norm. Clearly, the scaling from the left by S = P + τ(I - P) with τ = O(1/σ_t) increases the weight for the important solution component Pψ. Numerical tests show that a Least-Squares discretization of the scaled transport equation, even for piecewise linear elements in space, yields an accurate solution in diffusive regimes. Moreover, they show for piecewise quadratic elements that the scaling transformation dramatically increases accuracy.

The major part of this thesis is devoted to proving that the Least-Squares discretization in combination with the scaling transformation S gives, for a variety of simple Finite-Element spaces, accurate discrete solutions even in diffusive regimes. As mentioned above, essential for bounding the error is the V-ellipticity and the continuity of the Least-Squares form with respect to some norm. It is easy to show that the scaled Least-Squares form cannot be bounded from below by a standard Sobolev norm. Therefore, the first obvious choice in the one-dimensional case is the norm (‖P · ‖₁² + ‖ · ‖²)^{1/2}. With respect to this norm, we prove V-ellipticity and continuity of the scaled Least-Squares bilinear form and derive error bounds for discrete spaces that use piecewise polynomials in space and piecewise polynomials or Legendre polynomials in angle as basis functions. However, since the V-ellipticity and continuity constants for this norm depend on σ_t and σ_a, these bounds blow up in diffusive regimes. To prove the V-ellipticity and continuity with constants independent of σ_t and σ_a, we use a scaled norm. Based on the V-ellipticity and continuity with constants independent of σ_t and σ_a, we obtain discretization error bounds for the same discrete spaces mentioned above, with constants independent of σ_t and σ_a. Thus, these bounds stay valid also in diffusive regimes. This result is generalized to three-dimensional x-y-z geometry for discrete spaces that use piecewise polynomials as basis functions in space and spherical harmonics as basis functions in angle.

We conclude that the Least-Squares approach in combination with the scaling transformation represents a general framework for finding discretizations for the transport equation that are accurate in diffusive regimes. Further, it naturally guides the development of an efficient multigrid solver for the resulting discrete system. This is demonstrated in this thesis for slab geometry and piecewise linear elements. The developed multigrid solver for this discrete problem has convergence factors on the order of 0.1, so that one full multigrid cycle of this algorithm computes a solution with an error on the order of the discretization error.

5.2 Recommendations for Future Work

Our numerical results show that, when simple discrete spaces in space are used, refinement is needed in order to resolve boundary layers. Therefore, the aim for the future would be to combine the full multigrid solver with adaptive refinement. On the other hand, with the V-ellipticity and the continuity given, it seems fairly straightforward to establish error bounds for more complicated discrete spaces that can better resolve boundary layers, including those of exponential or hierarchical type. Furthermore, generalization of the scaling technique to anisotropic transport problems suggests itself.
BIBLIOGRAPHY

[1] R.A. Adams, Sobolev Spaces, Academic Press, 1975.
[2] R.E. Alcouffe, E.W. Larsen, W.F. Miller and B.R. Wienke, Computational Efficiency of Numerical Methods for the Multigroup, Discrete Ordinates Neutron Transport Equations: The Slab Geometry Case, Nuclear Science and Engineering 71, pp. 111-127, 1979.
[3] G.B. Arfken, Mathematical Methods for Physicists, second edition, Academic Press, New York, 1971.
[4] A. Barnett, J.E. Morel and D.R. Harris, A Multigrid Acceleration Method for the One-Dimensional SN Equations with Anisotropic Scattering, Nuclear Science and Engineering 102, pp. 1-21, 1989.
[5] J.H. Bramble, Multigrid Methods, Pitman Research Notes in Mathematics Series 294, Longman Scientific and Technical, 1993.
[6] W.L. Briggs, A Multigrid Tutorial, SIAM, Philadelphia, 1987.
[7] C. Borgers, E.W. Larsen and M.L. Adams, The Asymptotic Diffusion Limit of a Linear Discontinuous Discretization of a Two-Dimensional Linear Transport Equation, Journal of Computational Physics 98, pp. 285-300, 1992.
[8] S.C. Brenner and L.R. Scott, The Mathematical Theory of Finite Element Methods, Texts in Applied Mathematics, Springer Verlag Inc., New York, 1994.
[9] E. Broda, Ludwig Boltzmann. Mensch. Physiker. Philosoph., Franz Deuticke Verlagsgesellschaft m.b.H., Wien, 1986.
[10] Z. Cai, R. Lazarov, T.A. Manteuffel and S.F. McCormick, First-Order System Least Squares for Partial Differential Equations: Part I, SIAM J. Numer. Anal., Vol. 31, 1994.
[11] Z. Cai, T.A. Manteuffel and S.F. McCormick, First-Order System Least Squares for Partial Differential Equations: Part II, submitted to SIAM J. Numer. Anal., March 1994.
[12] Z. Cai, T.A. Manteuffel and S.F. McCormick, First-Order System Least-Squares for the Stokes Equation, submitted to SIAM J. Numer. Anal., June 1994.
[13] B.G. Carlson and K.D. Lathrop, Transport Theory - The Method of Discrete Ordinates, in Computing Methods in Reactor Physics (H. Greenspan, C.N. Kelber, and D. Okrent, eds.), Gordon and Breach, New York, p. 166, 1968.
[14] K.M. Case and P.F. Zweifel, Linear Transport Theory, Addison-Wesley Publishing Company, Reading, Massachusetts, 1967.
[15] C. Cercignani, The Boltzmann Equation and Its Applications, Applied Mathematical Sciences, Vol. 67, Springer-Verlag, New York, 1988.
[16] P.G. Ciarlet and J.L. Lions, Handbook of Numerical Analysis, Vol. II, Finite Element Methods, Elsevier Science Publishers B.V., North-Holland, Amsterdam, 1991.
[17] J.J. Duderstadt and W.R. Martin, Transport Theory, John Wiley & Sons, New York, 1978.
[18] V. Faber and T.A. Manteuffel, Neutron Transport from the Viewpoint of Linear Algebra, in Transport Theory, Invariant Imbedding and Integral Equations (Nelson, Faber, Manteuffel, Seth, and White, eds.), Lecture Notes in Pure and Applied Mathematics, 115, pp. 37-61, Marcel-Dekker, April 1989.
[19] K.O. Friedrichs, Asymptotic Phenomena in Mathematical Physics, Bull. Am. Math. Soc., 61, pp. 485-504, 1955.
[20] G.H. Golub and C.F. Van Loan, Matrix Computations, second edition, The Johns Hopkins University Press, Baltimore, 1989.
[21] D. Gottlieb and S.A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications, Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, 1977.
[22] P. Grisvard, Elliptic Problems in Nonsmooth Domains, Pitman Advanced Publishing Program, Boston, 1985.
[23] G.J. Habetler and B.J. Matkowsky, Uniform Asymptotic Expansion in Transport Theory with Small Free Paths, and the Diffusion Approximation, Journal of Mathematical Physics 16, No. 4, pp. 846-854, April 1975.
[24] W. Hackbusch, Multi-Grid Methods and Applications, Springer, Berlin, 1985.
[25] C. Johnson, Numerical Solution of Partial Differential Equations by the Finite Element Method, Cambridge University Press, Cambridge, 1990.
[26] S. Kaplan and J.A. Davis, Canonical and Involutory Transformations of the Variational Problems of Transport Theory, Nucl. Sci. Eng., 28, pp. 166-176, 1967.
[27] J.R. Lamarsh, Introduction to Nuclear Reactor Theory, Addison-Wesley Publishing Company, Inc., Reading, Massachusetts, 1965.
[28] E.W. Larsen, Diffusion Theory as an Asymptotic Limit of Transport Theory for Nearly Critical Systems with Small Mean Free Path, Annals of Nuclear Energy, Vol. 7, pp. 249-255.
[29] E.W. Larsen, Diffusion-Synthetic Acceleration Methods for Discrete Ordinates Problems, Transport Theory and Statistical Physics, 13, pp. 107-126, 1984.
[30] E.W. Larsen, The Asymptotic Diffusion Limit of Discretized Transport Problems, Nuclear Science and Engineering 112, pp. 336-346, 1992.
[31] E.W. Larsen and J.B. Keller, Asymptotic Solution of Neutron Transport Problems for Small Mean Free Paths, J. Math. Phys., Vol. 15, No. 1, pp. 75-81, January 1974.
[32] E.W. Larsen, J.E. Morel and W.F. Miller, Asymptotic Solutions of Numerical Transport Problems in Optically Thick, Diffusive Regimes, J. Comp. Phys., 69, pp. 283-324, 1987.
[33] E.W. Larsen and J.E. Morel, Asymptotic Solutions of Numerical Transport Problems in Optically Thick, Diffusive Regimes II, J. Comp. Phys., 83, p. 212, 1989.
[34] E.E. Lewis and W.F. Miller, Computational Methods of Neutron Transport, John Wiley & Sons, New York, 1984.
[35] T.A. Manteuffel, unpublished personal notes on even-parity.
[36] T.A. Manteuffel, S.F. McCormick, J.E. Morel, S. Oliveira and G. Yang, A Fast Multigrid Solver for Isotropic Transport Problems, submitted to SIAM J. Sci. Comp., to appear.
[37] T.A. Manteuffel, S.F. McCormick, J.E. Morel, S. Oliveira and G. Yang, A Parallel Version of a Multigrid Algorithm for Isotropic Transport Equations, SIAM J. Sci. and Stat. Comp., 15, No. 2, pp. 474-493, March 1994.
[38] T.A. Manteuffel and K.J. Ressel, Multilevel Methods for Transport Equations in Diffusive Regimes, Proceedings of the Copper Mountain Conference on Multigrid Methods, April 5-9, 1993.
[39] H. Margenau and G.M. Murphy, The Mathematics of Physics and Chemistry, second edition, D. Van Nostrand Company, Inc., Princeton, 1968.
[40] W.R. Martin, The Application of the Finite Element Method to the Neutron Transport Equation, Ph.D. Thesis, Nuclear Engineering Department, The University of Michigan, Ann Arbor, Michigan, 1976.
[41] S.F. McCormick, Multigrid Methods, Frontiers in Applied Mathematics 3, SIAM, Philadelphia, 1987.
[42] S.F. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, Frontiers in Applied Mathematics, SIAM, Philadelphia, 1989.
[43] S.F. McCormick, Multilevel Projection Methods for Partial Differential Equations, SIAM, Philadelphia, 1992.
[44] J.E. Morel and T.A. Manteuffel, An Angular Multigrid Acceleration Technique for the SN Equations with Highly Forward-Peaked Scattering, Nuclear Science and Engineering, 107, pp. 330-342, 1991.
[45] J.T. Oden and G.F. Carey, Finite Elements, Mathematical Aspects, Volume IV, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1983.
[46] R.K. Osborn and S. Yip, The Foundations of Neutron Transport Theory, Gordon and Breach, Science Publishers, Inc., New York, 1966.
[47] A.I. Pehlivanov, G.F. Carey and R.D. Lazarov, Least-Squares Mixed Finite Elements for Second-Order Elliptic Problems, SIAM J. Numer. Anal., Vol. 31, No. 5, pp. 1368-1377, October 1994.
[48] G.C. Pomraning, Diffusive Limits for Linear Transport Equations, Nuclear Science and Engineering 112, pp. 239-255, 1992.
[49] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, second edition, Texts in Applied Mathematics, Springer Verlag, New York, 1993.
[50] S. Ukai, Solution of Multi-dimensional Neutron Transport Equations by Finite Element Methods, Journal of Nuclear Science and Technology, 9(6), pp. 366-373, 1972.
[51] G.M. Wing, An Introduction to Transport Theory, John Wiley and Sons, Inc., New York, 1962.
APPENDIX A

FLUX STENCIL

In this appendix we derive the stencil for the Least-Squares discretization of the S_N-flux equations (4.16) with piecewise linear elements. We assume that the slab is partitioned into z_l = z_0 < z_1 < ... < z_m = z_r and denote by h_k := z_k - z_{k-1} the width of cell k. The discrete solution ψ_h is written as a sum over the piecewise linear nodal basis functions η_i(z) with N-vector coefficients ψ_i, and the test functions in (4.16) are the products of the nodal functions η_i(z) with the canonical basis vectors of R^N.

For a node i, the integrals in (4.16) split into a contribution from cell i and a contribution from cell i+1, so that each equation consists of four terms, denoted (*1) to (*4) in (A.3): the parts of the left-hand side coming from cells i and i+1, and the corresponding parts of the right-hand side, which involve the scaled source q_s(z) = S_N q(z) = 1ω^T q(z) + τ(I - 1ω^T) q(z). Each term is evaluated by the substitution z = z - z_{i-1} (for cell i) or z = z - z_i (for cell i+1), which reduces the integrals to integrals over [0, h] of products of the linear shape functions and their derivatives; the angular part produces combinations of the matrices M, Ω, ω ω^T and M ω ω^T M of Section 4.1.2, weighted by the values of τ, σ_t and σ_a in the respective cell.

Equations for an interior node. Putting the contributions (*1) to (*4) together gives the equations for an interior node i: a block three-point stencil in space, in which the blocks acting on ψ_{i-1}, ψ_i and ψ_{i+1} are built from the matrices M ω ω^T M, Ω M², Ω and ω ω^T, with the derivative terms weighted by (τ² - 1) and τ², and the reaction terms weighted by h/3 and h/6 together with the cell values of τ² σ_t² and σ_a².

Equations for the left boundary. For the left boundary (i = 0) there are only contributions from parts (*2) and (*4), and the angular index j runs only over N/2 + 1, ..., N, since the values for the positive angles are given by the boundary conditions. With M_{N/2} := diag(μ_1, ..., μ_{N/2}), Ω_{N/2} := diag(w_1, ..., w_{N/2}) and ω_{N/2} := (w_1, ..., w_{N/2})^T, the boundary equations are formed from these reduced matrices together with h_1, τ_1, σ_{t,1} and σ_{a,1}.

Equations for the right boundary. For the right boundary (i = m) there are only contributions from parts (*1) and (*3), and j runs only over 1, ..., N/2, since the values for the negative angles are given by the boundary conditions. With Ω⁺ := diag(Ω_{N/2}, 0), the boundary equations take the analogous form with h_m, τ_m, σ_{t,m} and σ_{a,m}.
APPENDIX B

MOMENT STENCIL

In this appendix we list the stencil for the Least-Squares discretization of the moment equations (4.18) for piecewise linear basis functions. Each moment Φ_l(z) is expanded in the nodal basis functions η_k(z) with coefficients Φ_{k,l}, and the coefficients belonging to node k are collected in the vector Φ_k. Separate stencils arise for the lowest moments and for Φ_{k,l} with l > 1; in the interior of the domain the stencil couples the neighbouring spatial nodes and the two higher and two lower moments, while at the spatial boundary all moments are coupled.

Collecting all equations, the discretization can be written as a block tridiagonal system

    A Φ = q,   with blocks A_{i,i-1}, A_{i,i}, A_{i,i+1}.                                 (B.1)

So far we have not taken into consideration the boundary conditions (4.14), which can be written as

    J Φ = g,                                                                              (B.2)

where J is an N × N(m+1) matrix acting only on the boundary nodes and g contains the prescribed incoming flux values at the two boundaries. We therefore have to minimize the Least-Squares functional F(Φ) subject to the constraint (B.2). This is done by setting the total derivative of F to zero, written in terms of the unconstrained degrees of freedom as condition (B.3). Without the constraint, ∇_Φ F(Φ) = 0 is just the discrete system (B.1). Differentiating the constraint (B.2) shows that the rows of the derivative matrix in (B.3) lie in the nullspace N(J), which has dimension Nm. Suppose we find an Nm × N(m+1) matrix J^⊥ of rank Nm such that J (J^⊥)^T = 0; then the rows of J^⊥ span N(J), there exists a matrix D mapping the derivative matrix onto J^⊥, and multiplying (B.3) by D gives J^⊥ ∇_Φ F(Φ) = 0. Together with the constraint this becomes the closed system

    J^⊥ ∇_Φ F(Φ) = 0,   J Φ = g.                                                          (B.4)

Since T^T T Ω = I_N, such a J^⊥ can be chosen explicitly. Taking the form of ∇_Φ F(Φ) into account, we see that (B.4) can be written as a block tridiagonal system (B.5) that coincides with (B.1) except in the first and last block rows, where the blocks A_{0,0}, A_{0,1} and A_{m,m-1}, A_{m,m} are replaced by constrained blocks C_{0,0}, C_{0,1} and C_{m,m-1}, C_{m,m} and the right-hand side incorporates the boundary data.
http://www.intmath.com/plane-analytic-geometry/6-hyperbola.php | # 6. The Hyperbola
Cooling towers for a nuclear power plant have a hyperbolic cross-section.
A hyperbola is a pair of symmetrical open curves. It is what we get when we slice a pair of vertical joined cones with a vertical plane.
### How do we create a hyperbola?
Take 2 fixed points A and B and let them be 4a units apart. Now, take half of that distance (i.e. 2a units).
Now, move along a curve such that from any point on the curve,
(distance to A) − (distance to B) = 2a units.
The curve that results is called a hyperbola. There are two parts to the curve.
Let's see how this works with some examples.
### Example 1
Let the distance between our points A and B be 4 cm. For convenience in our first example, let's place our fixed points A and B on the y-axis at (0, 2) and (0, −2), so they are 4 units apart. In this case, a = 1 cm and 2a = 2 cm.
### Applications of Hyperbolas
• Navigation: Ship's navigators can plot their position by comparing GPS signals from different satellites. The technique involves hyperbolas.
• Physics: The movement of objects in space and of subatomic particles trace out hyperbolas in certain situations.
• Sundials: Historically, sundials made use of hyperbolas. Place a stick in the ground and trace out the path made by the shadow of the tip, and you'll get a hyperbola.
• Construction: Nuclear power plant smoke stacks have a hyperbolic cross section as illustrated above. Such 3-dimensional objects are called hyperboloids.
Now we start tracing out a curve such that P is a point on the curve, and:
distance PB − distance PA = 2 cm.
We start at (0, 1).
Shown below is one of the points P, such that PB − PA = 2.
If we continue, we obtain the blue curve:
Now, continuing our curve on the left side of the axis gives us the following:
We also have another part of the hyperbola on the opposite side of the x-axis, this time using:
distance PA − distance PB = 2
Once again a typical point P is shown, and we can see from the lengths given that PA − PB = 2.
We observe that the curves become almost straight near the extremities. In fact, the lines y=x/sqrt3 and y=-x/sqrt3 (the red dotted lines below) are asymptotes:
[An asymptote is a line that forms a "barrier" to a curve. The curve gets closer and closer to an asymptote, but does not touch it.]
In Example 1, the points (0, 1) and (0, -1) are called the vertices of the hyperbola, while the points (0, 2) and (0, -2) are the foci (or focuses) of the hyperbola.
### The equation of our hyperbola
For the hyperbola with a = 1 that we graphed above in Example 1, the equation is given by:
y^2-x^2/3=1
Notice that it is not a function, since for each x-value, there are two y-values.
We call this example a "north-south" opening hyperbola.
### Where did this hyperbola equation come from?
The equation follows from the distance formula and the requirement (in this example) that distance PB − distance PA = 2.
Here's the proof.
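The proof itself was an expandable box on the original page. One way to carry out the algebra (using A at (0, 2), B at (0, −2) and the condition PB − PA = 2 from Example 1; the steps below are a reconstruction, not the original working) is:

sqrt(x^2+(y+2)^2) - sqrt(x^2+(y-2)^2) = 2

sqrt(x^2+(y+2)^2) = 2 + sqrt(x^2+(y-2)^2)

x^2+(y+2)^2 = 4 + 4 sqrt(x^2+(y-2)^2) + x^2+(y-2)^2

8y - 4 = 4 sqrt(x^2+(y-2)^2)

2y - 1 = sqrt(x^2+(y-2)^2)

4y^2 - 4y + 1 = x^2 + y^2 - 4y + 4

3y^2 - x^2 = 3

y^2 - x^2/3 = 1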
### General Equation of North-South Hyperbola
For the hyperbola with foci at (0, c) and (0, −c) (so the focal distance is 2c), passing through the y-axis at (0, a) and (0, −a), we define
b^2 = c^2 − a^2
Applying the distance formula for the general case, in a similar fashion to the above example, we obtain the general form for a north-south hyperbola:
y^2/a^2-x^2/b^2=1
### Example 2
Here's another example of a "north-south" hyperbola.

Its equation is:

y^2 − x^2 = 1
Similar to Example 1, this hyperbola passes through 1 and −1 on the y-axis, but it has a different equation and a slightly different shape (and different asymptotes). Where are the 2 foci for this hyperbola? We need to find the value of c.
By inspection (of the equation of this hyperbola), we can see a = 1 and b = 1. Using the formula given above, we have:
b^2 = c^2 − a^2

So

1^2 = c^2 − 1^2

c^2 = 2

c = ±√2
So the points A and B (the foci) for this hyperbola are at A (0, √2) and B (0, −√2).
## East-West Opening Hyperbola
By reversing the x- and y-variables in our second example above, we obtain the following equation.
### Example 3
x^2 − y^2 = 1
This gives us an "East-West" opening hyperbola, as follows. Our curve passes through -1 and 1 on the x-axis and once again, the asymptotes are the lines y = x and y = −x.
The general formula for an East-West hyperbola is given by:
x^2/a^2-y^2/b^2=1
Note the x and y are reversed, compared the formula for the North-South hyperbola.
## Technical Definition of a Hyperbola
A hyperbola is the locus of points where the difference in the distance to two fixed foci is constant.
This technical definition is one way of describing what we were doing in Example 1, above.
### Hyperbolas in Nature
Throw 2 stones in a pond. The resulting concentric ripples meet in a hyperbola shape.
## More Forms of the Equation of a Hyperbola
There are a few different formulas for a hyperbola.
Considering the hyperbola with centre (0, 0), the equation is either:
1. For a north-south opening hyperbola:
y^2/a^2-x^2/b^2=1
The slopes of the asymptotes are given by:
+-a/b
2. For an east-west opening hyperbola:
x^2/a^2-y^2/b^2=1
The slopes of the asymptotes are given by:
+-b/a
In Examples 2 & 3 given above, both a and b were equal to 1, so the slopes of the asymptotes were simply ± 1 and our asymptotes were the lines y = x and y = −x.
What effect does it have if we change a and b?
### Example 4
Sketch the hyperbola
y^2/25-x^2/4=1
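The worked sketch for this example lived in the interactive graph on the original page. Filling in the values from the general formulas above (a summary added here, not the original solution): a^2 = 25 and b^2 = 4, so a = 5 and b = 2. This is a north-south opening hyperbola with vertices at (0, 5) and (0, −5). The slopes of the asymptotes are +-a/b = +-5/2, so the asymptotes are the lines y = 5x/2 and y = −5x/2, and from b^2 = c^2 − a^2 we get c^2 = 29, so the foci are at (0, sqrt29) and (0, −sqrt29).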
## Even More Forms of the Equation of a Hyperbola
(1) Possibly the simplest equation of a hyperbola is given in the following example.
### Example 5 - Equilateral Hyperbola
xy = 1
This is known as the equilateral or rectangular hyperbola.
Notice that this hyperbola is a "north-east, south-west" opening hyperbola. Compared to the other hyperbolas we have seen so far, the axes of the hyperbola have been rotated by 45°. Also, the asymptotes are the x- and y-axes.
### Hyperbola with axis not at the Origin
(2) Our hyperbola may not be centred on (0, 0). In this case, we use the following formulas:
For a "north-south" opening hyperbola with centre (h, k), we have:
((y-k)^2)/a^2-((x-h)^2)/b^2=1
For an "east-west" opening hyperbola with centre (h, k), we have:
((x-h)^2)/a^2-((y-k)^2)/b^2=1
### Example 6 - Hyperbola with Axes Shifted
Sketch the hyperbola
((x-2)^2)/36-((y+3)^2)/64=1
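Again the sketch itself was an interactive graph on the original page; reading the values off the general form above (a summary added here): the centre is (h, k) = (2, −3), with a^2 = 36 and b^2 = 64, so a = 6 and b = 8. The hyperbola opens east-west, with vertices at (2 − 6, −3) = (−4, −3) and (2 + 6, −3) = (8, −3), and the asymptotes are the lines through (2, −3) with slopes +-b/a = +-4/3.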
3. We could expand our equations for the hyperbola into the following form:
Ax^2+ Bxy + Cy^2+ Dx + Ey + F = 0 (such that B^2>4AC)
In the earlier examples on this page, there was no xy-term involved. As we saw in Example 5, if we do have an xy-term, it has the effect of rotating the axes. We no longer have "north-south" or "east-west" opening arms - they could open in any direction.
### Example 7 - Hyperbola with Shifted and Rotated Axes
The graph of the hyperbola x^2 + 5xy − 2y^2 + 3x + 2y + 1 = 0 is as follows:
We see that the axes of the hyperbola have been rotated and have been shifted from (0, 0).
[Further analysis is beyond the scope of this section. ]
### Exercise
Sketch the hyperbola
x^2/9-y^2/16=1
### Conic section: Hyperbola
How can we obtain a hyperbola from slicing a cone?
We start with a double cone (2 right circular cones placed apex to apex):
When we slice the 2 cones vertically, we get a hyperbola, as shown.
https://mathoverflow.net/questions/313780/orthogonal-basis-of-polynomials | # Orthogonal basis of polynomials?
Let us define the basis of polynomials given by: $$\begin{array}{l} P_0=1, \\ P_1=x, \\ P_2=x(x-1), \\ P_3=x(x-1)(x-2), \\ P_4=x(x-1)(x-2)(x-3), \ldots\\ \end{array}$$ I would like to know if this basis is orthogonal with respect to some measure. Thank you very much!
• Well, you could define an ad hoc inner product by saying that if $p(x) = \sum a_ip_i$ and $q(x)=\sum b_ip_i$, then $\langle p,q\rangle = \sum a_ib_i$, which would make it an orthogonal (even orthonormal) basis. But presumably you are looking for more than just "some" measure? – Arturo Magidin Oct 25 '18 at 21:50
• Ps: I was thinking about an inner product of the form $\int dx P_i(x) P_j(x) \mu(x)$ for some measure $\mu(x)$ – fernando Oct 25 '18 at 22:22
• You should explain more clearly what kind of measure you are asking? A real measure on the real line? A complex measure on a subset of the complex plane? – Alexandre Eremenko Oct 25 '18 at 23:50
If a sequence of monic polynomials is orthogonal with respect to a measure, it satisfies a three-term recurrence $p_{n+1}(t) = (t-a_n)p_n(t) - b_n p_{n-1}(t)$ where $b_n>0$. From this it follows that consecutive terms in the sequence cannot have a common zero. Your sequence fails badly on this test.
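(Added illustration, not part of the original answer: a short SymPy check, assuming the $P_i$ are the falling-factorial polynomials defined in the question, showing that consecutive members share zeros, which is exactly what the three-term recurrence forbids.)

```python
import sympy as sp

x = sp.symbols('x')
P = [sp.Integer(1)]                    # P_0 = 1
for n in range(1, 5):                  # P_n = x(x-1)...(x-n+1)
    P.append(sp.expand(P[-1] * (x - (n - 1))))

for n in range(1, 4):
    common = set(sp.solve(P[n], x)) & set(sp.solve(P[n + 1], x))
    print(n, n + 1, common)            # e.g. P_2 and P_3 share the zeros {0, 1}
```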
• Thank you very much! Indeed you are right. But can we define something similar to an orthogonality condition? E.g. we can also consider the family of polynomials $P_n(x) = x^n$, which is a complete basis, but in this case we can define, for instance, $\langle P_n, P_m\rangle = \int \frac{dz}{z} P_n(z) P_{-m}(z)$, where the integral is a contour integral around zero. – fernando Oct 25 '18 at 22:35
http://mres.uni-potsdam.de/index.php/2017/02/28/matlab-based-simulation-of-bioturbation/ | # MATLAB-Based Simulation of Bioturbation, Part 1
Bioturbation (or benthic mixing) causes significant distortions in marine stable isotope signals and other palaeoceanographic records. My doctoral project at the University of Kiel between 1992 and 1995 aimed to model, quantify, and deconvolve the effect of bioturbation in deep-sea sediments.
The TURBO bioturbation algorithm published in 1998 (Trauth, 1998) can be used to simulate the effects of benthic mixing on individual sediment particles such as foraminifera tests. The advantage of this model is that it allows users of the program to study the effect of bioturbation on isotopic signals from stratigraphic carriers such as foraminifera. It is also able to simulate the effect that a small sample size (i.e. a small number of foraminifera tests with isotopic measurements) has on the noise level of an isotope record.
The disadvantage of TURBO, however, is that it was written in FORTRAN77. This programming language is very inefficient in daily use compared to increasingly popular numerical computer programming languages such as MATLAB. The software has therefore been chosen as an appropriate programming language for a complete rewrite of TURBO, to be known as TURBO2. TURBO2 provides a tool for time-variant bioturbation modeling of signal carriers, such as foraminifera carrying an isotope signal (Trauth, 2013).
The TURBO2 MATLAB program consists of only ∼50 lines of computer code; the script to import the synthetic data to be mixed, to run TURBO2, and to display the results consists of another ∼50 lines of MATLAB code. In contrast, the original FORTRAN77 TURBO program consisted of ∼900 lines of code, not including any algorithm for the graphical display of results, in contrast to the MATLAB-based TURBO2. The MATLAB code of TURBO2 including example data is available for download.
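TURBO2 itself is the MATLAB code distributed with the 2013 paper. Purely to illustrate the kind of particle-level mixing it simulates (the toy script below is not TURBO2, and all parameter names and values are invented for the example), a minimal sketch in Python might look like this:

```python
import numpy as np

rng = np.random.default_rng(42)

n_layers = 200                                        # sediment column, one value per layer
signal = np.sin(np.linspace(0, 6 * np.pi, n_layers))  # synthetic isotope-like input signal
mixed = signal.copy()

mix_depth = 10    # thickness of the mixed layer, in layers (assumed)
n_swaps = 30      # random particle exchanges per deposition step (assumed)

# deposit layers one by one; after each step, randomly exchange particles
# within the uppermost mix_depth layers (a crude stand-in for bioturbation)
for top in range(mix_depth, n_layers):
    lo = top - mix_depth
    for _ in range(n_swaps):
        i, j = rng.integers(lo, top, size=2)
        mixed[i], mixed[j] = mixed[j], mixed[i]

print(np.corrcoef(signal, mixed)[0, 1])   # correlation < 1: the record is smeared
```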
### References
Trauth, M.H. (2013) TURBO2: A MATLAB simulation to study the effects of bioturbation on paleoceanographic time series. Computers and Geosciences, 61, 1-10.
Trauth, M.H. (1998) TURBO: a dynamic-probabilistic simulation to study the effects of bioturbation on paleoceanographic time series. Computers and Geosciences, 24(5), 433-441.
https://wiki.math.ubc.ca/mathbook/M102/Midterm_2_2012_answers | 1. $\sin(\arctan(a)=\dfrac{a}{\sqrt{1+a^2}}$.
3. $x=3$ only. $x=-2$ does not solve the given equation.
4. $y'=\frac{3}{2}$.
1. $T(t) = \dfrac{25^4}{16^3} \left(\dfrac{16}{25} \right)^\frac{t}{8}$
2. $t=27$
1. $d=100 \sqrt{2}$ m
2. $x(t)= 100-25 t$
3. $\theta'=\dfrac{1}{4}$
http://mathhelpforum.com/advanced-algebra/144330-lin-alg-proofs-counterexamples-print.html | # Lin Alg Proofs and Counterexamples
• May 2nd 2010, 02:13 PM
dwsmith
1 Attachment(s)
Lin Alg Proofs and Counterexamples
I have compiled 57 prove or disprove lin alg questions for my final; however, these may be useful to all.
Contributors to some of the solutions are HallsofIvy, Failure, Tikoloshe, jakncoke, tonio, and Defunkt.
If you discover any errors in one of the solutions, then feel free to reply with the number and correction.
Moderator Edit:
1. If you want to thank dwsmith, please click on the Thanks button (do NOT post replies here unless you have a suggestion or erratum).
2. The original thread can be viewed at http://www.mathhelpforum.com/math-he...tml#post511378.
• May 17th 2010, 01:13 PM
dwsmith
Here is a general proof for a Vector Space that shows when $k+1$ will be lin. ind. and lin. dep.
Let $\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k$ be lin. ind. vectors in V. If we add a vector $\mathbf{x}_{k+1}$, do we still have a set of lin. ind. vectors?
(i) Assume $\mathbf{x}_{k+1}\in$ $Span (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k)$
$\displaystyle\mathbf{x}_{k+1}=c_1\mathbf{x}_{1}+.. .+c_k\mathbf{x}_{k}$
$\displaystyle c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}+c_{k+1}\mathbf{x}_{k+1}=0$
$\displaystyle c_{k+1}=-1$
$\displaystyle -1\neq 0$; therefore, $(\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k, \mathbf{x}_{k+1})$ are lin. dep.
(ii) Assume $\mathbf{x}_{k+1}\notin$ $Span (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k)$
$\displaystyle c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}+c_{k+1}\mathbf{x}_{k+1}=0$
$\displaystyle c_{k+1}=0$ otherwise $\displaystyle \mathbf{x}_{k+1}=\frac{-c_1}{c_{k+1}}\mathbf{x}+....+\frac{-c_k}{c_{k+1}}\mathbf{x}_k$ which is a contradiction.
Then $\displaystyle c_1\mathbf{x}_{1}+...+c_k\mathbf{x}_{k}=0$, and since $(\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k)$ are lin. ind., $c_1=...=c_k=0$. Hence all coefficients vanish, so $(\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_k, \mathbf{x}_{k+1})$ are lin. ind.
• May 17th 2010, 08:56 PM
dwsmith
A is matrix n x n over field F, similar to an upper triangular matrix iff. the characteristic polynomial can be factored into an expression of the form $\displaystyle (\lambda_1-\lambda)(\lambda_2-\lambda)...(\lambda_n-\lambda)$
$\displaystyle det(A)=(a_{11}-\lambda)A_{11}+\sum_{i=2}^{n}a_{i1}A_{i1}$
$\displaystyle (a_{11}-\lambda)A_{11}=(a_{11}-\lambda)(a_{22}-\lambda)...(a_{nn}-\lambda)$
$\displaystyle =(-1)^n\lambda^n+...+(-1)^{n-1}\lambda^{n-1}$
$\displaystyle p(0)=det(A)=\lambda_1\lambda_2...\lambda_n$
$\displaystyle (-1)^{n-1}=tr(A)=\sum_{i=1}^{n}\lambda_i$
$\displaystyle p(\lambda)=0$ has exactly n solutions $\lambda_1,...,\lambda_n$
$\displaystyle p(\lambda)=(\lambda_1-\lambda)(\lambda_2-\lambda)...(\lambda_n-\lambda)$
$\displaystyle p(0)=(\lambda_1)(\lambda_2)...(\lambda_n)=det(A)$
• May 18th 2010, 11:01 PM
dwsmith
Let $A$ be an n x n matrix and $B=I-2A+A^2$. Show that if $\mathbf{x}$ is an eigenvector of A belonging to an eigenvalue $\lambda$ of A, then $\mathbf{x}$ is also an eigenvector of B belonging to an eigenvalue $\mu$ of B.
$B\mathbf{x}=(I-2A+A^2)\mathbf{x}=\mathbf{x}-2A\mathbf{x}+A^2\mathbf{x}=\mathbf{x}-2\lambda\mathbf{x}+A(\lambda\mathbf{x})=\mathbf{x}-2\lambda\mathbf{x}+(A\mathbf{x})\lambda$ $=\mathbf{x}-2\lambda\mathbf{x}+\lambda^2\mathbf{x}=(1-2\lambda+\lambda^2)\mathbf{x}=\mu\mathbf{x}$
Hence, $\mu=(1-2\lambda+\lambda^2)$
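A quick numerical spot check of this identity (a NumPy sketch; the random $3\times 3$ matrix, the seed, and the choice of the first eigenpair are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = np.eye(3) - 2 * A + A @ A

lam, vecs = np.linalg.eig(A)
x = vecs[:, 0]                       # eigenvector of A for eigenvalue lam[0]
mu = 1 - 2 * lam[0] + lam[0] ** 2    # predicted eigenvalue of B

print(np.allclose(B @ x, mu * x))    # True, up to floating-point error
```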
• May 20th 2010, 06:49 PM
dwsmith
Let $\lambda$ be an eigenvalue of $A$ and let $\mathbf{x}$ be an eigenvector belonging to $\lambda$. Use math induction to show that, for $m\geq 1$, $\lambda^m$ is an eigenvalue of $A^m$ and $\mathbf{x}$ is an eigenvector of $A^m$ belonging to $\lambda^m$.
$A\mathbf{x}=\lambda\mathbf{x}$
$p(k):=A^k\mathbf{x}=\lambda^k\mathbf{x}$
$p(1):=A\mathbf{x}=\lambda\mathbf{x}$
$p(k+1):=A^{k+1}\mathbf{x}=\lambda^{k+1}\mathbf{x}$
Assume $p(k)$ is true.
Since $p(k)$ is true, then $p(k+1):=A^{k+1}\mathbf{x}=\lambda^{k+1}\mathbf{x}$.
$A^{k+1}\mathbf{x}=A^kA\mathbf{x}=A^k(\lambda\mathbf{x})=\lambda(A^k\mathbf{x})=\lambda\lambda^k\mathbf{x}=\lambda^{k+1}\mathbf{x}$
By induction, $A^k\mathbf{x}=\lambda^k\mathbf{x}$.
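The statement is just as easy to spot-check numerically for one particular power, say $m=5$ (another NumPy sketch with an arbitrary example matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam, vecs = np.linalg.eig(A)
x = vecs[:, 0]                        # eigenvector for lam[0]
m = 5

print(np.allclose(np.linalg.matrix_power(A, m) @ x, lam[0] ** m * x))  # True
```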
• Feb 4th 2011, 06:37 PM
Ackbeet
1. From Test 5, Problem 4, on page 4: I would say, rather, that eigenvectors must be nonzero, by definition. It's not that the zero eigenvector case is trivial: it's that it's not allowed.
2. Page 6, Problem 8: typo in problem statement. Change "I of -I" to "I or -I".
3. Page 8, Problem 21: the answer is correct, but the reasoning is incorrect. It is not true that $\mathbf{x}$ and $\mathbf{y}$ are linearly independent if and only if $|\mathbf{x}^{T}\mathbf{y}|=0.$ That is the condition for orthogonality, which is a stronger condition than linear independence. Counterexample: $\mathbf{x}=(\sqrt{2}/2)(1,1),$ and $\mathbf{y}=(1,0).$ Both are unit vectors, as stipulated. We have that $|\mathbf{x}^{T}\mathbf{y}|=\sqrt{2}/2\not=0,$ and yet
$a\mathbf{x}+b\mathbf{y}=\mathbf{0}$ requires $a=b=0,$ which implies linear independence.
Instead, the argument should just produce a simple counterexample, such as $\mathbf{x}=\mathbf{y}=(1,0)$.
Good work, though!
• May 27th 2011, 02:33 AM
sorv1986
Quote:
Originally Posted by dwsmith
A is matrix n x n over field F, similar to an upper triangular matrix iff. the characteristic polynomial can be factored into an expression of the form $\displaystyle (\lambda_1-\lambda)(\lambda_2-\lambda)...(\lambda_n-\lambda)$
$\displaystyle det(A)=(a_{11}-\lambda)A_{11}+\sum_{i=2}^{n}a_{i1}A_{i1}$
$\displaystyle (a_{11}-\lambda)A_{11}=(a_{11}-\lambda)(a_{22}-\lambda)...(a_{nn}-\lambda)$
$\displaystyle =(-1)^n\lambda^n+...+(-1)^{n-1}\lambda^{n-1}$
$\displaystyle p(0)=det(A)=\lambda_1\lambda_2...\lambda_n$
$\displaystyle (-1)^{n-1}=tr(A)=\sum_{i=1}^{n}\lambda_i$
$\displaystyle p(\lambda)=0$ has exactly n solutions $\lambda_1,...,\lambda_n$
$\displaystyle p(\lambda)=(\lambda_1-\lambda)(\lambda_2-\lambda)...(\lambda_n-\lambda)$
$\displaystyle p(0)=(\lambda_1)(\lambda_2)...(\lambda_n)=det(A)$
A is matrix n x n over field F, similar to an upper triangular matrix iff. the minimal polynomial is a product of linear factors
• Nov 6th 2011, 04:41 PM
carlosgrahm
Re: Lin Alg Proofs and Counterexamples
Great post thank you
• Dec 15th 2011, 06:58 PM
dwsmith
2 Attachment(s)
I have a workbook of 100 proofs I am typing up for Linear Alg. I am not sure if all the solutions are correct. However, I will post what I have done so far (15 problems) for review and correct errors as they are found. Once they are all done, I will add them to the other sticky of proofs I have up already.
Deveno, Drexel, Pickslides, FernandoRevilla, and Ackbeet have helped with some of the problems already.
Updated pdf with more solutions
Updated pdf file.
https://www.physicsforums.com/threads/tricky-impossible-integral.210667/ | # Tricky (impossible?) Integral
1. ### KingOrdo
119
Anyone know how to do this integral? I can't figure it out analytically, nor do I find it in my tables. Thanks!
$$\int x^{(x-1)}dx$$
2. ### Defennder
2,616
I used the Integrator http://integrals.wolfram.com/index.jsp and got the following, which implies no elementary solutions exist:
3. ### HallsofIvy
40,201
Staff Emeritus
Do you have any reason to believe that such a formula does exist? "Almost all" such functions do not have an anti-derivative in any simple form.
4. ### KingOrdo
119
Nope. I never claimed such, nor did I specify that I would only accept a solution in a "simple form".
5. ### HallsofIvy
40,201
Staff Emeritus
Ah, well, if you don't require it in "simple form", I can definitely say that
$$\int x^{x-1}\, dx= \mathrm{Ivy}(x)+ C$$ where I have defined "Ivy(x)" to be the anti-derivative of $x^{x-1}$
such that Ivy(0)= 0!
6. ### KingOrdo
119
Jokes aside, again: It need not be in "simple form"--however such a term is defined in detail--but it must be a solution (in the sense that it must help me calculate, in principle at least, the integral).
7. ### ice109
do you the antiderivative or the definite integral?
8. ### arildno
12,015
Why do you need an anti-derivative in order to get good value estimates of definite integrals?
9. ### KingOrdo
119
I can't answer this until I know what verb you left out of your question.
You don't. No one claimed such a thing.
10. ### arildno
12,015
What do you mean by "must be a solution"?
The integrand can be shown to be integrable, that's all you need to check.
11. ### KingOrdo
119
I mean it must not be trivial (cf. HallsofIvy's "solution").
I am asking how to integrate it.
12. ### morphism
2,020
The answer probably is "no one knows".
13. ### Gib Z
3,348
HallsofIvy's solution is the best you will get, and there's actually nothing wrong with it either.
Here I quote Courant from page 242, Volume 1;
After stating that in the 19th century it was proven certain elementary integrands did not have elementary antiderivatives-
14. ### ice109
do you need the antiderivative or the definite integral over some interval
15. ### Gib Z
3,348
O, ice's post reminds me; if you're a desperate little one, less than a year ago I remember a paper off arVix or however you spell that site, it's well known, about using hypergeometric series in a method of "approximate solutions for antiderivatives". I am not sure about the credibility of the paper, and when I skimmed through it, it seemed very elaborate. Obviously it didn't get much attention from the mathematical community, as it didn't really have any real use. Which once again brings us back to the fact that Hall's solution really actually is the best you can do in this case.
If its a definite integral, nothings stopping you from getting any degree of accuracy you want using numerical methods.
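For instance, a definite integral of this integrand over an arbitrary interval (here $[1,2]$, chosen only for illustration) is easy to evaluate with standard quadrature; a short SciPy sketch:

```python
from scipy.integrate import quad

value, est_error = quad(lambda x: x ** (x - 1), 1, 2)
print(value, est_error)   # quad also reports its own error estimate
```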
16. ### KingOrdo
119
HallsofIvy's "solution" is in fact worse than incorrect, because it is useless.
Antiderivative.
Thanks; I'll take a look at the arXiv. Any other suggestions along these lines would be appreciated.
17. ### Gib Z
3,348
Ok. Lets go back to the days when the natural logarithm was not yet defined, and some poor souls ran into $$\int^x_1 \frac{1}{t} dt$$.
Yes, most just complained that it didn't have any nice anti derivative at all! How could they cope?
But some of the bright ones said, "well nothings going to stop me from defining a new function, and showing it has these nice properties, like f(ab) = f(a) + f(b), etc etc. Ooh, hold on a second, these properties are the same ones the inverse of the exponential function must have! Hooray!"
Point is, you don't need a nice analytical solution to everything to work out somethings properties and actually still achieve a "solution".
It may be a nice exercise for you to prove $$\int^{ab}_1 \frac{1}{t} dt = \int^b_1 \frac{1}{t} dt + \int^a_1 \frac{1}{t} dt$$. =]
18. ### Defennder
2,616
Here's what Wikipedia says:
In other words, you don't need to know the anti-derivative to approximate the definite integral, which is pretty much what the others have said. That function doesn't have an elementary anti-derivative.
19. ### KingOrdo
119
Gib Z, I do not require a "nice analytical solution". But I do require a non-trivial one.
And I do not want to know the definite integral. Had I wanted the definite integral, I would have asked for it. When I say, 'What is A?', I want to know what A is--not what B is.
20. ### Ben Niehoff
1,663
KO, you must understand that a great many people come to this board asking the wrong questions, or using the wrong terminology, because they don't understand what they are asking about. It is only natural for us to ask for clarifications and to clear up potential misunderstandings. That we have done so here should give you no cause to be an ass.
Anyway, there is a very easy way for you to answer your own question. First, assume that your integral has an antiderivative, and call it f:
$$f(x) = \int_A^x t^{t-1} dt$$
for some constant A. Now, suppose that f(x) has a power series representation:
$$f(x) = \sum_{k=0}^{\infty} C_k (x-x_0)^k$$
Now, you need to find some x_0 about which to expand the power series. The function x^(x-1) is undefined at x=0, so the integral might not exist there.
At any rate, once you choose an appropriate x_0, you can begin by taking derivatives of f(x), and evaluating them at x=x_0. You should then find an easy way to get all of the coefficients C_k.
Have fun.
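A sketch of that procedure with a computer algebra system (SymPy here; the expansion point $x_0=1$ and the truncation order are arbitrary choices):

```python
from sympy import symbols, exp, log, series, integrate

x, t = symbols('x t', positive=True)

# t**(t-1) = exp((t-1)*log(t)); expand about t = 1 and drop the O() term.
integrand = series(exp((t - 1) * log(t)), t, 1, 5).removeO()

# Term-by-term integration gives a local approximation to f(x) = Int_1^x t^(t-1) dt.
f_approx = integrate(integrand, (t, 1, x))
print(f_approx.expand())
```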
https://export.arxiv.org/abs/2101.07230 | hep-lat
# Title: Contribution of the QCD $\Theta$-term to nucleon electric dipole moment
Abstract: We present a calculation of the contribution of the $\Theta$-term to the neutron and proton electric dipole moments using seven 2+1+1-flavor HISQ ensembles. We also estimate the topological susceptibility for the 2+1+1 theory to be $\chi_Q = (66(9)(4) \rm MeV)^4$ in the continuum limit at $M_\pi = 135$ MeV. The calculation of the nucleon three-point function is done using Wilson-clover valence quarks. The CP-violating form factor $F_3$ is calculated by expanding in small $\Theta$. We show that lattice artifacts introduce a term proportional to $a$ that does not vanish in the chiral limit, and we include this in our chiral-continuum fits. A chiral perturbation theory analysis shows that the $N(0) \pi(0)$ state should provide the leading excited state contribution, and we study the effect of such a state. Detailed analysis of the contributions to the neutron and proton electric dipole moment using two strategies for removing excited state contamination are presented. Using the excited state spectrum from fits to the two-point function, we find $d_n^\Theta$ is small, $|d_n^\Theta| \lesssim 0.01 \overline \Theta e$ fm, whereas for the proton we get $|d_p^\Theta| \sim 0.02 \overline \Theta e$ fm. On the other hand, if the dominant excited-state contribution is from the $N \pi$ state, then $|d_n^\Theta|$ could be as large as $0.05 \overline \Theta e$ fm and $|d_p^\Theta| \sim 0.07 \overline \Theta e$ fm. Our overall conclusion is that present lattice QCD calculations do not provide a reliable estimate of the contribution of the $\Theta$-term to the nucleon electric dipole moments, and a factor of ten higher statistics data are needed to get better control over the systematics and possibly a $3\sigma$ result.
Comments: 27 pages, 21 figures
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph)
Report number: LA-UR-20-30515
Cite as: arXiv:2101.07230 [hep-lat] (or arXiv:2101.07230v1 [hep-lat] for this version)
## Submission history
From: Tanmoy Bhattacharya [view email]
[v1] Mon, 18 Jan 2021 18:30:01 GMT (721kb,D)
https://selfstudypoint.in/chemical-equation/ | # Chemical Equation
## What is a Chemical Equation:
A chemical equation is the symbolic representation of a chemical reaction in the form of symbols and formulae, where the reactants are given on the left-hand side and the products on the right-hand side. The coefficients next to the symbols of entities are the absolute values of the stoichiometric numbers.
2HCl + 2Na → 2NaCl + H2
This equation indicates that sodium and HCl react to produce NaCl and H2. It also indicates that two sodium atoms are required for every two hydrochloric acid molecules, and the reaction will form two units of sodium chloride and one molecule of hydrogen gas.
## Reactants and Products:
Reactant are the substances present at the beginning of a chemical reaction. In the burning of natural gas, for example, methane (CH4) and oxygen (O2) are the reactants in the chemical reaction.
Products are the substances formed by a chemical reaction. In the burning of natural gas, carbon dioxide (CO2) and water (H2O) are the products formed by the reaction.
## Balanced Chemical Equation:
According to the law of conservation of mass, when a chemical reaction occurs, the mass of the products should be equal to the mass of the reactants. Therefore, the number of atoms of each element does not change in the chemical reaction. As a result, the chemical equation that shows the chemical reaction needs to be balanced. A chemical equation is balanced when the number of atoms of each element on the reactant side is equal to the number of atoms of that element on the product side.
## Steps to Balance an Equation
To balance an equation, here are the things we need to do:
• Count the atoms of each element in the reactants and the products.
• Use coefficients; place them in front of the compounds as needed.
Let’s take a example equation to balance:
HCl + Na → NaCl + H2
Reaction above is a chemical equation but not balanced. As amount molecules of elements are different in reactant and product.
As hydrogen(H) have molecule in product side. So multiply HCl by 2.
2HCl + Na → NaCl + H2
Now chlorine(Cl) have 2 molecules in reactant side. So multiply NaCl by 2 to balance Cl.
2HCl + Na → 2NaCl + H2
This time sodium(Na) have 2 molecules in product side. So multiply Na by 2.
2HCl + 2Na → 2NaCl + H2
This is a balanced chemical equation. As number of molecule in of each element is same both sides.
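The same bookkeeping can be automated: write one column per species and one row per element, then take the integer null space of the resulting matrix. A small SymPy sketch (the column order HCl, Na, NaCl, H2 simply follows the equation above):

```python
from sympy import Matrix, lcm

# Columns: HCl, Na, NaCl, H2.  Rows: H, Cl, Na.
# Reactant species count +1 per atom, product species count -1 per atom.
A = Matrix([
    [1, 0,  0, -2],   # H
    [1, 0, -1,  0],   # Cl
    [0, 1, -1,  0],   # Na
])

v = A.nullspace()[0]                    # balanced ratios, possibly fractional
scale = lcm([term.q for term in v])     # clear any denominators
print([term * scale for term in v])     # [2, 2, 2, 1]  ->  2HCl + 2Na -> 2NaCl + H2
```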
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-5-section-5-3-special-factoring-5-3-exercises-page-343/41 | ## Intermediate Algebra (12th Edition)
$(x+4)(x^2-4x+16)$
$\bf{\text{Solution Outline:}}$ To factor the given expression, $x^3+64 ,$ use the factoring of the sum/difference of $2$ cubes. $\bf{\text{Solution Details:}}$ Using $(a\pm b)(a^2\mp ab+b^2)$ or the factoring of the sum/difference of $2$ cubes, the expression above is equivalent to \begin{array}{l}\require{cancel} (x+4)[(x)^2-x(4)+(4)^2] \\\\= (x+4)(x^2-4x+16) .\end{array}
https://www.octopus-code.org/documentation/main/developers/code_documentation/multisystem_framework/energies/ | # Calculating energies
In the multisystem framework, the total energy (total_energy) of a calculation consists of various contributions, where we follow the standard definitions of thermodynamics (see here):
• The kinetic energy (kinetic_energy)
• The internal energy (internal_energy)
• The potential energy (potential_energy)
### Kinetic energy
We use the term kinetic energy in the usual sense. The only slight exception is for container systems:
multisystem_update_kinetic_energy()
Specific systems need their specific routines to calculate the kinetic energy.
### Interaction energies
Everything else is treated as an interaction energy. Here we have to distinguish between the interactions between two different systems (which can be of the same type), and the interaction of a system with itself (intra-interaction).
#### inter-interactions
The interactions between two different systems do not require any further thought, as by definition no physical self-interaction can occur. As the method to calculate the interactions is part of the interaction class, it is independent of the actual system and can be implemented at the level of the class system_t.
system_update_potential_energy()
The exception is containers. Here we need to loop over the constituents. In order to distinguish inter- from intra-interactions, we need to query each interaction for its interaction partner and skip the interaction if the partner is part of the container.
multisystem_update_potential_energy()
#### intra-interactions
Systems may contain more than one physical particle (e.g. the electrons, a set of ions or container systems). In order to account for the interaction of these particles with other particles of the same system, we decided to treat this case as a system interacting with itself, which we call intra-interaction.
In some cases, such as a single particle, this intra-interaction has to be zero, while in other cases with many particles, the interactions have to be calculated, where – of course – the interaction of one particle with itself has to be removed (at least approximately).
Another important aspect of the implementation is that Octopus deals with one-sided interactions. This has the implication that there is no double counting when calculating the interaction energies. Both contributions have to be counted: for each system, we add the energy of the system in the field of the partner.
system_update_internal_energy()
multisystem_update_internal_energy()
### Total energy
The total energy of the whole simulation (top level container) is clearly defined. It contains the sum of all internal and interaction energies.
For a specific system (e.g. electrons), the total energy also is the sum of the internal energy and the interaction energies (including the intra interaction energy).
system_update_total_energy()
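A rough illustration of the bookkeeping described on this page (illustration only: the routines listed above are Fortran code inside Octopus, and the Python class, names, and numbers below are invented for the sketch):

```python
class ToySystem:
    def __init__(self, name, kinetic, interactions):
        self.name = name
        self.kinetic = kinetic
        # One-sided interactions: (partner name, energy of *this* system in the
        # partner's field), so nothing is double counted.
        self.interactions = interactions

    def internal_energy(self):
        # intra-interaction: the system interacting with itself
        return sum(e for partner, e in self.interactions if partner == self.name)

    def potential_energy(self):
        # inter-interactions: all partners other than the system itself
        return sum(e for partner, e in self.interactions if partner != self.name)

    def total_energy(self):
        return self.kinetic + self.internal_energy() + self.potential_energy()


electrons = ToySystem("electrons", kinetic=2.0,
                      interactions=[("electrons", -1.5), ("ions", -3.0)])
ions = ToySystem("ions", kinetic=0.1,
                 interactions=[("ions", 0.7), ("electrons", -3.0)])

# Total energy of the top-level container: sum over all constituents.
print(sum(s.total_energy() for s in (electrons, ions)))
```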
https://annals.math.princeton.edu/2007/166-2/p02 | # Lehmer’s problem for polynomials with odd coefficients
### Abstract
We prove that if $f(x)=\sum_{k=0}^{n-1} a_k x^k$ is a polynomial with no cyclotomic factors whose coefficients satisfy $a_k\equiv1$ mod 2 for $0\leq k\lt n$, then Mahler’s measure of $f$ satisfies $\log {\rm M}(f) \geq \frac{\log 5}{4}\left(1-\frac{1}{n}\right).$ This resolves a problem of D. H. Lehmer [12] for the class of polynomials with odd coefficients. We also prove that if $f$ has odd coefficients, degree $n-1$, and at least one noncyclotomic factor, then at least one root $\alpha$ of $f$ satisfies $\left\lvert\alpha\right\rvert > 1 + \frac{\log3}{2n},$ resolving a conjecture of Schinzel and Zassenhaus [21] for this class of polynomials. More generally, we solve the problems of Lehmer and Schinzel and Zassenhaus for the class of polynomials where each coefficient satisfies $a_k\equiv1$ mod $m$ for a fixed integer $m\geq2$. We also characterize the polynomials that appear as the noncyclotomic part of a polynomial whose coefficients satisfy $a_k\equiv1$ mod $p$ for each $k$, for a fixed prime $p$. Last, we prove that the smallest Pisot number whose minimal polynomial has odd coefficients is a limit point, from both sides, of Salem [19] numbers whose minimal polynomials have coefficients in $\{-1,1\}$.
## Authors
Peter Borwein
Department of Mathematics, Simon Fraser University, Burnaby BC V5A 1S6, Canada
Edward Dobrowolski
Department of Mathematics, College of New Caledonia, Prince George, B.C. V2N 1P8, Canada
Michael J. Mossinghoff
Department of Mathematics, Davidson College, Davidson, NC 28035, United States
https://www.maplesoft.com/support/help/maple/view.aspx?path=MTM/quorem | quorem - Maple Help
MTM
quorem
polynomial quotient and remainder
Calling Sequence
q, r := quorem(A, B)
q, r := quorem(A, B, x)
Parameters
A - expression or array
B - expression or array
q - variable
r - variable
x - (optional) variable
Description
• The quorem(A,B,x) function computes the element-wise quotient and remainder of A and B. Each expression in A and B is interpreted as a polynomial of x.
• If the optional argument x is omitted, then x is equal to findsym(A,1) if findsym(A,1) is not empty. Otherwise, x is equal to findsym(B,1) if findsym(B,1) is not empty. Otherwise, each expression in A and B must evaluate to an integer. In this last case, the quorem(A,B) function computes the element-wise integer quotient and remainder of A and B.
• If A is a scalar, then A is divided by each element of B.
• If B is a scalar, then each element of A is divided by B.
• When both A and B are non-scalar, they must be the same size.
Examples
> $\mathrm{with}(\mathrm{MTM}):$
> $A := \mathrm{Matrix}(2, 3, '\mathrm{fill}' = 108\,x\,y\,z^2 + y^3):$
> $B := \mathrm{Matrix}(2, 3, '\mathrm{fill}' = 27\,x\,y):$
> $\mathrm{quorem}(A, B)$
$\left[\begin{array}{ccc} 4z^2 & 4z^2 & 4z^2 \\ 4z^2 & 4z^2 & 4z^2 \end{array}\right]$ (1)
> $q, r := \mathrm{quorem}(\langle x^3-10x^2+31x-30,\, x^2-1\rangle, \langle x^3-12x^2+41x-42,\, x^2+2x+1\rangle):$
> $q$
$\left[\begin{array}{c} 1 \\ 1 \end{array}\right]$ (2)
> $r$
$\left[\begin{array}{c} 2x^2-10x+12 \\ -2x-2 \end{array}\right]$ (3)
> $q, r := \mathrm{quorem}(\langle\langle 56, 23\rangle \,|\, \langle 45, 24\rangle\rangle, \langle\langle 2, 7\rangle \,|\, \langle 5, 0\rangle\rangle)$
$q, r := \left[\begin{array}{cc} 28 & 9 \\ 3 & \infty \end{array}\right], \left[\begin{array}{cc} 0 & 0 \\ 2 & \mathrm{undefined} \end{array}\right]$ (4)
> $q$
$\left[\begin{array}{cc} 28 & 9 \\ 3 & \infty \end{array}\right]$ (5)
> $r$
$\left[\begin{array}{cc} 0 & 0 \\ 2 & \mathrm{undefined} \end{array}\right]$ (6)
> $q, r := \mathrm{quorem}(x^2+y, y^2+x, x):$
> $q$
$-y^2+x$ (7)
> $r$
$y^4+y$ (8)
> $q, r := \mathrm{quorem}(x^2+y, y^2+x, y):$
> $q$
$0$ (9)
> $r$
$x^2+y$ (10)
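For readers without Maple at hand, the two scalar cases at the end can be cross-checked with SymPy's polynomial division (a sketch; `div` plays the role of quorem for a single expression):

```python
from sympy import symbols, div

x, y = symbols('x y')

q, r = div(x**2 + y, y**2 + x, x)   # w.r.t. x:  q = x - y**2,  r = y**4 + y
print(q, r)

q, r = div(x**2 + y, y**2 + x, y)   # w.r.t. y:  q = 0,  r = x**2 + y
print(q, r)
```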
http://math.stackexchange.com/questions/219629/how-do-we-find-the-length-of-the-line-parametric-curve | # How do we find the length of the line (parametric curve)?
A curve in the $xy$-plane is given parametrically by $$x(t) = e^{2t}, \quad y(t) = e^{2t} \sin(2t), \quad t \in [0, \pi/2].$$ What is the length of this curve?
Ok, actually I know what to do, but I don't know how to do it because I can't get rid of the trigonometric term.
If it were $x(t) = e^{2t}\cos(t)$, I could have done it, but it's not so I can't get rid of the trigonometric term and integrate the expression.
-
You've gotten some pretty solid answers to some of your questions. I would strongly suggest looking them over and accepting them if they meet your needs. – user17794 Oct 23 '12 at 20:26
I agree with Duff. For this question, can you put the work you've done so far? It sounds like you have some idea about how to start but can't complete the argument. – Eric Stucky Oct 23 '12 at 20:29
I erased everything I've done. I spent like 15 min trying to figure out what to do. – Gladstone Asder Oct 23 '12 at 21:09
With $x=e^{2t} \Rightarrow \dot{x} = 2 x$ and $y = e^{2t} \sin 2t = x \sin 2t \Rightarrow \dot{y} = \dot{x} \sin 2 t + 2 x \cos 2 t = 2 x \left(\sin 2 t + \cos 2 t\right)$, we want to calculate $$\begin{eqnarray} \int_0^{\pi/2} dt \left({\dot{x}}^2 + {\dot{y}}^2\right)^{1/2} &=& \int_0^{\pi/2} dt \left[\left(2x\right)^2 + \left(2 x\right)^2 \left(\sin 2 t + \cos 2 t\right)^2\right]^{1/2} \\ &=& 2 \int_0^{\pi/2} dt \ e^{2t} \left[1 + \left(\sin 2 t + \cos 2 t\right)^2\right]^{1/2} \\ &=& \sqrt{2} \int_0^{\pi} du \ e^u \left(1 + \sin u \cos u\right)^{1/2} \end{eqnarray}$$ Wolfram cannot do that integral analytically.
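Since there is no elementary antiderivative, the length itself is most easily obtained numerically; a short SciPy check of the integral above (written directly in $t$):

```python
import numpy as np
from scipy.integrate import quad

# Arc length of x = e^(2t), y = e^(2t) sin(2t) on [0, pi/2]:
# integrate sqrt(x'(t)^2 + y'(t)^2) numerically.
def speed(t):
    x_dot = 2 * np.exp(2 * t)
    y_dot = 2 * np.exp(2 * t) * (np.sin(2 * t) + np.cos(2 * t))
    return np.hypot(x_dot, y_dot)

length, est_error = quad(speed, 0, np.pi / 2)
print(length, est_error)
```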
http://wiki.zcubes.com/index.php?title=Manuals/calci/CHITEST&oldid=213850 | # Manuals/calci/CHITEST
CHITEST (ActualRange,ExpectedRange)
• ActualRange is the array of observed values.
• ExpectedRange is the array of expected values.
• CHITEST() returns the test for independence.
## Description
• It is a test for independence.
• This function gives the value from the chi-squared distribution with the appropriate degrees of freedom, i.e. it calculates the chi-squared statistic and the degrees of freedom, then calls CHIDIST.
The conditions of the test are:
The table should be 2x2 or larger.
Each observation should be independent of the others.
All expected values should be 10 or greater.
Each cell has an expected frequency of at least five.
• The test first calculates a $\chi^2$ statistic using the formula:
$$\chi^2 = \sum_{i=1}^{r}\sum_{j=1}^{c}\frac{(A_{ij}-E_{ij})^2}{E_{ij}}$$
• $A_{ij}$ is the observed value and $E_{ij}$ is the expected value in the i-th row and j-th column of the given set of values.
• observed and expected must have the same number of rows and columns and there must be at least 2 values in each.
• A low result of $\chi^2$ is an indicator of independence.
• From the formula of $\chi^2$ we will get that $\chi^2$ is always positive or 0.
• $\chi^2 = 0$ only if $A_{ij} = E_{ij}$ for each $i$ and $j$.
• CHITEST uses the $\chi^2$ distribution with the number of Degrees of Freedom df,
• where $df = (r-1)(c-1)$, with $r$ = number of rows and $c$ = number of columns.
• If $r = 1$ and $c > 1$, then $df = c-1$, or if $r > 1$ and $c = 1$, then $df = r-1$.
If $r = c = 1$, then this function will give the error result.
• The obtained result is entered in the Chi square distribution table with the obtained degrees of freedom.
• This returns the test for independence (probability).
## ZOS
• The syntax to calculate CHITEST in ZOS is CHITEST(ActualRange, ExpectedRange),
• where ActualRange is the array of observed values
• and ExpectedRange is the array of expected values.
• For e.g., CHITEST([60,72,86,45],[57.08,75.10,87.1,42.45])
Chi-Squared Test
## Examples
A student investigated the chance of getting viral fever among the students of a school who took vitamin tablets every day for a period. The total number of students is 880. Of these, 639 students did not get viral fever and 241 students got fever, but the expected ratio is 1:3.
• If the ratio is 1:3 and the total number of observed individuals is 880, then the expected numerical values should be: 660 will not get fever and 220 students will get fever.
|                 | No Fever | Get Fever |
|-----------------|----------|-----------|
| Observed Values | 639      | 241       |
| Expected Values | 660      | 220       |
$\chi^2 = \dfrac{(639-660)^2}{660} + \dfrac{(241-220)^2}{220} = 0.668 + 2 = 2.668$
• The $\chi^2$ value is 2.668.
• Now the degrees of freedom are $df = c - 1 = 2 - 1 = 1$.
• From the Chi Squared Distribution probability table with $df = 1$, the probability value for 2.668 is 0.10.
CHITEST(or,er) = 0.10
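The same numbers are easy to check in a couple of lines of Python (SciPy's chisquare uses $df = k-1$ for a single row of $k$ categories, matching the $df = 1$ above):

```python
from scipy.stats import chisquare

statistic, pvalue = chisquare([639, 241], f_exp=[660, 220])
print(statistic, pvalue)   # roughly 2.67 and 0.10, agreeing with the table lookup
```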
Chi Square Test
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2989 | ## Residual Demand Modeling and Application to Electricity Pricing
• Worldwide the installed capacity of renewable technologies for electricity production is rising tremendously. The German market is particularly progressive and its regulatory rules imply that production from renewables is decoupled from market prices and electricity demand. Conventional generation technologies are to cover the residual demand (defined as total demand minus production from renewables) but set the price at the exchange. Existing electricity price models do not account for the new risks introduced by the volatile production of renewables and their effects on the conventional demand curve. A model for residual demand is proposed, which is used as an extension of supply/demand electricity price models to account for renewable infeed in the market. Infeed from wind and solar (photovoltaics) is modeled explicitly and withdrawn from total demand. The methodology separates the impact of weather and capacity. Efficiency is transformed on the real line using the logit-transformation and modeled as a stochastic process. Installed capacity is assumed to be a deterministic function of time. In a case study the residual demand model is applied to the German day-ahead market using a supply/demand model with a deterministic supply-side representation. Price trajectories are simulated and the results are compared to market future and option prices. The trajectories show typical features seen in market prices in recent years and the model is able to closely reproduce the structure and magnitude of market prices. Using the simulated prices it is found that renewable infeed increases the volatility of forward prices in times of low demand, but can reduce volatility in peak hours. Prices for different scenarios of installed wind and solar capacity are compared and the merit-order effect of increased wind and solar capacity is calculated. It is found that wind has a stronger overall effect than solar, but both are even in peak hours.
https://math.stackexchange.com/questions/2863332/different-eigenvalues-of-the-same-linear-transformation-according-different-base | # Different eigenvalues of the same linear transformation according different bases
I have a question about linear transformation and eigenvalues.
My question:
Given a linear transformation $T:R^3 \to R^3$, And: $E$ is the standard basis of $R^3$, $B$ is another basis of $R^3$.
Let's denote: $A=[T]^B_B$ .
Let's assume that after gaussian elimination process on $A$ we get a matrix $M$ with one row of $0$'s, and now we calculate the eigenvalues of $M$.
• Are the eigenvalues of $M$ also the eigenvalues of the transformation $T?$
I think yes, because the eigenvalues don't change when you change basis, but the correct answer is no, can someone explain to me why?
By the way, is it correct to say that we always must work only with the standard basis of $R^3$ to find the eigenvalues of $T?$ (i.e. the eigenvalues of $T$ are the roots of the characteristic polynomial $P_A = Det(A- \lambda \cdot I)$ where $A=[T]^E_E$, and $E$ is the standard basis)?
Thanks for help!
• Row operations will change the eigenvalues. Otherwise, every invertible matrix would only have the eigenvalue $1$, since they all can be row-reduced to the identity matrix. – Theo Bendit Jul 26 '18 at 11:39
• In gaussian elimination you change both bases to possibly different ones (e.g. $M = [T]^C_D$, where $C \neq D$). – Stefan Jul 26 '18 at 11:40
For your first question, I want to cite @Theo Bendit's comment and generally add: Gaussian elimination changes many properties of matrices if you're not careful with it, including eigenvalues and, e.g., the determinant (at least in general).
For your second question: No, the eigenvalues of the operator $T$ do not depend on the basis chosen, if you calculate the roots of $p_A$ for $A$ being a corresponding representation:
Let $A$ and $B$ be similar, that is they represent the same endomorphism w.r.t. different bases, i.e. $B=CAC^{-1}$ for some invertible $C$(which you might call the change-of-basis matrix). Then
$$B-xI=CAC^{-1}-xI=CAC^{-1}-xCIC^{-1}=C(A-xI)C^{-1}$$
Thus, as the determinant distributes over matrix multiplication, we have
$$p_B=\mathrm{det}(B-xI)=\mathrm{det}(C(A-xI)C^{-1})=\mathrm{det}(C)\cdot\mathrm{det}(A-xI)\cdot\mathrm{det}(C^{-1})=\mathrm{det}(C)\cdot\mathrm{det}(A-xI)\cdot\mathrm{det}(C)^{-1}=\mathrm{det}(A-xI)=p_A$$
The last steps follow from the elementary property of determinants for invertible matrices that $\mathrm{det}(C^{-1})=\mathrm{det}(C)^{-1}$.
EDIT: Note that it thus makes sense to define the characteristic polynomial for an endomorphism, i.e. to define $p_T$, just as it made sense to define the determinant for endomorphisms instead of only matrices.
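Both points are also easy to see numerically; a small NumPy sketch (the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # eigenvalues 1 and 3

# Change of basis (similarity transform): the spectrum is unchanged.
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = C @ A @ np.linalg.inv(C)
print(np.linalg.eigvals(A))             # [3. 1.] (ordering may differ)
print(np.linalg.eigvals(B))             # same eigenvalues, up to rounding

# A single row operation (add -1/2 of row 1 to row 2) is not a similarity transform.
E = np.array([[1.0, 0.0],
              [-0.5, 1.0]])
print(np.linalg.eigvals(E @ A))         # [2.  1.5] -- the eigenvalues have changed
```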
https://www.kartook.com/applications/2007-m-office-add-in-microsoft-save-as-pdf-or-xps/ | # 2007 M\$ Office Add-in: Microsoft Save as PDF or XPS
This download allows you to export and save to the PDF and XPS formats in eight 2007 Microsoft Office programs. It also allows you to send as e-mail attachment in the PDF and XPS formats in a subset of these programs. Specific features vary by program.
This Microsoft Save as PDF or XPS Add-in for 2007 Microsoft Office programs supplements and is subject to the license terms for the 2007 Microsoft Office system software. You may not use this supplement if you do not have a license for the software.
• Supported Operating Systems: Windows Server 2003; Windows Vista; Windows XP Service Pack 2
http://www.thesubversivearchaeologist.com/2012/03/sideways-sunday-einstein-errs.html | ## Sunday, 18 March 2012
### Sideways Sunday: Einstein Errs?
Even I go off the track at times. But this is ridiculous. I've never gotten the whole time dilation 'thing.' As far as I'm concerned, anthropologically speaking [of course], time is something we humans have constructed to talk about processes: things that have a beginning, a middle, and an end. As an archaeologist who deals with immense time depth I can't imagine that the universe gives a flying hooh-hah about its passage. And so the square peg of 'time-space' never made it through the more-or-less circular hole in my head where knowledge usually manages to insert itself. The worst of it is that I'm not a mathematician.
Arithmetic? Fine. Algebra? Barely. Calculus? Forget it. Einstein's math? Only in my wildest dreams. So, it's kind of funny, don't you think, that I should be worried about Special Relativity? [I think it's funny! And perverse, given my thoroughly innumerate self.] And so, what do I do about it? I end up rooting around in the most basic explanation for time dilation due to Special Relativity. And what do I find? Implications that I couldn't, in a million years, make sense of in a way that Einstein's theoretical progeny would understand. It's for that reason that I'm going to impose it on my subversive pals. See what you think.
[Those of you with more than a sprinkling of physics might wish to disabuse me of any misconceptions in what follows. But please do it gently. My ego is, after all, fragile.]
Patent Office Clerk A. Einstein (ca. 1905)
Einstein’s Theory of Special Relativity* is often illustrated using an archetypal cartoon—a clock, attached to a train, traveling at or near the speed of light. This is a kind of ‘thought experiment’, a heuristic device with a long and venerable history in physical investigation. However, as an example of temporal relativity, the ‘moving train’ model can be seen to fall short of the claims that have been made on its merits, and the mathematical constructs that would seem to support it. As I hope to demonstrate in this post, it's because the observations necessary to support the claim would be unobtainable in the physical world (the same world, one presumes, that Einstein was trying to describe and explain with his theory of special relativity). In what follows I examine the assumptions implicit in this classic thought experiment, suggest what an objective observer would really perceive as the train sped past, and with elementary school arithmetic, demonstrate that this attempt to model Special Relativity fails to represent empirical reality.
Imagine that a mag-lev train is traveling from left to right, in a vacuum, in total darkness, on a horizontal, linear path very near the speed of light. The only force acting on the train is that which propels it. The train carries a very precise clock that can emit a continuous stream of photons that exits the train horizontally at 90° to the direction of travel. In Einstein’s thought experiment there is a stationary observer; in my scenario the observer is a charge-coupled device (CCD) that is sensitive enough to discern the first photon emitted and each one thereafter.
The train is traveling exactly 1.0 m/s below the speed of light, or 299,792,457 m/s. Thus, during one microsecond-long interval the train travels 299.792457 m (or about the length of three Canadian football fields**). In the standard story, a stationary observer sees the light traveling further in a given unit of time than would an observer on the train. By the time Einstein was mulling this over, most of the math to support the concept was already in place, in the form of the so-called Lorentz Transformation.*** Einstein’s contribution was to propose that the speed of light is a constant. With that in mind, he inferred that time must therefore be slowed down relative to the stationary observer, even as time passes normally for a passenger aboard the train.
Suppose that at the very moment the train passes at 90° to our CCD, the photon stream begins. Because the speed of light is invariable, by definition that photon would take 1.0 µs to cover the 299.792458 m to our CCD. And here is where the experiment suffers its first set-back. By definition we would be unable to begin timing the train’s progress because, simple mechanics tells us, the photon that marked the precise moment the train passed would never impinge on the CCD. Rather, after 1.0 µs the first photon emitted would arrive at a point 299.792457 m to the right and 0.000001 m to the rear of the observation point. Thus, the CCD would not have detected the first photon. Even at this early stage of the experiment one already has difficulty reconciling objective reality with Einstein’s theory, because the experiment’s success depends on an observer seeing the photon stream at the moment the train reaches a point at 90° to the observation point. For any theory to be so at odds with empirical reality, and yet be so universally accepted, strikes me as odd. For, if you can't observe a phenomenon, how on Earth can you claim to understand its behavior?
Even if one were to shorten the distance of the observation point the reality would be the same. A photon emitted as the train passed a CCD only a millionth of a meter less than a light nanosecond away from the train (i.e. 0.299792458 m) would still miss the mark and would therefore be imperceptible. In this case, after a nanosecond, the photon would end up nearer to the CCD than in the previous example, only 0.299792457 m to the right of the observation point (or about the length of a northern European adult male’s foot).
In reality, for the CCD ever to ‘perceive’ a photon emitted at the moment the train passes, the CCD would need to be so close to the source (i.e. one photon’s diameter away) as to render the experiment, to all intents and purposes, meaningless. And, because any photon emitted in the manner described above would continue moving away from the observation point in two directions at or very near the speed of light, our CCD would be in the dark for ever thereafter.
At this point the reader might be tempted to say, “Well, this is just a thought experiment, after all. What matters is the concept.” In response, I would suggest that if the ‘concept’ cannot be replicated in the physical world, even in theory, what possible value could there be in the model as a representation of the claim that time is relative?
Summarizing to this point. The photon emitted exactly at the time that the train passes the observer, and every photon emitted thereafter would be, in theory and in practice, imperceptible to any stationary observer, even a CCD capable of sensing individual photons. For a photon emitted from the train ever to reach the CCD, it would either have to be in two places at once, or be able to exceed the speed of light, perhaps by quite a bit. Thus, Einstein’s illustration fails to provide a compelling case for special relativity in a real world, and the mathematics that describe it must also fail to reproduce reality.
In every practical sense, to be able to track photons emitted from the moving train, our observer would have to be moving with the train, or be in two places at once. Clearly a moving observation point would violate the assumptions of the experiment, and an ability to be in two places at once would violate the laws of nature (Quantum Theory notwithstanding). How much faith or credence can we confidently place in Einstein’s experimental evidence of time dilation if it demands that light, itself, or matter, for that matter, behave contrary to physical limits?
Another manifestation of Einstein’s thought experiment involves a train traveling at speeds much more amenable to human perception, such as the TGV or the Bullet Train. In this alternative experiment, on board the train a beam of light is emitted from the ceiling, and aimed at the floor, such that it spans a distance of approximately 2.5 m. The observer on the train sees a constant beam of light. On the ground, the theoretical observer would see a blurred line of light that began at a point on the ceiling of the train and ended at the floor some distance to the right of its starting point. As the theory goes, the “distance” covered during the process would then be the square root of the sum of the squares of the horizontal and vertical components of the light’s travel, which is a number greater than the distance from the ceiling to the floor. As the theory of Special Relativity depicts it, the stationary observer sees that light has traveled further than it did aboard the train, because on the train it was vertical. Since the speed of light is a physical constant, and the distance traveled on the train is less than the apparent distance traveled in relation to a stationary observer, on Einstein’s account time aboard the train must have slowed down.
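Spelled out, the algebra that this cartoon is meant to encode is brief (writing v for the train's speed, c for the speed of light, t for the tick measured aboard the train, and t' for the same tick measured from the ground):

$$(c\,t')^2 = (c\,t)^2 + (v\,t')^2 \;\;\Longrightarrow\;\; t' = \frac{t}{\sqrt{1 - v^2/c^2}},$$

which is the quantitative form of the claim that the moving clock runs slow.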
Yet, as I've implied above--in theory based on physical reality--for an observer to 'see' either the photon stream emitted by the passing train, or the photon stream aimed vertically at the floor of the train from the ceiling, it would need to be on a conveyance of its own, have left a predetermined point to the left at a predetermined time, traveled the same distance at the same speed as the train, and converged at the same end point. In realistic terms, the moving observer would see the first photon emitted by the clock about halfway from the photon stream's commencement to the end point of the journey, and the same observer would record the final photon emitted at the moment it converges with the theoretical light source. Notwithstanding the cataclysm that would result in reality when the two trains collided, I think I've made the point well enough. [And, lest you think that such experiments would be unlikely to occur in the real world, think of crash-test dummies and their circumstances, then multiply by infinity (or some factor just this side of it).] If, therefore, one "unpacks" the assumptions associated with Einstein's thought experiment, it becomes clear that, regardless of the inertial frame of the observer, the "light" could not, and did not travel further in relation to a stationary frame of reference.
There is nothing inherently wrong with performing thought experiments, or with ‘discovering’ properties of nature in one’s dreams, for that matter. It has been known to happen. However it is crucially important that the imagined scenario can be replicated (at least theoretically) in the real world. When Einstein’s mental formulation of relativity is compared with empirical constraints on observation, there is clearly no accord: infeasible observation, coupled with intractable issues of timing and the physical nature of light make this demonstration of Special Relativity appear fanciful, at best, fantastic, at worst. Granted, the existence of a train traveling at the speed of light would also be highly unlikely. However it is, at least, theoretically possible. A light generator on board is also a real possibility. Equally likely is a timepiece that could control photon emission. However, that is where Einstein’s concept and reality finally part company. After comparing the imaginary to the real it is hard to escape the conclusion that Einstein’s model of special relativity presupposes sensory abilities and light behaviors that are physically impossible. I have no idea what implications this has for the theory itself.
However, someone said to me once that he could derive relativity mathematically, and that he didn’t require the cartoon or the thought experiment to help him. To him I would say simply that, while it may be possible to derive relativity mathematically, everyone knows that mathematics is a human invention, and must at times give way before ruthless empirical reality. Indeed, it seems as if one of the only ways to ‘see’ relativity is in the mind’s eye of mathematics. Ask any logical empiricist: the mind’s eye is not an objective source of empirical data.
This is the sort of 'deer caught in the headlights' look that I usually get when confronted with a similar array of mathematical notation.
All of this might make us a wee bit skeptical of Einstein’s conclusion that time is relative. [It does me, as you can imagine.] And, after a century in Einstein’s sway, it might convince us, once again, to entertain the intuitively satisfying notion that time can neither be speeded up, nor slowed down. This view of time accords much better with the anthropological insight that the consciousness of time is peculiar to humans, and that it is a cultural construct. So much for space-time. Perhaps Time exists, not as Einstein so famously said, so that everything in the universe doesn’t happen at once, but because we humans need to communicate our perception of sequential events in nature that span intervals which are meaningful only to us.
* A. Einstein, Zur Elektrodynamik bewegter Körper. Annalen der Physik 17, 891–921 (1905).
** The length of a “football” field (or pitch) is relative to the side of the Atlantic on which the English-speaking reader resides, and varies according to the rules set by the respective governing bodies of the “football” played there. On the east (or right) side of the Atlantic, the distance is 90–120 m (FIFA). On the west (or left) side of the Atlantic, the distance further depends upon which side of the Canada–U.S. border one lives. South of that line the distance is 91.44 m (NFL). North of the line, the distance is set at 100.584 m (CFL).
*** Which, curiously enough, was an attempt to understand the universe in terms of the aether, which, up until Einstein's time, was theorized to have been the 'medium' through which electromagnetic energy moved (much like sound waves through air and water waves through, well, water).
1. Wow, you have managed to critique relativity on the basis of 1) it doesn't mesh with common sense perception, 2) post-modern relativism (we can't really know anything because there is no objective "truth"), and 3) the anthropic principle (whatever exists in the universe is there so we humans can observe it).
However, there is ample real (not thought experiment) experimental evidence of time dilation, and in fact, the GPS system you use depends on it (http://en.wikipedia.org/wiki/Time_dilation).
2. Hi, Unknown,
Re: 1) Sense perception, common or otherwise, is the foundation of knowledge according to the logical empiricists, and of empiricism all the way back to Bacon. Their mantra was that without data that can be sensed (or observed), there can be no scientific knowledge. My problem with Einstein's thought experiment is not that it isn't commonsensical; it's that it doesn't involve sense at all, and is thus at odds with the empirical realm in which I prefer to ground the knowledge that I make and use.
Re: 2) I don't recall saying that we couldn't know anything, much less because there is no objective "truth." In fact I very much adhere to the need to ground knowledge by reference to the empirically knowable. As I've said before on the Subversive Archaeologist, the radical post-modern claim that 'anything goes' only demonstrates that they don't understand the implications of their claim. If, as they'd have us believe, 'anything goes,' why should we listen to them?
Re: 3) I think you've twisted my meaning with regard to our role in the construction of time. In my small world most of what goes on out there in the cosmos is of little concern to me, much less germane to my existence. Whether or not time exists as a physical parameter, the human construction of it would be irrelevant to its physics, in the same way that it was irrelevant to Light that science once thought it was a wave, then a stream of particles, and later something else, or that the moon was made of green cheese, or the earth was flat. Those forces and entities were what they were, regardless of how we viewed them.
As for the ample evidence of time dilation, it matters little to me--although the demise of this thought experiment forces me to think less of it. Nevertheless, in regard to your accusation that the universe is there for our entertainment, I have done nothing of the kind. I have simply questioned the presumably 'commonsense' manner in which time dilation is described to those of us too feeble to understand the truth of the mathematical universe.
Thanks for thinking of me.
Thanks for visiting! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.813244104385376, "perplexity": 671.9234455105641}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/CC-MAIN-20230202200542-20230202230542-00166.warc.gz"} |
https://www.sudoedu.com/en/multivariable-calculus-lecture-videos/partial-derivatives-and-applications/proof-of-clairauts-theorem-equqlity-of-mixed-derivative/ | # Proof of Clairaut's Theorem: Equality of Mixed Derivatives
This video provides the proof of Clairaut's theorem, which states that if the mixed partial derivatives are continuous on a domain, then they are equal on that domain. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.980100154876709, "perplexity": 326.9673042717328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146643.49/warc/CC-MAIN-20200227002351-20200227032351-00268.warc.gz"}
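A quick symbolic check of the statement, on an arbitrary smooth example (a minimal sketch of my own, not taken from the video):

```python
# Minimal sketch (not from the video): check that the mixed partial
# derivatives of a smooth example function agree, as Clairaut's theorem states.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x * y) * sp.sin(x + y**2)   # any C^2 example function will do

f_xy = sp.diff(f, x, y)                # differentiate in x, then in y
f_yx = sp.diff(f, y, x)                # differentiate in y, then in x

print(sp.simplify(f_xy - f_yx))        # prints 0
```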
https://mathstodon.xyz/@vam103/101227798005584190 | The Gramm-Schmit process.... Exactly what you think you do to turn a basis into an orthonormal basis.
A Mastodon instance for maths people. The kind of people who make $$\pi z^2 \times a$$ jokes. Use $$ and $$ for inline LaTeX, and $ and $ for display mode. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797647595405579, "perplexity": 2813.5275682294896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986679439.48/warc/CC-MAIN-20191018081630-20191018105130-00415.warc.gz"} |
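Since the toot above only gestures at the procedure, here is a minimal NumPy sketch of classical Gram-Schmidt (my own illustration; for serious numerical work `np.linalg.qr` is the stabler choice):

```python
# Classical Gram-Schmidt: orthonormalize the columns of a basis matrix.
import numpy as np

def gram_schmidt(basis: np.ndarray) -> np.ndarray:
    """Return an orthonormal basis spanning the same space as the columns
    of `basis` (columns assumed linearly independent)."""
    q = np.zeros_like(basis, dtype=float)
    for j in range(basis.shape[1]):
        v = basis[:, j].astype(float)
        for i in range(j):
            # subtract the projection onto each vector already built
            v = v - (q[:, i] @ basis[:, j]) * q[:, i]
        q[:, j] = v / np.linalg.norm(v)          # normalize
    return q

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(A)
print(np.round(Q.T @ Q, 10))                     # identity: columns are orthonormal
```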
https://www.physicsforums.com/threads/trig-function.89034/ | # Trig Function
1. Sep 14, 2005
### cscott
Trig Functions
$$2y - 5 = \sin(144t - 45)$$
How can I find when the object is at equilibrum? I know it's when y = 0, but how do I solve from there? I tried arcsine but it gives me a domain error.
How can I find the minimum in between [itex]0 \le t \le 10[/itex]?
Is the period of oscillation 0.625 degrees?
Last edited: Sep 14, 2005
2. Sep 14, 2005
### Staff: Mentor
What makes you think equilibrium is when y = 0? The midpoint of the motion will be when sin() = 0.
3. Sep 14, 2005
### Jameson
Also, if you solve explicitly for y, you'll see that there is a vertical shift, meaning that the y-axis is not the midpoint of this graph. Use Doc Al's advice.
As for the period, I didn't check your numbers, but remember in a sine graph in the form of $$a \sin{(bx+c)}+d$$ that $$\frac{2\pi}{|b|}$$ is equal to the period. That's in radians of course.
4. Sep 15, 2005
### cscott
Alright, I revised my answers given the replies so far. I think the period is 2.5 degrees, maximum height (the question is about a spring oscillating) is 7.5m and the first equilibrum is at t = 0.3125. Can anyone tell me if I'm correct?
I'm still having trouble with the minimum in between [itex]0 \le t \le 10[/itex]?
Last edited: Sep 15, 2005
5. Sep 15, 2005
### Staff: Mentor
The period should be in seconds, not degrees. (144 is in what units?) Rewrite your expression like this:
$$y = 2.5 + 0.5 \sin(144t - 45)$$
If you understand what this says, you should be able "read off" the equilibrium position, the amplitude, and the maximum and minimum values of y.
6. Sep 15, 2005
### cscott
Sorry, I meant at what times is the function at it's minimum between 0 <= t <= 10
As for the period, is it correct to say 2.5s instead of 2.5 degrees? I used what Jameson gave me: 360/|b| = T
I made the mistake of thinking the amplitude was 5 (no idea where I got that number, I've been juggling questions all night ;)... I see the max height should be 3 m, correct (assuming it's m vs t)?
7. Sep 16, 2005
### Staff: Mentor
y will be a minimum wherever sin() is at its minimum, which is when sin() = -1.
If the 144 is degrees/sec, then 2.5s is correct.
Right. Since the sin function oscillates between -1 and +1, y will oscillate between 2 and 3.
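Pulling the numbers in this thread together, a small script (my own recap, assuming the 144 is in degrees per second, as discussed above):

```python
# Recap of the thread's answer for y = 2.5 + 0.5*sin(144*t - 45),
# with the angle in degrees and t in seconds.
period = 360.0 / 144.0                       # 2.5 s
y_max, y_min = 2.5 + 0.5, 2.5 - 0.5          # 3.0 m and 2.0 m

# Equilibrium (midpoint, y = 2.5) wherever sin(...) = 0:  144 t - 45 = 180 k
eq_times = [(45 + 180 * k) / 144.0 for k in range(9)]
eq_times = [t for t in eq_times if 0 <= t <= 10]

# Minima (y = 2.0) wherever sin(...) = -1:  144 t - 45 = 270 + 360 k
min_times = [(315 + 360 * k) / 144.0 for k in range(5)]
min_times = [t for t in min_times if 0 <= t <= 10]

print(period, y_max, y_min)                  # 2.5  3.0  2.0
print(eq_times[0])                           # 0.3125 s, the first crossing of y = 2.5
print(min_times)                             # [2.1875, 4.6875, 7.1875, 9.6875]
```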
8. Sep 17, 2005 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.922206461429596, "perplexity": 1307.6272073687117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00040-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://www.ideals.illinois.edu/handle/2142/87423 | Files in this item
FilesDescriptionFormat
application/pdf
9990170.pdf (3MB)
(no description provided)PDF
Description
Title: Contributions to Estimation in Item Response Theory
Author(s): Trachtenberg, Felicia Lynn
Doctoral Committee Chair(s): He, Xuming
Department / Program: Statistics
Discipline: Statistics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Statistics
Abstract: In the logistic item response theory models, the number of parameters tends to infinity together with the sample size. Thus, there has been a longstanding question of whether the joint maximum likelihood estimates for these models are consistent. The main contribution of this paper is the study of the asymptotic properties and computation of the joint maximum likelihood estimates, as well as an alternative estimation procedure, one-step estimation. The one-step estimates are much easier to compute, yet are consistent and first-order equivalent to the joint maximum likelihood estimates under certain conditions on the sample sizes if the marginal distribution of the ability parameter is correctly specified. The one-step estimates are also highly robust against modest misspecifications of the ability distribution. We also study the accuracy of variance estimates for the one-step estimates. Finally, we study tests of the goodness of fit for the models. We show that Rao's score test is superior to the existing chi-square tests.
Issue Date: 2000
Type: Text
Language: English
Description: 74 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2000.
URI: http://hdl.handle.net/2142/87423
Other Identifier(s): (MiAaPQ)AAI9990170
Date Available in IDEALS: 2015-09-28
Date Deposited: 2000
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8941661715507507, "perplexity": 1665.1272833228602}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887423.43/warc/CC-MAIN-20180118151122-20180118171122-00792.warc.gz"} |
https://infoscience.epfl.ch/record/219898 | Wind tunnel experiments: cold-air pooling and atmospheric decoupling above a melting snow patch
The longevity of perennial snowfields is not fully understood, but it is known that strong atmospheric stability and thus boundary-layer decoupling limit the amount of (sensible and latent) heat that can be transmitted from the atmosphere to the snow surface. The strong stability is typically caused by two factors, (i) the temperature difference between the (melting) snow surface and the near-surface atmosphere and (ii) cold-air pooling in topographic depressions. These factors are almost always a prerequisite for perennial snowfields to exist. For the first time, this contribution investigates the relative importance of the two factors in a controlled wind tunnel environment. Vertical profiles of sensible heat and momentum fluxes are measured using two-component hot-wire and one-component cold-wire anemometry directly over the melting snow patch. The comparison between a flat snow surface and one that has a depression shows that atmospheric decoupling is strongly increased in the case of topographic sheltering but only for low to moderate wind speeds. For those conditions, the near-surface suppression of turbulent mixing was observed to be strongest, and the ambient flow was decoupled from the surface, enhancing near-surface atmospheric stability over the single snow patch.
Published in:
Cryosphere, 10, 1, 445-458
Year:
2016
Publisher:
Gottingen, Copernicus Gesellschaft Mbh
ISSN:
1994-0416
Laboratories: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475607872009277, "perplexity": 3138.496318958083}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945082.84/warc/CC-MAIN-20180421071203-20180421091203-00102.warc.gz"} |
https://par.nsf.gov/biblio/10304583-spin-detection-micromechanical-trampoline-towards-magnetic-resonance-microscopy-harnessing-cavity-optomechanics | Spin detection with a micromechanical trampoline: towards magnetic resonance microscopy harnessing cavity optomechanics
Abstract
We explore the prospects and benefits of combining the techniques of cavity optomechanics with efforts to image spins using magnetic resonance force microscopy (MRFM). In particular, we focus on a common mechanical resonator used in cavity optomechanics—high-stress stoichiometric silicon nitride (Si3N4) membranes. We present experimental work with a ‘trampoline’ membrane resonator that has a quality factor above $10^6$ and an order of magnitude lower mass than comparable standard membrane resonators. Such high-stress resonators are on a trajectory to reach 0.1 $aN/Hz$ force sensitivities at MHz frequencies by using techniques such as soft clamping and phononic-crystal control of acoustic radiation in combination with cryogenic cooling. We present a demonstration of force-detected electron spin resonance of an ensemble at room temperature using the trampoline resonators functionalized with a magnetic grain. We discuss prospects for combining such a resonator with an integrated Fabry–Perot cavity readout at cryogenic temperatures, and provide ideas for future impacts of membrane cavity optomechanical devices on MRFM of nuclear spins.
Authors:
Award ID(s):
Publication Date:
NSF-PAR ID:
10304583
Journal Name:
New Journal of Physics
Volume:
21
Issue:
4
Page Range or eLocation-ID:
Article No. 043049
ISSN:
1367-2630
Publisher:
IOP Publishing
National Science Foundation
https://www.math.princeton.edu/events/concentration-properties-theta-lifts-2018-02-27t214500 | # Concentration properties of theta lifts
-
Farrell Brumley, IAS
IAS Room S-101
The classical conjectures of Ramanujan-Petersson and Sato-Tate on the Fourier coefficients of modular forms, or more generally on the Satake parameters of automorphic representations, are highly sensitive to questions of functoriality. For example, the coefficients of CM modular forms are equidistributed according to a very different law from that of non-CM forms, and the first historical counter examples to the naive generalization of the Ramanujan conjecture were found amongst the theta lifts on the group Sp4. A more recent analogue of these conjectures looks at the L^p norms of automorphic forms (with p=\infty corresponding to Ramanujan). Their concentration properties, at points or along certain cycles, are of general interest from both an analytic and arithmetic viewpoint. I will describe in this talk a few new results on the subject, joint with Simon Marshall, which attempt to clarify the structure of the problem: the L^p norms of an automorphic form are closely related to the asymptotic size of certain of its periods which in turn reflect the form's functorial origin. In particular, in a work in progress, we show the existence of Maass forms, defined on hyperbolic manifolds and in the image of the theta correspondence from Sp4, which concentrate to some degree along closed geodesics. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8471338152885437, "perplexity": 613.2061442641144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863923.6/warc/CC-MAIN-20180521023747-20180521043747-00184.warc.gz"} |
http://math.stackexchange.com/questions/298543/how-to-show-ln-lnk1-ln-lnk-frac1k-lnk-forall-k-in-mathbbn | # How to show $\ln(\ln(k+1))-\ln(\ln(k))<\frac{1}{k \ln(k)},\forall K\in \mathbb{N}, K\geq 2.$
How can I show the inequality $\ln(\ln(k+1))-\ln(\ln(k))<\dfrac{1}{k\ln(k)},\forall k\in \mathbb{N}, k\geq 2.$
-
From Mean value theorem we have, $$\displaystyle\frac{\ln(\ln(k+1))-\ln(\ln(k))}{k+1-k}=\frac{1}{\ln c}.\frac{1}{c},c\in(k,k+1)$$ $$k<c\Rightarrow\frac{1}{\ln c}.\frac{1}{c}<\frac{1}{\ln k}.\frac{1}{k}$$ $$\Rightarrow\displaystyle{\ln(\ln(k+1))-\ln(\ln(k))}=\frac{1}{\ln c}.\frac{1}{c}<\frac{1}{\ln k}.\frac{1}{k}$$ $$\Rightarrow\displaystyle{\ln(\ln(k+1))-\ln(\ln(k))}<\frac{1}{\ln k}.\frac{1}{k}$$ We are done.
By the way is it possible to show $\sum_{k=2}^{\infty}(\ln(\ln(k+1))-\ln(\ln k))=\infty$ so that by comparison test $\sum_{k=2}^{\infty}\frac{1}{k\ln k}$ also divergent? – ftolessa Feb 9 '13 at 8:13
Another test which you can use $\sum a_i$ is convergent if and only if $\sum 2^ka_{2^k}$ is convergent. – Abhra Abir Kundu Feb 9 '13 at 9:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9530275464057922, "perplexity": 171.45418097231828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278887.13/warc/CC-MAIN-20160524002118-00187-ip-10-185-217-139.ec2.internal.warc.gz"} |
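On the divergence question raised in the comments: the left-hand sides telescope, $$\sum_{k=2}^{N}\bigl(\ln(\ln(k+1))-\ln(\ln k)\bigr)=\ln(\ln(N+1))-\ln(\ln 2)\to\infty,$$ and since each term is positive and bounded above by $\frac{1}{k\ln k}$ by the inequality proved above, the comparison test does indeed give $\sum_{k\ge 2}\frac{1}{k\ln k}=\infty$.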
http://mathhelpforum.com/advanced-applied-math/89987-mechanics-1-pulley-question.html | Math Help - Mechanics 1 - pulley question.
1. Mechanics 1 - pulley question.
Hi, i don't really get question (e), is there anyone explain question (e)?
Here is the question.
6. Two particles P and Q have mass 0.5 kg and m kg respectively, where m < 0.5. The particles are connected by a light inextensible string which passes over a smooth, fixed pulley. Initially P is 3.15m above horizontal ground. The particles are released from rest with the string taut and the hanging parts of the string vertical. After P has been descending for 1.5 s, it strikes the ground. Particle P reaches the ground before Q has reached the pulley.
acceleration is 2.8 ms^-2
Tension is 3.5 N
and m = 5/18
(e)
When p strikes the ground, P does not rebound and the string becomes slack. Particle Q then moves freely under gravity, without reaching the pulley, until the string becomes taut again.
hence, find the time between the instant when P strikes the ground and the instant when the string becomes taut again.
----------------------------------------------------------------------
Mark scheme says
v = u + at => 4.2 = -4.2 + 9.8t
therefore, t = 6/7
but i dont really get this.. could anyone explain why v is -4.2 too??
Thank you.
2. I haven't carried out the calculation but I assume that the speed of the particle Q at the moment P hits the ground is 4.2 $ms^{-1}$, which is in the opposite direction to gravity, i.e. upwards. The string will be taut again when the particle reaches this point again and by conservation of energy this means that it will have the same speed but in the opposite direction. Hence:
final velocity, $v = 4.2 ms^{-1}$ (downwards so positive)
initial velocity, $u = -4.2 ms^{-1}$ (upwards so negative).
Does it make sense?
If not let me know.
but i am not sure what this mean...
The string will be taut again when the particle reaches this point again and by conservation of energy this means that it will have the same speed but in the opposite direction
can you elaborate or explain a bit more please..?
sorry... and thank you very much.
4. Sure:
So before P hits the ground the string is taut and so the mass of P is pulling Q upwards.
When P hits the ground it no longer exerts a pull on the string and hence there is no longer any force pulling Q upwards. (There's no tension in the string).
However, at this time Q has speed (and kinetic energy) upwards but the only force acting on it is gravity downwards which is slowing it down.
So from this time on the particle Q is doing much the same as you would expect from a ball that you threw straight up (vertically) in the air. It would travel upwards until it reached its highest point (at which point its speed is 0) then fall back down.
What I meant by conservation of energy is that when P hits the ground Q has a certain amount of kinetic energy which is converted to potential energy as it travels upwards and then back into kinetic energy on the way down. So it must end up with the same kinetic energy at any particular height whether it's going up or down. (The sum of potential and kinetic energy remains a constant and potential energy depends on the height of the particle above ground).
Finally, when the string is taut it means it is stretched out, and when it is not it is shorter than its normal full length, i.e. it's slack. So we know that after P hit the ground there was a period of time when the string was slack. However, when Q got back to the same height above the ground (that it was at when P hit the ground) then the string would be taut as it is at full length again.
OK, I hope that was more helpful. If that was a longer/ simpler explanation than was required then I'm sorry as I was just trying to be completely clear. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8590683341026306, "perplexity": 395.72280217299704}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115861305.18/warc/CC-MAIN-20150124161101-00061-ip-10-180-212-252.ec2.internal.warc.gz"} |
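To put the numbers from this thread in one place (my own recap, taking $g = 9.8\ ms^{-2}$ and downwards as positive, as in the mark scheme): Q is moving upwards at $4.2\ ms^{-1}$ when the string goes slack, so with $u = -4.2$, $v = +4.2$ and $a = 9.8$, $$t = \frac{v - u}{a} = \frac{4.2 - (-4.2)}{9.8} = \frac{8.4}{9.8} = \frac{6}{7}\ \text{s} \approx 0.86\ \text{s},$$ which is the length of time the string stays slack.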
https://worldwidescience.org/topicpages/c/chemical+shielding+tensors.html | #### Sample records for chemical shielding tensors
1. 31P nuclear magnetic resonance chemical shielding tensors of L-O-serine phosphate and 3'-cytidine monophosphate
Energy Technology Data Exchange (ETDEWEB)
Kohler, S.J.; Klein, M.P.
1977-12-07
31P nuclear magnetic resonance chemical shielding tensors have been measured from single crystals of L-O-serine phosphate and 3'-cytidine monophosphate. The principal elements of the shielding tensors are -48, -2, and 51 ppm for serine phosphate and -68, -13, and 64 ppm for 3'-cytidine monophosphate, relative to 85% H3PO4. In both cases four orientations of the shielding tensor on the molecule are possible; in both instances one orientation correlates well with the P--O bond directions. This orientation of the shielding tensor places the most downfield component of the tensor in the plane containing the two longest P--O bonds and the most upfield component of the shielding tensor in the plane containing the two shortest P--O bonds. A similar orientation was reported for the 31P shielding tensor of phosphorylethanolamine and a comparison is made between the three molecules.
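As an aside on how principal values like these are usually summarized, a small sketch (my own illustration; the conventions are the standard IUPAC span/skew and Haeberlen anisotropy/asymmetry definitions, and the input is the serine phosphate tensor quoted above):

```python
# Summary parameters of a chemical shielding tensor from its principal values.
import numpy as np

def shielding_parameters(principal_values):
    s11, s22, s33 = sorted(principal_values)        # s11 <= s22 <= s33
    s_iso = float(np.mean(principal_values))        # isotropic shielding
    span  = s33 - s11                               # Omega
    skew  = 3.0 * (s_iso - s22) / span              # kappa
    # Haeberlen convention: order components by distance from the isotropic value
    zz, xx, yy = sorted(principal_values, key=lambda s: abs(s - s_iso), reverse=True)
    delta = zz - s_iso                              # reduced anisotropy
    eta   = (yy - xx) / delta                       # asymmetry parameter
    return s_iso, span, skew, delta, eta

# 31P tensor of L-O-serine phosphate, principal elements in ppm (values above)
print(shielding_parameters([-48.0, -2.0, 51.0]))
```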
2. A density functional study of 15N chemical shielding tensors in quinolines
Science.gov (United States)
2009-07-01
DFT calculations were carried out to characterize the 15N shielding tensors in quinolines. This computational study is intended to shed light on the differences between two groups of quinolines: series A (7-chloro 4-aminoalkyls quinolines) and series B (quinolines, 3-, 5-, 6-, 8-amino quinolines and 4,8-dichloro quinoline). Unlike the quinolines in series B, the series A quinolines show considerable β-hematin inhibition activity which is essential for quinoline-based drugs. The results show that the substitution position significantly affects the σ11 and σ22 components of 15N shielding tensors of quinolines. The 15N shielding components are noticeably different for the two series and can be related to their ability to interact with hematin.
3. Ab Initio Calculations of 31P NMR Chemical Shielding Anisotropy Tensors in Phosphates: Variations Due to Ring Formation
Directory of Open Access Journals (Sweden)
Todd M. Alam
2002-08-01
Full Text Available Abstract: Ring formation in phosphate systems is expected to influence both the magnitude and orientation of the phosphorus (31P nuclear magnetic resonance (NMR chemical shielding anisotropy (CSA tensor. Ab initio calculations of the 31P CSA tensor in both cyclic and acyclic phosphate clusters were performed as a function of the number of phosphate tetrahedral in the system. The calculation of the 31P CSA tensors employed the GAUSSIAN 98 implementation of the gauge-including atomic orbital (GIAO method at the Hartree-Fock (HF level. It is shown that both the 31P CSA tensor anisotropy, and the isotropic chemical shielding can be used for the identification of cyclic phosphates. The differences between the 31P CSA tensor in acyclic and cyclic phosphate systems become less pronounced with increasing number of phosphate groups within the ring. The orientation of the principal components for the 31P CSA tensor shows some variation due to cyclization, most notably with the smaller, highly strained ring systems.
4. Magnitude and absolute orientation of 1H chemical shielding tensors in polycrystalline powders: a 1H CRAMPS NMR study of KH 2PO 4
Science.gov (United States)
Rasmussen, J. T.; Hohwy, M.; Jakobsen, H. J.; Nielsen, N. C.
1999-12-01
It is shown that the magnitude and absolute orientation of 1H chemical shielding tensors may be determined from polycrystalline powders using combined rotation and multiple pulse spectroscopy (CRAMPS) by simultaneous evolution under chemical shielding and heteronuclear dipolar coupling interactions. An experimental approach based on the broadband high-order truncating MSHOT-3 homonuclear decoupling sequence is demonstrated for the hydrogen bonded proton within the 31P- 1H- 31P three-spin systems of a powder of KH 2PO 4.
5. 31P nuclear magnetic resonance chemical shielding tensors of phosphorylethanolamine, lecithin, and related compounds: Applications to head-group motion in model membranes.
Science.gov (United States)
Kohler, S J; Klein, M P
1976-03-09
31P nuclear magnetic resonance (NMR) powder spectra have been used to obtain the principal values of the chemical shielding tensors of dipalmitoyellecithin (DPL), dipalmitoylphosphatidylethanolamine, and several related organophosphate mono- and diesters. In addition, the principal values and orientation of the phosphorylethanolamine shielding tensor were determined from 31P NMR spectra of a single crystal. In all compounds studied the shielding tensors were clearly monaxial. The monoester spectra are typified by the spectrum of phosphorylethanolamine with principal values of -67, -13, and 69 ppm relative to H3PO4. The diesters have a larger total anisotrophy, as indicated by the DPL values of -81, -25, and 108 ppm. These data as well as the orientation of the phosphorylethanolamine shielding tensor are correlated with the electron density distribution as determined by the bonding pattern of the phosphate. The spectrum of a DPL-water (1:1) mixture at 52 degrees C has a shift anisotrophy of 30 ppm and displays a shape characteristic of an axial tensor. This change from the rigid lattice DPL pattern is explained in terms of motional narrowing, and the shielding tensor data are used to interpret the motion of the phospholipid head group. Simple rotation about the P-O(glycerol) bond is excluded, and a more complex motion involving rotation about both the P-O (glycerol) and glycerol C(2)-C(3) bonds is postulated.
6. 31P nuclear magnetic resonance chemical shielding tensors of phosphorylethanolamine, lecithin, and related compounds: applications to head-group motion in model membranes
Energy Technology Data Exchange (ETDEWEB)
Kohler, S.J.; Klein, M.P.
1976-03-09
31P nuclear magnetic resonance (NMR) powder spectra have been used to obtain the principal values of the chemical shielding tensors of dipalmitoyllecithin (DPL), dipalmitoylphosphatidylethanolamine, and several related organophosphate mono- and diesters. In addition, the principal values and orientation of the phosphorylethanolamine shielding tensor were determined from 31P NMR spectra of a single crystal. In all compounds studied the shielding tensors were clearly nonaxial. The monoester spectra are typified by the spectrum of phosphorylethanolamine with principal values of -67, -13, and 69 ppm relative to H3PO4. The diesters have a larger total anisotropy, as indicated by the DPL values of -81, -25, and 108 ppm. These data as well as the orientation of the phosphorylethanolamine shielding tensor are correlated with the electron density distribution as determined by the bonding pattern of the phosphate. The spectrum of a DPL--water (1:1) mixture at 52 °C has a shift anisotropy of 30 ppm and displays a shape characteristic of an axial tensor. This change from the rigid lattice DPL pattern is explained in terms of motional narrowing, and the shielding tensor data are used to interpret the motion of the phospholipid head group. Simple rotation about the P--O(glycerol) bond is excluded, and a more complex motion involving rotation about both the P--O(glycerol) and glycerol C(2)--C(3) bonds is postulated. (auth)
7. Influence of the O-phosphorylation of serine, threonine and tyrosine in proteins on the amidic 15N chemical shielding anisotropy tensors
NARCIS (Netherlands)
Emmer, J.; Vavrinska, A.; Sychrovský, V.; Benda, L.; Kriz, Z.; Koca, J.; Boelens, R.; Sklenár, V.; Trantirek, L.
2012-01-01
Density functional theory was employed to study the influence of O-phosphorylation of serine, threonine, and tyrosine on the amidic 15N chemical shielding anisotropy (CSA) tensor in the context of the complex chemical environments of protein structures. Our results indicate that the amidic 15N CSA
8. Influence of stacking interactions on NMR chemical shielding tensors in benzene and formamide homodimers as studied by HF, DFT and MP2 calculations
Czech Academy of Sciences Publication Activity Database
Czernek, Jiří
2003-01-01
Vol. 107, No. 19 (2003), pp. 3952-3959 ISSN 1089-5639 R&D Projects: GA AV ČR KJB4050311 Institutional research plan: CEZ:AV0Z4050913 Keywords: NMR * chemical shielding tensor * ab initio Subject RIV: CD - Macromolecular Chemistry Impact factor: 2.792, year: 2003
9. 71Ga Chemical Shielding and Quadrupole Coupling Tensors of the Garnet Y(3)Ga(5)O(12) from Single-Crystal (71)Ga NMR
DEFF Research Database (Denmark)
Vosegaard, Thomas; Massiot, Dominique; Gautier, Nathalie
1997-01-01
A single-crystal (71)Ga NMR study of the garnet Y(3)Ga(5)O(12) (YGG) has resulted in the determination of the first chemical shielding tensors reported for the (71)Ga quadrupole. The single-crystal spectra are analyzed in terms of the combined effect of quadrupole coupling and chemical shielding...... consistent with its cubic crystal structure which supports the reliability of the experimental data. In addition, the (71)Ga and (27)Al isotropic chemical shifts for YGG and YAG give further support to the linear correlation observed earlier between (71)Ga and (27)Al isotropic chemical shifts....
10. Influence of the O-phosphorylation of serine, threonine and tyrosine in proteins on the amidic ¹⁵N chemical shielding anisotropy tensors.
Science.gov (United States)
Emmer, Jiří; Vavrinská, Andrea; Sychrovský, Vladimír; Benda, Ladislav; Kříž, Zdeněk; Koča, Jaroslav; Boelens, Rolf; Sklenář, Vladimír; Trantírek, Lukáš
2013-01-01
Density functional theory was employed to study the influence of O-phosphorylation of serine, threonine, and tyrosine on the amidic (15)N chemical shielding anisotropy (CSA) tensor in the context of the complex chemical environments of protein structures. Our results indicate that the amidic (15)N CSA tensor has sensitive responses to the introduction of the phosphate group and the phosphorylation-promoted rearrangement of solvent molecules and hydrogen bonding networks in the vicinity of the phosphorylated site. Yet, the calculated (15)N CSA tensors in phosphorylated model peptides were in range of values experimentally observed for non-phosphorylated proteins. The extent of the phosphorylation induced changes suggests that the amidic (15)N CSA tensor in phosphorylated proteins could be reasonably well approximated with averaged CSA tensor values experimentally determined for non-phosphorylated amino acids in practical NMR applications, where chemical surrounding of the phosphorylated site is not known a priori in majority of cases. Our calculations provide estimates of relative errors to be associated with the averaged CSA tensor values in interpretations of NMR data from phosphorylated proteins.
11. Variability of the 15N chemical shielding tensors in the B3 domain of protein G from 15N relaxation measurements at several fields. Implications for backbone order parameters.
Science.gov (United States)
Hall, Jennifer B; Fushman, David
2006-06-21
We applied a combination of 15N relaxation and CSA/dipolar cross-correlation measurements at five magnetic fields (9.4, 11.7, 14.1, 16.4, and 18.8 T) to determine the 15N chemical shielding tensors for backbone amides in protein G in solution. The data were analyzed using various model-independent approaches and those based on Lipari-Szabo approximation, all of them yielding similar results. The results indicate a range of site-specific values of the anisotropy (CSA) and orientation of the 15N chemical shielding tensor, similar to those in ubiquitin (Fushman, et al. J. Am. Chem. Soc. 1998, 120, 10947; J. Am. Chem. Soc. 1999, 121, 8577). Assuming a Gaussian distribution of the 15N CSA values, the mean anisotropy is -173.9 to -177.2 ppm (for 1.02 A NH bond length) and the site-to-site CSA variability is +/-17.6 to +/-21.4 ppm, depending on the method used. This CSA variability is significantly larger than derived previously for ribonuclease H (Kroenke, et al. J. Am. Chem. Soc. 1999, 121, 10119) or recently, using "meta-analysis" for ubiquitin (Damberg, et al. J. Am. Chem. Soc. 2005, 127, 1995). Standard interpretation of 15N relaxation studies of backbone dynamics in proteins involves an a priori assumption of a uniform 15N CSA. We show that this assumption leads to a significant discrepancy between the order parameters obtained at different fields. Using the site-specific CSAs obtained from our study removes this discrepancy and allows simultaneous fit of relaxation data at all five fields to Lipari-Szabo spectral densities. These findings emphasize the necessity of taking into account the variability of 15N CSA for accurate analysis of protein dynamics from 15N relaxation measurements.
12. An investigation of curvature effects on the nitrogen and boron chemical shielding tensors as well as NICS characterization of BN nanotubes with Stone-Wales defects: A DFT study
Science.gov (United States)
Ghafouri, Reza; Anafcheh, Maryam
2013-03-01
A DFT study has been performed to investigate electronic and magnetic properties of armchair (4, 4), (5, 5), and (6, 6) BNNTs with Stone-Wales defects based on 11B and 15N NMR parameters and NICS indices. The smallest 15N chemical shielding arising from B1-N1 bond appears as an individual peak at around 97.0-99.4 ppm for "Parallel" orientation of the defect site (P-SW) and at around 87.0-88.8 ppm for "Diagonal" orientation of the defect site (D-SW), respectively, quite well separated from the rest of the spectrum. These results indicate that 15N NMR patterns might be able to detect the presence of SW defects found in BNNTs. The smallest 11B chemical shielding appears around 68.6-69.3 (P-SW) or 71.6-72.1 ppm (D-SW) arising from the boron surrounded by three different rings. Moreover, CS tensors are shown to be quite sensitive to the curvature at the corresponding site. Finally, NICS at ring centers and along principal axis are calculated to evaluate electron motilities on the surfaces and inside BNNTs. NICS values inside BNNTs with Stone-Wales defect is similar to the parent where the compensation between diatropic and paratropic ring currents leads to the uniformity of magnetic field, but slightly increases only in the zone of defect site.
13. Four-component relativistic density functional theory calculations of NMR shielding tensors for paramagnetic systems.
Science.gov (United States)
Komorovsky, Stanislav; Repisky, Michal; Ruud, Kenneth; Malkina, Olga L; Malkin, Vladimir G
2013-12-27
A four-component relativistic method for the calculation of NMR shielding constants of paramagnetic doublet systems has been developed and implemented in the ReSpect program package. The method uses a Kramer unrestricted noncollinear formulation of density functional theory (DFT), providing the best DFT framework for property calculations of open-shell species. The evaluation of paramagnetic nuclear magnetic resonance (pNMR) tensors reduces to the calculation of electronic g tensors, hyperfine coupling tensors, and NMR shielding tensors. For all properties, modern four-component formulations were adopted. The use of both restricted kinetically and magnetically balanced basis sets along with gauge-including atomic orbitals ensures rapid basis-set convergence. These approaches are exact in the framework of the Dirac-Coulomb Hamiltonian, thus providing useful reference data for more approximate methods. Benchmark calculations on Ru(III) complexes demonstrate good performance of the method in reproducing experimental data and also its applicability to chemically relevant medium-sized systems. Decomposition of the temperature-dependent part of the pNMR tensor into the traditional contact and pseudocontact terms is proposed.
14. Modeling NMR chemical shift: A survey of density functional theory approaches for calculating tensor properties.
Science.gov (United States)
Sefzik, Travis H; Turco, Domenic; Iuliucci, Robbie J; Facelli, Julio C
2005-02-17
The NMR chemical shift, a six-parameter tensor property, is highly sensitive to the position of the atoms in a molecule. To extract structural parameters from chemical shifts, one must rely on theoretical models. Therefore, a high quality group of shift tensors that serve as benchmarks to test the validity of these models is warranted and necessary to highlight existing computational limitations. Here, a set of 102 13C chemical-shift tensors measured in single crystals, from a series of aromatic and saccharide molecules for which neutron diffraction data are available, is used to survey models based on the density functional (DFT) and Hartree-Fock (HF) theories. The quality of the models is assessed by their least-squares linear regression parameters. It is observed that in general DFT outperforms restricted HF theory. For instance, Becke's three-parameter exchange method and mpw1pw91 generally provide the best predicted shieldings for this group of tensors. However, this performance is not universal, as none of the DFT functionals can predict the saccharide tensors better than HF theory. Both the orientations of the principal axis system and the magnitude of the shielding were compared using the chemical-shift distance to evaluate the quality of the calculated individual tensor components in units of ppm. Systematic shortcomings in the prediction of the principal components were observed, but the theory predicts the corresponding isotropic value more accurately. This is because these systematic errors cancel, thereby indicating that the theoretical assessment of shielding predictions based on the isotropic shift should be avoided.
15. ONIOM as an efficient tool for calculating NMR chemical shielding constants in large molecules
Science.gov (United States)
2000-02-01
The ONIOM approach is used to derive an expression for the NMR chemical shielding tensor in a molecule subdivided into n-layers, each of which can be described at a different level of theory. The two-layer ONIOM2(MP2-GIAO:HF-GIAO) variant, in which a small part of the molecule containing the nuclei of interest is described at the MP2-GIAO level of theory, and the rest - using the HF-GIAO approach - is tested through calculations of absolute isotropic 13C, 17O, 19F, and proton NMR chemical shieldings in the water dimer, ethanol, acetone, acrolein, fluorobenzene, and naphthalene. The results show that with an appropriate partitioning this scheme furnishes shieldings which represent close approximations to the corresponding MP2-GIAO values for the entire molecule and offers a highly efficient tool for accurate shielding calculations on large molecules.
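Schematically, the two-layer extrapolation described above follows the standard ONIOM form applied to the shielding tensor (quoted here only as a sketch of the idea; the paper derives the full n-layer expression):

```latex
\sigma^{\text{ONIOM2}} \;\approx\;
  \sigma^{\text{MP2-GIAO}}(\text{model})
  \;+\; \sigma^{\text{HF-GIAO}}(\text{real})
  \;-\; \sigma^{\text{HF-GIAO}}(\text{model})
```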
16. Cα chemical shift tensors in helical peptides by dipolar-modulated chemical shift recoupling NMR
International Nuclear Information System (INIS)
Yao Xiaolan; Yamaguchi, Satoru; Hong Mei
2002-01-01
The Cα chemical shift tensors of proteins contain information on the backbone conformation. We have determined the magnitude and orientation of the Cα chemical shift tensors of two peptides with α-helical torsion angles: the Ala residue in G*AL (φ=-65.7 deg., ψ=-40 deg.), and the Val residue in GG*V (φ=-81.5 deg., ψ=-50.7 deg.). The magnitude of the tensors was determined from quasi-static powder patterns recoupled under magic-angle spinning, while the orientation of the tensors was extracted from Cα-Hα and Cα-N dipolar modulated powder patterns. The helical Ala Cα chemical shift tensor has a span of 36 ppm and an asymmetry parameter of 0.89. Its σ11 axis is 116 deg. ± 5 deg. from the Cα-Hα bond while the σ22 axis is 40 deg. ± 5 deg. from the Cα-N bond. The Val tensor has an anisotropic span of 25 ppm and an asymmetry parameter of 0.33, both much smaller than the values for β-sheet Val found recently (Yao and Hong, 2002). The Val σ33 axis is tilted by 115 deg. ± 5 deg. from the Cα-Hα bond and 98 deg. ± 5 deg. from the Cα-N bond. These represent the first completely experimentally determined Cα chemical shift tensors of helical peptides. Using an icosahedral representation, we compared the experimental chemical shift tensors with quantum chemical calculations and found overall good agreement. These solid-state chemical shift tensors confirm the observation from cross-correlated relaxation experiments that the projection of the Cα chemical shift tensor onto the Cα-Hα bond is much smaller in α-helices than in β-sheets.
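For orientation, the span and asymmetry quoted above can be computed from three principal shift values as in the sketch below; note that conventions differ between papers, and this follows one common (Haeberlen-style) choice rather than necessarily the one used in the study.

```python
import numpy as np

def span_and_asymmetry(s11, s22, s33):
    """Span (largest - smallest) and Haeberlen asymmetry from principal values (ppm)."""
    vals = np.array([s11, s22, s33], dtype=float)
    iso = vals.mean()
    span = vals.max() - vals.min()
    # Haeberlen ordering: |zz - iso| >= |xx - iso| >= |yy - iso|
    zz, xx, yy = sorted(vals, key=lambda v: abs(v - iso), reverse=True)
    eta = (yy - xx) / (zz - iso)          # asymmetry parameter, 0 <= eta <= 1
    return span, eta
```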
17. Anisotropy of the fluorine chemical shift tensor in UF6
International Nuclear Information System (INIS)
Rigny, P.
1965-04-01
A 19F magnetic resonance study of polycrystalline UF6 is presented. The low temperature complex line can be analyzed as the superposition of two distinct lines, which is attributed to a distortion of the UF6 octahedron in the solid. The shape of the two components is studied. Their width is much larger than the theoretical dipolar width, and must be explained by large anisotropies of the fluorine chemical shift tensors. The resulting shape functions of the powder spectra are determined. The values of the parameters of the chemical shift tensors yield estimates of the characters of the U-F bonds, and this gives some information on the ground state electronic wave function of the UF6 molecule in the solid. (author)
18. Quantum-chemical insights from deep tensor neural networks
Science.gov (United States)
Schütt, Kristof T.; Arbabzadah, Farhad; Chmiela, Stefan; Müller, Klaus R.; Tkatchenko, Alexandre
2017-01-01
Learning from data has led to paradigm shifts in a multitude of disciplines, including web, text and image search, speech recognition, as well as bioinformatics. Can machine learning enable similar breakthroughs in understanding quantum many-body systems? Here we develop an efficient deep learning approach that enables spatially and chemically resolved insights into quantum-mechanical observables of molecular systems. We unify concepts from many-body Hamiltonians with purpose-designed deep tensor neural networks, which leads to size-extensive and uniformly accurate (1 kcal mol-1) predictions in compositional and configurational chemical space for molecules of intermediate size. As an example of chemical relevance, the model reveals a classification of aromatic rings with respect to their stability. Further applications of our model for predicting atomic energies and local chemical potentials in molecules, reliable isomer energies, and molecules with peculiar electronic structure demonstrate the potential of machine learning for revealing insights into complex quantum-chemical systems.
19. Quantum-Chemical Insights from Deep Tensor Neural Networks
Science.gov (United States)
Discovery of novel materials can be guided by searching databases of known structures and properties. Indeed, electronic structure calculations and machine learning have recently been combined aiming towards the goal of accelerated discovery of chemicals with desired properties. However, the design of an appropriate descriptor is critical to the success of these approaches. Here we address this issue with deep neural tensor networks (DTNN): a deep learning approach that is able to learn efficient representations of molecules and materials. The mathematical construction of the DTNN model provides statistically rigorous partitioning of extensive molecular properties into atomic contributions - a long-standing challenge for quantum-mechanical calculations of molecules. Beyond achieving accurate energy predictions (1 kcal mol-1) throughout compositional and configurational space, DTNN provide spatially and chemically resolved insights into quantum-mechanical properties of molecular systems beyond those trivially contained in the training data. Thus, we propose DTNN as a versatile framework for understanding complex quantum-mechanical systems based on high-throughput electronic structure calculations. This research is supported by the DFG (MU 987/20-1) and BMBF (01IS14013A).
20. Nuclei-selected atomic-orbital response-theory formulation for the calculation of NMR shielding tensors using density-fitting.
Science.gov (United States)
Kumar, Chandan; Kjærgaard, Thomas; Helgaker, Trygve; Fliegl, Heike
2016-12-21
An atomic orbital density matrix based response formulation of the nuclei-selected approach of Beer, Kussmann, and Ochsenfeld [J. Chem. Phys. 134, 074102 (2011)] to calculate nuclear magnetic resonance (NMR) shielding tensors has been developed and implemented into LSDalton allowing for a simultaneous solution of the response equations, which significantly improves the performance. The response formulation to calculate nuclei-selected NMR shielding tensors can be used together with the density-fitting approximation that allows efficient calculation of Coulomb integrals. It is shown that using density-fitting does not lead to a significant loss in accuracy for both the nuclei-selected and the conventional ways to calculate NMR shielding constants and should thus be used for applications with LSDalton.
1. Quantum chemical calculation and experimental measurement of the 13C chemical shift tensors of vanillin and 3,4-dimethoxybenzaldehyde
Science.gov (United States)
Zheng, Guang; Hu, Jianzhi; Zhang, Xiaodong; Shen, Lianfang; Ye, Chaohui; Webb, Graham A.
1997-03-01
The principal values of the 13C nuclear magnetic resonance chemical shift tensors in vanillin and 3,4-dimethoxybenzaldehyde are reported. Theoretical results of the 13C chemical shift tensors were obtained by employing the gauge included atomic orbital (GIAO) approach. The geometrical parameters were optimized by using the MNDO method. The observed chemical shifts of these two compounds were determined in powders by using the recently introduced magic angle turning (MAT) experiment. The results presented in this paper clearly demonstrate the importance of using tensor information in the study of molecular structures.
2. Chemical-shift tensors of heavy nuclei in network solids: a DFT/ZORA investigation of (207)Pb chemical-shift tensors using the bond-valence method.
Science.gov (United States)
Alkan, Fahri; Dybowski, C
2015-10-14
Cluster models are used in calculation of (207)Pb NMR magnetic-shielding parameters of α-PbO, β-PbO, Pb3O4, Pb2SnO4, PbF2, PbCl2, PbBr2, PbClOH, PbBrOH, PbIOH, PbSiO3, and Pb3(PO4)2. We examine the effects of cluster size, method of termination of the cluster, charge on the cluster, introduction of exact exchange, and relativistic effects on calculation of magnetic-shielding tensors with density functional theory. Proper termination of the cluster for a network solid, including approximations such as compensation of charge by the bond-valence (BV) method, is essential to provide results that agree with experiment. The inclusion of relativistic effects at the spin-orbit level for such heavy nuclei is an essential factor in achieving agreement with experiment.
3. Electromagnetic interference shielding properties and mechanisms of chemically reduced graphene aerogels
Energy Technology Data Exchange (ETDEWEB)
Bi, Shuguang [Temasek Laboratories, Nanyang Technological University, 50 Nanyang Drive, 637553 (Singapore); Zhang, Liying, E-mail: [email protected] [Temasek Laboratories, Nanyang Technological University, 50 Nanyang Drive, 637553 (Singapore); Mu, Chenzhong [School of Material Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 (Singapore); Liu, Ming, E-mail: [email protected] [Temasek Laboratories, Nanyang Technological University, 50 Nanyang Drive, 637553 (Singapore); Hu, Xiao [Temasek Laboratories, Nanyang Technological University, 50 Nanyang Drive, 637553 (Singapore); School of Material Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 (Singapore)
2017-08-01
Graphical abstract: The electromagnetic interference shielding behavior and proposed mechanisms of ultralight free-standing 3D graphene aerogels. - Highlights: • The electromagnetic interference (EMI) shielding properties and mechanisms of ultralight 3D graphene aerogels (GAs) were systematically studied with respect to both the unique porous network and the intrinsic properties of the graphene sheets. • Thickness of the shielding material played a critical role in EMI SE. • Compressing the porous GAs into a compact film didn't increase the EMI SE despite the increased electrical conductivity and connectivity. EMI SE is highly dependent on the effective amount of the material's response to the EM waves. - Abstract: Graphene was recently demonstrated to exhibit excellent electromagnetic interference (EMI) shielding performance. In this work, ultralight (∼5.5 mg/cm³) graphene aerogels (GAs) were fabricated through assembling graphene oxide (GO) using freeze-drying followed by a chemical reduction method. The EMI shielding properties and mechanisms of GAs were systematically studied with respect to the intrinsic properties of the reduced graphene oxide (rGO) sheets and the unique porous network. The EMI shielding effectiveness (SE) of GAs was increased from 20.4 to 27.6 dB when the GO was reduced by a high concentration of hydrazine vapor. The presence of more sp² graphitic lattice and free electrons from nitrogen atoms resulted in the enhanced EMI SE. Absorption was the dominant shielding mechanism of GAs. Compressing the highly porous GAs into compact thin films did not change the EMI SE, but shifted the dominant shielding mechanism from absorption to reflection.
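The absorption/reflection assignment discussed above is conventionally derived from measured S-parameters; a minimal sketch of that standard decomposition (not code from the paper) is shown below.

```python
import numpy as np

def shielding_effectiveness(s11, s21):
    """Total, reflection and absorption SE (dB) from complex S-parameters.

    Standard decomposition used in EMI studies:
    T = |S21|^2, R = |S11|^2,
    SE_total = -10 log10 T, SE_R = -10 log10 (1 - R), SE_A = SE_total - SE_R.
    """
    T = abs(s21) ** 2
    R = abs(s11) ** 2
    se_total = -10.0 * np.log10(T)
    se_r = -10.0 * np.log10(1.0 - R)
    se_a = se_total - se_r
    return se_total, se_r, se_a

# Hypothetical measurement: a sample with |S11| = 0.5, |S21| = 0.04
print(shielding_effectiveness(0.5, 0.04))
```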
4. Direct solution of the Chemical Master Equation using quantized tensor trains.
Directory of Open Access Journals (Sweden)
2014-03-01
The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to "lift" this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT)-formatted numerical linear algebra for the low parametric, numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially-converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the "basis" of the solution at every time step guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: independent birth-death process, an example of enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of
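For readers unfamiliar with the CME itself, the following minimal sketch sets up the truncated master equation dp/dt = A p for the birth-death example and integrates it with a dense matrix; this is only an illustration of the linear-ODE structure being solved, not the QTT-structured solver of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_birth, k_death, N = 5.0, 1.0, 60         # rates and state-space truncation
A = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:                               # birth: n -> n + 1
        A[n + 1, n] += k_birth
        A[n, n] -= k_birth
    if n > 0:                               # death: n -> n - 1
        A[n - 1, n] += k_death * n
        A[n, n] -= k_death * n

p0 = np.zeros(N + 1)
p0[0] = 1.0                                 # start with zero copies
sol = solve_ivp(lambda t, p: A @ p, (0.0, 5.0), p0, method="BDF")
print("mean copy number at t = 5:", np.arange(N + 1) @ sol.y[:, -1])
```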
5. 1H Chemical Shielding Anisotropies from Polycrystalline Powders Using MSHOT-3 Based CRAMPS
Science.gov (United States)
Hohwy, M.; Rasmussen, J. T.; Bower, P. V.; Jakobsen, H. J.; Nielsen, N. C.
1998-08-01
It is demonstrated that combined rotation and multiple-pulse spectroscopy (CRAMPS) based on MSHOT-3 homonuclear multiple-pulse decoupling represents a powerful method for determination of 1H chemical shielding anisotropies from polycrystalline powders. By virtue of high-order dipolar decoupling, large spectral width, resonance offset stability, and the absence of artifacts from tilted-axis precession, MSHOT-3-based CRAMPS enables straightforward sampling of high-quality spectra. Comparison with explicit calculations, taking the effect of the multiple-pulse sequence into account, shows that the spectra may be simulated and iteratively fitted using standard software for the calculation of magic-angle spinning spectra influenced by chemical shielding anisotropy with the shielding interaction reduced by the scaling factor of the MSHOT-3 decoupling sequence. The method is demonstrated by experimental determination of 1H chemical shielding anisotropies for adipic acid, Ca(OH)2, malonic acid, and KHSO4. The data are compared with those determined previously from single-crystal NMR studies.
6. Intermolecular Interactions in Crystalline Theobromine as Reflected in Electron Deformation Density and (13)C NMR Chemical Shift Tensors.
Science.gov (United States)
2013-06-11
An understanding of the role of intermolecular interactions in crystal formation is essential to control the generation of diverse crystalline forms which is an important concern for pharmaceutical industry. Very recently, we reported a new approach to interpret the relationships between intermolecular hydrogen bonding, redistribution of electron density in the system, and NMR chemical shifts (Babinský et al. J. Phys. Chem. A, 2013, 117, 497). Here, we employ this approach to characterize a full set of crystal interactions in a sample of anhydrous theobromine as reflected in (13)C NMR chemical shift tensors (CSTs). The important intermolecular contacts are identified by comparing the DFT-calculated NMR CSTs for an isolated theobromine molecule and for clusters composed of several molecules as selected from the available X-ray diffraction data. Furthermore, electron deformation density (EDD) and shielding deformation density (SDD) in the proximity of the nuclei involved in the proposed interactions are calculated and visualized. In addition to the recently reported observations for hydrogen bonding, we focus here particularly on the stacking interactions. Although the principal relations between the EDD and CST for hydrogen bonding (HB) and stacking interactions are similar, the real-space consequences are rather different. Whereas the C-H···X hydrogen bonding influences predominantly and significantly the in-plane principal component of the (13)C CST perpendicular to the HB path and the C═O···H hydrogen bonding modulates both in-plane components of the carbonyl (13)C CST, the stacking modulates the out-of-plane electron density resulting in weak deshielding (2-8 ppm) of both in-plane principal components of the CST and weak shielding (∼ 5 ppm) of the out-of-plane component. The hydrogen-bonding and stacking interactions may add to or subtract from one another to produce total values observed experimentally. On the example of theobromine, we demonstrate
7. Benchmarks for the 13C NMR chemical shielding tensors in peptides in the solid state
Czech Academy of Sciences Publication Activity Database
Czernek, Jiří; Pawlak, T.; Potrzebowski, M. J.
2012-01-01
Vol. 527 (2012), pp. 31-35. ISSN 0009-2614. R&D Projects: GA MŠk 2B08021. Institutional research plan: CEZ:AV0Z40500505. Keywords: NMR * CST * DFT. Subject RIV: CB - Analytical Chemistry, Separation. Impact factor: 2.145, year: 2012
8. Yield Scaling of Frequency Domain Moment Tensors from Contained Chemical Explosions Detonated in Granite
Science.gov (United States)
MacPhail, M. D.; Stump, B. W.; Zhou, R.
2017-12-01
The Source Phenomenology Experiment (SPE - Arizona) was a series of nine, contained and partially contained chemical explosions within the porphyry granite at the Morenci Copper mine in Arizona. Its purpose was to detonate, record and analyze seismic waveforms from these single-fired explosions. Ground motion data from the SPE is analyzed in this study to assess the uniqueness of the time domain moment tensor source representation and its ability to quantify containment and yield scaling. Green's functions were computed for each of the explosions based on a 1D velocity model developed for the SPE. The Green's functions for the sixteen, near-source stations focused on observations from 37 to 680 m. This study analyzes the three deepest, fully contained explosions with a depth of burial of 30 m and yields of 0.77e-3, 3.08e-3 and 6.17e-3 kt. Inversions are conducted within the frequency domain and moment tensors are decomposed into deviatoric and isotropic components to evaluate the effects of containment and yield on the resulting source representation. Isotropic moments are compared to those for other contained explosions as reported by Denny and Johnson, 1991, and are in good agreement with their scaling results. The explosions in this study have isotropic moments of 1.2e12, 3.1e12 and 6.1e13 n*m. Isotropic and Mzz moment tensor spectra are compared to Mueller-Murphy, Denny-Johnson and revised Heard-Ackerman (HA) models and suggest that the larger explosions fit the HA model better. Secondary source effects resulting from free surface interactions including the effects of spallation contribute to the resulting moment tensors which include a CLVD component. Hudson diagrams, using frequency domain moment tensor data, are computed as a tool to assess how these containment scenarios affect the source representation. Our analysis suggests that, within our band of interest (2-20 Hz), as the frequency increases, the source representation becomes more explosion like
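The isotropic/deviatoric split mentioned above is a standard decomposition of a 3x3 moment tensor; a minimal sketch (with a hypothetical tensor, units arbitrary) is shown below.

```python
import numpy as np

def decompose_moment_tensor(M):
    """Split a symmetric 3x3 moment tensor into isotropic and deviatoric parts."""
    m_iso = np.trace(M) / 3.0            # scalar isotropic (volumetric) moment
    M_iso = m_iso * np.eye(3)
    M_dev = M - M_iso                    # deviatoric remainder (DC + CLVD content)
    return m_iso, M_iso, M_dev

# Hypothetical, explosion-like moment tensor with a small deviatoric part
M = np.array([[1.2, 0.05, 0.0],
              [0.05, 1.1, 0.02],
              [0.0, 0.02, 1.3]]) * 1e12
m_iso, M_iso, M_dev = decompose_moment_tensor(M)
print("isotropic moment:", m_iso)
```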
9. Vibrational circular dichroism and electric-field shielding tensors: A new physical interpretation based on nonlocal susceptibility densities
Science.gov (United States)
Hunt, Katharine L. C.; Harris, Robert A.
1991-06-01
Motion of nuclei within a molecule induces a magnetic moment me in the electronic charge distribution, giving a nonzero electronic contribution to the magnetic transition dipole that produces vibrational circular dichroism. In this paper, we develop a new susceptibility density theory for the induced magnetic moment. The theory is based on the response of the electrons to changes in the nuclear Coulomb field, due to shifts in nuclear positions. The electronic response to these changes depends on the same susceptibility densities that determine response to external fields. Our analysis suggests a new physical picture of vibrational circular dichroism. It yields an equation for the density of the induced electronic magnetic moment within a molecule; it also yields a new relation connecting the electric-field shielding at nucleus I of a molecule in an applied magnetic field of frequency ω to the derivative of me with respect to the velocity of nucleus I, regarded as a parameter in the electronic wave function. Within our theory, the derivative of me with respect to nuclear velocity separates into quantum-mechanical and classical components in close analogy with the Hellmann-Feynman theorem for forces on nuclei. In matrix-element form, results from our theory are identical to those obtained with nonadiabatic perturbation theory, to leading order. In general, the leading nonadiabatic corrections to electronic properties are determined directly by the electrons' response to the changes in the nuclear Coulomb field, when the nuclei move.
10. TensorLy: Tensor Learning in Python
NARCIS (Netherlands)
Kossaifi, Jean; Panagakis, Yannis; Pantic, Maja
2016-01-01
Tensor methods are gaining increasing traction in machine learning. However, there are scant to no resources available to perform tensor learning and decomposition in Python. To answer this need we developed TensorLy. TensorLy is a state of the art general purpose library for tensor learning.
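A minimal usage sketch of the library described above is given below; the function names (parafac, cp_to_tensor, norm) follow recent TensorLy releases and may differ in older versions, so treat the exact API as an assumption.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Rank-3 CP (CANDECOMP/PARAFAC) decomposition of a random 3-way array
X = tl.tensor(np.random.default_rng(0).random((8, 9, 10)))
cp = parafac(X, rank=3, n_iter_max=200)
X_hat = tl.cp_to_tensor(cp)              # rebuild the tensor from the factors
print("relative reconstruction error:", float(tl.norm(X - X_hat) / tl.norm(X)))
```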
11. Protein structure refinement using a quantum mechanics-based chemical shielding predictor.
Science.gov (United States)
Bratholm, Lars A; Jensen, Jan H
2017-03-01
The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of a protein backbone and CB chemical shifts (ProCS15, PeerJ , 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force field geometry optimized X-ray structures as starting points: simulated annealing of the starting structure and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1-0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included the ProCS15 predicted chemical shifts have RMSD values relative to experiments that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å difference for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural
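The refinement described above combines a chemical-shift likelihood with Metropolis-style sampling; the sketch below shows only the generic MCMC skeleton under simple Gaussian assumptions, with `predict_shifts` and `propose_move` as hypothetical stand-ins for the QM-based predictor and the structural move set (it is not the ProCS15 implementation).

```python
import numpy as np

def shift_energy(pred, exp, sigma=1.0):
    """Gaussian-style pseudo-energy comparing predicted and experimental shifts."""
    return 0.5 * np.sum(((pred - exp) / sigma) ** 2)

def metropolis_step(coords, exp_shifts, predict_shifts, propose_move, beta, rng):
    """One Metropolis step on the chemical-shift pseudo-energy."""
    trial = propose_move(coords, rng)
    dE = (shift_energy(predict_shifts(trial), exp_shifts)
          - shift_energy(predict_shifts(coords), exp_shifts))
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        return trial                      # accept the proposed structure
    return coords                         # reject and keep the current one
```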
12. TensorLy: Tensor Learning in Python
OpenAIRE
Kossaifi, Jean; Panagakis, Yannis; Pantic, Maja
2016-01-01
Tensor methods are gaining increasing traction in machine learning. However, there are scant to no resources available to perform tensor learning and decomposition in Python. To answer this need we developed TensorLy. TensorLy is a state of the art general purpose library for tensor learning. Written in Python, it aims at following the same standard adopted by the main projects of the Python scientific community and fully integrating with these. It allows for fast and straightforward tensor d...
13. Calculation of fluorine chemical shift tensors for the interpretation of oriented (19)F-NMR spectra of gramicidin A in membranes.
Science.gov (United States)
Sternberg, Ulrich; Klipfel, Marco; Grage, Stephan L; Witter, Raiker; Ulrich, Anne S
2009-08-28
A semi-empirical method for the prediction of chemical shifts, based on bond polarization theory, has recently been introduced for (13)C. Here, we extended this approach to calculate the (19)F chemical shift tensors of fluorine bound to aromatic rings and in aliphatic CF(3) groups. For the necessary parametrization, ab initio chemical shift calculations were performed at the MP2 level for a set of fluorinated molecules including tryptophan. The bond polarization parameters obtained were used to calculate the (19)F chemical shift tensors for several crystalline molecules, and to reference the calculated values on a chemical shift scale relative to CFCl(3). As a first biophysical application, we examined the distribution of conformations of a (19)F-labeled tryptophan side chain in the membrane-bound ion channel peptide, gramicidin A. The fluorine chemical shift tensors were calculated from snapshots of a molecular dynamics simulation employing the (19)F-parametrized bond polarization theory. In this MD simulation, published (2)H quadrupolar and (15)N-(1)H dipolar couplings of the indole ring were used as orientational constraints to determine the conformational distribution of the 5F-Trp(13) side chain. These conformations were then used to interpret the spectra of (19)F-labeled gramicidin A in fluid and gel phase lipid bilayers.
14. Relativistic calculation of nuclear magnetic shielding tensor using the regular approximation to the normalized elimination of the small component. III. Introduction of gauge-including atomic orbitals and a finite-size nuclear model.
Science.gov (United States)
Hamaya, S; Maeda, H; Funaki, M; Fukui, H
2008-12-14
The relativistic calculation of nuclear magnetic shielding tensors in hydrogen halides is performed using the second-order regular approximation to the normalized elimination of the small component (SORA-NESC) method with the inclusion of the perturbation terms from the metric operator. This computational scheme is denoted as SORA-Met. The SORA-Met calculation yields anisotropies, Δσ = σ∥ − σ⊥, for the halogen nuclei in hydrogen halides that are too small. In the NESC theory, the small component of the spinor is combined to the large component via the operator σ·πU/2c, in which π = p + A, U is a nonunitary transformation operator, and c ≈ 137.036 a.u. is the velocity of light. The operator U depends on the vector potential A (i.e., the magnetic perturbations in the system) with the leading order c⁻², and the magnetic perturbation terms of U contribute to the Hamiltonian and metric operators of the system in the leading order c⁻⁴. It is shown that the small Δσ for halogen nuclei found in our previous studies is related to the neglect of the U(0,1) perturbation operator of U, which is independent of the external magnetic field and of the first order with respect to the nuclear magnetic dipole moment. Introduction of gauge-including atomic orbitals and a finite-size nuclear model is also discussed.
15. Tensor surgery and tensor rank
NARCIS (Netherlands)
M. Christandl (Matthias); J. Zuiddam (Jeroen)
2018-01-01
We introduce a method for transforming low-order tensors into higher-order tensors and apply it to tensors defined by graphs and hypergraphs. The transformation proceeds according to a surgery-like procedure that splits vertices, creates and absorbs virtual edges and inserts new vertices
16. Anisotropic compositional expansion and chemical potential for amorphous lithiated silicon under stress tensor.
Science.gov (United States)
Levitas, Valery I; Attariani, Hamed
2013-01-01
Si is a promising anode material for Li-ion batteries, since it absorbs large amounts of Li. However, insertion of Li leads to 334% of volumetric expansion, huge stresses, and fracture; it can be suppressed by utilizing nanoscale anode structures. Continuum approaches to stress relaxation in LixSi, based on plasticity theory, are unrealistic, because the yield strength of LixSi is much higher than the generated stresses. Here, we suggest that stress relaxation is due to anisotropic (tensorial) compositional straining that occurs during insertion-extraction at any deviatoric stresses. Developed theory describes known experimental and atomistic simulation data. A method to reduce stresses is predicted and confirmed by known experiments. Chemical potential has an additional contribution due to deviatoric stresses, which leads to increases in the driving force both for insertion and extraction. The results have conceptual and general character and are applicable to any material systems.
17. 2D relayed anisotropy correlation NMR: Characterization of the 13C' chemical shift tensor orientation in the peptide plane of the dipeptide AibAib
International Nuclear Information System (INIS)
Heise, Bert; Leppert, Joerg; Wenschuh, Holger; Ohlenschlaeger, Oliver; Goerlach, Matthias; Ramachandran, Ramadurai
2001-01-01
An approach to the determination of the orientation of the carbonyl chemical shift (CS) tensor in a 13C'-15N-1H dipolar coupled spin network is proposed. The method involves the measurement of the Euler angles of the 13C'-15N and 15N-1H dipolar vectors in the 13C' CS tensor principal axes system, respectively, via a 13C-15N REDOR experiment and by a 2D relayed anisotropy correlation of the 13C' CSA (ω2) and 15N-1H dipolar interaction (ω1). Via numerical simulations the sensitivity of the ω1 cross sections of the 2D spectrum to the Euler angles of the 15N-1H bond vector in the 13C' CSA frame is shown. Employing the procedure outlined in this work, we have determined the orientation of the 13C' CS tensor in the peptide plane of the dipeptide AibAib-NH2 (Aib = α-aminoisobutyric acid). The Euler angles are found to be (χCN, ψCN) = (34 deg. ± 2 deg., 88 deg. ± 2 deg.) and (χNH, ψNH) = (90 deg. ± 10 deg., 80 deg. ± 10 deg.). From the measured Euler angles it is seen that the σ33 and σ22 components of the 13C' CS tensor approximately lie in the peptide plane.
18. Protein structure refinement using a quantum mechanics-based chemical shielding predictor
DEFF Research Database (Denmark)
Bratholm, Lars Andersen; Jensen, Jan Halborg
2017-01-01
The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor...... of a protein backbone and CB chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic...... geometry optimized X-ray structures as starting points: simulated annealing of the starting structure and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers...
International Nuclear Information System (INIS)
Hosoya, Yasuaki
1993-01-01
In the present invention, the thickness of the radiation shields is minimized to save the quantity of shields thereby utilizing spaces in a facility effectively. That is, the radiation shields of the present invention comprise first and second shields forming stepwise gaps. They are disposed between a high dose region and a low dose region. The first and second shields have a feature in that the thickness thereof can be set to a size capable of shielding the gaps in accordance with the strength of the radiation source to be shielded. With such a constitution, the thickness of the shields of the radiation processing facility can be minimized. Accordingly, the quantity of the shields can be greatly saved. Spaces in the facility can be utilized effectively. (I.S.)
20. Comparison of Magnetic Susceptibility Tensor and Diffusion Tensor of the Brain.
Science.gov (United States)
Li, Wei; Liu, Chunlei
2013-10-01
Susceptibility tensor imaging (STI) provides a novel approach for noninvasive assessment of the white matter pathways of the brain. Using mouse brain ex vivo, we compared STI with diffusion tensor imaging (DTI) in terms of tensor values, principal tensor values, anisotropy values, and tensor orientations. Despite the completely different biophysical underpinnings, magnetic susceptibility tensors and diffusion tensors show many similarities in the tensor and principal tensor images; for example, the tensors perpendicular to the fiber direction have the highest gray-white matter contrast, and the largest principal tensor is along the fiber direction. Compared to DTI fractional anisotropy, the susceptibility anisotropy provides much higher sensitivity to the chemical composition of the white matter, especially myelin. The high sensitivity can be further enhanced with the perfusion of ProHance, a gadolinium-based contrast agent. Regarding the tensor orientations, the direction of the largest principal susceptibility tensor agrees with that of diffusion tensors in major white matter fiber bundles. STI fiber tractography can reconstruct the fiber pathways for the whole corpus callosum and for white matter fiber bundles that are in close contact but in different orientations. There are some differences between susceptibility and diffusion tensor orientations, which are likely due to the limitations in the current STI reconstruction. With the development of more accurate reconstruction methods, STI holds promise for probing the white matter micro-architectures with more anatomical details and higher chemical sensitivity.
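For reference, the fractional anisotropy and principal (fiber) direction used in such comparisons can be obtained from any symmetric 3x3 tensor as in the sketch below (standard formulas, not code from the study).

```python
import numpy as np

def fractional_anisotropy(D):
    """Fractional anisotropy of a symmetric 3x3 (diffusion or susceptibility) tensor."""
    lam = np.linalg.eigvalsh(D)                 # eigenvalues, ascending
    mean = lam.mean()
    num = np.sqrt(np.sum((lam - mean) ** 2))
    den = np.sqrt(np.sum(lam ** 2))
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

def principal_direction(D):
    """Eigenvector of the largest eigenvalue, i.e. the putative fiber direction."""
    w, v = np.linalg.eigh(D)
    return v[:, np.argmax(w)]
```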
1. Electromagnetic shielding
International Nuclear Information System (INIS)
Tzeng, Wen-Shian V.
1991-01-01
Electromagnetic interference (EMI) shielding materials are well known in the art in forms such as gaskets, caulking compounds, adhesives, coatings and the like for a variety of EMI shielding purposes. In the past, where high shielding performance is necessary, EMI shielding has tended to use silver particles or silver coated copper particles dispersed in a resin binder. More recently, aluminum core silver coated particles have been used to reduce costs while maintaining good electrical and physical properties. (author). 8 figs
2. Shielding container
International Nuclear Information System (INIS)
Darling, K.A.M.
1981-01-01
A shielding container incorporates a dense shield, for example of depleted uranium, cast around a tubular member of curvilinear configuration for accommodating a radiation source capsule. A lining for the tubular member, in the form of a close-coiled flexible guide, provides easy replaceability to counter wear while the container is in service. Container life is extended, and maintenance costs are reduced. (author)
3. Shielding Effectiveness of Laminated Shields
Directory of Open Access Journals (Sweden)
B. P. Rao
2008-12-01
Shielding prevents coupling of undesired radiated electromagnetic energy into equipment otherwise susceptible to it. In view of this, some studies on the shielding effectiveness of laminated shields with conductors and conductive polymers using plane-wave theory are carried out in this paper. The plane wave shielding effectiveness of new combinations of these materials is evaluated as a function of frequency and thickness of material. Conductivity of the polymers, measured in previous investigations by the cavity perturbation technique, is used to compute the overall reflection and transmission coefficients of single and multiple layers of the polymers. With recent advances in synthesizing stable, highly conductive polymers, these lightweight, mechanically strong materials appear to be viable alternatives to metals for EMI shielding.
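As a rough orientation to the plane-wave theory invoked above, the sketch below evaluates textbook far-field absorption and reflection losses for a single homogeneous conductive sheet (Schelkunoff-type approximations, ignoring multiple reflections); it is not the laminated multilayer model of the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi                                           # vacuum permeability

def absorption_loss_db(t, f, sigma, mu_r=1.0):
    """Absorption loss A = 8.686 * t / delta (dB) for thickness t (m) at frequency f (Hz)."""
    delta = 1.0 / np.sqrt(np.pi * f * mu_r * MU0 * sigma)    # skin depth
    return 8.686 * t / delta

def reflection_loss_db(f, sigma, mu_r=1.0):
    """Plane-wave reflection loss from the impedance mismatch at the shield surface."""
    eta0 = 377.0                                             # free-space impedance (ohm)
    eta_s = np.sqrt(1j * 2 * np.pi * f * mu_r * MU0 / sigma) # shield intrinsic impedance
    return 20.0 * np.log10(abs((eta0 + eta_s) ** 2 / (4.0 * eta0 * eta_s)))

# Hypothetical example: 0.1 mm sheet, sigma = 1e4 S/m, at 1 GHz
print(absorption_loss_db(1e-4, 1e9, 1e4), reflection_loss_db(1e9, 1e4))
```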
4. REACTOR SHIELD
Science.gov (United States)
Wigner, E.P.; Ohlinger, L.E.; Young, G.J.; Weinberg, A.M.
1959-02-17
Radiation shield construction is described for a nuclear reactor. The shield is comprised of a plurality of steel plates arranged in parallel spaced relationship within a peripheral shell. Reactor coolant inlet tubes extend at right angles through the plates and baffles are arranged between the plates at right angles thereto and extend between the tubes to create a series of zigzag channels between the plates for the circulation of coolant fluid through the shield. The shield may be divided into two main sections; an inner section adjacent the reactor container and an outer section spaced therefrom. Coolant through the first section may be circulated at a faster rate than coolant circulated through the outer section since the area closest to the reactor container is at a higher temperature and is more radioactive. The two sections may have separate cooling systems to prevent the coolant in the outer section from mixing with the more contaminated coolant in the inner section.
5. Nuclear shields
International Nuclear Information System (INIS)
Linares, R.C.; Nienart, L.F.; Toelcke, G.A.
1976-01-01
A process is described for preparing melt-processable nuclear shielding compositions from chloro-fluoro substituted ethylene polymers, particularly PCTFE and E-CTFE, containing 1 to 75 percent by weight of a gadolinium compound. 13 claims, no drawings
6. Tensor Transpose and Its Properties
OpenAIRE
Pan, Ran
2014-01-01
Tensor transpose is a higher order generalization of matrix transpose. In this paper, we use permutations and the symmetry group to define the tensor transpose. Then we discuss the classification and composition of tensor transposes. Properties of tensor transpose are studied in relation to tensor multiplication, tensor eigenvalues, tensor decompositions and tensor rank.
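The permutation view of tensor transposition, and the fact that composing two transposes gives another transpose with the composed permutation, can be checked numerically as in the following sketch (an illustration in NumPy, not code from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((2, 3, 4))

p = (2, 0, 1)                         # one tensor transpose (axis permutation)
q = (1, 2, 0)                         # another

# Axis i of np.transpose(T, p) is axis p[i] of T, so composing p then q
# corresponds to the permutation i -> p[q[i]]:
lhs = np.transpose(np.transpose(T, p), q)
rhs = np.transpose(T, [p[i] for i in q])
assert np.allclose(lhs, rhs)          # composition of transposes is a transpose
```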
7. Influence of the O-phosphorylation of serine, threonine and tyrosine in proteins on the amidic N-15 chemical shielding anisotropy tensors
Czech Academy of Sciences Publication Activity Database
Emmer, Jiří; Vavrinská, A.; Sychrovský, V.; Benda, L.; Kříž, Z.; Koča, J.; Boelens, R.; Sklenář, V.; Trantírek, L.
2013-01-01
Vol. 55, No. 1 (2013), pp. 59-70. ISSN 0925-2738. Institutional support: RVO:60077344. Keywords: CSA * phosphorylation * amidic nitrogen * serine * threonine * tyrosine * protein * NMR. Subject RIV: EB - Genetics; Molecular Biology. OBOR OECD: Biochemistry and molecular biology. Impact factor: 3.305, year: 2013
8. Influence of the O-phosphorylation of serine, threonine and tyrosine in proteins on the amidic N-15 chemical shielding anisotropy tensors
Czech Academy of Sciences Publication Activity Database
Emmer, J.; Vavrinská, A.; Sychrovský, Vladimír; Benda, Ladislav; Kříž, Z.; Koča, J.; Boelens, R.; Sklenář, V.; Trantírek, L.
2013-01-01
Vol. 55, No. 1 (2013), pp. 59-70. ISSN 0925-2738. R&D Projects: GA ČR GAP205/10/0228. Grant - others: CEITEC(XE) CZ.1.05/1.1.00/02.0068. Institutional support: RVO:61388963. Keywords: CSA * phosphorylation * amidic nitrogen * serine * threonine * tyrosine * protein * NMR. Subject RIV: CE - Biochemistry. Impact factor: 3.305, year: 2013
9. Tensors for physics
CERN Document Server
Hess, Siegfried
2015-01-01
This book presents the science of tensors in a didactic way. The various types and ranks of tensors and their physical basis are presented. Cartesian tensors are needed for the description of directional phenomena in many branches of physics and for the characterization of the anisotropy of material properties. The first sections of the book provide an introduction to the vector and tensor algebra and analysis, with applications to physics, at undergraduate level. Second rank tensors, in particular their symmetries, are discussed in detail. Differentiation and integration of fields, including generalizations of the Stokes law and the Gauss theorem, are treated. The physics relevant for the applications in mechanics, quantum mechanics, electrodynamics and hydrodynamics is presented. The second part of the book is devoted to tensors of any rank, at graduate level. Special topics are irreducible, i.e. symmetric traceless tensors, isotropic tensors, multipole potential tensors, spin tensors, integration and spin-...
10. Random tensors
CERN Document Server
Gurau, Razvan
2017-01-01
Written by the creator of the modern theory of random tensors, this book is the first self-contained introductory text to this rapidly developing theory. Starting from notions familiar to the average researcher or PhD student in mathematical or theoretical physics, the book presents in detail the theory and its applications to physics. The recent detections of the Higgs boson at the LHC and gravitational waves at LIGO mark new milestones in Physics confirming long standing predictions of Quantum Field Theory and General Relativity. These two experimental results only reinforce today the need to find an underlying common framework of the two: the elusive theory of Quantum Gravity. Over the past thirty years, several alternatives have been proposed as theories of Quantum Gravity, chief among them String Theory. While these theories are yet to be tested experimentally, key lessons have already been learned. Whatever the theory of Quantum Gravity may be, it must incorporate random geometry in one form or another....
11. Tensor rank is not multiplicative under the tensor product
NARCIS (Netherlands)
M. Christandl (Matthias); A. K. Jensen (Asger Kjærulff); J. Zuiddam (Jeroen)
2018-01-01
The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an ℓ-tensor. The tensor product of s and t is a (k+ℓ)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the
12. Tensor rank is not multiplicative under the tensor product
NARCIS (Netherlands)
M. Christandl (Matthias); A. K. Jensen (Asger Kjærulff); J. Zuiddam (Jeroen)
2017-01-01
The tensor rank of a tensor is the smallest number r such that the tensor can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an l-tensor. The tensor product of s and t is a (k + l)-tensor (not to be confused with the "tensor Kronecker product" used in
13. Scenarios of groundwater chemical evolution in a region of the Canadian Shield based on multivariate statistical analysis
Directory of Open Access Journals (Sweden)
Ombeline Ghesquière
2015-09-01
New hydrological insights for the region: Four sample clusters were identified. Cluster 1 is composed of low-salinity Ca-HCO3 groundwater corresponding to recently infiltrated water in surface granular aquifers in recharge areas. Cluster 4 Na-(HCO3-Cl) groundwater is more saline and corresponds to more evolved groundwater probably from confined bedrock aquifers. Cluster 2 and Cluster 3 (Ca-Na-HCO3 and Ca-HCO3 groundwater, respectively) correspond to mixed or intermediate water between Cluster 1 and Cluster 4 from possibly interconnected granular and bedrock aquifers. This study identifies groundwater recharge, water–rock interactions, ion exchange, solute diffusion from marine clay aquitards, saltwater intrusion and also hydraulic connections between the Canadian Shield and the granular deposits, as the main processes affecting the hydrogeochemical evolution of groundwater in the CHCN region.
14. Tensor rank is not multiplicative under the tensor product
OpenAIRE
Christandl, Matthias; Jensen, Asger Kjærulff; Zuiddam, Jeroen
2017-01-01
The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an l-tensor. The tensor product of s and t is a (k + l)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the connection between restrictions and degenerations. A result of our study is that tensor rank is not in general multiplicative under the tensor product. This answers a question of Draisma and Saptharishi. Specif...
15. Tensor structure for Nori motives
OpenAIRE
Barbieri-Viale, Luca; Huber, Annette; Prest, Mike
2018-01-01
We construct a tensor product on Freyd's universal abelian category attached to an additive tensor category or a tensor quiver and establish a universal property. This is used to give an alternative construction for the tensor product on Nori motives.
16. Tensor eigenvalues and their applications
CERN Document Server
Qi, Liqun; Chen, Yannan
2018-01-01
This book offers an introduction to applications prompted by tensor analysis, especially by the spectral tensor theory developed in recent years. It covers applications of tensor eigenvalues in multilinear systems, exponential data fitting, tensor complementarity problems, and tensor eigenvalue complementarity problems. It also addresses higher-order diffusion tensor imaging, third-order symmetric and traceless tensors in liquid crystals, piezoelectric tensors, strong ellipticity for elasticity tensors, and higher-order tensors in quantum physics. This book is a valuable reference resource for researchers and graduate students who are interested in applications of tensor eigenvalues.
17. Solvent Dependence of (14)N Nuclear Magnetic Resonance Chemical Shielding Constants as a Test of the Accuracy of the Computed Polarization of Solute Electron Densities by the Solvent.
Science.gov (United States)
Ribeiro, Raphael F; Marenich, Aleksandr V; Cramer, Christopher J; Truhlar, Donald G
2009-09-08
Although continuum solvation models have now been shown to provide good quantitative accuracy for calculating free energies of solvation, questions remain about the accuracy of the perturbed solute electron densities and properties computed from them. Here we examine those questions by applying the SM8, SM8AD, SMD, and IEF-PCM continuum solvation models in combination with the M06-L density functional to compute the (14)N magnetic resonance nuclear shieldings of CH3CN, CH3NO2, CH3NCS, and CH3ONO2 in multiple solvents, and we analyze the dependence of the chemical shifts on solvent dielectric constant. We examine the dependence of the computed chemical shifts on the definition of the molecular cavity (both united-atom models and models based on superposed individual atomic spheres) and three kinds of treatments of the electrostatics, namely the generalized Born approximation with the Coulomb field approximation, the generalized Born model with asymmetric descreening, and models based on approximate numerical solution schemes for the nonhomogeneous Poisson equation. Our most systematic analyses are based on the computation of relative (14)N chemical shifts in a series of solvents, and we compare calculated shielding constants relative to those in CCl4 for various solvation models and density functionals. While differences in the overall results are found to be reasonably small for different solvation models and functionals, the SMx models SM8, and SM8AD, using the same cavity definitions (which for these models means the same atomic radii) as those employed for the calculation of free energies of solvation, exhibit the best agreement with experiment for every functional tested. This suggests that in addition to predicting accurate free energies of solvation, the SM8 and SM8AD generalized Born models also describe the solute polarization in a manner reasonably consistent with experimental (14)N nuclear magnetic resonance spectroscopy. Models based on the
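The "chemical shifts relative to CCl4" analysis described above amounts to differencing computed isotropic shieldings against the reference solvent; a minimal sketch with hypothetical numbers is shown below.

```python
# Relative 14N chemical shifts from computed isotropic shieldings (ppm),
# referenced to the value computed in CCl4: delta_rel = sigma(CCl4) - sigma(solvent).
shieldings = {          # hypothetical isotropic shieldings, not values from the paper
    "CCl4": -10.2,
    "water": -18.7,
    "acetone": -15.1,
}
ref = shieldings["CCl4"]
relative_shifts = {solvent: ref - sigma for solvent, sigma in shieldings.items()}
print(relative_shifts)  # positive values are deshielded relative to CCl4
```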
18. Tensors: a Brief Introduction
OpenAIRE
Comon, Pierre
2014-01-01
Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor decomposition plays a central role in identification of underdetermined mixtures. Despite some similarities, CP and Singular Value Decomposition (SVD) are quite different. More generally, tensors and matrices enjoy different properties, as pointed out in this brief survey.
19. Bowen-York tensors
International Nuclear Information System (INIS)
Beig, Robert; Krammer, Werner
2004-01-01
For a conformally flat 3-space, we derive a family of linear second-order partial differential operators which sends vectors into trace-free, symmetric 2-tensors. These maps, which are parametrized by conformal Killing vectors on the 3-space, are such that the divergence of the resulting tensor field depends only on the divergence of the original vector field. In particular, these maps send source-free electric fields into TT tensors. Moreover, if the original vector field is the Coulomb field on R³\{0}, the resulting tensor fields on R³\{0} are nothing but the family of TT tensors originally written by Bowen and York.
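For orientation, the classic Bowen-York TT tensor for a single source carrying ADM linear momentum P^i is usually written as below (quoted from the standard literature as a reference point; sign and normalization conventions differ between references), with n^i = x^i/r the radial unit vector and f^{ij} the flat metric:

```latex
\bar{K}^{ij} \;=\; \frac{3}{2\,r^{2}}
  \Bigl[\,P^{i} n^{j} + P^{j} n^{i}
        - \bigl(f^{ij} - n^{i} n^{j}\bigr) P_{k} n^{k}\,\Bigr]
```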
20. Relativistic heavy-atom effects on heavy-atom nuclear shieldings
Science.gov (United States)
Lantto, Perttu; Romero, Rodolfo H.; Gómez, Sergio S.; Aucar, Gustavo A.; Vaara, Juha
2006-11-01
The principal relativistic heavy-atom effects on the nuclear magnetic resonance (NMR) shielding tensor of the heavy atom itself (HAHA effects) are calculated using ab initio methods at the level of the Breit-Pauli Hamiltonian. This is the first systematic study of the main HAHA effects on nuclear shielding and chemical shift by perturbational relativistic approach. The dependence of the HAHA effects on the chemical environment of the heavy atom is investigated for the closed-shell X2+, X4+, XH2, and XH3- (X =Si-Pb) as well as X3+, XH3, and XF3 (X =P-Bi) systems. Fully relativistic Dirac-Hartree-Fock calculations are carried out for comparison. It is necessary in the Breit-Pauli approach to include the second-order magnetic-field-dependent spin-orbit (SO) shielding contribution as it is the larger SO term in XH3-, XH3, and XF3, and is equally large in XH2 as the conventional, third-order field-independent spin-orbit contribution. Considering the chemical shift, the third-order SO mechanism contributes two-thirds of the difference of ˜1500ppm between BiH3 and BiF3. The second-order SO mechanism and the numerically largest relativistic effect, which arises from the cross-term contribution of the Fermi contact hyperfine interaction and the relativistically modified spin-Zeeman interaction (FC/SZ-KE), are isotropic and practically independent of electron correlation effects as well as the chemical environment of the heavy atom. The third-order SO terms depend on these factors and contribute both to heavy-atom shielding anisotropy and NMR chemical shifts. While a qualitative picture of heavy-atom chemical shifts is already obtained at the nonrelativistic level of theory, reliable shifts may be expected after including the third-order SO contributions only, especially when calculations are carried out at correlated level. The FC/SZ-KE contribution to shielding is almost completely produced in the s orbitals of the heavy atom, with values diminishing with the principal
1. Tensor rank is not multiplicative under the tensor product
DEFF Research Database (Denmark)
Christandl, Matthias; Jensen, Asger Kjærulff; Zuiddam, Jeroen
2018-01-01
The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an ℓ-tensor. The tensor product of s and t is a (k+ℓ)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the connection...... between restrictions and degenerations. A result of our study is that tensor rank is not in general multiplicative under the tensor product. This answers a question of Draisma and Saptharishi. Specifically, if a tensor t has border rank strictly smaller than its rank, then the tensor rank of t...... is not multiplicative under taking a sufficiently high tensor product power. The "tensor Kronecker product" from algebraic complexity theory is related to our tensor product but different, namely it multiplies two k-tensors to get a k-tensor. Nonmultiplicativity of the tensor Kronecker product has been known since...
2. Cartesian tensors an introduction
CERN Document Server
Temple, G
2004-01-01
This undergraduate text provides an introduction to the theory of Cartesian tensors, defining tensors as multilinear functions of direction, and simplifying many theorems in a manner that lends unity to the subject. The author notes the importance of the analysis of the structure of tensors in terms of spectral sets of projection operators as part of the very substance of quantum theory. He therefore provides an elementary discussion of the subject, in addition to a view of isotropic tensors and spinor analysis within the confines of Euclidean space. The text concludes with an examination of t
3. Theoretical study of lithium clusters by electronic stress tensor
International Nuclear Information System (INIS)
Ichikawa, Kazuhide; Nozaki, Hiroo; Komazawa, Naoya; Tachibana, Akitomo
2012-01-01
We study the electronic structure of small lithium clusters Li_n (n = 2-8) using the electronic stress tensor. We find that the three eigenvalues of the electronic stress tensor of the Li clusters are negative and degenerate, just like the stress tensor of a liquid. This leads us to propose that we may characterize a metallic bond in terms of the electronic stress tensor. Our proposal is that in addition to the negativity of the three eigenvalues of the electronic stress tensor, their degeneracy characterizes some aspects of the metallic nature of chemical bonding. To quantify the degree of degeneracy, we use the differential eigenvalues of the electronic stress tensor. By comparing the Li clusters and hydrocarbon molecules, we show that the sign of the largest eigenvalue and the differential eigenvalues could be useful indices to evaluate the metallicity or covalency of a chemical bond.
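The negativity and near-degeneracy criteria described above can be inspected numerically once a local stress tensor is available; the sketch below uses the eigenvalue spread only as a rough proxy for degeneracy, since the paper's "differential eigenvalues" are a specific index whose exact definition is not reproduced here.

```python
import numpy as np

def stress_eigen_summary(S):
    """Eigenvalues of a symmetric 3x3 electronic stress tensor and simple indicators."""
    lam = np.linalg.eigvalsh(S)            # eigenvalues in ascending order
    all_negative = bool(np.all(lam < 0))   # liquid/metallic-like signature
    spread = float(lam[-1] - lam[0])       # 0 for perfectly degenerate eigenvalues
    return lam, all_negative, spread
```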
4. Theoretical study of the effective chemical shielding anisotropy (CSA) in peptide backbone, rating the impact of CSAs on the cross-correlated relaxations in L-alanyl-L-alanine
Czech Academy of Sciences Publication Activity Database
2009-01-01
Vol. 113, No. 15 (2009), pp. 5273-5281. ISSN 1520-6106. R&D Projects: GA AV ČR IAA400550701; GA AV ČR IAA400550702; GA MŠk MEB060705. Institutional research plan: CEZ:AV0Z40550506. Keywords: chemical shielding anisotropy * CSA * L-alanyl-L-alanine. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 3.471, year: 2009
5. Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI
Science.gov (United States)
Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.
2015-01-01
Purpose: Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods: Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results: EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion: GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085
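For contrast with the LI scheme above, two of the baseline interpolants it is compared against (Euclidean and log-Euclidean) are simple to state; the sketch below illustrates them for symmetric positive-definite tensors (a generic illustration, not the paper's implementation).

```python
import numpy as np
from scipy.linalg import expm, logm

def interp_euclidean(A, B, t):
    """Straight-line interpolation of tensor components (can distort shape)."""
    return (1.0 - t) * A + t * B

def interp_log_euclidean(A, B, t):
    """Interpolation in the matrix-log domain; preserves positive-definiteness."""
    L = (1.0 - t) * logm(A) + t * logm(B)
    return np.real(expm(L))               # discard tiny imaginary numerical noise
```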
Directory of Open Access Journals (Sweden)
2017-10-01
Shields played a major role in the armament system of the Scythians. Made from organic materials, they are poorly traced in the materials of archaeological excavations. Moreover, the scaly surface of shields was often mistaken in practice for the remnants of scale armor. E. V. Chernenko was able to discern the difference between shields' scaly plates and armor scales. The top edge of the scales was bent inwards, and shield plates had a wire fixation. These observations made it possible to significantly increase the number of shields found in the burial complexes of the Scythians. The comparison of archaeological materials and the images of Scythian warriors allows the main forms of Scythian shields to be distinguished. All shields are divided into fencing shields and cover shields. The fencing shields include round wooden shields reinforced with bronze sheet, and round moon-shaped shields with a notch at the top and a metal scaly surface. They came to the Scythians under Greek influence and are known in the monuments of the 4th century BC. Oval shields with a scaly surface (back cover shields) were used by the Scythian cavalry. They protected the rider in case of frontal attack, and were moved back in case of maneuver or close-in fighting. Scythian battle tactics were based on rapidly approaching the enemy, throwing spears and then rapidly withdrawing. Spears stuck in the enemies' shields, forcing them to drop the shields and expose themselves, and at this stage of the battle the archers attacked the disorganized ranks of the enemy. This was followed by the stage of close combat. An oval wooden shield with a leather covering was used by the Scythian infantry and spearmen. Rectangular shields, including wooden shields and shields plaited from rods, represented a special category. The top of such a shield was made of wood, and a plaited pad on a leather base was attached to it. This shield could be a reliable protection from arrows, but it could not protect against javelins
7. Meromorphic tensor categories
OpenAIRE
Soibelman, Yan
1997-01-01
We introduce the notion of meromorphic tensor category and illustrate it in several examples. They include representations of quantum affine algebras, chiral algebras of Beilinson and Drinfeld, G-vertex algebras of Borcherds, and representations of GL over a local field. Hopefully the formalism will accommodate various tensor structures arising in relation to the quantized Knizhnik-Zamolodchikov equations and deformed CFT
8. Simulation of a Shielded Thermocouple | Berntsson | Rwanda Journal
African Journals Online (AJOL)
A shielded thermocouple is a measurement device used for monitoring the temperature in chemically, or mechanically, hostile environments. The sensitive parts of the thermocouple are protected by a shielding layer. In this work we use numerical methods to study the accuracy and dynamic properties of a shielded ...
International Nuclear Information System (INIS)
Nakagawa, Takahiro; Yamagami, Makoto.
1996-01-01
A fixed shielding member made of a radiation shielding material is constituted in perpendicular to an opening formed on radiation shielding walls. The fixed shielding member has one side opened and has other side, the upper portion and the lower portion disposed in close contact with the radiation shielding walls. Movable shielding members made of a radiation shielding material are each disposed openably on both side of the fixed shielding member. The movable shielding member has a shaft as a fulcrum on one side thereof for connecting it to the radiation shielding walls. The other side has a handle attached for opening/closing the movable shielding member. Upon access of an operator, when each one of the movable shielding members is opened/closed on every time, leakage of linear or scattered radiation can be prevented. Even when both of the movable shielding members are opened simultaneously, the fixed shielding member and the movable shielding members form labyrinth to prevent leakage of linear radioactivity. (I.N.)
10. Robust tensor estimation in diffusion tensor imaging
Science.gov (United States)
Maximov, Ivan I.; Grinberg, Farida; Jon Shah, N.
2011-12-01
The signal response measured in diffusion tensor imaging is subject to detrimental influences caused by noise. Noise fields arise due to various contributions such as thermal and physiological noise and sources related to the hardware imperfection. As a result, diffusion tensors estimated by different linear and non-linear least squares methods in absence of a proper noise correction tend to be substantially corrupted. In this work, we propose an advanced tensor estimation approach based on the least median squares method of the robust statistics. Both constrained and non-constrained versions of the method are considered. The performance of the developed algorithm is compared to that of the conventional least squares method and of the alternative robust methods proposed in the literature. Two examples of simulated diffusion attenuations and experimental in vivo diffusion data sets were used as a basis for comparison. The robust algorithms were shown to be advantageous compared to the least squares method in the cases where elimination of the outliers is desirable. Additionally, the constraints were applied in order to prevent generation of the non-positive definite tensors and reduce related artefacts in the maps of fractional anisotropy. The developed method can potentially be exploited also by other MR techniques where a robust regression or outlier localisation is required.
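A hedged sketch of a least-median-of-squares fit for the log-linear diffusion model log S = log S0 - B d; the function and parameter names are illustrative, and the published method additionally applies positive-definiteness constraints that are omitted here.

```python
import numpy as np

def lms_tensor_fit(B, log_signal, n_subsets=200, subset_size=7, rng=None):
    """Least-median-of-squares fit of the 6 tensor elements (plus log S0).
    Repeatedly fit small random subsets by ordinary least squares and keep
    the candidate whose *median* squared residual over all measurements is
    smallest, which makes the estimate robust to outliers."""
    rng = np.random.default_rng(rng)
    A = np.hstack([np.ones((B.shape[0], 1)), -B])   # columns: log S0, d1..d6
    best, best_med = None, np.inf
    for _ in range(n_subsets):
        idx = rng.choice(A.shape[0], subset_size, replace=False)
        coef, *_ = np.linalg.lstsq(A[idx], log_signal[idx], rcond=None)
        med = np.median((A @ coef - log_signal) ** 2)
        if med < best_med:
            best, best_med = coef, med
    return best   # [log S0, d1..d6] in whatever ordering the b-matrix B uses
```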
11. Solvent effects and dynamic averaging of 195Pt NMR shielding in cisplatin derivatives.
Science.gov (United States)
Truflandier, Lionel A; Sutter, Kiplangat; Autschbach, Jochen
2011-03-07
The influences of solvent effects and dynamic averaging on the (195)Pt NMR shielding and chemical shifts of cisplatin and three cisplatin derivatives in aqueous solution were computed using explicit and implicit solvation models. Within the density functional theory framework, these simulations were carried out by combining ab initio molecular dynamics (aiMD) simulations for the phase space sampling with all-electron relativistic NMR shielding tensor calculations using the zeroth-order regular approximation. Structural analyses support the presence of a solvent-assisted "inverse" or "anionic" hydration previously observed in similar square-planar transition-metal complexes. Comparisons with computationally less demanding implicit solvent models show that error cancellation is ubiquitous when dealing with liquid-state NMR simulations. After aiMD averaging, the calculated chemical shifts for the four complexes are in good agreement with experiment, with relative deviations between theory and experiment of about 5% on average (1% of the Pt(II) chemical shift range). © 2011 American Chemical Society
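The dynamic-averaging step described above amounts to averaging the isotropic shielding over snapshots and converting to a chemical shift against a reference; a minimal sketch with placeholder numbers (not the authors' data) is:

```python
import numpy as np

def averaged_shift(snapshot_shieldings, sigma_ref):
    """Average the isotropic shieldings (ppm) computed for the aiMD
    snapshots, then convert to a chemical shift relative to the reference:
    delta = sigma_ref - <sigma_sample>."""
    return sigma_ref - np.mean(snapshot_shieldings)

# Placeholder shieldings (ppm) for a handful of snapshots.
print(averaged_shift([1234.0, 1240.5, 1228.9, 1237.2], sigma_ref=1400.0))
```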
12. STRUCTURAL STUDY AND INVESTIGATION OF NMR TENSORS ...
African Journals Online (AJOL)
NBO studies were performed to the second-order and perturbative estimates of donor-acceptor interaction have been done. The procedures of gauge-invariant atomic orbital (GIAO) and continuous-set-of-gauge-transformation (CSGT) were employed to calculate isotropic shielding, chemical shifts anisotropy and chemical ...
13. Tensor spherical harmonics and tensor multipoles. II. Minkowski space
International Nuclear Information System (INIS)
Daumens, M.; Minnaert, P.
1976-01-01
The bases of tensor spherical harmonics and of tensor multipoles discussed in the preceding paper are generalized in the Hilbert space of Minkowski tensor fields. The transformation properties of the tensor multipoles under Lorentz transformation lead to the notion of irreducible tensor multipoles. We show that the usual 4-vector multipoles are themselves irreducible, and we build the irreducible tensor multipoles of the second order. We also give their relations with the symmetric tensor multipoles defined by Zerilli for application to the gravitational radiation
14. Tensors and their applications
CERN Document Server
Islam, Nazrul
2006-01-01
About the Book: The book is written in an easy-to-read style with corresponding examples. The main aim of this book is to precisely explain the fundamentals of Tensors and their applications to Mechanics, Elasticity, Theory of Relativity, Electromagnetic, Riemannian Geometry and many other disciplines of science and engineering, in a lucid manner. The text has been explained section wise, every concept has been narrated in the form of definition, examples and questions related to the concept taught. The overall package of the book is highly useful and interesting for the people associated with the field. Contents: Preliminaries Tensor Algebra Metric Tensor and Riemannian Metric Christoffel's Symbols and Covariant Differentiation Riemann-Christoffel Tensor The e-Systems and the Generalized Kronecker Deltas Geometry Analytical Mechanics Curvature of a Curve, Geodesic Parallelism of Vectors Ricci's Coefficients of Rotation and Congruence Hyper Surfaces
15. Symmetric Tensor Decomposition
DEFF Research Database (Denmark)
Brachat, Jerome; Comon, Pierre; Mourrain, Bernard
2010-01-01
We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....
16. Nuclear Magnetic Shielding of Monoboranes: Calculation and Assessment of B-11 NMR Chemical Shifts in Planar BX3 and in Tetrahedral [BX4](-) Systems
Czech Academy of Sciences Publication Activity Database
Macháček, Jan; Bühl, M.; Fanfrlík, Jindřich; Hnyk, Drahomír
2017-01-01
Roč. 121, č. 50 (2017), s. 9631-9637 ISSN 1089-5639 R&D Projects: GA ČR(CZ) GA17-08045S Institutional support: RVO:61388980 ; RVO:61388963 Keywords : Electrostatic potentials * Nonrelativistic * Nuclear magnetic shieldings Subject RIV: CA - Inorganic Chemistry OBOR OECD: Inorganic and nuclear chemistry Impact factor: 2.847, year: 2016
17. Ab initio study, investigation of NMR shielding tensors, NBO and ...
African Journals Online (AJOL)
The electrochemical oxidation of dopamine and 3,4-dihydroxymethamphetamine (HHMA) has been studied in the presence of GSH and cysteine as a nucleophile. In order to determine the optimized geometries, energies, dipole moments, atomic charges, thermochemical analysis and other properties, we performed ...
International Nuclear Information System (INIS)
Scheunert, M.
1982-10-01
We develop a graded tensor calculus corresponding to arbitrary Abelian groups of degrees and arbitrary commutation factors. The standard basic constructions and definitions like tensor products, spaces of multilinear mappings, contractions, symmetrization, symmetric algebra, as well as the transpose, adjoint, and trace of a linear mapping, are generalized to the graded case and a multitude of canonical isomorphisms is presented. Moreover, the graded versions of the classical Lie algebras are introduced and some of their basic properties are described. (orig.)
19. Design of emergency shield
International Nuclear Information System (INIS)
Soliman, S.E.
1993-01-01
Manufacturing of an emergency movable shield in the hot laboratories center is urgently needed for the safety of personnel in case of accidents or spilling of radioactive materials. In this report, a full design for an emergency shield is presented and the corresponding dose rates behind the shield for different activities (from 1 mCi to 5 Ci) were calculated using the MicroShield computer code. 4 figs., 1 tab
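The reported dose rates come from the MicroShield code; purely as a back-of-the-envelope illustration (not a substitute for that code), the textbook point-source formula with exponential attenuation and no buildup factor looks like this, with illustrative Cs-137-like constants:

```python
import math

def shielded_dose_rate(activity_ci, gamma_constant, distance_m, mu_cm, thickness_cm):
    """Very rough point-source estimate: unshielded dose rate = Gamma * A / d^2,
    attenuated by exp(-mu * x).  Buildup is neglected, so this tends to
    underestimate the transmitted dose."""
    unshielded = gamma_constant * activity_ci / distance_m ** 2
    return unshielded * math.exp(-mu_cm * thickness_cm)

# Illustrative numbers: 5 Ci Cs-137-like source, Gamma ~ 0.33 R*m^2/(h*Ci),
# 1 m from the shield, 5 cm of lead with mu ~ 1.2 1/cm.
print(shielded_dose_rate(5.0, 0.33, 1.0, 1.2, 5.0))
```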
20. Electromagnetically shielded building
International Nuclear Information System (INIS)
Takahashi, T.; Nakamura, M.; Yabana, Y.; Ishikawa, T.; Nagata, K.
1992-01-01
This invention relates to a building having an electromagnetic shield structure well-suited for application to an information network system utilizing electromagnetic waves, and more particularly to an electromagnetically shielded building for enhancing the electromagnetic shielding performance of an external wall. 6 figs
1. A Review of Tensors and Tensor Signal Processing
Science.gov (United States)
Cammoun, L.; Castaño-Moraga, C. A.; Muñoz-Moreno, E.; Sosa-Cabrera, D.; Acar, B.; Rodriguez-Florido, M. A.; Brun, A.; Knutsson, H.; Thiran, J. P.
Tensors have been broadly used in mathematics and physics, since they are a generalization of scalars and vectors and allow more complex properties to be represented. In this chapter we present an overview of some tensor applications, especially those focused on the image processing field. From a mathematical point of view, a great deal of work has been devoted to tensor calculus, which is obviously more complex than scalar or vector calculus. Moreover, tensors can represent the metric of a vector space, which is very useful in the field of differential geometry. In physics, tensors have been used to describe several magnitudes, such as the strain or stress of materials. In solid mechanics, tensors are used to define the generalized Hooke’s law, where a fourth order tensor relates the strain and stress tensors. In fluid dynamics, the velocity gradient tensor provides information about the vorticity and the strain of the fluids. An electromagnetic tensor is also defined, which simplifies the notation of the Maxwell equations. Tensors are not confined to physics and mathematics, however. They have been used, for instance, in medical imaging, where we can highlight two applications: the diffusion tensor image, which represents how molecules diffuse inside the tissues and is broadly used for brain imaging; and tensorial elastography, which computes the strain and vorticity tensors to analyze tissue properties. Tensors have also been used in computer vision to provide information about the local structure or to define anisotropic image filters.
2. NMR shielding calculations across the periodic table: diamagnetic uranium compounds. 2. Ligand and metal NMR.
Science.gov (United States)
Schreckenbach, Georg
2002-12-16
In this and a previous article (J. Phys. Chem. A 2000, 104, 8244), the range of application for relativistic density functional theory (DFT) is extended to the calculation of nuclear magnetic resonance (NMR) shieldings and chemical shifts in diamagnetic actinide compounds. Two relativistic DFT methods are used, ZORA ("zeroth-order regular approximation") and the quasirelativistic (QR) method. In the given second paper, NMR shieldings and chemical shifts are calculated and discussed for a wide range of compounds. The molecules studied comprise uranyl complexes, [UO(2)L(n)](+/-)(q); UF(6); inorganic UF(6) derivatives, UF(6-n)Cl(n), n = 0-6; and organometallic UF(6) derivatives, UF(6-n)(OCH(3))(n), n = 0-5. Uranyl complexes include [UO(2)F(4)](2-), [UO(2)Cl(4)](2-), [UO(2)(OH)(4)](2-), [UO(2)(CO(3))(3)](4-), and [UO(2)(H(2)O)(5)](2+). For the ligand NMR, moderate (e.g., (19)F NMR chemical shifts in UF(6-n)Cl(n)) to excellent agreement [e.g., (19)F chemical shift tensor in UF(6) or (1)H NMR in UF(6-n)(OCH(3))(n)] has been found between theory and experiment. The methods have been used to calculate the experimentally unknown (235)U NMR chemical shifts. A large chemical shift range of at least 21,000 ppm has been predicted for the (235)U nucleus. ZORA spin-orbit appears to be the most accurate method for predicting actinide metal chemical shifts. Trends in the (235)U NMR chemical shifts of UF(6-n)L(n) molecules are analyzed and explained in terms of the calculated electronic structure. It is argued that the energy separation and interaction between occupied and virtual orbitals with f-character are the determining factors.
3. Shielding benchmark problems, (2)
International Nuclear Information System (INIS)
Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.
1980-02-01
Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)
4. Tensor analysis for physicists
CERN Document Server
Schouten, J A
1989-01-01
This brilliant study by a famed mathematical scholar and former professor of mathematics at the University of Amsterdam integrates a concise exposition of the mathematical basis of tensor analysis with admirably chosen physical examples of the theory. The first five chapters incisively set out the mathematical theory underlying the use of tensors. The tensor algebra in EN and RN is developed in Chapters I and II. Chapter II introduces a sub-group of the affine group, then deals with the identification of quantities in EN. The tensor analysis in XN is developed in Chapter IV. In chapters VI through IX, Professor Schouten presents applications of the theory that are both intrinsically interesting and good examples of the use and advantages of the calculus. Chapter VI, intimately connected with Chapter III, shows that the dimensions of physical quantities depend upon the choice of the underlying group, and that tensor calculus is the best instrument for dealing with the properties of anisotropic media. In Chapte...
5. The use of chemical shift temperature gradients to establish the paramagnetic susceptibility tensor orientation: Implication for structure determination/refinement in paramagnetic metalloproteins
International Nuclear Information System (INIS)
Xia Zhicheng; Nguyen, Bao D.; La Mar, Gerd N.
2000-01-01
The use of dipolar shifts as important constraints in refining molecular structure of paramagnetic metalloproteins by solution NMR is now well established. A crucial initial step in this procedure is the determination of the orientation of the anisotropic paramagnetic susceptibility tensor in the molecular frame which is generated interactively with the structure refinement. The use of dipolar shifts as constraints demands knowledge of the diamagnetic shift, which, however, is very often not directly and easily accessible. We demonstrate that temperature gradients of dipolar shifts can serve as alternative constraints for determining the orientation of the magnetic axes, thereby eliminating the need to estimate the diamagnetic shifts. This approach is tested on low-spin, ferric sperm whale cyanometmyoglobin by determining the orientation, anisotropies and anisotropy temperature gradients by the alternate routes of using dipolar shifts and dipolar shift gradients as constraints. The alternate routes ultimately lead to very similar orientation of the magnetic axes, magnetic anisotropies and magnetic anisotropy temperature gradients which, by inference, would lead to an equally valid description of the molecular structure. It is expected that the use of the dipolar shift temperature gradients, rather than the dipolar shifts directly, as constraints will provide an accurate shortcut in a solution structure determination of a paramagnetic metalloprotein
6. Killing tensors and conformal Killing tensors from conformal Killing vectors
International Nuclear Information System (INIS)
Rani, Raffaele; Edgar, S Brian; Barnes, Alan
2003-01-01
Koutras has proposed some methods to construct reducible proper conformal Killing tensors and Killing tensors (which are, in general, irreducible) when a pair of orthogonal conformal Killing vectors exist in a given space. We give the completely general result demonstrating that this severe restriction of orthogonality is unnecessary. In addition, we correct and extend some results concerning Killing tensors constructed from a single conformal Killing vector. A number of examples demonstrate that it is possible to construct a much larger class of reducible proper conformal Killing tensors and Killing tensors than permitted by the Koutras algorithms. In particular, by showing that all conformal Killing tensors are reducible in conformally flat spaces, we have a method of constructing all conformal Killing tensors, and hence all the Killing tensors (which will in general be irreducible) of conformally flat spaces using their conformal Killing vectors
7. Tensors, relativity, and cosmology
CERN Document Server
2015-01-01
Tensors, Relativity, and Cosmology, Second Edition, combines relativity, astrophysics, and cosmology in a single volume, providing a simplified introduction to each subject that is followed by detailed mathematical derivations. The book includes a section on general relativity that gives the case for a curved space-time, presents the mathematical background (tensor calculus, Riemannian geometry), discusses the Einstein equation and its solutions (including black holes and Penrose processes), and considers the energy-momentum tensor for various solutions. In addition, a section on relativistic astrophysics discusses stellar contraction and collapse, neutron stars and their equations of state, black holes, and accretion onto collapsed objects, with a final section on cosmology discussing cosmological models, observational tests, and scenarios for the early universe. This fully revised and updated second edition includes new material on relativistic effects, such as the behavior of clocks and measuring rods in m...
8. Applied tensor stereology
DEFF Research Database (Denmark)
Ziegel, Johanna; Nyengaard, Jens Randel; Jensen, Eva B. Vedel
In the present paper, statistical procedures for estimating shape and orientation of arbitrary three-dimensional particles are developed. The focus of this work is on the case where the particles cannot be observed directly, but only via sections. Volume tensors are used for describing particle...... shape and orientation, and stereological estimators of the tensors are derived. It is shown that these estimators can be combined to provide consistent estimators of the moments of the so-called particle cover density. The covariance structure associated with the particle cover density depends...... may be analysed using a generalized methods of moments in which the volume tensors enter. The developed methods are used to study the cell organization in the human brain cortex....
9. Tensor Calculus: Unlearning Vector Calculus
Science.gov (United States)
Lee, Wha-Suck; Engelbrecht, Johann; Moller, Rita
2018-01-01
Tensor calculus is critical in the study of the vector calculus of the surface of a body. Indeed, tensor calculus is a natural step-up for vector calculus. This paper presents some pitfalls of a traditional course in vector calculus in transitioning to tensor calculus. We show how a deeper emphasis on traditional topics such as the Jacobian can…
10. The evolution of tensor polarization
International Nuclear Information System (INIS)
Huang, H.; Lee, S.Y.; Ratner, L.
1993-01-01
By using the equation of motion for the vector polarization, the spin transfer matrix for spin tensor polarization is derived. The evolution equation for the tensor polarization is studied in the presence of an isolated spin resonance and in the presence of a spin rotor, or snake
11. Diffusion tensor image registration using hybrid connectivity and tensor features.
Science.gov (United States)
Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang
2014-07-01
Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. Copyright © 2013 Wiley Periodicals, Inc.
12. Evaluation of Bayesian tensor estimation using tensor coherence
Science.gov (United States)
Kim, Dae-Jin; Kim, In-Young; Jeong, Seok-Oh; Park, Hae-Jeong
2009-06-01
Fiber tractography, a unique and non-invasive method to estimate axonal fibers within white matter, constructs the putative streamlines from diffusion tensor MRI by interconnecting voxels according to the propagation direction defined by the diffusion tensor. This direction has uncertainties due to the properties of underlying fiber bundles, neighboring structures and image noise. Therefore, robust estimation of the diffusion direction is essential to reconstruct reliable fiber pathways. For this purpose, we propose a tensor estimation method using a Bayesian framework, which includes an a priori probability distribution based on tensor coherence indices, to utilize both the neighborhood direction information and the inertia moment as regularization terms. The reliability of the proposed tensor estimation was evaluated using Monte Carlo simulations in terms of accuracy and precision with four synthetic tensor fields at various SNRs and in vivo human data of brain and calf muscle. Proposed Bayesian estimation demonstrated the relative robustness to noise and the higher reliability compared to the simple tensor regression.
13. Evaluation of Bayesian tensor estimation using tensor coherence
Energy Technology Data Exchange (ETDEWEB)
Kim, Dae-Jin; Park, Hae-Jeong [Laboratory of Molecular Neuroimaging Technology, Brain Korea 21 Project for Medical Science, Yonsei University, College of Medicine, Seoul (Korea, Republic of); Kim, In-Young [Department of Biomedical Engineering, Hanyang University, Seoul (Korea, Republic of); Jeong, Seok-Oh [Department of Statistics, Hankuk University of Foreign Studies, Yongin (Korea, Republic of)], E-mail: [email protected]
2009-06-21
Fiber tractography, a unique and non-invasive method to estimate axonal fibers within white matter, constructs the putative streamlines from diffusion tensor MRI by interconnecting voxels according to the propagation direction defined by the diffusion tensor. This direction has uncertainties due to the properties of underlying fiber bundles, neighboring structures and image noise. Therefore, robust estimation of the diffusion direction is essential to reconstruct reliable fiber pathways. For this purpose, we propose a tensor estimation method using a Bayesian framework, which includes an a priori probability distribution based on tensor coherence indices, to utilize both the neighborhood direction information and the inertia moment as regularization terms. The reliability of the proposed tensor estimation was evaluated using Monte Carlo simulations in terms of accuracy and precision with four synthetic tensor fields at various SNRs and in vivo human data of brain and calf muscle. Proposed Bayesian estimation demonstrated the relative robustness to noise and the higher reliability compared to the simple tensor regression.
14. Gogny interactions with tensor terms
Energy Technology Data Exchange (ETDEWEB)
Anguiano, M.; Lallena, A.M.; Bernard, R.N. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain); Co' , G. [INFN, Lecce (Italy); De Donno, V. [Universita del Salento, Dipartimento di Matematica e Fisica ' ' E. De Giorgi' ' , Lecce (Italy); Grasso, M. [Universite Paris-Sud, Institut de Physique Nucleaire, IN2P3-CNRS, Orsay (France)
2016-07-15
We present a perturbative approach to include tensor terms in the Gogny interaction. We do not change the values of the usual parameterisations, with the only exception of the spin-orbit term, and we add tensor terms whose only free parameters are the strengths of the interactions. We identify observables sensitive to the presence of the tensor force in Hartree-Fock, Hartree-Fock-Bogoliubov and random phase approximation calculations. We show the need of including two tensor contributions, at least: a pure tensor term and a tensor-isospin term. We show results relevant for the inclusion of the tensor term for single-particle energies, charge-conserving magnetic excitations and Gamow-Teller excitations. (orig.)
15. The geomagnetic field gradient tensor
DEFF Research Database (Denmark)
Kotsiaros, Stavros; Olsen, Nils
2012-01-01
We develop the general mathematical basis for space magnetic gradiometry in spherical coordinates. The magnetic gradient tensor is a second rank tensor consisting of 3 × 3 = 9 spatial derivatives. Since the geomagnetic field vector B is always solenoidal (∇ · B = 0) there are only eight independent...... tensor elements. Furthermore, in current free regions the magnetic gradient tensor becomes symmetric, further reducing the number of independent elements to five. In that case B is a Laplacian potential field and the gradient tensor can be expressed in series of spherical harmonics. We present properties...... of the magnetic gradient tensor and provide explicit expressions of its elements in terms of spherical harmonics. Finally we discuss the benefit of using gradient measurements for exploring the Earth’s magnetic field from space, in particular the advantage of the various tensor elements for a better determination...
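The counting argument can be checked numerically: for a source-free, curl-free field the 3x3 gradient tensor is trace-free (∇·B = 0 removes one element) and symmetric (five independent elements remain). A small hedged sketch using a point-dipole field and central differences, with illustrative function names:

```python
import numpy as np

def dipole_B(r, m=np.array([0.0, 0.0, 1.0])):
    """Field of a point magnetic dipole (constants set to 1); away from the
    origin it is both divergence-free and curl-free."""
    rn = np.linalg.norm(r)
    return 3.0 * r * np.dot(m, r) / rn**5 - m / rn**3

def gradient_tensor(field, r, h=1e-6):
    """Central-difference estimate of the 3x3 gradient tensor dB_i/dx_j."""
    G = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        G[:, j] = (field(r + dr) - field(r - dr)) / (2.0 * h)
    return G

G = gradient_tensor(dipole_B, np.array([1.0, 2.0, 3.0]))
print(np.trace(G))                     # ~0: divergence-free removes one element
print(np.allclose(G, G.T, atol=1e-8))  # symmetric in a current-free region: five remain
```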
16. TensorFlow Distributions
OpenAIRE
Dillon, Joshua V.; Langmore, Ian; Tran, Dustin; Brevdo, Eugene; Vasudevan, Srinivas; Moore, Dave; Patton, Brian; Alemi, Alex; Hoffman, Matt; Saurous, Rif A.
2017-01-01
The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation. Building on two basic abstractions, it offers flexible building blocks for probabilistic computation. Distributions provide fast, numerically stable methods for generating samples and computing statistics, e.g., log density. Bijectors provide composable volume-tracking transformations with automatic caching. Together these enable...
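A short usage sketch of the two abstractions, assuming the tensorflow_probability package is installed; the particular distribution and bijector chosen here are illustrative.

```python
import tensorflow_probability as tfp

tfd, tfb = tfp.distributions, tfp.bijectors

# A Distribution provides fast sampling and log-density evaluation.
normal = tfd.Normal(loc=0.0, scale=1.0)
samples = normal.sample(5)
print(normal.log_prob(samples))

# A Bijector is a composable, volume-tracking transformation; pushing the
# Normal forward through exp() yields a log-normal distribution.
lognormal = tfd.TransformedDistribution(distribution=normal, bijector=tfb.Exp())
print(lognormal.log_prob(1.0))
```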
17. Tensoral: A system for post-processing turbulence simulation data
Science.gov (United States)
Dresselhaus, Eliot
1993-01-01
Many computer simulations in engineering and science -- and especially in computational fluid dynamics (CFD) -- produce huge quantities of numerical data. These data are often so large as to make even relatively simple post-processing of this data unwieldy. The data, once computed and quality-assured, is most likely analyzed by only a few people. As a result, much useful numerical data is under-utilized. Since future state-of-the-art simulations will produce even larger datasets, will use more complex flow geometries, and will be performed on more complex supercomputers, data management issues will become increasingly cumbersome. My goal is to provide software which will automate the present and future task of managing and post-processing large turbulence datasets. My research has focused on the development of these software tools -- specifically, through the development of a very high-level language called 'Tensoral'. The ultimate goal of Tensoral is to convert high-level mathematical expressions (tensor algebra, calculus, and statistics) into efficient low-level programs which numerically calculate these expressions given simulation datasets. This approach to the database and post-processing problem has several advantages. Using Tensoral the numerical and data management details of a simulation are shielded from the concerns of the end user. This shielding is carried out without sacrificing post-processor efficiency and robustness. Another advantage of Tensoral is that its very high-level nature lends itself to portability across a wide variety of computing (and supercomputing) platforms. This is especially important considering the rapidity of changes in supercomputing hardware.
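The kind of tensor-algebra post-processing Tensoral is meant to automate can be hinted at with a plain numpy/einsum sketch (purely illustrative and unrelated to the Tensoral implementation itself): computing the strain-rate and rotation tensors from a field of velocity gradients and reducing them to a scalar statistic.

```python
import numpy as np

def strain_and_vorticity(grad_u):
    """Given velocity-gradient tensors A_ij = du_i/dx_j at N points
    (shape (N, 3, 3)), return the symmetric strain-rate tensor S and the
    antisymmetric rotation (vorticity) tensor W at every point."""
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
    W = 0.5 * (grad_u - np.swapaxes(grad_u, -1, -2))
    return S, W

grad_u = np.random.default_rng(0).normal(size=(4, 3, 3))
S, W = strain_and_vorticity(grad_u)
# A typical derived statistic: the mean of the invariant W_ij W_ij.
print(np.einsum('nij,nij->n', W, W).mean())
```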
18. Photonic Bandgap (PBG) Shielding Technology
Science.gov (United States)
Bastin, Gary L.
2007-01-01
Photonic Bandgap (PBG) shielding technology is a new approach to designing electromagnetic shielding materials for mitigating Electromagnetic Interference (EMI) with small, light-weight shielding materials. It focuses on ground planes of printed wiring boards (PWBs), rather than on components. Modern PBG materials are also emerging based on planar materials, in place of earlier, bulkier, 3-dimensional PBG structures. Planar PBG designs especially show great promise in mitigating and suppressing EMI and crosstalk for aerospace designs, such as needed for NASA's Constellation Program, for returning humans to the moon and for use by our first human visitors traveling to and from Mars. Photonic Bandgap (PBG) materials are also known as artificial dielectrics, meta-materials, and photonic crystals. General PBG materials are fundamentally periodic slow-wave structures in 1, 2, or 3 dimensions. By adjusting the choice of structure periodicities in terms of size and recurring structure spacings, multiple scatterings of surface waves can be created that act as a forbidden energy gap (i.e., a range of frequencies) over which nominally-conductive metallic conductors cease to be conductors and become dielectrics. Equivalently, PBG materials can be regarded as giving rise to forbidden energy gaps in metals without chemical doping, analogous to the electron bandgap properties that gave rise to the modern semiconductor industry 60 years ago. Electromagnetic waves cannot propagate over bandgap regions that are created with PBG materials, that is, over frequencies for which a bandgap is artificially created through introducing periodic defects
19. Tensor Permutation Matrices in Finite Dimensions
OpenAIRE
Christian, Rakotonirina
2005-01-01
We have generalised the properties, with respect to the tensor product, of a 4x4 permutation matrix which we call a tensor commutation matrix. Tensor commutation matrices can be constructed with or without calculus. A formula that allows us to construct a tensor permutation matrix, which is a generalisation of the tensor commutation matrix, has been established. The expression of an element of a tensor commutation matrix has been generalised in the case of any element of a tensor permutation ma...
20. Thermal shielding walls
International Nuclear Information System (INIS)
Fujii, Takenori.
1980-01-01
Purpose: To suppress the amount of heat released from a pressure vessel and reliably shield neutron fluxes and gamma rays from a reactor core by the addition of cooling ducts in a thermal shielding wall provided with a blower and an air cooling cooler. Constitution: A thermal shielding wall is located on a pedestal so as to surround a pressure vessel and the pressure vessel is located by way of a skirt in the same manner. Heat insulators are disposed between the pressure vessel and the shielding wall while closer to the skirt in the skirt portion and closer to the shielding wall in the vessel body portion. A plurality of cooling ducts are arranged side by side at the inner side in the shielding wall. A through-duct radially passing through the wall is provided in the lower portion thereof and a blower fan for cooling air and a cooler for cooling returned air are connected by way of a communication duct to the other end of the through-duct. This enables to provide a shielding wall capable of suppressing the amount of heat released from the pressure vessel as much as possible and giving more effective cooling. (Kawakami, Y.)
1. Tensor Factorization for Low-Rank Tensor Completion.
Science.gov (United States)
Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao
2018-03-01
Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, which has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing tensor singular value decomposition (t-SVD), which costs much computation and thus cannot efficiently handle tensor data, due to its natural large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be more efficiently conducted than computing t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm can converge to a Karush-Kuhn-Tucker point. Experimental results on the synthetic data recovery, image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our developed method over state-of-the-arts including the TNN and matricization methods.
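The factorization X = A * B used in this approach relies on the tensor-tensor (t-)product; a hedged numpy sketch of that product alone (not the full alternating-minimization completion algorithm) is given below, with illustrative sizes.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x r x n3) and B (r x n2 x n3): FFT along the
    third mode, frontal-slice matrix products in the Fourier domain, then
    an inverse FFT.  The result has shape n1 x n2 x n3."""
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 2, 4))   # small factor tensors
B = rng.normal(size=(2, 6, 4))
X = t_product(A, B)              # a (tubal) rank-2, 5 x 6 x 4 tensor
print(X.shape)
```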
2. Radiation shielding phenolic fibers and method of producing same
International Nuclear Information System (INIS)
Ohtomo, K.
1976-01-01
A radiation shielding phenolic fiber is described comprising a filamentary phenolic polymer consisting predominantly of a sulfonic acid group-containing cured novolak resin and a metallic atom having a great radiation shielding capacity, the metallic atom being incorporated in the polymer by being chemically bound in the ionic state in the novolak resin. A method for the production of the fiber is discussed
3. Shielding high energy accelerators
CERN Document Server
Stevenson, Graham Roger
2001-01-01
After introducing the subject of shielding high energy accelerators, point source, line-of-sight models, and in particular the Moyer model, are discussed. Their use in the shielding of proton and electron accelerators is demonstrated and their limitations noted, especially in relation to shielding in the forward direction provided by large, flat walls. The limitations of reducing problems to those using a cylindrical geometry description are stressed. Finally the use of different estimators for predicting dose is discussed. It is suggested that dose calculated from track-length estimators will generally give the most satisfactory estimate. (9 refs).
4. Tensor norms and operator ideals
CERN Document Server
Defant, A; Floret, K
1992-01-01
The three chapters of this book are entitled Basic Concepts, Tensor Norms, and Special Topics. The first may serve as part of an introductory course in Functional Analysis since it shows the powerful use of the projective and injective tensor norms, as well as the basics of the theory of operator ideals. The second chapter is the main part of the book: it presents the theory of tensor norms as designed by Grothendieck in the Resumé and deals with the relation between tensor norms and operator ideals. The last chapter deals with special questions. Each section is accompanied by a series of exer
5. Notes on super Killing tensors
Energy Technology Data Exchange (ETDEWEB)
Howe, P.S. [Department of Mathematics, King’s College London, The Strand, London WC2R 2LS (United Kingdom); Lindström, U. [Department of Physics and Astronomy, Theoretical Physics, Uppsala University, SE-751 20 Uppsala (Sweden); Theoretical Physics, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom)
2016-03-14
The notion of a Killing tensor is generalised to a superspace setting. Conserved quantities associated with these are defined for superparticles and Poisson brackets are used to define a supersymmetric version of the even Schouten-Nijenhuis bracket. Superconformal Killing tensors in flat superspaces are studied for spacetime dimensions 3,4,5,6 and 10. These tensors are also presented in analytic superspaces and super-twistor spaces for 3,4 and 6 dimensions. Algebraic structures associated with superconformal Killing tensors are also briefly discussed.
6. Tensor Train Neighborhood Preserving Embedding
Science.gov (United States)
Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin
2018-05-01
In this paper, we propose a Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into low dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate novel trade-off gain among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that compared to the state-of-the-arts tensor embedding methods, TTNPE achieves superior trade-off in classification, computation, and dimensionality reduction in MNIST handwritten digits and Weizmann face datasets.
7. Scintillation counter, segmented shield
International Nuclear Information System (INIS)
Olson, R.E.; Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Science.gov (United States)
Klebanoff, Leonard Elliott [Dublin, CA; Rader, Daniel John [Albuquerque, NM; Walton, Christopher [Berkeley, CA; Folta, James [Livermore, CA
2009-01-06
An efficient device for capturing fast moving particles has an adhesive particle shield that includes (i) a mounting panel and (ii) a film that is attached to the mounting panel wherein the outer surface of the film has an adhesive coating disposed thereon to capture particles contacting the outer surface. The shield can be employed to maintain a substantially particle free environment such as in photolithographic systems having critical surfaces, such as wafers, masks, and optics and in the tools used to make these components, that are sensitive to particle contamination. The shield can be portable to be positioned in hard-to-reach areas of a photolithography machine. The adhesive particle shield can incorporate cooling means to attract particles via the thermophoresis effect.
9. Correlating the P-31 NMR Chemical Shielding Tensor and the (2)J(P,C) Spin-Spin Coupling Constants with Torsion Angles zeta and alpha in the Backbone of Nucleic Acids
Czech Academy of Sciences Publication Activity Database
2012-01-01
Roč. 116, č. 12 (2012), s. 3823-3833 ISSN 1520-6106 R&D Projects: GA ČR GAP205/10/0228; GA ČR GPP208/10/P398; GA ČR GA203/09/2037 Institutional research plan: CEZ:AV0Z40550506 Keywords : nucleic acids * phosphorus NMR * NMR calculations * cross-correlated relaxation * spin–spin coupling constants Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.607, year: 2012
10. Grounding, shielding, and bonding
Science.gov (United States)
Catrysse, J.
1991-06-01
In the electromagnetic compatibility (EMC) design of systems and circuits, both grounding and shielding are related to the coupling mechanisms of the system with (radiated) electromagnetic fields. Grounding is more related to the source or victim circuit (or system) and determines the characteristic of the coupling mechanism between fields and currents/voltages. Shielding is a way of interacting in the radiation path of an electromagnetic field. The basic principles and practical design rules are discussed.
International Nuclear Information System (INIS)
Mizuochi, Akira; Narita, Takuya; Omori, Tetsu; Nemezawa, Isao; Kimura, Kunihiro.
1997-01-01
12. Asymptotic tensor rank of graph tensors: beyond matrix multiplication
NARCIS (Netherlands)
M. Christandl (Matthias); P. Vrana (Péter); J. Zuiddam (Jeroen)
2016-01-01
We present an upper bound on the exponent of the asymptotic behaviour of the tensor rank of a family of tensors defined by the complete graph on $k$ vertices. For $k \geq 4$, we show that the exponent per edge is at most 0.77, outperforming the best known upper bound on the exponent per
13. Solvent effects on the magnetic shielding of tertiary butyl alcohol
African Journals Online (AJOL)
)4 and tetramethyl ammonium cation N(CH3)4(+) have also been presented. KEY WORDS: Solvent effects, Magnetic shielding, Tertiary butyl alcohol, Tertiary butyl amine, Continuum solvation calculations, Chemical shift estimation methods
14. Automated Fragmentation Polarizable Embedding Density Functional Theory (PE-DFT) Calculations of Nuclear Magnetic Resonance (NMR) Shielding Constants of Proteins with Application to Chemical Shift Predictions
DEFF Research Database (Denmark)
Steinmann, Casper; Bratholm, Lars Andersen; Olsen, Jógvan Magnus Haugaard
2017-01-01
that are comparable with experiment. The introduction of a probabilistic linear regression model allows us to substantially reduce the number of snapshots that are needed to make comparisons with experiment. This approach is further improved by augmenting snapshot selection with chemical shift predictions by which we...
15. Indicial tensor manipulation on MACSYMA
International Nuclear Information System (INIS)
Bogen, R.A.; Pavelle, R.
1977-01-01
A new computational tool for physical calculations is described. It is the first computer system capable of performing indicial tensor calculus (as opposed to component tensor calculus). It is now operational on the symbolic manipulation system MACSYMA. The authors outline the capabilities of the system and describe some of the physical problems considered as well as others being examined at this time. (Auth.)
16. Killing-Yano tensors and Nambu mechanics
International Nuclear Information System (INIS)
Baleanu, D.
1998-01-01
Killing-Yano tensors were introduced in 1952 by Kentaro Yano from a mathematical point of view. The physical interpretation of Killing-Yano tensors of rank higher than two was unclear. We found that all Killing-Yano tensors η_{i1 i2 ... in} with covariant derivative zero are Nambu tensors. We found that in the flat space case all Killing-Yano tensors are Nambu tensors. In the case of the Taub-NUT and Kerr-Newman metrics, Killing-Yano tensors of order two generate Nambu tensors of rank 3
17. Neutron shielding material
International Nuclear Information System (INIS)
Nodaka, M.; Iida, T.; Taniuchi, H.; Yosimura, K.; Nagahama, H.
1993-01-01
From among the neutron shielding materials of the 'kobesh' series developed by Kobe Steel, Ltd. for transport and storage packagings, silicon rubber base type material has been tested for several items with a view to practical application and official authorization, and in order to determine its adaptability to actual vessels. Silicon rubber base type 'kobesh SR-T01' is a material in which, from among the silicone rubber based neutron shielding materials, the hydrogen content is highest and the boron content is most optimized. Its neutron shielding capability has been already described in the previous report (Taniuchi, 1986). The following tests were carried out to determine suitability for practical application; 1) Long-term thermal stability test 2) Pouring test on an actual-scale model 3) Fire test The experimental results showed that the silicone rubber based neutron shielding material has good neutron shielding capability and high long-term fire resistance, and that it can be applied to the advanced transport packaging. (author)
18. Method for dismantling shields
International Nuclear Information System (INIS)
Fukuzawa, Rokuro; Kondo, Nobuhiro; Kamiyama, Yoshinori; Kawasato, Ken; Hiraga, Tomoaki.
1990-01-01
The object of the present invention is to enable operators to dismantle shieldings contaminated by radioactivity easily and in a short period of time without danger of radiation exposure. A plurality of introduction pipes are embedded beforehand in the shielding walls of shielding members which contain a reactor core, in a state where both ends of the introduction pipes are in communication with the outside. A wire saw is inserted into the introduction pipes to cut the shieldings upon dismantling. Then, shieldings can be dismantled easily in a short period of time with no radiation exposure to operators. Further, according to the present invention, since the wire saw can be set easily and a large area can be cut at once, operation efficiency is improved. Further, since remote control is possible, cutting can be conducted in water and in complicated places of the reactor. Biting upon starting the wire saw in the introduction pipe is reduced, which facilitates startup of the rotation. (I.S.)
19. Local recovery of lithospheric stress tensor from GOCE gravitational tensor
Science.gov (United States)
Eshagh, Mehdi
2017-04-01
The sublithospheric stress due to mantle convection can be computed from gravity data and propagated through the lithosphere by solving the boundary-value problem of elasticity for the Earth's lithosphere. In this case, a full tensor of stress can be computed at any point inside this elastic layer. Here, we present mathematical foundations for recovering such a tensor from gravitational tensor measured at satellite altitudes. The mathematical relations will be much simpler in this way than the case of using gravity data as no derivative of spherical harmonics (SHs) or Legendre polynomials is involved in the expressions. Here, new relations between the SH coefficients of the stress and gravitational tensor elements are presented. Thereafter, integral equations are established from them to recover the elements of stress tensor from those of the gravitational tensor. The integrals have no closed-form kernels, but they are easy to invert and their spatial truncation errors are reducible. The integral equations are used to invert the real data of the gravity field and steady-state ocean circulation explorer mission (GOCE), in 2009 November, over the South American plate and its surroundings to recover the stress tensor at a depth of 35 km. The recovered stress fields are in good agreement with the tectonic and geological features of the area.
20. Double-layer neutron shield design as neutron shielding application
Science.gov (United States)
Sariyer, Demet; Küçer, Rahmi
2018-02-01
Shield design in particle accelerators and other high-energy facilities is mainly concerned with high-energy neutrons. The deep penetration of neutrons through massive shields has become a very serious problem. For shielding to be efficient, most of these neutrons should be confined to the shielding volume. Where interior space is limited, a sufficient thickness of multilayer shield must be used. Concrete and iron are widely used as multilayer shield materials. A two-layer shield was selected to guarantee radiation safety outside the shield against neutrons generated in interactions at different proton energies. One layer was one meter of concrete; the other was an iron-containing material (FeB, Fe2B and stainless steel) whose shield thickness was to be determined. The FLUKA Monte Carlo code was used for the shield design geometry and the required neutron dose distributions. The resulting two-layer shields show better performance than concrete alone, so the shield design leaves more space in the shielded interior areas.
1. MATLAB tensor classes for fast algorithm prototyping.
Energy Technology Data Exchange (ETDEWEB)
Bader, Brett William; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)
2004-10-01
Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
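A rough Python analogue of the matricization supported by the tensor_as_matrix class, given here only to illustrate the operation; the helper functions below are illustrative and are not part of the MATLAB toolbox.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move axis `mode` to the front and flatten the
    rest, giving a matrix of shape (T.shape[mode], prod of other dims)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given original shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

T = np.arange(24).reshape(2, 3, 4)
M = unfold(T, 1)                                 # a 3 x 8 matrix
print(np.array_equal(fold(M, 1, T.shape), T))    # True: exact round trip
```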
International Nuclear Information System (INIS)
Matsumoto, Akio; Isobe, Eiji.
1976-01-01
Purpose: To increase the shielding capacity of a radiation shielding material having abundant flexibility. Constitution: A mat consisting of a lead or lead alloy fibrous material is covered with a cloth, and the two are made integral by sewing in a quilted fashion using a yarn. Thereafter, the system is covered with a gas-tight film or sheet. The shielding material obtained in this way has, in addition to the above merits, advantages in that (1) it is free from restoration due to elasticity so that it can readily seal contaminants, (2) it can be used in a state consisting of a number of overlapped layers, (3) it fits the shoulder well and is readily portable and (4) it permits attachment of fasteners or the like. (Ikeda, J.)
3. Hybrid Active-Passive Radiation Shielding System
Data.gov (United States)
National Aeronautics and Space Administration — A radiation shielding system is proposed that integrates active magnetic fields with passive shielding materials. The objective is to increase the shielding...
4. Glove box shield
Science.gov (United States)
Brackenbush, L.W.; Hoenes, G.R.
A shield for a glove box housing radioactive material is comprised of spaced apart clamping members which maintain three overlapping flaps in place therebetween. There is a central flap and two side flaps, the side flaps overlapping at the interior edges thereof and the central flap extending past the intersection of the side flaps in order to insure that the shield is always closed when the user withdraws his hand from the glove box. Lead loaded neoprene rubber is the preferred material for the three flaps, the extent of lead loading depending upon the radiation levels within the glove box.
5. Random SU(2) invariant tensors
Science.gov (United States)
Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei
2018-04-01
SU(2) invariant tensors are states in the (local) SU(2) tensor product representation but invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states. An average over the ensemble is carried out when computing any physical quantities. The random tensor exhibits a phenomenon known as ‘concentration of measure’, which states that for any bipartition the average value of entanglement entropy of its reduced density matrix is asymptotically the maximal possible as the local dimensions go to infinity. We show that this phenomenon is also true when the average is over the SU(2) invariant subspace instead of the entire space for rank-n tensors in general. It is shown in our earlier work Li et al (2017 New J. Phys. 19 063029) that the subleading correction of the entanglement entropy has a mild logarithmic divergence when n = 4. In this paper, we show that for n > 4 the subleading correction is not divergent but a finite number. In some special situation, the number could be even smaller than 1/2, which is the subleading correction of random state over the entire Hilbert space of tensors.
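The concentration-of-measure statement can be illustrated numerically for random states drawn from the full tensor-product space (note: not the SU(2)-invariant subspace the paper actually treats); a small hedged sketch:

```python
import numpy as np

def random_state_entropy(n_sites=8, d=2, cut=4, rng=None):
    """Bipartite entanglement entropy (in bits) of a random pure state of
    n_sites qudits of local dimension d, across a cut of `cut` sites."""
    rng = np.random.default_rng(rng)
    dim = d ** n_sites
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    amps = psi.reshape(d ** cut, -1)                 # amplitudes across the cut
    p = np.linalg.svd(amps, compute_uv=False) ** 2   # Schmidt weights
    p = p[p > 1e-15]
    return -np.sum(p * np.log2(p))

# For a 4|4 cut of 8 qubits the maximal entropy is 4 bits; a random state
# comes very close to it (concentration of measure).
print(random_state_entropy(), "vs maximal", 4.0)
```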
6. Continuous electrodeionization through electrostatic shielding
International Nuclear Information System (INIS)
Dermentzis, Konstantinos
2008-01-01
We report a new continuous electrodeionization cell with electrostatically shielded concentrate compartments or electrochemical Faraday cages formed by porous electronically and ionically conductive media, instead of permselective ion exchange membranes. Due to local elimination of the applied electric field within the compartments, they electrostatically retain the incoming ions and act as 'electrostatic ion pumps' or 'ion traps' and therefore concentrate compartments. The porous media are chemically and thermally stable. Electrodeionization or electrodialysis cells containing such concentrate compartments in place of ion exchange membranes can be used to regenerate ion exchange resins and produce deionized water, to purify industrial effluents and desalinate brackish or seawater. The cells can work by polarity reversal without any negative impact to the deionization process. Because the electronically and ionically active media constituting the electrostatically shielded concentrate compartments are not permselective and coions are not repelled but can be swept by the migrating counterions, the cells are not affected by the known membrane associated limitations, such as concentration polarization or scaling and show an increased current efficiency
7. Calculating contracted tensor Feynman integrals
International Nuclear Information System (INIS)
Fleischer, J.; Riemann, T.
2011-01-01
A recently derived approach to the tensor reduction of 5-point one-loop Feynman integrals expresses the tensor coefficients by scalar 1-point to 4-point Feynman integrals completely algebraically. In this Letter we derive extremely compact algebraic expressions for the contractions of the tensor integrals with external momenta. This is based on sums over signed minors weighted with scalar products of the external momenta. With these contractions one can construct the invariant amplitudes of the matrix elements under consideration, and the evaluation of one-loop contributions to massless and massive multi-particle production at high energy colliders like LHC and ILC is expected to be performed very efficiently.
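To make the structure concrete, a schematic (hedged) illustration: a rank-1 five-point tensor integral can be written in terms of the four independent external momenta (chords) q_i as
\[
I_5^{\mu} = \sum_{i=1}^{4} q_i^{\mu}\, E_i ,
\]
so that its contraction with any external momentum p_a involves only the scalar coefficients E_i and scalar products of momenta,
\[
p_a \cdot I_5 = \sum_{i=1}^{4} (p_a \cdot q_i)\, E_i .
\]
The compact expressions derived in the paper give such contractions directly as sums over signed minors; the notation above is generic and is not taken from the paper itself.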
8. Metric Tensor Vs. Metric Extensor
OpenAIRE
Fernández, V. V.; Moya, A. M.; Rodrigues Jr, Waldyr A.
2002-01-01
In this paper we give a comparison between the formulation of the concept of metric for a real vector space of finite dimension in terms of tensors and extensors. A nice property of metric extensors is that they have inverses which are also themselves metric extensors. This property is not shared by metric tensors because tensors do not have inverses. We relate the definition of determinant of a metric extensor with the classical determinant of the corresponding matrix as...
9. Calculating contracted tensor Feynman integrals
International Nuclear Information System (INIS)
Fleischer, J.
2011-05-01
A recently derived approach to the tensor reduction of 5-point one-loop Feynman integrals expresses the tensor coefficients by scalar 1-point to 4-point Feynman integrals completely algebraically. In this letter we derive extremely compact algebraic expressions for the contractions of the tensor integrals with external momenta. This is based on sums over signed minors weighted with scalar products of the external momenta. With these contractions one can construct the invariant amplitudes of the matrix elements under consideration, and the evaluation of one-loop contributions to massless and massive multi-particle production at high energy colliders like LHC and ILC is expected to be performed very efficiently. (orig.)
10. Spectroscopic (FT-IR, FT-Raman and UV-Visible) investigations, NMR chemical shielding anisotropy (CSA) parameters of 2,6-Diamino-4-chloropyrimidine for dye sensitized solar cells using density functional theory.
Science.gov (United States)
Gladis Anitha, E; Joseph Vedhagiri, S; Parimala, K
2015-02-05
The molecular structure, geometry optimization, and vibrational frequencies of the organic dye sensitizer 2,6-Diamino-4-chloropyrimidine (DACP) were studied based on Hartree-Fock (HF) and density functional theory (DFT) using the B3LYP method with the 6-311++G(d,p) basis set. The Ultraviolet-Visible (UV-Vis) spectrum was investigated by time-dependent DFT (TD-DFT). Features of the electronic absorption spectrum in the UV-Visible region were assigned based on the TD-DFT calculation. The absorption bands are assigned to transitions. The interfacial electron transfer between the semiconductor TiO2 electrode and the dye sensitizer DACP is due to an electron injection process from the excited dye to the semiconductor's conduction band. The observed and calculated frequencies are found to be in good agreement. The energies of the frontier molecular orbitals (FMOs) have also been determined. The chemical shielding anisotropy (CSA) parameters are calculated from the NMR analysis. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
11. Magnetizability and rotational g tensors for density fitted local second-order Møller-Plesset perturbation theory using gauge-including atomic orbitals
International Nuclear Information System (INIS)
Loibl, Stefan; Schütz, Martin
2014-01-01
In this paper, we present theory and implementation of an efficient program for calculating magnetizabilities and rotational g tensors of closed-shell molecules at the level of local second-order Møller-Plesset perturbation theory (MP2) using London orbitals. Density fitting is employed to factorize the electron repulsion integrals with ordinary Gaussians as fitting functions. The presented program for the calculation of magnetizabilities and rotational g tensors is based on a previous implementation of NMR shielding tensors reported by S. Loibl and M. Schütz [J. Chem. Phys. 137, 084107 (2012)]. Extensive test calculations show (i) that the errors introduced by density fitting are negligible, and (ii) that the errors of the local approximation are still rather small, although larger than for nuclear magnetic resonance (NMR) shielding tensors. Electron correlation effects for magnetizabilities are tiny for most of the molecules considered here. MP2 appears to overestimate the correlation contribution of magnetizabilities such that it does not constitute an improvement over Hartree-Fock (when comparing to higher-order methods like CCSD(T)). For rotational g tensors the situation is different and MP2 provides a significant improvement in accuracy over Hartree-Fock. The computational performance of the new program was tested for two extended systems, the larger comprising about 2200 basis functions. It turns out that a magnetizability (or rotational g tensor) calculation takes about 1.5 times longer than a corresponding NMR shielding tensor calculation
12. Relativistic theory of nuclear spin-rotation tensor with kinetically balanced rotational London orbitals.
Science.gov (United States)
Xiao, Yunlong; Zhang, Yong; Liu, Wenjian
2014-10-28
Both kinetically balanced (KB) and kinetically unbalanced (KU) rotational London orbitals (RLO) are proposed to resolve the slow basis set convergence in relativistic calculations of nuclear spin-rotation (NSR) coupling tensors of molecules containing heavy elements [Y. Xiao and W. Liu, J. Chem. Phys. 138, 134104 (2013)]. While they perform rather similarly, the KB-RLO Ansatz is clearly preferred as it ensures the correct nonrelativistic limit even with a finite basis. Moreover, it gives rise to the same "direct relativistic mapping" between nuclear magnetic resonance shielding and NSR coupling tensors as that without using the London orbitals [Y. Xiao, Y. Zhang, and W. Liu, J. Chem. Theory Comput. 10, 600 (2014)].
13. Tensor Product of Polygonal Cell Complexes
OpenAIRE
Chien, Yu-Yen
2017-01-01
We introduce the tensor product of polygonal cell complexes, which interacts nicely with the tensor product of link graphs of complexes. We also develop the unique factorization property of polygonal cell complexes with respect to the tensor product, and study the symmetries of tensor products of polygonal cell complexes.
14. Hinged Shields for Machine Tools
Science.gov (United States)
Lallande, J. B.; Poland, W. W.; Tull, S.
1985-01-01
Flaps guard against flying chips, but fold away for tool setup. Clear plastic shield in position to intercept flying chips from machine tool and retracted to give operator access to workpiece. Machine shops readily make such shields for own use.
15. Electrostatic shielding of transformers
Energy Technology Data Exchange (ETDEWEB)
De Leon, Francisco
2017-11-28
Toroidal transformers are currently used only in low-voltage applications. There is no published experience for toroidal transformer design at distribution-level voltages. Toroidal transformers are provided with electrostatic shielding to make possible high voltage applications and withstand the impulse test.
International Nuclear Information System (INIS)
Tada, Nobuo; Ito, Masato; Nihei, Ken-ichi; Takeshi, Tetsu
1998-01-01
A radiation shielding member comprises a metal vessel and a liquid therein, and is disposed to the upper surface of a lower flange of a reactor core shroud. Waterproof hot wires are contained in the liquid and are connected to a power source disposed at the outside. Electric current is supplied to the hot wires to elevate the temperature of the liquid, and the temperature of the vessel is kept higher than an atmospheric temperature thereby suppressing generation of dew condensation or water droplets. In addition, a water repellent coating is applied to the shielding member itself to prevent deposition of water droplets. Further, the bottom of the shielding member is inclined, and a water droplet-recovering vessel is disposed at the lower portion of the shielding member, so that the water droplets collected by the inclination of the bottom are recovered to the water droplet recovering vessel. With such a constitution, access of an operator to the inside of a reactor pressure vessel is facilitated, and at the same time, the working circumstance at the reactor bottom can be improved. (I.N.)
17. Shield For Flexible Pipe
Science.gov (United States)
Ponton, Michael K.; Williford, Clifford B.; Lagen, Nicholas T.
1995-01-01
Cylindrical shield designed to fit around flexible pipe to protect nearby workers from injury and equipment from damage if pipe ruptures. Designed as pressure-relief device. Absorbs impact of debris ejected radially from broken flexible pipe. Also redirects flow of pressurized fluid escaping from broken pipe onto flow path allowing for relief of pressure while minimizing potential for harm.
18. Heat shielding device
International Nuclear Information System (INIS)
Yatabe, Hiroshi; Motoya, Koji; Kodama, Hiroshi.
1997-01-01
Panel-like water cooling tubes are disposed on a shielding concrete wall as a floor surface on which a reactor pressure vessel of a HTGR type reactor is settled. The panel-like water cooling tube comprises a large number of water cooling tubes and fin plates connecting them with each other. A heat shielding device is disposed to the opening of an air vent hole on the shielding concrete wall. The heat shielding device has a plurality of supports disposed between a disk-like upper support plate larger than the opening of the vent hole and covered with a heat insulation material and a lower support plate having a vent hole at the center. The lower support plate is connected with the fin plate. A portion between the supports is formed as pressure releasing channels. Radiation heat from the reactor pressure vessel is transferred to the fin plate by way of the upper support plate, support and a lower support plate and transferred to cooling water of a water-cooling pipeline. Accordingly, radiation heat of the reactor pressure vessel is not transferred to the vent holes. (I.N.)
Science.gov (United States)
2008-01-01
This project analyzed the feasibility of placing an electrostatic field around a spacecraft to provide a shield against radiation. The concept was originally proposed in the 1960s and tested on a spacecraft by the Soviet Union in the 1970s. Such tests and analyses showed that this concept is not only feasible but operational. The problem though is that most of this work was aimed at protection from 10- to 100-MeV radiation. We now appreciate that the real problem is 1- to 2-GeV radiation. So, the question is one of scaling, in both energy and size. Can electrostatic shielding be made to work at these high energy levels and can it protect an entire vehicle? After significant analysis and consideration, an electrostatic shield configuration was proposed. The selected architecture was a torus, charged to a high negative voltage, surrounding the vehicle, and a set of positively charged spheres. Van de Graaff generators were proposed as the mechanism to move charge from the vehicle to the torus to generate the fields necessary to protect the spacecraft. This design minimized complexity, residual charge, and structural forces and resolved several concerns raised during the internal critical review. But, it still is not clear if such a system is cost-effective or feasible, even though several studies have indicated usefulness for radiation protection at energies lower than that of the galactic cosmic rays. Constructing such a system will require power supplies that can generate voltages 10 times that of the state of the art. Of more concern is the difficulty of maintaining the proper net charge on the entire structure and ensuring that its interaction with the solar wind will not cause rapid discharge. Yet, if these concerns can be resolved, such a scheme may provide significant radiation shielding to future vehicles, without the excessive weight or complexity of other active shielding techniques.
20. Colored Tensor Models - a Review
Directory of Open Access Journals (Sweden)
Razvan Gurau
2012-04-01
Colored tensor models have recently burst onto the scene as a promising conceptual and computational tool in the investigation of problems of random geometry in dimension three and higher. We present a snapshot of the cutting edge in this rapidly expanding research field. Colored tensor models have been shown to share many of the properties of their direct ancestor, matrix models, which encode a theory of fluctuating two-dimensional surfaces. These features include the possession of Feynman graphs encoding topological spaces, a 1/N expansion of graph amplitudes, embedded matrix models inside the tensor structure, a resummable leading order with critical behavior and a continuum large volume limit, Schwinger-Dyson equations satisfying a Lie algebra (akin to the Virasoro algebra in two dimensions), non-trivial classical solutions and so on. In this review, we give a detailed introduction of colored tensor models and pointers to current and future research directions.
1. Shielding experiments for accelerator facilities
Energy Technology Data Exchange (ETDEWEB)
Nakashima, Hiroshi; Tanaka, Susumu; Sakamoto, Yukio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] [and others
2000-06-01
A series of shielding experiments was carried out by using AVF cyclotron accelerator of TIARA at JAERI in order to validate shielding design methods for accelerator facilities in intermediate energy region. In this paper neutron transmission experiment through thick shields and radiation streaming experiment through a labyrinth are reported. (author)
2. Shielding experiments for accelerator facilities
International Nuclear Information System (INIS)
Nakashima, Hiroshi; Tanaka, Susumu; Sakamoto, Yukio
2000-01-01
A series of shielding experiments was carried out by using AVF cyclotron accelerator of TIARA at JAERI in order to validate shielding design methods for accelerator facilities in intermediate energy region. In this paper neutron transmission experiment through thick shields and radiation streaming experiment through a labyrinth are reported. (author)
3. Shielding Design and Radiation Shielding Evaluation for LSDS System Facility
International Nuclear Information System (INIS)
Kim, Younggook; Kim, Jeongdong; Lee, Yongdeok
2015-01-01
As a system characteristic, the target in the spectrometer emits approximately 10^12 neutrons/s. To shield these neutrons efficiently, shielding door designs are proposed for the LSDS system through a comparison of direct-shield and maze designs. To guarantee radiation safety for the facility, the door design is therefore a compulsory part of the development of the LSDS system. To improve the shielding performance, a 250x250 covering structure was added as a subsidiary shield around the spectrometer. In this study, the suggested shielding designs were evaluated using the MCNP code. The suggested door design and covering structures shield the neutrons efficiently, and the evaluations for all conditions are satisfied within the public dose limits. From the Monte Carlo simulations, resin (indoor type) and tungsten (outdoor type) were selected as the shielding door materials, and from a comparative evaluation of the door thickness, a thickness of 50 cm was selected for both the indoor and outdoor doors.
4. Physical and Geometric Interpretations of the Riemann Tensor, Ricci Tensor, and Scalar Curvature
OpenAIRE
Loveridge, Lee C.
2004-01-01
Various interpretations of the Riemann Curvature Tensor, Ricci Tensor, and Scalar Curvature are described. Also, the physical meanings of the Einstein Tensor and Einstein's Equations are discussed. Finally a derivation of Newtonian Gravity from Einstein's Equations is given.
5. Shielding container for radioactive isotopes
International Nuclear Information System (INIS)
Sumi, Tetsuo; Tosa, Masayoshi; Hatogai, Tatsuaki.
1975-01-01
Object: To enable opening and closing of bidirectional radiation, used particularly for a gamma densimeter or the like, by a single operation. Structure: This device comprises a rotatable shielding body for receiving a radioactive isotope in the central portion thereof and having at least two radiation openings through which radiation is taken out of the isotope, and a shielding container having openings corresponding to the first mentioned radiation openings, respectively. The radioactive isotope is secured to a rotational shaft of the shielding body, and the shielding body is rotated to register the openings of the shielding container with the openings of the shielding body or to shield the openings, thereby effecting emission and cut-off of the gamma rays in both directions by a single operation. (Kamimura, M.)
6. Primary shield displacement and bowing
International Nuclear Information System (INIS)
Scott, K.V.
1978-01-01
The reactor primary shield is constructed of high density concrete and surrounds the reactor core. The inlet, outlet and side primary shields were constructed in-place using 2.54 cm (1 in) thick steel plates as the forms. The plates remained as an integral part of the shields. The elongation of the pressure tubes due to thermal expansion and pressurization is not being taken up by the inlet nozzle hardware as designed, but is instead accommodated by outward displacement and bowing of the inlet and outlet shields. Excessive distortion of the shields may result in gas seal failures, intolerable helium gas leaks, increased argon-41 emissions, and shield cooling tube failures. The shield surveillance and testing results are presented.
7. Light shielding apparatus
Energy Technology Data Exchange (ETDEWEB)
Miller, Richard Dean; Thom, Robert Anthony
2017-10-10
A light shielding apparatus for blocking light from reaching an electronic device, the light shielding apparatus including left and right support assemblies, a cross member, and an opaque shroud. The support assemblies each include primary support structure, a mounting element for removably connecting the apparatus to the electronic device, and a support member depending from the primary support structure for retaining the apparatus in an upright orientation. The cross member couples the left and right support assemblies together and spaces them apart according to the size and shape of the electronic device. The shroud may be removably and adjustably connectable to the left and right support assemblies and configured to take a cylindrical dome shape so as to form a central space covered from above. The opaque shroud prevents light from entering the central space and contacting sensitive elements of the electronic device.
8. Shielding benchmark test
International Nuclear Information System (INIS)
Kawai, Masayoshi
1984-01-01
Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments for neutron transmission through an iron block performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with the shielding analysis code system RADHEAT-V4 developed at JAERI. The calculated results are compared with the measured data. As for the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily by using the revised JENDL data for fusion neutronics calculations. (author)
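The C/E values quoted above are simply ratios of calculated to experimental results. The snippet below shows how such ratios are typically tabulated; the numbers are made up for illustration and are not data from the KFK or ORNL experiments.

```python
# Hypothetical calculated vs. measured neutron responses at several shield depths.
calculated   = [3.2e-4, 7.5e-5, 1.6e-5]    # arbitrary example values
experimental = [2.9e-4, 6.9e-5, 1.5e-5]

for depth_cm, (c, e) in zip((10, 20, 30), zip(calculated, experimental)):
    print(f"{depth_cm} cm: C/E = {c / e:.2f}")
```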
9. The manufacturing of depleted uranium biological shield components
International Nuclear Information System (INIS)
Metelkin, J.A.
1998-01-01
The unique combination of the physical and mechanical properties of uranium has made it possible to manufacture biological shield components of transport package containers (TPC) for transporting irradiated nuclear power plant fuel and the radionuclide sources of radiation diagnostic instruments. The protective properties depend substantially on the radionuclide composition of the uranium, which is why the author recommends depleted uranium obtained after radiation-chemical processing. A depleted uranium biological shield (DUBS) has improved specific mass-size characteristics compared to a shield made of lead, steel or tungsten. Technological achievements in uranium casting and machining have made it possible to manufacture DUBS components of TPCs with masses up to 3 tons and maximum dimensions up to 2 metres. (authors)
10. Neutron shielding materials
International Nuclear Information System (INIS)
Tomoshige, Toru; Fujii, Yasumasa; Nifuku, Masataka.
1985-01-01
Purpose: To obtain shielding materials excellent in heat and radiation resistance, as well as having mechanical strength at a reduced weight. Constitution: A mixture comprising from 30 to 80 % by weight of epoxy resin, from 5 to 50 % by weight of polyethylene and from 1 to 50 % by weight of inorganic boron compound is cured to prepare a neutron shielding material. The epoxy resin used herein is a compound having more than 18 epoxy groups per molecule. The polyethylene is a polyethylene homopolymer or a copolymer of ethylene and less than 10 % of another copolymerizable monomer, which is preferably powdery and with a grain size of from 10 to 200 μm. The inorganic boron compound can include, for example, boron carbide, boron nitride and anhydrous boric acid. As the curing agent, all sorts of compounds known as curing agents for epoxy resins can be used. The shielding material is excellent in heat resistance, particularly in strength, thermal deformation temperature and bondability at high temperature, and is also satisfactory in compression strength and bondability. (Kawakami, Y.)
11. The tensor rank of tensor product of two three-qubit W states is eight
OpenAIRE
Chen, Lin; Friedland, Shmuel
2017-01-01
We show that the tensor rank of tensor product of two three-qubit W states is not less than eight. Combining this result with the recent result of M. Christandl, A. K. Jensen, and J. Zuiddam that the tensor rank of tensor product of two three-qubit W states is at most eight, we deduce that the tensor rank of tensor product of two three-qubit W states is eight. We also construct the upper bound of the tensor rank of tensor product of many three-qubit W states.
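For orientation (standard background definitions, not material taken from the abstract): the three-qubit W state is
\[
|W\rangle = \tfrac{1}{\sqrt{3}}\left(|001\rangle + |010\rangle + |100\rangle\right),
\]
and the tensor rank of a three-party state is the smallest number r of product terms in a decomposition
\[
|\psi\rangle = \sum_{i=1}^{r} |a_i\rangle |b_i\rangle |c_i\rangle .
\]
A single W state has tensor rank 3, so the naive upper bound for the party-wise tensor product of two W states is 3 x 3 = 9; the results cited above pin the rank down to exactly 8.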
12. Tensor Target Polarization at TRIUMF
Energy Technology Data Exchange (ETDEWEB)
Smith, G
2014-10-27
The first measurements of tensor observables in $\pi\vec{d}$ scattering experiments were performed in the mid-80's at TRIUMF, and later at SIN/PSI. The full suite of tensor observables accessible in $\pi\vec{d}$ elastic scattering were measured: $T_{20}$, $T_{21}$, and $T_{22}$. The vector analyzing power $iT_{11}$ was also measured. These results led to a better understanding of the three-body theory used to describe this reaction. Some measurements were also made in the absorption and breakup channels. A direct measurement of the target tensor polarization was also made independent of the usual NMR techniques by exploiting the (nearly) model-independent result for the tensor analyzing power at $90^{\circ}_{cm}$ in the $\pi\vec{d} \rightarrow 2p$ reaction. This method was also used to check efforts to enhance the tensor polarization by RF burning of the NMR spectrum. A brief description of the methods developed to measure and analyze these experiments is provided.
13. Link prediction via generalized coupled tensor factorisation
DEFF Research Database (Denmark)
Ermiş, Beyza; Evrim, Acar Ataman; Taylan Cemgil, A.
2012-01-01
... and higher-order tensors. We propose to use an approach based on probabilistic interpretation of tensor factorisation models, i.e., Generalised Coupled Tensor Factorisation, which can simultaneously fit a large class of tensor models to higher-order tensors/matrices with common latent factors using different loss functions. Numerical experiments demonstrate that joint analysis of data from multiple sources via coupled factorisation improves the link prediction performance, and that the selection of the right loss function and tensor model is crucial for accurately predicting missing links.
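As a rough illustration of the coupled-factorisation idea, the sketch below fits a 3-way tensor and a side-information matrix that share one latent factor, using plain gradient descent on a squared loss. It is a generic toy, not the authors' Generalised Coupled Tensor Factorisation implementation, and all sizes, rates and iteration counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, M, R = 10, 8, 6, 5, 2                      # arbitrary sizes and CP rank

# Synthetic data: a 3-way tensor X and a side matrix Y sharing the factor A.
A, B, C, D = (rng.standard_normal(s) for s in [(I, R), (J, R), (K, R), (M, R)])
X = np.einsum('ir,jr,kr->ijk', A, B, C)
Y = A @ D.T

# Factors to be estimated, started small.
Ah, Bh, Ch, Dh = (0.1 * rng.standard_normal(s) for s in [(I, R), (J, R), (K, R), (M, R)])

lr = 5e-3
for it in range(3000):
    Rx = X - np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)  # tensor residual
    Ry = Y - Ah @ Dh.T                               # matrix residual
    # Gradients of the coupled squared loss ||Rx||^2 + ||Ry||^2;
    # the shared factor A receives contributions from both data sets.
    gA = np.einsum('ijk,jr,kr->ir', Rx, Bh, Ch) + Ry @ Dh
    gB = np.einsum('ijk,ir,kr->jr', Rx, Ah, Ch)
    gC = np.einsum('ijk,ir,jr->kr', Rx, Ah, Bh)
    gD = Ry.T @ Ah
    Ah, Bh, Ch, Dh = Ah + lr * gA, Bh + lr * gB, Ch + lr * gC, Dh + lr * gD
    if it % 1000 == 0:
        print(f"iteration {it}: coupled loss = {np.sum(Rx**2) + np.sum(Ry**2):.3f}")
```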
14. A contribution to shielding effectiveness analysis of shielded tents
Directory of Open Access Journals (Sweden)
Vranić Zoran M.
2004-01-01
An analysis of the shielding effectiveness (SE) of shielded tents made of metallised fabrics is given. First, two electromagnetic characteristics fundamental to coupling through an electrically thin shield, the skin depth break frequency and the surface resistance or transfer impedance, are defined and analyzed. Then, the transfer function and the SE are analyzed with regard to the frequency range of interest to the Electromagnetic Compatibility (EMC) community.
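For reference, the two quantities named in this abstract have standard textbook expressions (generic relations, not formulas quoted from the paper). For a shield of conductivity $\sigma$, permeability $\mu$ and thickness $t$,
\[
\delta = \sqrt{\frac{2}{\omega\mu\sigma}}, \qquad R_s = \frac{1}{\sigma t},
\]
where $\delta$ is the skin depth and $R_s$ the DC surface (transfer) resistance. The skin depth break frequency is then the frequency at which $\delta$ equals the shield thickness,
\[
f_\delta = \frac{1}{\pi\mu\sigma t^2}.
\]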
15. Measurement of the transient shielding effectiveness of shielding cabinets
Directory of Open Access Journals (Sweden)
H. Herlemann
2008-05-01
Recently, new definitions of shielding effectiveness (SE) for high-frequency and transient electromagnetic fields were introduced by Klinkenbusch (2005). Analytical results were shown for closed as well as for non-closed cylindrical shields. In the present work, the shielding performance of different shielding cabinets is investigated by means of numerical simulations and measurements inside a fully anechoic chamber and a GTEM cell. For the GTEM-cell measurements, a downscaled model of the shielding cabinet is used. For the simulations, the numerical tools CONCEPT II and COMSOL MULTIPHYSICS were available. The numerical results agree well with the measurements. They can be used to interpret the behaviour of the shielding effectiveness of enclosures as a function of frequency. From the measurement of the electric and magnetic fields with and without the enclosure in place, the electric and magnetic shielding effectiveness as well as the transient shielding effectiveness of the enclosure are calculated. The transient SE of four different shielding cabinets is determined and discussed.
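The field-ratio definitions referred to in the last sentences are, in their usual textbook form (stated here for orientation; Klinkenbusch's transient definition differs in detail),
\[
SE_E = 20\log_{10}\frac{|E_{\mathrm{without}}|}{|E_{\mathrm{with}}|}\ \mathrm{dB}, \qquad
SE_H = 20\log_{10}\frac{|H_{\mathrm{without}}|}{|H_{\mathrm{with}}|}\ \mathrm{dB},
\]
where 'without' and 'with' refer to the fields at the observation point without and with the enclosure in place.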
16. Tensor product of quantum logics
Science.gov (United States)
Pulmannová, Sylvia
1985-01-01
A quantum logic is the couple (L,M) where L is an orthomodular σ-lattice and M is a strong set of states on L. The Jauch-Piron property in the σ-form is also supposed for any state of M. A 'tensor product' of quantum logics is defined. This definition is compared with the definition of a free orthodistributive product of orthomodular σ-lattices. The existence and uniqueness of the tensor product in special cases of Hilbert space quantum logics and one quantum and one classical logic are studied.
17. Phase transition in tensor models
Energy Technology Data Exchange (ETDEWEB)
Delepouve, Thibault [Laboratoire de Physique Théorique, CNRS UMR 8627, Université Paris Sud,91405 Orsay Cedex (France); Centre de Physique Théorique, CNRS UMR 7644, École Polytechnique,91128 Palaiseau Cedex (France); Gurau, Razvan [Centre de Physique Théorique, CNRS UMR 7644, École Polytechnique,91128 Palaiseau Cedex (France); Perimeter Institute for Theoretical Physics,31 Caroline St. N, N2L 2Y5, Waterloo, ON (Canada)
2015-06-25
Generalizing matrix models, tensor models generate dynamical triangulations in any dimension and support a 1/N expansion. Using the intermediate field representation we explicitly rewrite a quartic tensor model as a field theory for a fluctuation field around a vacuum state corresponding to the resummation of the entire leading order in 1/N (a resummation of the melonic family). We then prove that the critical regime in which the continuum limit in the sense of dynamical triangulations is reached is precisely a phase transition in the field theory sense for the fluctuation field.
18. Neutron Shielding composition
International Nuclear Information System (INIS)
Seki, Kiiro; Okuda, Hisashi; Harada, Yoshihisa.
1994-01-01
1,3-bis(N,N-diglycidylaminomethyl)cyclohexane as a specific epoxy resin is used together with a usual epoxy resin. A polyamine mixture and an imidazole type compound are used as hardening agents. Further, a boron compound and an inorganic filler are added. Such a neutron shielding composition is hardened at normal temperature without requiring heating, and its mechanical strength, especially compression strength, can be maintained over a wide range from low temperature to high temperature after hardening. (T.M.)
19. A shield against distraction
OpenAIRE
Halin, N.; Marsh, J.E.; Hellman, A.; Hellstrom, I.; Sörqvist, Patrik
2014-01-01
In this paper, we apply the basic idea of a trade-off between the level of concentration and distractibility to test whether a manipulation of task difficulty can shield against distraction. Participants read, either in quiet or with a speech noise background, texts that were displayed either in an easy-to-read or a hard-to-read font. Background speech impaired prose recall, but only when the text was displayed in the easy-to-read font. Most importantly, recall was better in the background sp...
20. Neutronic reactor thermal shield
International Nuclear Information System (INIS)
Lowe, P.E.
1976-01-01
A shield for a nuclear reactor includes at least two layers of alternating wide and narrow rectangular blocks so arranged that the spaces between blocks in adjacent layers are out of registry, each block having an opening therein equally spaced from the sides of the blocks and nearer the top of the block than the bottom, the distance from the top of the block to the opening in one layer being different from this distance in adjacent layers, openings in blocks in adjacent layers being in registry. 1 claim, 7 drawing figures
1. Selective shielding device for scintiphotography
International Nuclear Information System (INIS)
Harper, J.W.; Kay, T.D.
1976-01-01
A selective shielding device to be used in combination with a scintillation camera is described. The shielding device is a substantially oval-shaped configuration removably secured to the scintillation camera. As a result of this combination, scanning of preselected areas of a patient can be rapidly and accurately performed without the requirement of mounting any type of shielding paraphernalia on the patient. 1 claim, 2 drawing figures
2. Measuring space radiation shielding effectiveness
OpenAIRE
Bahadori Amir; Semones Edward; Ewert Michael; Broyan James; Walker Steven
2017-01-01
Passive radiation shielding is one strategy to mitigate the problem of space radiation exposure. While space vehicles are constructed largely of aluminum, polyethylene has been demonstrated to have superior shielding characteristics for both galactic cosmic rays and solar particle events due to the high hydrogen content. A method to calculate the shielding effectiveness of a material relative to reference material from Bragg peak measurements performed using energetic heavy charged particles ...
3. Tensor calculus for physics a concise guide
CERN Document Server
Neuenschwander, Dwight E
2015-01-01
Understanding tensors is essential for any physics student dealing with phenomena where causes and effects have different directions. A horizontal electric field producing vertical polarization in dielectrics; an unbalanced car wheel wobbling in the vertical plane while spinning about a horizontal axis; an electrostatic field on Earth observed to be a magnetic field by orbiting astronauts—these are some situations where physicists employ tensors. But the true beauty of tensors lies in this fact: When coordinates are transformed from one system to another, tensors change according to the same rules as the coordinates. Tensors, therefore, allow for the convenience of coordinates while also transcending them. This makes tensors the gold standard for expressing physical relationships in physics and geometry. Undergraduate physics majors are typically introduced to tensors in special-case applications. For example, in a classical mechanics course, they meet the "inertia tensor," and in electricity and magnetism...
4. The 'gravitating' tensor in the dualistic theory
International Nuclear Information System (INIS)
Mahanta, M.N.
1989-01-01
The exact microscopic system of Einstein-type field equations of the dualistic gravitation theory is investigated, and an analysis of the modified energy-momentum tensor, the so-called 'gravitating' tensor, is presented.
5. Multifunctional Hot Structure Heat Shield
Data.gov (United States)
National Aeronautics and Space Administration — This project is performing preliminary development of a Multifunctional Hot Structure (HOST) heat shield for planetary entry. Results of this development will...
6. Reciprocal mass tensor : a general form
International Nuclear Information System (INIS)
Roy, C.L.
1978-01-01
Using the results of earlier treatment of wave packets, a general form of reciprocal mass tensor has been obtained. The elements of this tensor are seen to be dependent on momentum as well as space coordinates of the particle under consideration. The conditions under which the tensor would reduce to the usual space-independent form, are discussed and the impact of the space-dependence of this tensor on the motion of Bloch electrons, is examined. (author)
7. A new deteriorated energy-momentum tensor
International Nuclear Information System (INIS)
Duff, M.J.
1982-01-01
The stress-tensor of a scalar field theory is not unique because of the possibility of adding an 'improvement term'. In supersymmetric field theories the stress-tensor will appear in a super-current multiplet along with the supersymmetry current. The general question of the supercurrent multiplet for arbitrary deteriorated stress tensors and their relationship to supercurrent multiplets for models with gauge antisymmetric tensors is answered for various models of N = 1, 2 and 4 supersymmetry. (U.K.)
8. Tensor-based spatiotemporal saliency detection
Science.gov (United States)
Dou, Hao; Li, Bin; Deng, Qianqian; Zhang, LiRui; Pan, Zhihong; Tian, Jinwen
2018-03-01
This paper proposes an effective tensor-based spatiotemporal saliency computation model for saliency detection in videos. First, we construct the tensor representation of video frames. Then, the spatiotemporal saliency can be directly computed by the tensor distance between different tensors, which can preserve the complete temporal and spatial structure information of the object in the spatiotemporal domain. Experimental results demonstrate that our method can achieve encouraging performance in comparison with the state-of-the-art methods.
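A minimal sketch of the kind of computation described here, using the plain Frobenius distance between spatiotemporal patch tensors as a stand-in for the paper's tensor distance; the frame data and patch size are made up for illustration.

```python
import numpy as np

def patch_saliency(frames, i, j, p=8):
    """Saliency of the p x p patch at (i, j): distance between the patch tensor
    built from the current frame stack and the same patch one frame earlier."""
    current  = frames[1:, i:i+p, j:j+p]          # spatiotemporal patch tensor
    previous = frames[:-1, i:i+p, j:j+p]
    return np.linalg.norm(current - previous)    # Frobenius tensor distance

frames = np.random.rand(5, 64, 64)               # 5 synthetic grayscale frames
sal = np.array([[patch_saliency(frames, i, j)
                 for j in range(0, 64, 8)]
                for i in range(0, 64, 8)])
print(sal.shape)                                  # coarse 8 x 8 saliency map
```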
9. The direct tensor solution and higher-order acquisition schemes for generalized diffusion tensor imaging
NARCIS (Netherlands)
Akkerman, Erik M.
2010-01-01
Both in diffusion tensor imaging (DTI) and in generalized diffusion tensor imaging (GDTI) the relation between the diffusion tensor and the measured apparent diffusion coefficients is given by a tensorial equation, which needs to be inverted in order to solve for the diffusion tensor. The traditional ...
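In the standard DTI case the tensorial relation mentioned here reads ln(S_g/S_0) = -b g^T D g for a gradient direction g and b-value b, and inverting it for the six unique components of D is a linear least-squares problem. Below is a generic sketch with synthetic, noise-free data; it is not the authors' GDTI solver, and the tensor values and gradient scheme are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
D_true = np.array([[1.7, 0.1, 0.0],
                   [0.1, 0.4, 0.1],
                   [0.0, 0.1, 0.4]]) * 1e-3        # mm^2/s, illustrative tensor
b = 1000.0                                          # s/mm^2
gs = rng.standard_normal((12, 3))
gs /= np.linalg.norm(gs, axis=1, keepdims=True)     # 12 unit gradient directions

# Simulated log-signal ratios: ln(S_g/S_0) = -b * g^T D g
y = np.array([-b * g @ D_true @ g for g in gs])

# Design matrix for the six unique components (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz)
A = np.array([[-b * gx**2, -b * gy**2, -b * gz**2,
               -2*b*gx*gy, -2*b*gx*gz, -2*b*gy*gz] for gx, gy, gz in gs])

d, *_ = np.linalg.lstsq(A, y, rcond=None)
D_fit = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])
print(np.allclose(D_fit, D_true))                   # True for this noise-free example
```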
10. SHIELDS Final Technical Report
Energy Technology Data Exchange (ETDEWEB)
Jordanova, Vania Koleva [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-03
Predicting variations in the near-Earth space environment that can lead to spacecraft damage and failure, i.e. “space weather”, remains a big space physics challenge. A new capability was developed at Los Alamos National Laboratory (LANL) to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms, the SHIELDS framework. This framework simulates the dynamics of the Surface Charging Environment (SCE), the hot (keV) electrons representing the source and seed populations for the radiation belts, on both macro- and micro-scale. In addition to using physics-based models (like RAM-SCB, BATS-R-US, and iPIC3D), new data assimilation techniques employing data from LANL instruments on the Van Allen Probes and geosynchronous satellites were developed. An order of magnitude improvement in the accuracy in the simulation of the spacecraft surface charging environment was thus obtained. SHIELDS also includes a post-processing tool designed to calculate the surface charging for specific spacecraft geometry using the Curvilinear Particle-In-Cell (CPIC) code and to evaluate anomalies' relation to SCE dynamics. Such diagnostics is critically important when performing forensic analyses of space-system failures.
11. New Toroid shielding design
CERN Multimedia
Hedberg V
On the 15th of June 2001 the EB approved a new conceptual design for the toroid shield. In the old design, shown in the left part of the figure above, the moderator part of the shielding (JTV) was situated both in the warm and cold areas of the forward toroid. It consisted both of rings of polyethylene and hundreds of blocks of polyethylene (or an epoxy resin) inside the toroid vacuum vessel. In the new design, shown to the right in the figure above, only the rings remain inside the toroid. To compensate for the loss of moderator in the toroid, the copper plug (JTT) has been reduced in radius so that a layer of borated polyethylene can be placed around it (see figure below). The new design gives significant cost-savings and is easier to produce in the tight time schedule of the forward toroid. Since the amount of copper is reduced the weight that has to be carried by the toroid is also reduced. Outgassing into the toroid vacuum was a potential problem in the old design and this is now avoided. The main ...
12. Weyl tensors for asymmetric complex curvatures
International Nuclear Information System (INIS)
Oliveira, C.G.
Considering a second rank Hermitian field tensor and a general Hermitian connection, the associated complex curvature tensor is constructed. The Weyl tensor that corresponds to this complex curvature is determined. The formalism is applied to the Weyl unitary field theory and to the Moffat gravitational theory. (Author) [pt
13. Vector and tensor analysis with applications
CERN Document Server
Borisenko, A I; Silverman, Richard A
1979-01-01
Concise and readable, this text ranges from definition of vectors and discussion of algebraic operations on vectors to the concept of tensor and algebraic operations on tensors. It also includes a systematic study of the differential and integral calculus of vector and tensor functions of space and time. Worked-out problems and solutions. 1968 edition.
14. Drip Shield Emplacement Gantry Concept
International Nuclear Information System (INIS)
Silva, R.A.; Cron, J.
2000-01-01
This design analysis has shown that, on a conceptual level, the emplacement of drip shields is feasible with current technology and equipment. A plan for drip shield emplacement was presented using a Drip Shield Transporter, a Drip Shield Emplacement Gantry, a locomotive, and a Drip Shield Gantry Carrier. The use of a Drip Shield Emplacement Gantry as an emplacement concept results in a system that is simple, reliable, and interfaces with the numerous other existing repository systems. Using the Waste Emplacement/Retrieval System design as a basis for the drip shield emplacement concept proved to simplify the system by using existing equipment, such as the gantry carrier, locomotive, Electrical and Control systems, and many other systems, structures, and components. Restricted working envelopes for the Drip Shield Emplacement System require further consideration and must be addressed to show that the emplacement operations can be performed as the repository design evolves. Section 6.1 describes how the Drip Shield Emplacement System may use existing equipment. Depending on the length of time between the conclusion of waste emplacement and the commencement of drip shield emplacement, this equipment could include the locomotives, the gantry carrier, and the electrical, control, and rail systems. If the existing equipment is selected for use in the Drip Shield Emplacement System, then the length of time after the final stages of waste emplacement and start of drip shield emplacement may pose a concern for the life cycle of the system (e.g., reliability, maintainability, availability, etc.). Further investigation should be performed to consider the use of existing equipment for drip shield emplacement operations. Further investigation will also be needed regarding the interfaces and heat transfer and thermal effects aspects. The conceptual design also requires further design development. Although the findings of this analysis are accurate for the assumptions made
15. Radiation shield for PWR reactors
International Nuclear Information System (INIS)
Esenov, Amra; Pustovgar, Andrey
2013-01-01
One of the chief structures of a reactor pit is a 'dry' shield. Setting up a 'dry' shield includes the technologically complex process of thermal processing of serpentinite concrete. Modern advances in the area of materials technology permit avoiding this complex and demanding procedure, and this significantly decreases the duration, labor intensity, and cost of setting it up. (orig.)
16. Monitoring the refinement of crystal structures with 15N solid-state NMR shift tensor data
Science.gov (United States)
Kalakewich, Keyton; Iuliucci, Robbie; Mueller, Karl T.; Eloranta, Harriet; Harper, James K.
2015-11-01
The 15N chemical shift tensor is shown to be extremely sensitive to lattice structure and a powerful metric for monitoring density functional theory refinements of crystal structures. These refinements include lattice effects and are applied here to five crystal structures. All structures improve based on a better agreement between experimental and calculated 15N tensors, with an average improvement of 47.0 ppm. Structural improvement is further indicated by a decrease in forces on the atoms by 2-3 orders of magnitude and a greater similarity in atom positions to neutron diffraction structures. These refinements change bond lengths by more than the diffraction errors including adjustments to X-Y and X-H bonds (X, Y = C, N, and O) of 0.028 ± 0.002 Å and 0.144 ± 0.036 Å, respectively. The acquisition of 15N tensors at natural abundance is challenging and this limitation is overcome by improved 1H decoupling in the FIREMAT method. This decoupling dramatically narrows linewidths, improves signal-to-noise by up to 317%, and significantly improves the accuracy of measured tensors. A total of 39 tensors are measured with shifts distributed over a range of more than 400 ppm. Overall, experimental 15N tensors are at least 5 times more sensitive to crystal structure than 13C tensors due to nitrogen's greater polarizability and larger range of chemical shifts.
17. Monitoring the refinement of crystal structures with (15)N solid-state NMR shift tensor data.
Science.gov (United States)
Kalakewich, Keyton; Iuliucci, Robbie; Mueller, Karl T; Eloranta, Harriet; Harper, James K
2015-11-21
The (15)N chemical shift tensor is shown to be extremely sensitive to lattice structure and a powerful metric for monitoring density functional theory refinements of crystal structures. These refinements include lattice effects and are applied here to five crystal structures. All structures improve based on a better agreement between experimental and calculated (15)N tensors, with an average improvement of 47.0 ppm. Structural improvement is further indicated by a decrease in forces on the atoms by 2-3 orders of magnitude and a greater similarity in atom positions to neutron diffraction structures. These refinements change bond lengths by more than the diffraction errors including adjustments to X-Y and X-H bonds (X, Y = C, N, and O) of 0.028 ± 0.002 Å and 0.144 ± 0.036 Å, respectively. The acquisition of (15)N tensors at natural abundance is challenging and this limitation is overcome by improved (1)H decoupling in the FIREMAT method. This decoupling dramatically narrows linewidths, improves signal-to-noise by up to 317%, and significantly improves the accuracy of measured tensors. A total of 39 tensors are measured with shifts distributed over a range of more than 400 ppm. Overall, experimental (15)N tensors are at least 5 times more sensitive to crystal structure than (13)C tensors due to nitrogen's greater polarizability and larger range of chemical shifts.
18. The Physical Interpretation of the Lanczos Tensor
OpenAIRE
Roberts, Mark D.
1999-01-01
The field equations of general relativity can be written as first order differential equations in the Weyl tensor, the Weyl tensor in turn can be written as a first order differential equation in a three index tensor called the Lanczos tensor. The Lanczos tensor plays a similar role in general relativity to that of the vector potential in electro-magnetic theory. The Aharonov-Bohm effect shows that when quantum mechanics is applied to electro-magnetic theory the vector potential is dynamicall...
19. Welding shield for coupling heaters
Science.gov (United States)
Menotti, James Louis
2010-03-09
Systems for coupling end portions of two elongated heater portions and methods of using such systems to treat a subsurface formation are described herein. A system may include a holding system configured to hold end portions of the two elongated heater portions so that the end portions are abutted together or located near each other; a shield for enclosing the end portions, and one or more inert gas inlets configured to provide at least one inert gas to flush the system with inert gas during welding of the end portions. The shield may be configured to inhibit oxidation during welding that joins the end portions together. The shield may include a hinged door that, when closed, is configured to at least partially isolate the interior of the shield from the atmosphere. The hinged door, when open, is configured to allow access to the interior of the shield.
20. Shield calculations, optimization vs. paradigm
International Nuclear Information System (INIS)
Cornejo D, N.; Hernandez S, A.; Martinez G, A.
2006-01-01
Many shields have been designed under the criterion of 'maximum project dose rates'. This has created the paradigm of 'low dose rates', under which not a few specialists would consider dose rates above a few μSv·h⁻¹ unacceptable, independently of the exposure times. At present, numerous shields are being designed considering dose constraints over the real times of exposure. Behind these new shields, the dose rates can be notably higher than behind traditional shields, without this implying inadequate designs or construction errors. The work presents significant differences in the dose-rate levels and shield thicknesses estimated by the two methods for some typical facilities. It was concluded that the use of real exposure times is more adequate for the optimization of radiological protection, although this method demands greater care in its application. (Author)
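The difference between the two design philosophies can be made concrete with a small back-of-the-envelope calculation. The numbers below, including the tenth-value layer, are hypothetical and purely illustrative of the method, not values from the paper.

```python
import math

# Hypothetical inputs for one barrier of a typical facility.
unshielded_dose_rate = 50.0     # μSv/h at the point of interest (assumed)
occupancy_hours      = 400.0    # real annual exposure time behind the barrier (assumed)
annual_limit         = 1000.0   # μSv per year allowed at that point (assumed)
tvl                  = 5.0      # tenth-value layer of the shield material, cm (assumed)

# Paradigm 1: design to a fixed 'project dose rate' of 1 μSv/h everywhere.
att_rate = 1.0 / unshielded_dose_rate
t_rate   = tvl * math.log10(1.0 / att_rate)

# Paradigm 2: design to the annual dose using the real exposure time.
att_dose = annual_limit / (unshielded_dose_rate * occupancy_hours)
t_dose   = tvl * math.log10(1.0 / att_dose)

print(f"thickness for 1 μSv/h everywhere : {t_rate:.1f} cm")
print(f"thickness for real exposure time : {t_dose:.1f} cm")
```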
1. Parameters calculation of shielding experiment
International Nuclear Information System (INIS)
Gavazza, S.
1986-02-01
The radiation transport methodology is evaluated by comparing the calculated reaction rates and dose rates for neutrons and gamma-rays with experimental measurements obtained on an iron shield irradiated in the YAYOI reactor. The ENDF/B-IV and VITAMIN-C libraries and the AMPX-II modular system were used for cross-section generation, with group collapsing performed by the ANISN code. The transport calculations were made using the DOT 3.5 code, adjusting the boundary source spectrum of the iron shield to the reaction and dose rates measured at the front of the shield. The neutron and gamma-ray distributions calculated in the iron shield showed reasonable agreement with the experimental measurements. An experimental arrangement using the IEA-R1 reactor to establish a shielding benchmark is proposed. (Author) [pt
2. Socket Shield Technique
OpenAIRE
Ferreira, João Eduardo Freitas
2017-01-01
Nowadays, it is increasingly common to extract severely compromised teeth and replace them with dental implants. After extraction, resorption of the alveolar bone occurs, resulting in vertical and horizontal bone loss, which subsequently becomes one of the greatest difficulties in implant placement. The Socket Shield technique is an alveolar bone preservation technique for immediate implant situations,...
3. Shielded Canister Transporter
International Nuclear Information System (INIS)
Eidem, G.G. Jr.; Fages, R.
1993-01-01
The Hanford Waste Vitrification Plant (HWVP) will produce canisters filled with high-level radioactive waste immobilized in borosilicate glass. This report discusses a Shielded Canister Transporter (SCT) which will provide the means for safe transportation and handling of the canisters from the Vitrification Building to the Canister Storage Building (CSB). The stainless steel canisters are 0.61 meters in diameter, 3.0 meters tall, and weigh approximately 2,135 kilograms, with a maximum exterior surface dose rate of 90,000 R/hr. The canisters are placed into storage tubes to a maximum of three tall (two for overpack canisters) with an impact limiter placed at the tube bottom and between each canister. A floor plug seals the top of the storage tube at the operating floor level of the CSB
4. Computational shielding benchmarks
International Nuclear Information System (INIS)
The American Nuclear Society Standards Committee 6.2.1 is engaged in the documentation of radiation transport problems and their solutions. The primary objective of this effort is to test computational methods used within the international shielding community. Dissemination of benchmarks will, it is hoped, accomplish several goals: (1) Focus attention on problems whose solutions represent state-of-the-art methodology for representative transport problems of generic interest; (2) Specification of standard problems makes comparisons of alternate computational methods, including use of approximate vs. 'exact' computer codes, more meaningful; (3) Comparison with experimental data may suggest improvements in computer codes and/or associated data sets; (4) Test reliability of new methods as they are introduced for the solution of specific problems; (5) Verify user ability to apply a given computational method; and (6) Verify status of a computer program being converted for use on a different computer (e.g., CDC vs IBM) or facility.
5. Conformal correlators of mixed-symmetry tensors
CERN Document Server
Costa, Miguel S
2015-01-01
We generalize the embedding formalism for conformal field theories to the case of general operators with mixed symmetry. The index-free notation encoding symmetric tensors as polynomials in an auxiliary polarization vector is extended to mixed-symmetry tensors by introducing a new commuting or anticommuting polarization vector for each row or column in the Young diagram that describes the index symmetries of the tensor. We determine the tensor structures that are allowed in n-point conformal correlation functions and give an algorithm for counting them in terms of tensor product coefficients. We show, with an example, how the new formalism can be used to compute conformal blocks of arbitrary external fields for the exchange of any conformal primary and its descendants. The matching between the number of tensor structures in conformal field theory correlators of operators in d dimensions and massive scattering amplitudes in d+1 dimensions is also seen to carry over to mixed-symmetry tensors.
6. Antisymmetric tensor generalizations of affine vector fields.
Science.gov (United States)
Houri, Tsuyoshi; Morisawa, Yoshiyuki; Tomoda, Kentaro
2016-02-01
Tensor generalizations of affine vector fields called symmetric and antisymmetric affine tensor fields are discussed as symmetries of spacetimes. We review the properties of the symmetric ones, which have been studied in earlier works, and investigate the properties of the antisymmetric ones, which are the main theme in this paper. It is shown that antisymmetric affine tensor fields are closely related to one-lower-rank antisymmetric tensor fields which are parallelly transported along geodesics. It is also shown that the number of linearly independent rank-p antisymmetric affine tensor fields in n dimensions is bounded by (n+1)!/(p!(n-p)!). We also derive the integrability conditions for antisymmetric affine tensor fields. Using the integrability conditions, we discuss the existence of antisymmetric affine tensor fields on various spacetimes.
7. Spectral Tensor-Train Decomposition
DEFF Research Database (Denmark)
Bigoni, Daniele; Engsig-Karup, Allan Peter; Marzouk, Youssef M.
2016-01-01
The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT ... (i.e., the "cores") comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting spectral tensor-train decomposition combines the favorable dimension-scaling of the TT ... discretizations of the target function. We assess the performance of the method on a range of numerical examples: a modified set of Genz functions with dimension up to 100, and functions with mixed Fourier modes or with local features. We observe significant improvements in performance over an anisotropic ...
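A compressed illustration of the discrete TT decomposition underlying the functional construction: a plain TT-SVD sketch in NumPy, not the authors' spectral implementation; tolerances, shapes and the separable test function are arbitrary.

```python
import numpy as np

def tt_svd(tensor, tol=1e-12):
    """Decompose an array into tensor-train cores by successive truncated SVDs."""
    dims, cores, r = tensor.shape, [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))          # drop tiny singular values
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        mat = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(mat.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full array."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(-1, 0))
    return out.reshape(out.shape[1:-1])

# A separable (TT-rank-1) example: f(i,j,k) = sin(i) * cos(j) * exp(-k)
grid = [np.sin(np.arange(6)), np.cos(np.arange(5)), np.exp(-np.arange(4))]
X = np.einsum('i,j,k->ijk', *grid)
cores = tt_svd(X)
print([c.shape for c in cores], np.allclose(tt_full(cores), X))
```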
8. Radiation shielding for fusion reactors
International Nuclear Information System (INIS)
Santoro, R.T.
2000-01-01
Radiation shielding requirements for fusion reactors present different problems than those for fission reactors and accelerators. Fusion devices, particularly tokamak reactors, are complicated by geometry constraints that complicate disposition of fully effective shielding. This paper reviews some of these shielding issues and suggested solutions for optimizing the machine and biological shielding. Radiation transport calculations are essential for predicting and confirming the nuclear performance of the reactor and, as such, must be an essential part of the reactor design process. Development and optimization of reactor components from the first wall and primary shielding to the penetrations and containment shielding must be carried out in a sensible progression. Initial results from one-dimensional transport calculations are used for scoping studies and are followed by detailed two- and three-dimensional analyses to effectively characterize the overall radiation environment. These detail model calculations are essential for accounting for the radiation leakage through ports and other penetrations in the bulk shield. Careful analysis of component activation and radiation damage is cardinal for defining remote handling requirements, in-situ replacement of components, and personnel access at specific locations inside the reactor containment vessel. (author)
9. Lunar Surface Reactor Shielding Study
International Nuclear Information System (INIS)
Kang, Shawn; McAlpine, William; Lipinski, Ronald
2006-01-01
A nuclear reactor system could provide power to support long term human exploration of the moon. Such a system would require shielding to protect astronauts from its emitted radiations. Shielding studies have been performed for a Gas Cooled Reactor system because it is considered to be the most suitable nuclear reactor system available for lunar exploration, based on its tolerance of oxidizing lunar regolith and its good conversion efficiency. The goals of the shielding studies were to determine a material shielding configuration that reduces the dose (rem) to the required level in order to protect astronauts, and to estimate the mass of regolith that would provide an equivalent protective effect if it were used as the shielding material. All calculations were performed using MCNPX, a Monte Carlo transport code. Lithium hydride must be kept between 600 K and 700 K to prevent excessive swelling from large amounts of gamma or neutron irradiation. The issue is that radiation damage causes separation of the lithium and the hydrogen, resulting in lithium metal and hydrogen gas. The proposed design uses a layer of B4C to reduce the combined neutron and gamma dose to below 0.5Grads before the LiH is introduced. Below 0.5Grads the swelling in LiH is small (less than about 1%) for all temperatures. This approach causes the shield to be heavier than if the B4C were replaced by LiH, but it makes the shield much more robust and reliable
10. Diffusion tensor optical coherence tomography
Science.gov (United States)
Marks, Daniel L.; Blackmon, Richard L.; Oldenburg, Amy L.
2018-01-01
In situ measurements of diffusive particle transport provide insight into tissue architecture, drug delivery, and cellular function. Analogous to diffusion-tensor magnetic resonance imaging (DT-MRI), where the anisotropic diffusion of water molecules is mapped on the millimeter scale to elucidate the fibrous structure of tissue, here we propose diffusion-tensor optical coherence tomography (DT-OCT) for measuring directional diffusivity and flow of optically scattering particles within tissue. Because DT-OCT is sensitive to the sub-resolution motion of Brownian particles as they are constrained by tissue macromolecules, it has the potential to quantify nanoporous anisotropic tissue structure at micrometer resolution as relevant to extracellular matrices, neurons, and capillaries. Here we derive the principles of DT-OCT, relating the detected optical signal from a minimum of six probe beams with the six unique diffusion tensor and three flow vector components. The optimal geometry of the probe beams is determined given a finite numerical aperture, and a high-speed hardware implementation is proposed. Finally, Monte Carlo simulations are employed to assess the ability of the proposed DT-OCT system to quantify anisotropic diffusion of nanoparticles in a collagen matrix, an extracellular constituent that is known to become highly aligned during tumor development.
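As generic background for relating directional measurements to a symmetric diffusion tensor, the sketch below shows the usual least-squares step for recovering the six unique tensor components from apparent diffusivities measured along six or more directions; this is the familiar DT-MRI-style fit, not the DT-OCT signal model derived in the paper, and all numbers are illustrative.

```python
# Minimal sketch: fit a symmetric 3x3 diffusion tensor D from measurements
# of g^T D g along >= 6 unit directions g (standard linear least squares).
import numpy as np

def fit_diffusion_tensor(directions, diffusivities):
    g = np.asarray(directions, float)
    # Unknowns ordered as (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz)
    A = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                         2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
    d, *_ = np.linalg.lstsq(A, np.asarray(diffusivities, float), rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

# Synthetic anisotropic tensor probed along six directions
D_true = np.diag([2.0, 1.0, 0.5])
dirs = np.array([[1,0,0],[0,1,0],[0,0,1],[1,1,0],[1,0,1],[0,1,1]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
meas = np.einsum('ij,jk,ik->i', dirs, D_true, dirs)      # g^T D g for each direction
print(np.allclose(fit_diffusion_tensor(dirs, meas), D_true))   # True
```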
Science.gov (United States)
Stenström, B; Rehnmark-Larsson, S; Julin, P; Richter, S
1983-01-01
14. Morphometry of terrestrial shield volcanoes
Science.gov (United States)
Grosse, Pablo; Kervyn, Matthieu
2018-03-01
Shield volcanoes are described as low-angle edifices built primarily by the accumulation of successive lava flows. This generic view of shield volcano morphology is based on a limited number of monogenetic shields from Iceland and Mexico, and a small set of large oceanic islands (Hawaii, Galápagos). Here, the morphometry of 158 monogenetic and polygenetic shield volcanoes is analyzed quantitatively from 90-meter resolution SRTM DEMs using the MORVOLC algorithm. An additional set of 24 lava-dominated 'shield-like' volcanoes, considered so far as stratovolcanoes, are documented for comparison. Results show that there is a large variation in shield size (volumes from 0.1 to > 1000 km3), profile shape (height/basal width (H/WB) ratios mostly from 0.01 to 0.1), flank slope gradients (average slopes mostly from 1° to 15°), elongation and summit truncation. Although there is no clear-cut morphometric difference between shield volcanoes and stratovolcanoes, an approximate threshold can be drawn at 12° average slope and 0.10 H/WB ratio. Principal component analysis of the obtained database enables to identify four key morphometric descriptors: size, steepness, plan shape and truncation. Hierarchical cluster analysis of these descriptors results in 12 end-member shield types, with intermediate cases defining a continuum of morphologies. The shield types can be linked in terms of growth stages and shape evolution, related to (1) magma composition and rheology, effusion rate and lava/pyroclast ratio, which will condition edifice steepness; (2) spatial distribution of vents, in turn related to the magmatic feeding system and the tectonic framework, which will control edifice plan shape; and (3) caldera formation, which will condition edifice truncation.
15. Transposes, L-Eigenvalues and Invariants of Third Order Tensors
OpenAIRE
Qi, Liqun
2017-01-01
Third order tensors have wide applications in mechanics, physics and engineering. The most famous and useful third order tensor is the piezoelectric tensor, which plays a key role in the piezoelectric effect, first discovered by Curie brothers. On the other hand, the Levi-Civita tensor is famous in tensor calculus. In this paper, we study third order tensors and (third order) hypermatrices systematically, by regarding a third order tensor as a linear operator which transforms a second order t...
16. Sparse alignment for robust tensor learning.
Science.gov (United States)
Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming
2014-10-01
Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods.
17. Shielding around spallation neutron sources
International Nuclear Information System (INIS)
Fragopoulou, M; Manolopoulou, M; Stoulos, S; Brandt, R; Westmeier, W; Krivopustov, M; Sosnin, A; Golovatyuk, S; Zamani, M
2006-01-01
Spallation neutron sources provide more intense and harder neutron spectrum than nuclear reactors for which a substantial amount of shielding measurements have been performed. Although the main part of the cost for a spallation station is the cost of the shielding, measurements regarding shielding for the high energy neutron region are still very scarce. In this work calculation of the neutron interaction length in polyethylene moderator for different neutron energies is presented. Measurements which were carried out in Nuclotron accelerator at the Laboratory of High Energies (Joint Institute for Nuclear Research, Dubna) and comparison with calculation are also presented. The measurements were performed with Solid State Nuclear Track Detectors (SSNTDs)
International Nuclear Information System (INIS)
Disney, R.K.
1977-01-01
Radiation protection/shielding design of a nuclear facility requires a coordinated effort of many engineering disciplines to meet the requirements imposed by regulations. In the following discussion, the system approach to Clinch River Breeder Reactor Plant (CRBRP) radiation protection will be described, and the program developed to implement this approach will be defined. In addition, the principal shielding design problems of LMFBR nuclear reactor systems will be discussed in relation to LWR nuclear reactor system shielding designs. The methodology used to analyze these problems in the U.S. LMFBR program, the resultant design solutions, and the experimental verification of these designs and/or methods will be discussed. (orig.) [de
19. CHEMICALS
CERN Multimedia
Medical Service
2002-01-01
It is reminded that all persons who use chemicals must inform CERN's Chemistry Service (TIS-GS-GC) and the CERN Medical Service (TIS-ME). Information concerning their toxicity or other hazards as well as the necessary individual and collective protection measures will be provided by these two services. Users must be in possession of a material safety data sheet (MSDS) for each chemical used. These can be obtained by one of several means : the manufacturer of the chemical (legally obliged to supply an MSDS for each chemical delivered) ; CERN's Chemistry Service of the General Safety Group of TIS ; for chemicals and gases available in the CERN Stores the MSDS has been made available via EDH either in pdf format or else via a link to the supplier's web site. Training courses in chemical safety are available for registration via HR-TD. CERN Medical Service : TIS-ME :73186 or [email protected] Chemistry Service : TIS-GS-GC : 78546
20. High frequency electromagnetic interference shielding magnetic polymer nanocomposites
Science.gov (United States)
He, Qingliang
Electromagnetic interference has become a major pollution concern as electronic devices are used ever more extensively in our daily lives. Besides the interference itself, long-term exposure to electromagnetic radiation may also cause severe damage to the human body. In order to mitigate the undesirable part of the electromagnetic wave energy and maintain the long-term sustainable development of our modern civilized society, new technology-development-based research has been carried out to solve this problem. However, one of the major challenges facing electromagnetic interference shielding is the relatively low shielding efficiency, together with the high cost and complicated manufacture of shielding materials. From the materials science point of view, the key solutions to these challenges depend strongly on breaking through the current limits of shielding material design and manufacture (such as hierarchical material design with controllable and predictable nanoscale particle configuration achieved in an easy in-situ manner). From the chemical engineering point of view, upgrading the shielding performance of advanced materials and enlarging the production scale of shielding materials (for example, configuring the effective components in the shielding material to lower their usage, and eliminating the "rate-limiting" step to enlarge the production scale) are of great importance. In this dissertation, the design and preparation of morphology-controlled magnetic nanoparticles and their reinforced polypropylene polymer nanocomposites are covered first. Then, the functionalities of these polymer nanocomposites are demonstrated. Based on the innovative materials design and the synergistic effects on performance, magnetic polypropylene polymer nanocomposites with the desired multifunctionalities are designed and produced, targeting electromagnetic interference shielding applications. In addition
1. Survivor shielding. Part C. Improvements in terrain shielding
International Nuclear Information System (INIS)
Egbert, Stephen D.; Kaul, Dean C.; Roberts, James A.; Kerr, George D.
2005-01-01
A number of atomic-bomb survivors were affected by shielding provided by terrain features. These terrain features can be a small hill, affecting one or two houses, or a high mountain that shields large neighborhoods. In the survivor dosimetry system, terrain shielding can be described by a transmission factor (TF), which is the ratio between the dose with and without the terrain present. The terrain TF typically ranges between 0.1 and 1.0. After DS86 was implemented at RERF, the terrain shielding categories were examined and found to either have a bias or an excessive uncertainty that could readily be removed. In 1989, an improvement in the terrain model was implemented at RERF in the revised DS86 code, but the documentation was not published. It is now presented in this section. The solution to the terrain shielding in front of a house is described in this section. The problem of terrain shielding of survivors behind Hijiyama mountain at Hiroshima and Konpirasan mountain at Nagasaki has also been recognized, and a solution to this problem has been included in DS02. (author)
2. Self-shielding factors
International Nuclear Information System (INIS)
Kaul, D.C.
1982-01-01
Throughout the last two decades many efforts have been made to estimate the effect of body self-shielding on organ doses from externally incident neutrons and gamma rays. These began with the use of simple geometry phantoms and have culminated in the use of detailed anthropomorphic phantoms. In a recent effort, adjoint Monte Carlo analysis techniques have been used to determine dose and dose equivalent to the active marrow as a function of energy and angle of neutron fluence externally incident on an anthropomorphic phantom. When combined with fluences from actual nuclear devices, these dose-to-fluence factors result in marrow dose values that demonstrate great sensitivity to variations in device type, range, and body orientation. Under a state-of-the-art radiation transport analysis demonstration program for the Japanese cities, sponsored by the Defense Nuclear Agency at the request of the National Council on Radiation Protection and Measurements, the marrow dose study referred to above is being repeated to obtain spectral distributions within the marrow for externally incident neutrons and gamma rays of arbitrary energy and angle. This is intended to allow radiobiologists and epidemiologists to select and to modify numbers of merit for correlation with health effects and to permit a greater understanding of the relationship between human and laboratory subject dosimetry
International Nuclear Information System (INIS)
Nemezawa, Isao; Kimura, Tadahiro; Mizuochi, Akira; Omori, Tetsu
1998-01-01
A single body of a radiation shield comprises a bag prepared by welding or bonding a polyurethane sheet which is made flat while interposing metal plates at the upper and the lower portion of the bag. Eyelet fittings are disposed to the upper and the lower portions of the bag passing through the metal plates and the flat portion of the bag. Water supplying/draining ports are disposed to two upper and lower places of the bag at a height where the metal plates are disposed. Reinforcing walls welded or bonded to the inner wall surface of the bag are elongated in vertical direction to divide the inside of the bag to a plurality of cells. The bag is suspended and supported from a frame with S-shaped hooks inserted into the eyelet fittings as connecting means. A plurality of bags are suspended and supported from the frame at a required height by way of the eyelets at the lower portion of the suspended and supported bag and the eyelet fittings at the upper portion of the bag below the intermediate connection means. (I.N.)
4. Tensor SOM and tensor GTM: Nonlinear tensor analysis by topographic mappings.
Science.gov (United States)
Iwasaki, Tohru; Furukawa, Tetsuo
2016-05-01
In this paper, we propose nonlinear tensor analysis methods: the tensor self-organizing map (TSOM) and the tensor generative topographic mapping (TGTM). TSOM is a straightforward extension of the self-organizing map from high-dimensional data to tensorial data, and TGTM is an extension of the generative topographic map, which provides a theoretical background for TSOM using a probabilistic generative model. These methods are useful tools for analyzing and visualizing tensorial data, especially multimodal relational data. For given n-mode relational data, TSOM and TGTM can simultaneously organize a set of n-topographic maps. Furthermore, they can be used to explore the tensorial data space by interactively visualizing the relationships between modes. We present the TSOM algorithm and a theoretical description from the viewpoint of TGTM. Various TSOM variations and visualization techniques are also described, along with some applications to real relational datasets. Additionally, we attempt to build a comprehensive description of the TSOM family by adapting various data structures. Copyright © 2016 Elsevier Ltd. All rights reserved.
5. Active Radiation Shield, Phase I
Data.gov (United States)
National Aeronautics and Space Administration — DEC-Shield technology offers the means to generate electric power from cosmic radiation sources and fuse dissimilar systems and functionality into a structural...
6. Shielding calculations. Optimization vs. Paradigms
International Nuclear Information System (INIS)
Cornejo Diaz, Nestor; Hernandez Saiz, Alejandro; Martinez Gonzalez, Alina
2005-01-01
Many radiation shielding barriers in Cuba have been designed according to the criterion of Maximum Projected Dose Rates. This fact has created the paradigm of low dose rates. Because of this, dose rate levels greater than units of Sv.h-1 would be considered unacceptable by many specialists, regardless of the real exposure times. Nowadays many shielding barriers are being designed using dose constraints in real exposure times. Behind the new barriers, dose rates could be notably greater than those behind the traditional ones, and this does not imply inadequate designs or constructive errors. In this work, significant differences in dose rate levels and shielding thicknesses calculated by both methods were obtained for some typical installations. The work concludes that the real-exposure-time approach is more adequate in order to optimise Radiation Protection, although this method should be applied carefully
7. Seamless warping of diffusion tensor fields
DEFF Research Database (Denmark)
Xu, Dongrong; Hao, Xuejun; Bansal, Ravi
2008-01-01
To warp diffusion tensor fields accurately, tensors must be reoriented in the space to which the tensors are warped based on both the local deformation field and the orientation of the underlying fibers in the original image. Existing algorithms for warping tensors typically use forward mapping ... of seams, including voxels in which the deformation is extensive. Backward mapping, however, cannot reorient tensors in the template space because information about the directional orientation of fiber tracts is contained in the original, unwarped imaging space only, and backward mapping alone cannot transfer that information to the template space. To combine the advantages of forward and backward mapping, we propose a novel method for the spatial normalization of diffusion tensor (DT) fields that uses a bijection (a bidirectional mapping with one-to-one correspondences between image spaces) to warp DT ...
8. Diffusion tensor imaging using multiple coils for mouse brain connectomics.
Science.gov (United States)
Nouls, John C; Badea, Alexandra; Anderson, Robert B J; Cofer, Gary P; Allan Johnson, G
2018-04-19
The correlation between brain connectivity and psychiatric or neurological diseases has intensified efforts to develop brain connectivity mapping techniques on mouse models of human disease. The neural architecture of mouse brain specimens can be shown non-destructively and three-dimensionally by diffusion tensor imaging, which enables tractography, the establishment of a connectivity matrix and connectomics. However, experiments on cohorts of animals can be prohibitively long. To improve throughput in a 7-T preclinical scanner, we present a novel two-coil system in which each coil is shielded, placed off-isocenter along the axis of the magnet and connected to a receiver circuit of the scanner. Preservation of the quality factor of each coil is essential to signal-to-noise ratio (SNR) performance and throughput, because mouse brain specimen imaging at 7 T takes place in the coil-dominated noise regime. In that regime, we show a shielding configuration causing no SNR degradation in the two-coil system. To acquire data from several coils simultaneously, the coils are placed in the magnet bore, around the isocenter, in which gradient field distortions can bias diffusion tensor imaging metrics, affect tractography and contaminate measurements of the connectivity matrix. We quantified the experimental alterations in fractional anisotropy and eigenvector direction occurring in each coil. We showed that, when the coils were placed 12 mm away from the isocenter, measurements of the brain connectivity matrix appeared to be minimally altered by gradient field distortions. Simultaneous measurements on two mouse brain specimens demonstrated a full doubling of the diffusion tensor imaging throughput in practice. Each coil produced images devoid of shading or artifact. To further improve the throughput of mouse brain connectomics, we suggested a future expansion of the system to four coils. To better understand acceptable trade-offs between imaging throughput and connectivity
9. Should I use TensorFlow
OpenAIRE
Schrimpf, Martin
2016-01-01
Google's Machine Learning framework TensorFlow was open-sourced in November 2015 [1] and has since built a growing community around it. TensorFlow is supposed to be flexible for research purposes while also allowing its models to be deployed productively. This work is aimed towards people with experience in Machine Learning considering whether they should use TensorFlow in their environment. Several aspects of the framework important for such a decision are examined, such as the heterogeneity,...
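For readers who have never touched the framework, the sketch below shows what a minimal TensorFlow program looks like, written in the later TensorFlow 2.x eager style rather than the graph-based API current at the time of the report; the values are illustrative.

```python
# Minimal TensorFlow 2.x sketch: build a small computation and let the
# framework differentiate it automatically.
import tensorflow as tf

x = tf.Variable([[1.0, 2.0], [3.0, 4.0]])
with tf.GradientTape() as tape:
    y = tf.reduce_sum(tf.matmul(x, x))   # scalar function of x
grad = tape.gradient(y, x)               # gradient of y with respect to x
print(grad.numpy())
```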
10. The Topology of Symmetric Tensor Fields
Science.gov (United States)
Levin, Yingmei; Batra, Rajesh; Hesselink, Lambertus; Levy, Yuval
1997-01-01
Combinatorial topology, also known as "rubber sheet geometry", has extensive applications in geometry and analysis, many of which result from connections with the theory of differential equations. A link between topology and differential equations is vector fields. Recent developments in scientific visualization have shown that vector fields also play an important role in the analysis of second-order tensor fields. A second-order tensor field can be transformed into its eigensystem, namely, eigenvalues and their associated eigenvectors, without loss of information content. Eigenvectors behave in a similar fashion to ordinary vectors, with even simpler topological structures due to their sign indeterminacy. Incorporating information about eigenvectors and eigenvalues in a display technique known as hyperstreamlines reveals the structure of a tensor field. To simplify an often complex tensor field and to capture its important features, the tensor is decomposed into an isotropic tensor and a deviator. A tensor field and its deviator share the same set of eigenvectors, and therefore they have a similar topological structure. The deviator determines the properties of a tensor field, while the isotropic part provides a uniform bias. Degenerate points are basic constituents of tensor fields. In 2-D tensor fields, there are only two types of degenerate points, while in 3-D, the degenerate points can be characterized in a Q'-R' plane. Compressible and incompressible flows share similar topological features due to the similarity of their deviators. In the case of the deformation tensor, the singularities of its deviator represent the area of the vortex core in the field. In turbulent flows, the similarities and differences of the topology of the deformation and the Reynolds stress tensors reveal that the basic eddy-viscosity assumptions have their validity in turbulence modeling under certain conditions.
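The isotropic/deviator split mentioned above is easy to state concretely; the sketch below decomposes a symmetric second-order tensor and checks that the tensor and its deviator share eigenvectors, with eigenvalues differing only by the isotropic shift. The example tensor is illustrative.

```python
# Minimal sketch: split a symmetric tensor into an isotropic part plus a
# trace-free deviator, and verify the shared eigenstructure.
import numpy as np

T = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])              # symmetric example tensor

iso = np.trace(T) / 3.0 * np.eye(3)          # isotropic part (uniform bias)
dev = T - iso                                # deviator, trace(dev) == 0

w_T, v_T = np.linalg.eigh(T)
w_d, v_d = np.linalg.eigh(dev)
print(np.allclose(w_T, w_d + np.trace(T) / 3.0))   # True: eigenvalues shifted
print(np.allclose(np.abs(v_T), np.abs(v_d)))       # True: same eigenvectors (up to sign)
```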
11. Measuring space radiation shielding effectiveness
Directory of Open Access Journals (Sweden)
2017-01-01
Passive radiation shielding is one strategy to mitigate the problem of space radiation exposure. While space vehicles are constructed largely of aluminum, polyethylene has been demonstrated to have superior shielding characteristics for both galactic cosmic rays and solar particle events due to the high hydrogen content. A method to calculate the shielding effectiveness of a material relative to reference material from Bragg peak measurements performed using energetic heavy charged particles is described. Using accelerated alpha particles at the National Aeronautics and Space Administration Space Radiation Laboratory at Brookhaven National Laboratory, the method is applied to sample tiles from the Heat Melt Compactor, which were created by melting material from a simulated astronaut waste stream, consisting of materials such as trash and unconsumed food. The shielding effectiveness calculated from measurements of the Heat Melt Compactor sample tiles is about 10% less than the shielding effectiveness of polyethylene. Shielding material produced from the astronaut waste stream in the form of Heat Melt Compactor tiles is therefore found to be an attractive solution for protection against space radiation.
12. Measuring space radiation shielding effectiveness
Science.gov (United States)
Bahadori, Amir; Semones, Edward; Ewert, Michael; Broyan, James; Walker, Steven
2017-09-01
Passive radiation shielding is one strategy to mitigate the problem of space radiation exposure. While space vehicles are constructed largely of aluminum, polyethylene has been demonstrated to have superior shielding characteristics for both galactic cosmic rays and solar particle events due to the high hydrogen content. A method to calculate the shielding effectiveness of a material relative to reference material from Bragg peak measurements performed using energetic heavy charged particles is described. Using accelerated alpha particles at the National Aeronautics and Space Administration Space Radiation Laboratory at Brookhaven National Laboratory, the method is applied to sample tiles from the Heat Melt Compactor, which were created by melting material from a simulated astronaut waste stream, consisting of materials such as trash and unconsumed food. The shielding effectiveness calculated from measurements of the Heat Melt Compactor sample tiles is about 10% less than the shielding effectiveness of polyethylene. Shielding material produced from the astronaut waste stream in the form of Heat Melt Compactor tiles is therefore found to be an attractive solution for protection against space radiation.
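The relative-effectiveness idea can be illustrated with simple arithmetic; the sketch below expresses one plausible way to compare a sample against the polyethylene reference using the areal densities that produce the same Bragg-peak range shift. This is an illustration only, not the paper's exact formula, and the numbers are hypothetical.

```python
# Hedged sketch: one way to express shielding effectiveness relative to a
# reference material from Bragg-peak measurements (illustrative definition).
def relative_effectiveness(areal_density_sample, areal_density_reference):
    """Areal densities (g/cm^2) producing the same Bragg-peak range shift."""
    return areal_density_reference / areal_density_sample

# Hypothetical: 1.10 g/cm^2 of waste tile shifts the peak as much as
# 1.00 g/cm^2 of polyethylene -> roughly 10% less effective, matching the
# qualitative result quoted in the abstract.
print(relative_effectiveness(1.10, 1.00))   # ~0.91
```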
13. Dictionary-Based Tensor Canonical Polyadic Decomposition
Science.gov (United States)
Cohen, Jeremy Emile; Gillis, Nicolas
2018-04-01
To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. Performances of the proposed algorithms are evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
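As background for the dictionary-constrained variant described above, the sketch below shows a plain CP (canonical polyadic) decomposition of a 3-way tensor by alternating least squares; the paper's method would additionally force one factor's columns to come from a known dictionary. The code is illustrative, not the authors' algorithm.

```python
# Minimal CP-ALS sketch for a 3-way tensor X ~= sum_r a_r (outer) b_r (outer) c_r.
import numpy as np

def cp_als(X, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    X1 = X.reshape(I, J * K)                       # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)    # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)    # mode-3 unfolding
    kr = lambda U, V: (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])  # Khatri-Rao
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(kr(B, C).T)
        B = X2 @ np.linalg.pinv(kr(A, C).T)
        C = X3 @ np.linalg.pinv(kr(A, B).T)
    return A, B, C

# Recover a synthetic rank-2 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((4, 2)), rng.random((5, 2)), rng.random((6, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))   # typically very small (~1e-8)
```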
14. Bayesian regularization of diffusion tensor images
DEFF Research Database (Denmark)
Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif
2007-01-01
Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...
15. Inflationary cosmology and 4-index tensor fields
International Nuclear Information System (INIS)
Moorhouse, R.G.; Nixon, J.
1985-01-01
We show how an arbitrarily large expansion of the ordinary dimensions in the very early universe can be achieved in the d=11 supergravity theory where the 4-index anti-symmetric tensor field supplies the energy-momentum tensor. However, the decrease of the extra dimensions is too fast to give a satisfactory inflationary cosmology. If a 4-index tensor field is similarly used to provide the energy-momentum tensor in dimensions significantly greater than 11, the inflationary outlook is more hopeful. (orig.)
16. A RENORMALIZATION PROCEDURE FOR TENSOR MODELS AND SCALAR-TENSOR THEORIES OF GRAVITY
OpenAIRE
SASAKURA, NAOKI
2010-01-01
Tensor models are more-index generalizations of the so-called matrix models, and provide models of quantum gravity with the idea that spaces and general relativity are emergent phenomena. In this paper, a renormalization procedure for the tensor models whose dynamical variable is a totally symmetric real three-tensor is discussed. It is proven that configurations with certain Gaussian forms are the attractors of the three-tensor under the renormalization procedure. Since these Gaussian config...
17. Boron filled siloxane polymers for radiation shielding
Science.gov (United States)
2018-03-01
The purpose of the present work was to evaluate changes to structure-property relationships of 10B filled siloxane-based polymers when exposed to nuclear reactor radiation. Highly filled polysiloxanes were synthesized with the intent of fabricating materials that could shield high neutron fluences. The newly formulated materials consisted of cross-linked poly-diphenyl-methylsiloxane filled with natural boron and carbon nanofibers. This polymer was chosen because of its good thermal and chemical stabilities, as well as resistance to ionizing radiation thanks to the presence of aromatic groups in the siloxane backbone. Highly isotopically enriched 10B filler was used to provide an efficient neutron radiation shield, and carbon nanofibers were added to improve mechanical strength. This novel polymeric material was exposed in the Annular Core Research Reactor (ACRR) at Sandia National Labs to five different neutron/gamma fluxes consisting of very high neutron fluences within very short time periods. Thermocouples placed on the specimens recorded in-situ temperature changes during radiation exposure, which agreed well with those obtained from our MCNP simulations. Changes in the microstructural, thermal, chemical, and mechanical properties were evaluated by SEM, DSC, TGA, FT-IR, NMR, solvent swelling, and uniaxial compressive load measurements. Our results demonstrate that these newly formulated materials are well-suited for use in applications that require exposure to different types of ionizing conditions that take place simultaneously.
18. 3D reconstruction of tensors and vectors
Energy Technology Data Exchange (ETDEWEB)
Defrise, Michel; Gullberg, Grant T.
2005-02-17
Here we have developed formulations for the reconstruction of 3D tensor fields from planar (Radon) and line-integral (X-ray) projections of 3D vector and tensor fields. Much of the motivation for this work is the potential application of MRI to perform diffusion tensor tomography. The goal is to develop a theory for the reconstruction of both Radon planar and X-ray or line-integral projections because of the flexibility of MRI to obtain both of these type of projections in 3D. The development presented here for the linear tensor tomography problem provides insight into the structure of the nonlinear MRI diffusion tensor inverse problem. A particular application of tensor imaging in MRI is the potential application of cardiac diffusion tensor tomography for determining in vivo cardiac fiber structure. One difficulty in the cardiac application is the motion of the heart. This presents a need for developing future theory for tensor tomography in a motion field. This means developing a better understanding of the MRI signal for diffusion processes in a deforming media. The techniques developed may allow the application of MRI tensor tomography for the study of structure of fiber tracts in the brain, atherosclerotic plaque, and spine in addition to fiber structure in the heart. However, the relations presented are also applicable to other fields in medical imaging such as diffraction tomography using ultrasound. The mathematics presented can also be extended to exponential Radon transform of tensor fields and to other geometric acquisitions such as cone beam tomography of tensor fields.
19. Algebraic classification of the Weyl tensor in higher dimensions based on its 'superenergy' tensor
International Nuclear Information System (INIS)
Senovilla, Jose M M
2010-01-01
The algebraic classification of the Weyl tensor in the arbitrary dimension n is recovered by means of the principal directions of its 'superenergy' tensor. This point of view can be helpful in order to compute the Weyl aligned null directions explicitly, and permits one to obtain the algebraic type of the Weyl tensor by computing the principal eigenvalue of rank-2 symmetric future tensors. The algebraic types compatible with states of intrinsic gravitational radiation can then be explored. The underlying ideas are general, so that a classification of arbitrary tensors in the general dimension can be achieved. (fast track communication)
20. The Dalkon Shield in perspective.
Science.gov (United States)
Pendergast, P; Hirsh, H L
1986-01-01
When the Dalkon Shield IUD became clinically available in the early 1970s, it appeared that an ideal contraceptive had been developed, with essentially no adverse effects (very safe) and a very high degree of efficacy. Within a few years, however, it became apparent that the Dalkon Shield had not lived up to expectations. In fact, it caused very severe complications, not infrequently resulting in the loss of reproductive ability and in 17 cases, death. In addition, the pregnancy rate among women using this IUD was significantly high, with many resulting in mid-trimester abortion when the IUD remained in-place. The authors trace the legal consequences of this medical disaster, which has resulted in both the development of new and the extension of old legal theories and doctrines involving negligence and product liability. Dalkon Shield litigation is most likely to continue. Many women are assumed to wear the shield still, and neither the FDA or A.H. Robins, manufacturer of the shield, has issued a formal recall.
1. Electromagnetic stress tensor for an amorphous metamaterial medium
Science.gov (United States)
Wang, Neng; Wang, Shubo; Ng, Jack
2018-03-01
We analytically and numerically investigated the internal optical forces exerted by an electromagnetic wave inside an amorphous metamaterial medium. We derived, by using the principle of virtual work, the Helmholtz stress tensor, which takes into account the electrostriction effect. Several examples of amorphous media are considered, and different electromagnetic stress tensors, such as the Einstein-Laub tensor and Minkowski tensor, are also compared. It is concluded that the Helmholtz stress tensor is the appropriate tensor for such systems.
2. Unique characterization of the Bel-Robinson tensor
International Nuclear Information System (INIS)
Bergqvist, G; Lankinen, P
2004-01-01
We prove that a completely symmetric and trace-free rank-4 tensor is, up to sign, a Bel-Robinson-type tensor, i.e., the superenergy tensor of a tensor with the same algebraic symmetries as the Weyl tensor, if and only if it satisfies a certain quadratic identity. This may be seen as the first Rainich theory result for rank-4 tensors
3. Differential invariants for higher-rank tensors. A progress report
International Nuclear Information System (INIS)
Tapial, V.
2004-07-01
We outline the construction of differential invariants for higher-rank tensors. In section 2 we outline the general method for the construction of differential invariants. A first result is that the simplest tensor differential invariant contains derivatives of the same order as the rank of the tensor. In section 3 we review the construction for the first-rank tensors (vectors) and second-rank tensors (metrics). In section 4 we outline the same construction for higher-rank tensors. (author)
4. Friction tensor concept for textured surfaces
This paper proposes the concept of a friction tensor analogous to the heat conduction tensor in anisotropic media. This implies that there exist two principal friction coefficients μ1,2 analogous to the principal conductivities k1,2. For symmetrically textured surfaces the principal directions are orthogonal with at least one ...
5. Gravitational Metric Tensor Exterior to Rotating Homogeneous ...
African Journals Online (AJOL)
The covariant and contravariant metric tensors exterior to a homogeneous spherical body rotating uniformly about a common φ axis with constant angular velocity ω are constructed. The constructed metric tensors in this gravitational field have seven non-zero distinct components. The Lagrangian for this gravitational field is ...
6. Friction tensor concept for textured surfaces
Depending on the sliding direction the coefficient of friction varies between maximum and minimum for textured surfaces. For random surfaces without any texture the friction coefficient becomes independent of the sliding direction. This paper proposes the concept of a friction tensor analogous to the heat conduction tensor ...
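The direction dependence described in these two abstracts can be made concrete with a small calculation; the sketch below builds a 2x2 friction tensor from principal coefficients and evaluates an effective coefficient as the sliding direction rotates. The construction (friction force magnitude per unit normal load as the norm of the tensor acting on the sliding direction) is an illustrative reading of the idea, not the authors' derivation.

```python
# Illustrative friction-tensor sketch: effective friction versus sliding
# direction for principal coefficients mu1 (texture direction) and mu2.
import numpy as np

def effective_friction(mu1, mu2, angle_deg):
    theta = np.deg2rad(angle_deg)
    s = np.array([np.cos(theta), np.sin(theta)])   # unit sliding direction
    M = np.diag([mu1, mu2])                        # friction tensor in its principal frame
    return np.linalg.norm(M @ s)                   # friction force magnitude / normal load

for ang in (0, 45, 90):
    print(ang, round(effective_friction(0.3, 0.1, ang), 3))
# 0 deg -> 0.3 (maximum), 90 deg -> 0.1 (minimum), 45 deg -> intermediate
```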
7. Radiation Shielding Materials and Containers Incorporating Same
Energy Technology Data Exchange (ETDEWEB)
Mirsky, Steven M.; Krill, Stephen J.; and Murray, Alexander P.
2005-11-01
An improved radiation shielding material and storage systems for radioactive materials incorporating the same. The PYRolytic Uranium Compound (''PYRUC'') shielding material is preferably formed by heat and/or pressure treatment of a precursor material comprising microspheres of a uranium compound, such as uranium dioxide or uranium carbide, and a suitable binder. The PYRUC shielding material provides improved radiation shielding, thermal characteristics, cost and ease of use in comparison with other shielding materials. The shielding material can be used to form containment systems, container vessels, shielding structures, and containment storage areas, all of which can be used to house radioactive waste. The preferred shielding system is in the form of a container for storage, transportation, and disposal of radioactive waste. In addition, improved methods for preparing uranium dioxide and uranium carbide microspheres for use in the radiation shielding materials are also provided.
8. MMW [multimegawatt] shielding design and analysis
International Nuclear Information System (INIS)
Olson, A.P.
1988-01-01
Reactor shielding for multimegawatt (MMW) space power must satisfy a mass constraint as well as performance specifications for neutron fluence and gamma dose. A minimum mass shield is helpful in attaining the launch mass goal for the entire vehicle, because the shield comprises about 1% to 2% of the total vehicle mass. In addition, the shield internal heating must produce tolerable temperatures. The analysis of shield performance for neutrons and gamma rays is emphasized. Topics addressed include cross section preparation for multigroup 2D S_n transport analyses, and the results of parametric design studies on shadow shield performance and mass versus key shield design variables such as cone angle, number, placement, and thickness of layers of tungsten, and shield top radius. Finally, adjoint methods are applied to the shield in order to spatially map its relative contribution to dose reduction, and to provide insight into further design optimization. 7 refs., 2 figs., 3 tabs
9. Fabric Tensor Characterization of Tensor-Valued Directional Data: Solution, Accuracy, and Symmetrization
Directory of Open Access Journals (Sweden)
Kuang-dai Leng
2012-01-01
Fabric tensor has proved to be an effective tool statistically characterizing directional data in a smooth and frame-indifferent form. Directional data arising from microscopic physics and mechanics can be summed up as tensor-valued orientation distribution functions (ODFs). Two characterizations of the tensor-valued ODFs are proposed, using the asymmetric and symmetric fabric tensors respectively. The latter proves to be nonconvergent and less accurate but still an available solution for where fabric tensors are required in full symmetry. Analytic solutions of the two types of fabric tensors characterizing centrosymmetric and anticentrosymmetric tensor-valued ODFs are presented in terms of orthogonal irreducible decompositions in both two- and three-dimensional (2D and 3D) spaces. Accuracy analysis is performed on normally distributed random ODFs to evaluate the approximation quality of the two characterizations, where fabric tensors of higher orders are employed. It is shown that the fitness is dominated by the dispersion degree of the original ODFs rather than the orders of fabric tensors. One application of tensor-valued ODF and fabric tensor in continuum damage mechanics is presented.
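For the simplest (scalar-valued) case, a second-order fabric tensor is just the average of outer products of the observed unit directions; the sketch below builds one from synthetic anisotropic data. This is standard background for the tensor-valued ODF setting treated in the paper, and the sampling scheme is illustrative.

```python
# Minimal sketch: second-order fabric tensor F = <n (outer) n> over unit directions.
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(1000, 3)) * np.array([0.5, 0.5, 1.5])   # bias toward the z-axis
n = v / np.linalg.norm(v, axis=1, keepdims=True)

F = np.einsum('ki,kj->ij', n, n) / len(n)     # average outer product
print(np.trace(F))                            # ~1 by construction
print(np.round(np.linalg.eigvalsh(F), 3))     # largest eigenvalue along z: anisotropy
```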
10. Tensor completion and low-n-rank tensor recovery via convex optimization
International Nuclear Information System (INIS)
Gandy, Silvia; Yamada, Isao; Recht, Benjamin
2011-01-01
In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In an important sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery problem, using a convex relaxation technique proved to be a valuable solution strategy. Here, we will adapt these techniques to the tensor setting. We use the n-rank of a tensor as a sparsity measure and consider the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints. We introduce a tractable convex relaxation of the n-rank and propose efficient algorithms to solve the low-n-rank tensor recovery problem numerically. The algorithms are based on the Douglas–Rachford splitting technique and its dual variant, the alternating direction method of multipliers
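The n-rank used as the sparsity measure above is simply the tuple of matrix ranks of the mode-n unfoldings; the sketch below computes it directly for a small synthetic tensor (the paper, of course, minimizes a convex relaxation of this quantity rather than the exact rank).

```python
# Minimal sketch: compute the n-rank (ranks of all mode-n unfoldings).
import numpy as np

def n_rank(X):
    ranks = []
    for mode in range(X.ndim):
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        ranks.append(np.linalg.matrix_rank(unfolding))
    return tuple(ranks)

# Tucker-form tensor with a 2x3x2 core: n-rank should be (2, 3, 2)
rng = np.random.default_rng(0)
core = rng.random((2, 3, 2))
A, B, C = rng.random((4, 2)), rng.random((5, 3)), rng.random((6, 2))
X = np.einsum('abc,ia,jb,kc->ijk', core, A, B, C)
print(n_rank(X))   # (2, 3, 2)
```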
11. Weyl curvature tensor in static spherical sources
International Nuclear Information System (INIS)
Ponce de Leon, J.
1988-01-01
The role of the Weyl curvature tensor in static sources of the Schwarzschild field is studied. It is shown that in general the contribution from the Weyl curvature tensor (the ''purely gravitational field energy'') to the mass-energy inside the body may be positive, negative, or zero. It is proved that a positive (negative) contribution from the Weyl tensor tends to increase (decrease) the effective gravitational mass, the red-shift (from a point in the sphere to infinity), as well as the gravitational force which acts on a constituent matter element of a body. It is also proved that the contribution from the Weyl tensor always is negative in sources with surface gravitational potential larger than 4/9. It is pointed out that large negative contributions from the Weyl tensor could give rise to the phenomenon of gravitational repulsion. A simple example which illustrates the results is discussed
12. A recursive reduction of tensor Feynman integrals
International Nuclear Information System (INIS)
Diakonidis, T.; Riemann, T.; Tausk, J.B.; Fleischer, J.
2009-07-01
We perform a recursive reduction of one-loop n-point rank R tensor Feynman integrals [in short: (n,R)-integrals] for n≤6 with R≤n by representing (n,R)-integrals in terms of (n,R-1)- and (n-1,R-1)-integrals. We use the known representation of tensor integrals in terms of scalar integrals in higher dimension, which are then reduced by recurrence relations to integrals in generic dimension. With a systematic application of metric tensor representations in terms of chords, and by decomposing and recombining these representations, we find the recursive reduction for the tensors. The procedure represents a compact, sequential algorithm for numerical evaluations of tensor Feynman integrals appearing in next-to-leading order contributions to massless and massive three- and four-particle production at LHC and ILC, as well as at meson factories. (orig.)
13. Seamless warping of diffusion tensor fields
DEFF Research Database (Denmark)
Xu, Dongrong; Hao, Xuejun; Bansal, Ravi
2008-01-01
To warp diffusion tensor fields accurately, tensors must be reoriented in the space to which the tensors are warped based on both the local deformation field and the orientation of the underlying fibers in the original image. Existing algorithms for warping tensors typically use forward mapping ... deformations in an attempt to ensure that the local deformations in the warped image remains true to the orientation of the underlying fibers; forward mapping, however, can also create "seams" or gaps and consequently artifacts in the warped image by failing to define accurately the voxels in the template ... of seams, including voxels in which the deformation is extensive. Backward mapping, however, cannot reorient tensors in the template space because information about the directional orientation of fiber tracts is contained in the original, unwarped imaging space only, and backward mapping alone cannot ...
14. On Lovelock analogs of the Riemann tensor
Energy Technology Data Exchange (ETDEWEB)
Camanho, Xian O. [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik, Golm (Germany); Dadhich, Naresh [Jamia Millia Islamia, Centre for Theoretical Physics, New Delhi (India); Inter-University Centre for Astronomy and Astrophysics, Pune (India)
2016-03-15
It is possible to define an analog of the Riemann tensor for Nth order Lovelock gravity, its characterizing property being that the trace of its Bianchi derivative yields the corresponding analog of the Einstein tensor. Interestingly there exist two parallel but distinct such analogs and the main purpose of this note is to reconcile both formulations. In addition we will introduce a simple tensor identity and use it to show that any pure Lovelock vacuum in odd d = 2N + 1 dimensions is Lovelock flat, i.e. any vacuum solution of the theory has vanishing Lovelock-Riemann tensor. Further, in the presence of cosmological constant it is the Lovelock-Weyl tensor that vanishes. (orig.)
15. Shielding design and performance confirmation of cyclotron facilities
International Nuclear Information System (INIS)
1978-01-01
Medical uses of cyclotrons have become active recently in the form of carcinoma therapy and utilization of short-lived radioisotopes. For the shielding design of medical purpose accelerators, ''Guide-line for shielding calculation for medical purpose high energy accelerator laboratories'' and ''Guide-line for shielding calculation for fast neutron laboratories'' have already been published. However, the calculation method for cyclotrons is not clear. As for the fundamental concept, the shielding for neutrons generated by the interaction of accelerated particles and the materials used for accelerator construction should be considered as well as the type, energy and intensity of radiations. Also, regarding the activation of air and cooling water, the reactions due to fast and thermal neutrons as well as charged particles should be considered. Then the amount of neutron generation, spectra of neutron energy, angular distribution of neutrons, γ rays emitted with neutrons, attenuation of the neutron beam and other specific items such as skyshine are described. The specific items include so-called ''groundshine'', 41Ar generation due to the (n,γ) reaction, 13N and 15O due to fast neutrons, and activation of cooling water. Next, the actual results of shielding calculation are described in the case of the Institute of Physical and Chemical Research, the Institute of Nuclear Study (University of Tokyo), the Medical Science Research Laboratory of University of Tokyo and the National Institute of Radiological Sciences. (Wakatsuki, Y.)
16. Shielding structure analysis for LSDS facility
Energy Technology Data Exchange (ETDEWEB)
Choi, Hong Yeop; Kim, Jeong Dong; Lee, Yong Deok; Kim, Ho Dong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-05-15
In the LSDS system for isotopic fissile assay, the nuclear material itself (pyroprocessed material, spent nuclear fuel) and the target material used to generate neutrons release high-intensity neutron and gamma rays. This research was performed to shield against this strong radiation. A shielding evaluation was carried out with a facility model of the LSDS system. The MCNPX 2.5 code was used and a shielding evaluation was performed for the shielding structure and location. The radiation dose based on the hole structure and location of the wall was evaluated. The shielding evaluation was performed to satisfy the safety standard for a normal person (1 μSv/h) and to allow enough interior space to be used. The MCNPX 2.5 code was used and a dose evaluation was performed for the location of the shielding material, the shielding structure, and the hole structure. The evaluation result differs according to the shielding material location. The dose rate was small when the shielding material was positioned at the center. The dose evaluation result regarding the location of the shielding material was applied to the facility and the shielding thickness was determined (In 50 cm + Borax 5 cm + Out 45 cm). In the existing hole structure, the radiation leak is higher than the standard. A hole structure model to prevent leakage of radiation was proposed. The general public dose limit was satisfied when using the concrete reinforcement and a zigzag structure. The shielding result will be of help to the facility shielding optimization.
17. Shielding structure analysis for LSDS facility
International Nuclear Information System (INIS)
Choi, Hong Yeop; Kim, Jeong Dong; Lee, Yong Deok; Kim, Ho Dong
2014-01-01
In the LSDS system for isotopic fissile assay, the nuclear material itself (pyroprocessed material, spent nuclear fuel) and the target material used to generate neutrons release high-intensity neutron and gamma rays. This research was performed to shield against this strong radiation. A shielding evaluation was carried out with a facility model of the LSDS system. The MCNPX 2.5 code was used and a shielding evaluation was performed for the shielding structure and location. The radiation dose based on the hole structure and location of the wall was evaluated. The shielding evaluation was performed to satisfy the safety standard for a normal person (1 μSv/h) and to allow enough interior space to be used. The MCNPX 2.5 code was used and a dose evaluation was performed for the location of the shielding material, the shielding structure, and the hole structure. The evaluation result differs according to the shielding material location. The dose rate was small when the shielding material was positioned at the center. The dose evaluation result regarding the location of the shielding material was applied to the facility and the shielding thickness was determined (In 50 cm + Borax 5 cm + Out 45 cm). In the existing hole structure, the radiation leak is higher than the standard. A hole structure model to prevent leakage of radiation was proposed. The general public dose limit was satisfied when using the concrete reinforcement and a zigzag structure. The shielding result will be of help to the facility shielding optimization
18. Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.
Science.gov (United States)
Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N
2017-05-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via TT (SiLRTC-TT) is intimately related to minimizing a nuclear norm based on TT rank. The second one is from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
19. Conformal field theories and tensor categories. Proceedings
Energy Technology Data Exchange (ETDEWEB)
Bai, Chengming [Nankai Univ., Tianjin (China). Chern Institute of Mathematics; Fuchs, Juergen [Karlstad Univ. (Sweden). Theoretical Physics; Huang, Yi-Zhi [Rutgers Univ., Piscataway, NJ (United States). Dept. of Mathematics; Kong, Liang [Tsinghua Univ., Beijing (China). Inst. for Advanced Study; Runkel, Ingo; Schweigert, Christoph (eds.) [Hamburg Univ. (Germany). Dept. of Mathematics
2014-08-01
First book devoted completely to the mathematics of conformal field theories, tensor categories and their applications. Contributors include both mathematicians and physicists. Some long expository articles are especially suitable for beginners. The present volume is a collection of seven papers that are either based on the talks presented at the workshop ''Conformal field theories and tensor categories'' held June 13 to June 17, 2011 at the Beijing International Center for Mathematical Research, Peking University, or are extensions of the material presented in the talks at the workshop. These papers present new developments beyond rational conformal field theories and modular tensor categories and new applications in mathematics and physics. The topics covered include tensor categories from representation categories of Hopf algebras, applications of conformal field theories and tensor categories to topological phases and gapped systems, logarithmic conformal field theories and the corresponding non-semisimple tensor categories, and new developments in the representation theory of vertex operator algebras. Some of the papers contain detailed introductory material that is helpful for graduate students and researchers looking for an introduction to these research directions. The papers also discuss exciting recent developments in the area of conformal field theories, tensor categories and their applications and will be extremely useful for researchers working in these areas.
20. Tensor network method for reversible classical computation
Science.gov (United States)
Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.
2018-03-01
We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
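The counting idea can be demonstrated on a toy circuit: encode each gate's truth table as a 0/1 tensor, contract shared wire indices, and the full contraction counts consistent assignments. The sketch below does this for two AND gates sharing one input, with an optional boundary condition on the outputs; it is a toy illustration of the encoding, not the paper's iterative compression-decimation scheme.

```python
# Toy tensor-network counting: truth table of AND as a tensor, indices (a, b, out).
import numpy as np

AND = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        AND[a, b, a & b] = 1.0

# Circuit: out1 = AND(x, y), out2 = AND(y, z); a free boundary counts all
# consistent assignments of (x, y, z) -> 2**3 = 8.
print(np.einsum('xyo,yzp->', AND, AND))                  # 8.0

# Fix the boundary: require out1 = out2 = 1 -> only x = y = z = 1 survives.
one = np.array([0.0, 1.0])
print(np.einsum('xyo,yzp,o,p->', AND, AND, one, one))    # 1.0
```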
1. A Novel Radiation Shielding Material Project
Data.gov (United States)
National Aeronautics and Space Administration — Radiation shielding simulations showed that epoxy loaded with 10-70% polyethylene would be an excellent shielding material against GCRs and SEPs. Milling produced an...
2. Simulation of a Shielded Thermocouple
African Journals Online (AJOL)
performance of shielded thermocouple designs. A mathematical model of the thermocouple is obtained by derivation of the heat propagation equation in cylindrical coordinates and by considering the ... Here k is the thermal conductivity, c_p is the specific heat capacity, ρ is the density, T_∞ is the ambient temperature, and μ ...
3. Local Tensor Radiation Conditions For Elastic Waves
DEFF Research Database (Denmark)
Krenk, S.; Kirkegaard, Poul Henning
2001-01-01
A local boundary condition is formulated, representing radiation of elastic waves from an arbitrary point source. The boundary condition takes the form of a tensor relation between the stress at a point on an arbitrarily oriented section and the velocity and displacement vectors at the point....... The tensor relation generalizes the traditional normal incidence impedance condition by accounting for the angle between wave propagation and the surface normal and by including a generalized stiffness term due to spreading of the waves. The effectiveness of the local tensor radiation condition...
4. Surface tensor estimation from linear sections
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel
From Crofton's formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators....... These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting....
5. Abelian gauge theories with tensor gauge fields
International Nuclear Information System (INIS)
Kapuscik, E.
1984-01-01
Gauge fields of arbitrary tensor type are introduced. In curved space-time the gravitational field serves as a bridge joining different gauge fields. The theory of second order tensor gauge field is developed on the basis of close analogy to Maxwell electrodynamics. The notion of tensor current is introduced and an experimental test of its detection is proposed. The main result consists in a coupled set of field equations representing a generalization of Maxwell theory in which the Einstein equivalence principle is not satisfied. (author)
6. Surface tensor estimation from linear sections
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel
2015-01-01
From Crofton’s formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators....... These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting....
7. Why are tensor field theories asymptotically free?
Science.gov (United States)
Rivasseau, V.
2015-09-01
In this pedagogic letter we explain the combinatorics underlying the generic asymptotic freedom of tensor field theories. We focus on simple combinatorial models with a 1/p² propagator and quartic interactions and on the comparison between the intermediate field representations of the vector, matrix and tensor cases. The transition from asymptotic freedom (tensor case) to asymptotic safety (matrix case) is related to the crossing symmetry of the matrix vertex, whereas in the vector case, the lack of asymptotic freedom ("Landau ghost"), as in the ordinary scalar φ⁴₄ case, is simply due to the absence of any wave function renormalization at one loop.
8. Measuring proton shift tensors with ultrafast MAS NMR.
Science.gov (United States)
Miah, Habeeba K; Bennett, David A; Iuga, Dinu; Titman, Jeremy J
2013-10-01
A new proton anisotropic-isotropic shift correlation experiment is described which operates with ultrafast MAS, resulting in good resolution of isotropic proton shifts in the detection dimension. The new experiment makes use of a recoupling sequence designed using symmetry principles which reintroduces the proton chemical shift anisotropy in the indirect dimension. The experiment has been used to measure the proton shift tensor parameters for the OH hydrogen-bonded protons in tyrosine·HCl and citric acid at Larmor frequencies of up to 850 MHz. Copyright © 2013 Elsevier Inc. All rights reserved.
9. Tucker tensor analysis of Matern functions in spatial statistics
KAUST Repository
Litvinenko, Alexander
2018-04-20
Low-rank Tucker tensor methods in spatial statistics: 1. Motivation: improve statistical models; 2. Motivation: disadvantages of matrices; 3. Tools: Tucker tensor format; 4. Tensor approximation of Matern covariance function via FFT; 5. Typical statistical operations in Tucker tensor format; 6. Numerical experiments.
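A hedged NumPy sketch of a Tucker approximation via truncated higher-order SVD, included only to illustrate the Tucker format named in the outline above; it is not the FFT-based Matern covariance code, and the sizes and ranks are made up.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def hosvd(t, ranks):
    """Truncated higher-order SVD giving a Tucker core and factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for mode, u in enumerate(factors):
        core = np.moveaxis(np.tensordot(u.T, core, axes=(1, mode)), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    t = core
    for mode, u in enumerate(factors):
        t = np.moveaxis(np.tensordot(u, t, axes=(1, mode)), 0, mode)
    return t

# Synthetic low-Tucker-rank data, recovered essentially exactly by HOSVD.
rng = np.random.default_rng(1)
core_true = rng.standard_normal((5, 5, 5))
factors_true = [np.linalg.qr(rng.standard_normal((20, 5)))[0] for _ in range(3)]
data = tucker_reconstruct(core_true, factors_true)
core, factors = hosvd(data, ranks=(5, 5, 5))
print(np.linalg.norm(data - tucker_reconstruct(core, factors)) / np.linalg.norm(data))
```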
10. Minimal Gersgorin tensor eigenvalue inclusion set and its numerical approximation
OpenAIRE
Li, Chaoqian; Li, Yaotang
2015-01-01
For a complex tensor A, Minimal Gersgorin tensor eigenvalue inclusion set of A is presented, and its sufficient and necessary condition is given. Furthermore, we study its boundary by the spectrums of the equimodular set and the extended equimodular set for A. Lastly, for an irreducible tensor, a numerical approximation to Minimal Gersgorin tensor eigenvalue inclusion set is given.
11. TensorFlow Agents: Efficient Batched Reinforcement Learning in TensorFlow
OpenAIRE
Hafner, Danijar; Davidson, James; Vanhoucke, Vincent
2017-01-01
We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel witho...
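A rough Python sketch of the batching idea only, assuming a toy environment and a stand-in policy function rather than the TensorFlow Agents API: environments are stepped in separate processes while the (here trivial) policy acts on the whole batch of observations at once. All class and function names are placeholders.

```python
import numpy as np
from multiprocessing import Process, Pipe

class ToyEnv:
    """Placeholder environment: the state drifts toward the action."""
    def __init__(self, seed):
        self.state = np.random.default_rng(seed).standard_normal(4)
    def step(self, action):
        self.state += 0.1 * (action - self.state)
        return self.state.copy(), -float(np.sum(self.state ** 2))

def worker(conn, seed):
    env = ToyEnv(seed)
    while True:
        cmd, data = conn.recv()
        if cmd == "step":
            conn.send(env.step(data))
        else:
            conn.close()
            break

def batched_step(conns, actions):
    for conn, a in zip(conns, actions):
        conn.send(("step", a))
    states, rewards = zip(*(conn.recv() for conn in conns))
    return np.stack(states), np.array(rewards)

if __name__ == "__main__":
    n_envs = 4
    pipes = [Pipe() for _ in range(n_envs)]
    parent_conns = [p for p, _ in pipes]
    procs = [Process(target=worker, args=(c, i)) for i, (_, c) in enumerate(pipes)]
    for p in procs:
        p.start()
    policy = lambda s: -0.5 * s            # stand-in for a batched neural network
    states = np.zeros((n_envs, 4))
    for _ in range(3):
        states, rewards = batched_step(parent_conns, policy(states))
    print(rewards)
    for conn in parent_conns:
        conn.send(("close", None))
    for p in procs:
        p.join()
```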
12. C++ tensor toolbox user manual.
Energy Technology Data Exchange (ETDEWEB)
Plantenga, Todd D.; Kolda, Tamara Gibson
2012-04-01
The C++ Tensor Toolbox is a software package for computing tensor decompositions. It is based on the Matlab Tensor Toolbox, and is particularly optimized for sparse data sets. This user manual briefly overviews tensor decomposition mathematics, software capabilities, and installation of the package. Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors in C++. The Toolbox compiles into libraries and is intended for use with custom applications written by users.
13. Survivor shielding. Part A. Nagasaki factory worker shielding
International Nuclear Information System (INIS)
Santoro, Robert T.; Barnes, John M.; Azmy, Yousry Y.; Kerr, George D.; Egbert, Stephen D.; Cullings, Harry M.
2005-01-01
Recent investigations based on conventional chromosome aberration data by the RERF suggest that the DS86 doses received by many Nagasaki factory workers may have been overestimated by as much as 40% relative to those for other survivors in Japanese-type houses and other shielding configurations (Kodama et al. 2001). Since the factory workers represent about 25% of the Nagasaki survivors with DS86 doses in excess of 0.5 Gy (50 rad), systematic errors in their dose estimates can have a major impact on the risk coefficients from RERF studies. The factory worker doses may have been overestimated for a number of reasons. The calculation techniques, including the factory building modeling, weapon source spectra and cross-section data used in the DS86 shielding calculations were not detailed enough to replicate actual conditions. The models used did not take into account local shielding provided by machinery, tools, and the internal structure in the buildings. In addition, changes in the disposition of shielding following collapse of the building by the blast wave were not considered. The location of large factory complexes may be uncertain, causing large numbers of factory survivors, correctly located relative to each other, to be uniformly too close to the hypocenter. Any or all of these reasons are sufficient to result in an overestimate of the factory worker doses. During the DS02 studies, factory worker doses have been reassessed by more carefully modeling the factory buildings, incorporating improved radiation transport methods and cross-section data and using the most recent bomb leakage spectra (Chapter 2). Two-dimensional discrete ordinates calculations were carried out initially to estimate the effects of workbenches and tools on worker doses to determine if the inclusion of these components would, in fact, reduce the dose by amounts consistent with the RERF observations (Kodama et al. 2001). (author)
14. Unsupervised Tensor Mining for Big Data Practitioners.
Science.gov (United States)
Papalexakis, Evangelos E; Faloutsos, Christos
2016-09-01
Multiaspect data are ubiquitous in modern Big Data applications. For instance, different aspects of a social network are the different types of communication between people, the time stamp of each interaction, and the location associated to each individual. How can we jointly model all those aspects and leverage the additional information that they introduce to our analysis? Tensors, which are multidimensional extensions of matrices, are a principled and mathematically sound way of modeling such multiaspect data. In this article, our goal is to popularize tensors and tensor decompositions to Big Data practitioners by demonstrating their effectiveness, outlining challenges that pertain to their application in Big Data scenarios, and presenting our recent work that tackles those challenges. We view this work as a step toward a fully automated, unsupervised tensor mining tool that can be easily and broadly adopted by practitioners in academia and industry.
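A small NumPy sketch of one standard tensor decomposition, CP via alternating least squares, applied to a synthetic "people x communication-type x time" tensor of the kind described above. This is a generic illustration rather than the authors' tool, and all sizes, ranks, and names are invented.

```python
import numpy as np

def unfold(t, mode):
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def khatri_rao(a, b):
    """Column-wise Kronecker product (a's index slow, b's index fast)."""
    r = a.shape[1]
    return (a[:, None, :] * b[None, :, :]).reshape(-1, r)

def cp_als(t, rank, n_iter=200, seed=0):
    """Alternating least squares for a rank-`rank` CP model of tensor t."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((dim, rank)) for dim in t.shape]
    for _ in range(n_iter):
        for mode in range(t.ndim):
            others = [f for m, f in enumerate(factors) if m != mode]
            kr = others[0]
            for f in others[1:]:
                kr = khatri_rao(kr, f)      # matches the C-order unfolding above
            gram = np.ones((rank, rank))
            for f in others:
                gram = gram * (f.T @ f)
            factors[mode] = unfold(t, mode) @ kr @ np.linalg.pinv(gram)
    return factors

# Synthetic rank-3 tensor: 30 people x 5 communication types x 12 time bins.
rng = np.random.default_rng(42)
true = [rng.standard_normal((d, 3)) for d in (30, 5, 12)]
data = np.einsum('ir,jr,kr->ijk', *true)
est = cp_als(data, rank=3)
approx = np.einsum('ir,jr,kr->ijk', *est)
print(np.linalg.norm(data - approx) / np.linalg.norm(data))
```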
15. Potentials for transverse trace-free tensors
International Nuclear Information System (INIS)
2014-01-01
In constructing and understanding initial conditions in the 3 + 1 formalism for numerical relativity, the transverse and trace-free (TT) part of the extrinsic curvature plays a key role. We know that TT tensors possess two degrees of freedom per space point. However, finding an expression for a TT tensor depending on only two scalar functions is a non-trivial task. Assuming either axial or translational symmetry, expressions depending on two scalar potentials alone are derived here for all TT tensors in flat 3-space. In a more general spatial slice, only one of these potentials is found, the same potential given in (Baker and Puzio 1999 Phys. Rev. D 59 044030) and (Dain 2001 Phys. Rev. D 64 124002), with the remaining equations reduced to a partial differential equation, depending on boundary conditions for a solution. As an exercise, we also derive the potentials which give the Bowen-York curvature tensor in flat space. (paper)
16. Correlators in tensor models from character calculus
Directory of Open Access Journals (Sweden)
A. Mironov
2017-11-01
Full Text Available We explain how the calculations of [20], which provided the first evidence for non-trivial structures of Gaussian correlators in tensor models, are efficiently performed with the help of the (Hurwitz) character calculus. This emphasizes a close similarity between technical methods in matrix and tensor models and supports a hope to understand the emerging structures in very similar terms. We claim that the 2m-fold Gaussian correlators of rank r tensors are given by r-linear combinations of dimensions with the Young diagrams of size m. The coefficients are made from the characters of the symmetric group Sm and their exact form depends on the choice of the correlator and on the symmetries of the model. As the simplest application of this new knowledge, we provide simple expressions for correlators in the Aristotelian tensor model as tri-linear combinations of dimensions.
17. Energy-momentum tensor in scalar QED
International Nuclear Information System (INIS)
Joglekar, S.D.; Misra, A.
1988-01-01
We consider the renormalization of the energy-momentum tensor in scalar quantum electrodynamics. We show the need for adding an improvement term to the conventional energy-momentum tensor. We consider two possible forms for the improvement term: (i) one in which the improvement coefficient is a finite function of bare parameters of the theory (so that the energy-momentum tensor can be obtained from an action that is a finite function of bare quantities); (ii) one in which the improvement coefficient is a finite quantity, i.e., a finite function of renormalized parameters. We establish a negative result; viz., neither form leads to a finite energy-momentum tensor to O(e²λⁿ).
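For orientation, the generic shape of such an improvement term is sketched below. This is hedged: it is the textbook Callan-Coleman-Jackiw form for a scalar field in four dimensions, written for a complex scalar, not the specific renormalized coefficients considered in this paper.

```latex
% Generic improved energy-momentum tensor for a (complex) scalar field in 4D;
% the classical conformal value of the improvement coefficient is xi = 1/6.
\Theta_{\mu\nu} \;=\; T_{\mu\nu}
  \;-\; \xi \left( \partial_\mu \partial_\nu - g_{\mu\nu}\,\Box \right) \phi^{*}\phi ,
\qquad \xi_{\text{classical}} = \tfrac{1}{6}.
```

The question raised in the abstract is whether ξ can consistently be taken as a finite function of bare parameters or of renormalized ones.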
18. Reconstruction of convex bodies from surface tensors
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus
. The output of the reconstruction algorithm is a polytope P, where the surface tensors of P and K are identical up to rank s. We establish a stability result based on a generalization of Wirtinger’s inequality that shows that for large s, two convex bodies are close in shape when they have identical surface...... that are translates of each other. An algorithm for reconstructing an unknown convex body in R 2 from its surface tensors up to a certain rank is presented. Using the reconstruction algorithm, the shape of an unknown convex body can be approximated when only a finite number s of surface tensors are available...... tensors up to rank s. This is used to establish consistency of the developed reconstruction algorithm....
19. Reconstruction of convex bodies from surface tensors
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus
2016-01-01
We present two algorithms for reconstruction of the shape of convex bodies in the two-dimensional Euclidean space. The first reconstruction algorithm requires knowledge of the exact surface tensors of a convex body up to rank s for some natural number s. When only measurements subject to noise...... of surface tensors are available for reconstruction, we recommend to use certain values of the surface tensors, namely harmonic intrinsic volumes instead of the surface tensors evaluated at the standard basis. The second algorithm we present is based on harmonic intrinsic volumes and allows for noisy...... measurements. From a generalized version of Wirtinger's inequality, we derive stability results that are utilized to ensure consistency of both reconstruction procedures. Consistency of the reconstruction procedure based on measurements subject to noise is established under certain assumptions on the noise...
20. An introduction to linear algebra and tensors
CERN Document Server
Akivis, M A; Silverman, Richard A
1978-01-01
Eminently readable, completely elementary treatment begins with linear spaces and ends with analytic geometry, covering multilinear forms, tensors, linear transformation, and more. 250 problems, most with hints and answers. 1972 edition.
1. Correlators in tensor models from character calculus
Science.gov (United States)
Mironov, A.; Morozov, A.
2017-11-01
We explain how the calculations of [20], which provided the first evidence for non-trivial structures of Gaussian correlators in tensor models, are efficiently performed with the help of the (Hurwitz) character calculus. This emphasizes a close similarity between technical methods in matrix and tensor models and supports a hope to understand the emerging structures in very similar terms. We claim that the 2m-fold Gaussian correlators of rank r tensors are given by r-linear combinations of dimensions with the Young diagrams of size m. The coefficients are made from the characters of the symmetric group Sm and their exact form depends on the choice of the correlator and on the symmetries of the model. As the simplest application of this new knowledge, we provide simple expressions for correlators in the Aristotelian tensor model as tri-linear combinations of dimensions.
2. Shifted power method for computing tensor eigenpairs.
Energy Technology Data Exchange (ETDEWEB)
Mayo, Jackson R.; Kolda, Tamara Gibson
2010-10-01
Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m−1) = λx subject to ‖x‖ = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
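A minimal NumPy sketch of the shifted iteration described above, assuming a fixed, generously chosen positive shift rather than the adaptive choice analyzed in the paper; the tensor size, shift value, and seed are arbitrary.

```python
from itertools import permutations
import numpy as np

def sym_apply(a, x):
    """A x^{m-1} for a symmetric order-3 tensor A (m = 3)."""
    return np.einsum('ijk,j,k->i', a, x, x)

def ss_hopm(a, alpha=5.0, n_iter=1000, seed=0):
    """Shifted symmetric higher-order power iteration with a fixed positive shift."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(a.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = sym_apply(a, x) + alpha * x
        x = y / np.linalg.norm(y)
    lam = x @ sym_apply(a, x)          # eigenvalue estimate for the pair (lam, x)
    return lam, x

# Symmetrize a random order-3 tensor, then find one Z-eigenpair.
rng = np.random.default_rng(3)
t = rng.standard_normal((4, 4, 4))
a = sum(np.transpose(t, p) for p in permutations(range(3))) / 6.0
lam, x = ss_hopm(a)
print(lam, np.linalg.norm(sym_apply(a, x) - lam * x))   # residual ~ 0 at convergence
```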
3. Calculus of tensors and differential forms
CERN Document Server
Sinha, Rajnikant
2014-01-01
Calculus of tensors and differential forms is an introductory-level textbook. Through this book, students will familiarize themselves with tools they need in order to use for further study on general relativity and research, such as affine tensors, tensor calculus on manifolds, relative tensors, Lie derivatives, wedge products, differential forms, and Stokes' theorem. The treatment is concrete and in detail, so that abstract concepts do not deter even physics and engineering students. This self contained book requires undergraduate-level calculus of several variables and linear algebra as prerequisite. Fubini's theorem in real analysis, to be used in Stokes' theorem, has been proved earlier than Stokes' theorem so that students don't have to search elsewhere.
4. The energy–momentum tensor(s) in classical gauge theories
Directory of Open Access Journals (Sweden)
Daniel N. Blaschke
2016-11-01
Full Text Available We give an introduction to, and review of, the energy–momentum tensors in classical gauge field theories in Minkowski space, and to some extent also in curved space–time. For the canonical energy–momentum tensor of non-Abelian gauge fields and of matter fields coupled to such fields, we present a new and simple improvement procedure based on gauge invariance for constructing a gauge invariant, symmetric energy–momentum tensor. The relationship with the Einstein–Hilbert tensor following from the coupling to a gravitational field is also discussed.
5. The Energy-Momentum Tensor(s) in Classical Gauge Theories
OpenAIRE
Blaschke, Daniel N.; Gieres, Francois; Reboud, Meril; Schweda, Manfred
2016-01-01
We give an introduction to, and review of, the energy–momentum tensors in classical gauge field theories in Minkowski space, and to some extent also in curved space–time. For the canonical energy–momentum tensor of non-Abelian gauge fields and of matter fields coupled to such fields, we present a new and simple improvement procedure based on gauge invariance for constructing a gauge invariant, symmetric energy–momentum tensor. The relationship with the Einstein–Hilbert tensor following from t...
6. The energy–momentum tensor(s) in classical gauge theories
Energy Technology Data Exchange (ETDEWEB)
Blaschke, Daniel N., E-mail: [email protected] [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Gieres, François, E-mail: [email protected] [Institut de Physique Nucléaire de Lyon, Université de Lyon, Université Claude Bernard Lyon 1 and CNRS/IN2P3, Bat. P. Dirac, 4 rue Enrico Fermi, F-69622 Villeurbanne (France); Reboud, Méril, E-mail: [email protected] [Institut de Physique Nucléaire de Lyon, Université de Lyon, Université Claude Bernard Lyon 1 and CNRS/IN2P3, Bat. P. Dirac, 4 rue Enrico Fermi, F-69622 Villeurbanne (France); Ecole Normale Supérieure de Lyon, 46 allée d' Italie, F-69364 Lyon CEDEX 07 (France); Schweda, Manfred, E-mail: [email protected] [Institute for Theoretical Physics, Vienna University of Technology, Wiedner Hauptstraße 8-10, A-1040 Vienna (Austria)
2016-11-15
We give an introduction to, and review of, the energy–momentum tensors in classical gauge field theories in Minkowski space, and to some extent also in curved space–time. For the canonical energy–momentum tensor of non-Abelian gauge fields and of matter fields coupled to such fields, we present a new and simple improvement procedure based on gauge invariance for constructing a gauge invariant, symmetric energy–momentum tensor. The relationship with the Einstein–Hilbert tensor following from the coupling to a gravitational field is also discussed.
7. Geometric decomposition of the conformation tensor in viscoelastic turbulence
Science.gov (United States)
Hameduddin, Ismail; Meneveau, Charles; Zaki, Tamer A.; Gayme, Dennice F.
2018-05-01
This work introduces a mathematical approach to analysing the polymer dynamics in turbulent viscoelastic flows that uses a new geometric decomposition of the conformation tensor, along with associated scalar measures of the polymer fluctuations. The approach circumvents an inherent difficulty in traditional Reynolds decompositions of the conformation tensor: the fluctuating tensor fields are not positive-definite and so do not retain the physical meaning of the tensor. The geometric decomposition of the conformation tensor yields both mean and fluctuating tensor fields that are positive-definite. The fluctuating tensor in the present decomposition has a clear physical interpretation as a polymer deformation relative to the mean configuration. Scalar measures of this fluctuating conformation tensor are developed based on the non-Euclidean geometry of the set of positive-definite tensors. Drag-reduced viscoelastic turbulent channel flow is then used as an example case study. The conformation tensor field, obtained using direct numerical simulations, is analysed using the proposed framework.
8. Development of mechanical shield docking method; MSD (mekanikaru/shirudo/dokkingu) koho no kaihatsu
Energy Technology Data Exchange (ETDEWEB)
Yokota, I. [Tokyo Metroplitan Government Water Supply Bureau, Tokyo (Japan); Watanabe, T.; Hagiwara, H. [Shimizu Corp., Tokyo (Japan); Nishitake, S. [Mitsubishi Heavy Industries Ltd., Tokyo (Japan); Endo, M. [Obayashi Corp., Osaka (Japan)
1993-09-20
For the construction works of underground tunnels, mainly the shield method has so far been adopted, but in order to make underground junction of shield machines, the method of utilizing a shaft or the method of improving the earth by the auxiliary methods such as chemical feeding have been adopted. However, either method has restriction for its practical application. The MSD method uses no auxiliary method at all, can join directly two shield machines mechanically underground, has high water stoppability at its junction, is applicable for either of shield machines of slush type or mud pressure type, and is the method to solve totally various problems in the existing joining methods. This method is the one that two shield machines, one on the out-pushing side and another on the in-receiving side, progress from both sides and face each other, then the both are joined mechanically for unification by pushing a steel penetration ring built-in the out-pushing shield machine to the rubber ring built-in the penetration chamber of the in-receiving shield machine. After joining, the shield machines are disassembled for removal leaving the junction only, and the secondary lining is done with concrete. 6 figs.
9. Extended obstruction tensors and renormalized volume coefficients
OpenAIRE
Graham, C. Robin
2009-01-01
The behavior under conformal change of the renormalized volume coefficients associated to a pseudo-Riemannian metric is investigated. It is shown that they define second order fully nonlinear operators in the conformal factor whose algebraic structure is elucidated via the introduction of "extended obstruction tensors". These together with the Schouten tensor constitute building blocks for the coefficients in the ambient metric expansion. The renormalized volume coefficients have recently bee...
10. Higher-Order Tensors in Diffusion Imaging
OpenAIRE
Schultz, Thomas; Fuster, Andrea; Ghosh, Aurobrata; Deriche, Rachid; Florack, Luc; Lek-Heng, Lim
2013-01-01
Diffusion imaging is a noninvasive tool for probing the microstructure of fibrous nerve and muscle tissue. Higher-order tensors provide a powerful mathematical language to model and analyze the large and complex data that is generated by its modern variants such as High Angular Resolution Diffusion Imaging (HARDI) or Diffusional Kurtosis Imaging. This survey gives a careful introduction to the foundations of higher-order tensor algebra, and explains how some concepts f...
11. A Tour of TensorFlow
OpenAIRE
Goldsborough, Peter
2016-01-01
Deep learning is a branch of artificial intelligence employing deep neural network architectures that has significantly advanced the state-of-the-art in computer vision, speech recognition, natural language processing and other domains. In November 2015, Google released TensorFlow, an open source deep learning software library for defining, training and deploying machine learning models. In this paper, we review TensorFlow and put it in context of modern deep learning concepts and ...
12. Neutronic design of MYRRHA reactor hall shielding
Directory of Open Access Journals (Sweden)
Celik Yurdunaz
2017-01-01
Full Text Available The lateral shielding of a 600 MeV proton linear accelerator beam line in the MYRRHA reactor hall has been assessed using neutronic calculations by the MCNPX code complemented with analytical predictions. Continuous beam losses were considered to define the required shielding thickness that meets the requirements for the dose rate limits. Required shielding thicknesses were investigated from the viewpoint of accidental full beam loss as well as beam loss on collimator. The results confirm that the required shielding thicknesses are highly sensitive to the spatial shape of the beam and strongly divergent beam losses. Therefore shielding barrier should be designed according to the more conservative assumptions.
13. Comparison of γ-ray shielding properties of some borate glasses
International Nuclear Information System (INIS)
Thind, K.S.
2003-01-01
Several new glasses have been prepared in recent years to suit their increasing number of applications. Some of the glass compositions have distinct properties which make them the most preferred materials for certain applications such as shielding, optical fibers, electronics displays etc. The information on composition, processing and effect of environment on the glass properties is of great importance for their design and application. The shielding ability of pure elements and some mixtures has already been studied, but limited attempts have been made on glasses. A good shielding glass should have a high absorption cross-section for radiation and at the same time irradiation effects on its mechanical and optical properties should be small. Keeping in view the importance of the shielding ability of borate glasses, we have studied two series of different glass type: xPbO–(1−x)B₂O₃ and xZnO–2xPbO–(1−3x)B₂O₃ (where x is the mole fraction) using the narrow beam transmission method. A 2″ x 2″ NaI(Tl) crystal with an energy resolution of 12.5% at 662 keV of ¹³⁷Cs was used for the determination of attenuation coefficients and hence interaction cross-sections. Glass samples were prepared by using the melt-quenching technique. Thickness measurement was carried out by micrometer and density was measured by Archimedes' principle using benzene as the immersion liquid. The densities of the glasses were found to increase linearly with the increase in the chemical composition of heavy metal oxide. Variations in mass attenuation coefficients and interaction cross-sections were observed with the change in chemical composition and photon energy. It is found that these glasses have potential applications to be used as radiation shielding materials.
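For orientation, the narrow-beam transmission relation underlying such measurements, I/I0 = exp(−μ_m ρ x), can be evaluated or inverted in a few lines; the numbers below are illustrative placeholders, not values measured in this study.

```python
import numpy as np

def transmission(mass_atten_cm2_per_g, density_g_per_cm3, thickness_cm):
    """Narrow-beam transmission I/I0 = exp(-mu_m * rho * x)."""
    return np.exp(-mass_atten_cm2_per_g * density_g_per_cm3 * thickness_cm)

def mass_attenuation(i_over_i0, density_g_per_cm3, thickness_cm):
    """Invert the same relation to recover mu_m from a measured transmission."""
    return -np.log(i_over_i0) / (density_g_per_cm3 * thickness_cm)

# Illustrative numbers only (not taken from the study):
mu_m = 0.08      # cm^2/g at 662 keV for a hypothetical lead-borate glass
rho = 5.2        # g/cm^3
x = 1.0          # cm
t = transmission(mu_m, rho, x)
print(t, mass_attenuation(t, rho, x))
```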
14. Diffusion tensor MRI: clinical applications
International Nuclear Information System (INIS)
Meli, Francisco; Romero, Carlos; Carpintiero, Silvina; Salvatico, Rosana; Lambre, Hector; Vila, Jose
2005-01-01
Purpose: To evaluate the usefulness of diffusion-tensor imaging (DTI) in different neurological diseases, and to determine whether this technique provides additional information compared with conventional Magnetic Resonance Imaging (MRI). Materials and method: Eight patients with neurological diseases (five patients with brain tumors, one with multiple sclerosis (MS), one with variant Creutzfeldt-Jakob disease (vCJD), and one with delayed CO intoxication) were evaluated. An MR scanner of 1.5 T was used and conventional sequences and DTI with twenty-five directions were done. Quantitative maps were obtained, in which the fractional anisotropy (FA) through regions of interest (ROIs) in specific anatomic areas was quantified (i.e.: internal and external capsules, frontal and temporal bundles, corpus fibers). Results: In the patients with brain tumors, there was a decrease of FA on intra and peritumoral fibers. Some of them had a disruption in their pattern. In patients with MS and CO intoxication, partial interruption along white matter bundles was demonstrated. However, a 'mismatch' between the findings of FLAIR, Diffusion-weighted images (DWI) and DTI, in the case of CO intoxication, was seen. Conclusions: DTI gave more information compared to conventional sequences about ultrastructural brain tissue in almost all the diseases above mentioned. Therefore, there is a work in progress about DTI acquisition, to evaluate a new technique, called tractography. (author)
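A short sketch of how the fractional anisotropy reported in such studies is obtained from the eigenvalues of a voxel's diffusion tensor; the tensor values below are synthetic and only the formula itself is standard.

```python
import numpy as np

def fractional_anisotropy(diffusion_tensor):
    """FA from the eigenvalues of a 3x3 symmetric diffusion tensor."""
    lam = np.linalg.eigvalsh(diffusion_tensor)
    num = np.sqrt(((lam - lam.mean()) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

# Synthetic tensor for a coherent white-matter-like voxel (units: 10^-3 mm^2/s).
d = np.diag([1.6, 0.35, 0.30])
print(fractional_anisotropy(d))          # high FA, close to 1
print(fractional_anisotropy(np.eye(3)))  # isotropic voxel, FA = 0
```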
15. Diffusion Tensor Imaging of Pedophilia.
Science.gov (United States)
Cantor, James M; Lafaille, Sophie; Soh, Debra W; Moayedi, Massieh; Mikulis, David J; Girard, Todd A
2015-11-01
Pedophilia is a principal motivator of child molestation, incurring great emotional and financial burdens on victims and society. Even among pedophiles who never commit any offense, the condition requires lifelong suppression and control. Previous comparison using voxel-based morphometry (VBM) of MR images from a large sample of pedophiles and controls revealed group differences in white matter. The present study therefore sought to verify and characterize white matter involvement using diffusion tensor imaging (DTI), which better captures the microstructure of white matter than does VBM. Pedophilic ex-offenders (n=24) were compared with healthy, age-matched controls with no criminal record and no indication of pedophilia (n=32). White matter microstructure was analyzed with Tract-Based Spatial Statistics, and the trajectories of implicated fiber bundles were identified by probabilistic tractography. Groups showed significant, highly focused differences in DTI parameters which related to participants’ genital responses to sexual depictions of children, but not to measures of psychopathy or to childhood histories of physical abuse, sexual abuse, or neglect. Some previously reported gray matter differences were suggested under highly liberal statistical conditions (p(uncorrected)pedophilia is characterized by neuroanatomical differences in white matter microstructure, over and above any neural characteristics attributable to psychopathy and childhood adversity, which show neuroanatomic footprints of their own. Although some gray matter structures were implicated previously, only few have emerged reliably.
16. (Ln-bar, g)-spaces. Special tensor fields
International Nuclear Information System (INIS)
Manoff, S.; Dimitrov, B.
1998-01-01
The Kronecker tensor field, the contraction tensor field, as well as the multi-Kronecker and multi-contraction tensor fields are determined and the action of the covariant differential operator, the Lie differential operator, the curvature operator, and the deviation operator on these tensor fields is established. The commutation relations between the operators Sym and Asym and the covariant and Lie differential operators are considered acting on symmetric and antisymmetric tensor fields over (Ln-bar, g)-spaces.
17. On the concircular curvature tensor of Riemannian manifolds
International Nuclear Information System (INIS)
Rahman, M.S.; Lal, S.
1990-06-01
Definition of the concircular curvature tensor, Z_hijk, along with the Z-tensor, Z_ij, is given and some properties of Z_hijk are described. Tensors identical with Z_hijk are shown. A necessary and sufficient condition that a Riemannian V_n has zero Z-tensor is found. A number of theorems on concircular symmetric space, concircular recurrent space (Z_n-space) and Z_n-space with zero Z-tensor are deduced. (author). 6 refs
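For reference, one common convention for the tensors named above is sketched below; signs depend on the curvature conventions in use, so this is an orientation rather than a restatement of the paper's definitions.

```latex
% Concircular curvature tensor and the associated Z-tensor (one common convention):
Z_{hijk} \;=\; R_{hijk} \;-\; \frac{R}{n(n-1)}\left( g_{hj}\,g_{ik} - g_{hk}\,g_{ij} \right),
\qquad
Z_{ij} \;=\; R_{ij} \;-\; \frac{R}{n}\, g_{ij}.
```

With this convention, the Z-tensor vanishes exactly when the manifold is Einstein, which is the kind of condition the abstract's "zero Z-tensor" theorems concern.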
18. Tensor Toolbox for MATLAB v. 3.0
Energy Technology Data Exchange (ETDEWEB)
2017-03-07
Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors using MATLAB's object-oriented features. It also provides algorithms for tensor decomposition and factorization, algorithms for computing tensor eigenvalues, and methods for visualization of results.
19. Measuring Nematic Susceptibilities from the Elastoresistivity Tensor
Science.gov (United States)
Hristov, A. T.; Shapiro, M. C.; Hlobil, Patrick; Maharaj, Akash; Chu, Jiun-Haw; Fisher, Ian
The elastoresistivity tensor m_ijkl relates changes in resistivity to the strain on a material. As a fourth-rank tensor, it contains considerably more information about the material than the simpler (second-rank) resistivity tensor; in particular, certain elastoresistivity coefficients can be related to thermodynamic susceptibilities and serve as a direct probe of symmetry breaking at a phase transition. The aim of this talk is twofold. First, we enumerate how symmetry both constrains the structure of the elastoresistivity tensor into an easy-to-understand form and connects tensor elements to thermodynamic susceptibilities. In the process, we generalize previous studies of elastoresistivity to include the effects of magnetic field. Second, we describe an approach to measuring quantities in the elastoresistivity tensor with a novel transverse measurement, which is immune to relative strain offsets. These techniques are then applied to BaFe2As2 in a proof-of-principle measurement. This work is supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract DE-AC02-76SF00515.
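A toy sketch of the Voigt-notation relation (Δρ/ρ)_i = m_ik ε_k that such measurements probe. The coefficient values are invented (not data for BaFe2As2), the symmetry pattern is only loosely tetragonal-like, and engineering-shear conventions are glossed over.

```python
import numpy as np

# Voigt-notation elastoresistivity: (delta rho / rho)_i = m_ik * eps_k
# Illustrative coefficient matrix with made-up numbers.
m = np.zeros((6, 6))
m[0, 0] = m[1, 1] = 2.0      # longitudinal in-plane response
m[0, 1] = m[1, 0] = -0.5     # transverse in-plane response
m[2, 2] = 1.0                # out-of-plane response
m[5, 5] = 8.0                # in-plane shear channel often tied to nematic susceptibility

def resistivity_response(strain_voigt):
    """Fractional resistivity changes produced by a Voigt strain vector."""
    return m @ strain_voigt

# A pure in-plane shear strain of 0.1% excites only the shear channel here.
eps = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1e-3])
print(resistivity_response(eps))
```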
20. Improved Assembly for Gas Shielding During Welding or Brazing
Science.gov (United States)
Gradl, Paul; Baker, Kevin; Weeks, Jack
2009-01-01
An improved assembly for inert-gas shielding of a metallic joint is designed to be useable during any of a variety of both laser-based and traditional welding and brazing processes. The basic purpose of this assembly or of a typical prior related assembly is to channel the flow of a chemically inert gas to a joint to prevent environmental contamination of the joint during the welding or brazing process and, if required, to accelerate cooling upon completion of the process.
1. Facility target insert shielding assessment
Energy Technology Data Exchange (ETDEWEB)
Mocko, Michal [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-10-06
Main objective of this report is to assess the basic shielding requirements for the vertical target insert and retrieval port. We used the baseline design for the vertical target insert in our calculations. The insert sits in the 12”-diameter cylindrical shaft extending from the service alley in the top floor of the facility all the way down to the target location. The target retrieval mechanism is a long rod with the target assembly attached and running the entire length of the vertical shaft. The insert also houses the helium cooling supply and return lines each with 2” diameter. In the present study we focused on calculating the neutron and photon dose rate fields on top of the target insert/retrieval mechanism in the service alley. Additionally, we studied a few prototypical configurations of the shielding layers in the vertical insert as well as on the top.
2. Radiation shield vest and skirt
International Nuclear Information System (INIS)
Maine, G.J.
1982-01-01
A two-piece garment is described which provides shielding for female workers exposed to radiation. The upper part is a vest, overlapping and secured in the front by adjustable closures. The bottom part is a wraparound skirt, also secured by adjustable closures. The two parts overlap, thus providing continuous protection from shoulder to knee and ensuring that the back part of the body is protected as well as the front
3. Handbook of radiation shielding data
International Nuclear Information System (INIS)
Courtney, J.C.
1976-07-01
This handbook is a compilation of data on units, conversion factors, geometric considerations, sources of radiation, and the attenuation of photons, neutrons, and charged particles. It also includes related topics in health physics. Data are presented in tabular and graphical form with sufficient narrative for at least first-approximation solutions to a variety of problems in nuclear radiation protection. Members of the radiation shielding community contributed the information in this document from unclassified and uncopyrighted sources, as referenced.
4. Shielding Idiosyncrasy from Isomorphic Pressures
DEFF Research Database (Denmark)
Alvarez, José Luis; Mazza, Carmelo; Strandgaard Pedersen, Jesper
2003-01-01
with legitimacy in the field. Our theory of creative action for optimal distinctiveness suggests that film directors increase their control by personally consolidating artistic and production roles, by forming close partnership with committed producer, and by establishing own production company. Ironically, to escape...... Abstract. This paper advances a microtheory of creative action by examining how distinctive artists shield their idiosyncratic styles from the isomorphic pressures of a field. It draws on the cases of three internationally recognized, distinctive European film directors - Pedro Almodóvar (Spain), Nanni
5. Shielding wall for thermonuclear device
International Nuclear Information System (INIS)
Uchida, Takaho.
1989-01-01
This invention concerns shielding walls opposing to plasmas of a thermonuclear device and it is an object thereof to conduct reactor operation with no troubles even if a portion of shielding wall tiles should be damaged. That is, the shielding wall tiles are constituted as a dual layer structure in which the lower base tiles are connected by means of bolts to first walls. Further, the upper surface tiles are bolt-connected to the lower base tiles. In this structure, the plasma thermal loads are directly received by the surface layer tiles and heat is conducted by means of conduction and radiation to the underlying base tiles and the first walls. Even upon occurrence of destruction accidents to the surface layer tiles caused by incident heat or electromagnetic force upon elimination of plasmas, since the underlying base tiles remain as they are, the first walls constituted with stainless steels, etc. are not directly exposed to the plasmas. Accordingly, the integrity of the first walls having cooling channels can be maintained and sputtering intrusion of atoms of high atomic number into the plasmas can be prevented. (I.S.)
6. ATLAS Award for Shield Supplier
CERN Multimedia
2004-01-01
ATLAS technical coordinator Dr. Marzio Nessi presents the ATLAS supplier award to Vojtech Novotny, Director General of Skoda Hute.On 3 November, the ATLAS experiment honoured one of its suppliers, Skoda Hute s.r.o., of Plzen, Czech Republic, for their work on the detector's forward shielding elements. These huge and very massive cylinders surround the beampipe at either end of the detector to block stray particles from interfering with the ATLAS's muon chambers. For the shields, Skoda Hute produced 10 cast iron pieces with a total weight of 780 tonnes at a cost of 1.4 million CHF. Although there are many iron foundries in the CERN member states, there are only a limited number that can produce castings of the necessary size: the large pieces range in weight from 59 to 89 tonnes and are up to 1.5 metres thick.The forward shielding was designed by the ATLAS Technical Coordination in close collaboration with the ATLAS groups from the Czech Technical University and Charles University in Prague. The Czech groups a...
7. Towards overcoming the Monte Carlo sign problem with tensor networks
Energy Technology Data Exchange (ETDEWEB)
Banuls, Mari Carmen; Cirac, J. Ignacio; Kuehn, Stefan [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, Krzysztof [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Saito, Hana [AISIN AW Co., Ltd., Aichi (Japan)
2016-11-15
The study of lattice gauge theories with Monte Carlo simulations is hindered by the infamous sign problem that appears under certain circumstances, in particular at non-zero chemical potential. So far, there is no universal method to overcome this problem. However, recent years brought a new class of non-perturbative Hamiltonian techniques named tensor networks, where the sign problem is absent. In previous work, we have demonstrated that this approach, in particular matrix product states in 1+1 dimensions, can be used to perform precise calculations in a lattice gauge theory, the massless and massive Schwinger model. We have computed the mass spectrum of this theory, its thermal properties and real-time dynamics. In this work, we review these results and we extend our calculations to the case of two flavours and non-zero chemical potential. We are able to reliably reproduce known analytical results for this model, thus demonstrating that tensor networks can tackle the sign problem of a lattice gauge theory at finite density.
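A small NumPy illustration of the matrix-product-state contractions that such Hamiltonian tensor-network methods build on (here, evaluating a norm by sweeping a transfer matrix and checking it against the dense state). This is a generic sketch rather than the Schwinger-model code, and the bond and physical dimensions are arbitrary.

```python
import numpy as np

def random_mps(n_sites, phys_dim=2, bond_dim=3, seed=0):
    """Random open-boundary MPS: a list of tensors with shape (D_left, d, D_right)."""
    rng = np.random.default_rng(seed)
    dims = [1] + [bond_dim] * (n_sites - 1) + [1]
    return [rng.standard_normal((dims[i], phys_dim, dims[i + 1]))
            for i in range(n_sites)]

def mps_norm_squared(mps):
    """Contract <psi|psi> site by site (transfer-matrix sweep)."""
    env = np.ones((1, 1))
    for a in mps:
        # env[l, l'] contracted with A[l, s, r] and A[l', s, r'] gives the new env[r, r'].
        env = np.einsum('ab,asc,bsd->cd', env, a, a)
    return env[0, 0]

def mps_to_dense(mps):
    """Expand the MPS into a full state vector (small chains only)."""
    psi = np.ones((1, 1))
    for a in mps:
        psi = np.einsum('xa,asb->xsb', psi, a).reshape(-1, a.shape[2])
    return psi.reshape(-1)

mps = random_mps(6)
dense = mps_to_dense(mps)
print(np.allclose(mps_norm_squared(mps), dense @ dense))   # True
```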
8. The Modified Socket Shield Technique.
Science.gov (United States)
Han, Chang-Hun; Park, Kwang-Bum; Mangano, Francesco Guido
2018-03-20
In the anterior regions, the resorption of the buccal bone after tooth extraction leads to a contraction of the overlying soft tissues, resulting in an esthetic problem, particularly with immediate implant placement. In the socket shield technique, the buccal root section of the tooth is maintained, to preserve the buccal bone for immediate implant placement. The aim of this prospective study was to investigate the survival, stability, and complication rates of implants placed using a "modified" socket shield technique. Over a 2-year period, all patients referred to a dental clinic for treatment with oral implants were considered for inclusion in this study. Inclusion criteria were healthy adult patients who presented nonrestorable single teeth with intact buccal periodontal tissues in the anterior regions of both jaws. Exclusion criteria were teeth with present/past periodontal disease, vertical root fractures on the buccal aspect, horizontal fractures below bone level, and external/internal resorptions. The buccal portion of the root was retained to prevent the resorption of the buccal bone; the shield was 1.5 mm thick with the most coronal portion at the bone crest level. All patients then underwent immediate implants. In the patient with a gap between the implant and shield, no graft material was placed. All implants were immediately restored with single crowns and followed for 1 year. The main outcomes were implant survival, stability, and complications. Thirty patients (15 males, 15 females; mean age was 48.2 ± 15.0 years) were enrolled in the study and installed with 40 immediate implants. After 1 year, all implants were functioning, for a survival rate of 100%; excellent implant stability was reported (mean implant stability quotient at placement: 72.9 ± 5.9; after 1 year: 74.6 ± 2.7). No biologic complications were reported, and the incidence of prosthetic complications was low (2.5%). The "modified" socket shield technique seems to be a
9. An Adaptive Spectrally Weighted Structure Tensor Applied to Tensor Anisotropic Nonlinear Diffusion for Hyperspectral Images
Science.gov (United States)
Marin Quintero, Maider J.
2013-01-01
The structure tensor for vector valued images is most often defined as the average of the scalar structure tensors in each band. The problem with this definition is the assumption that all bands provide the same amount of edge information giving them the same weights. As a result non-edge pixels can be reinforced and edges can be weakened…
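A hedged NumPy sketch of the per-band structure tensor and the uniform-weight band average that the abstract criticizes as the usual baseline; the `weights` argument is where a spectrally adaptive scheme would plug in, and all names and the toy edge image are placeholders.

```python
import numpy as np

def band_structure_tensor(band):
    """2x2 structure tensor field of a single band from finite-difference gradients."""
    gy, gx = np.gradient(band)
    return np.stack([np.stack([gx * gx, gx * gy], axis=-1),
                     np.stack([gx * gy, gy * gy], axis=-1)], axis=-2)

def multiband_structure_tensor(cube, weights=None):
    """Weighted average of per-band structure tensors (uniform weights = baseline)."""
    n_bands = cube.shape[-1]
    if weights is None:
        weights = np.full(n_bands, 1.0 / n_bands)
    tensors = [w * band_structure_tensor(cube[..., b])
               for b, w in enumerate(weights)]
    return np.sum(tensors, axis=0)

# Toy hyperspectral cube: 32x32 pixels, 5 bands, with a vertical edge.
cube = np.zeros((32, 32, 5))
cube[:, 16:, :] = 1.0
st = multiband_structure_tensor(cube)
print(st.shape)          # (32, 32, 2, 2)
print(st[16, 16])        # strong gx*gx component at the edge
```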
10. Generalized tensor-based morphometry of HIV/AIDS using multivariate statistics on deformation tensors.
Science.gov (United States)
Lepore, N; Brun, C; Chou, Y Y; Chiang, M C; Dutton, R A; Hayashi, K M; Luders, E; Lopez, O L; Aizenstein, H J; Toga, A W; Becker, J T; Thompson, P M
2008-01-01
This paper investigates the performance of a new multivariate method for tensor-based morphometry (TBM). Statistics on Riemannian manifolds are developed that exploit the full information in deformation tensor fields. In TBM, multiple brain images are warped to a common neuroanatomical template via 3-D nonlinear registration; the resulting deformation fields are analyzed statistically to identify group differences in anatomy. Rather than study the Jacobian determinant (volume expansion factor) of these deformations, as is common, we retain the full deformation tensors and apply a manifold version of Hotelling's T² test to them, in a Log-Euclidean domain. In 2-D and 3-D magnetic resonance imaging (MRI) data from 26 HIV/AIDS patients and 14 matched healthy subjects, we compared multivariate tensor analysis versus univariate tests of simpler tensor-derived indices: the Jacobian determinant, the trace, geodesic anisotropy, and eigenvalues of the deformation tensor, and the angle of rotation of its eigenvectors. We detected consistent, but more extensive patterns of structural abnormalities, with multivariate tests on the full tensor manifold. Their improved power was established by analyzing cumulative p-value plots using false discovery rate (FDR) methods, appropriately controlling for false positives. This increased detection sensitivity may empower drug trials and large-scale studies of disease that use tensor-based morphometry.
11. Tensor network state correspondence and holography
Science.gov (United States)
Singh, Sukhwinder
2018-01-01
In recent years, tensor network states have emerged as a very useful conceptual and simulation framework to study quantum many-body systems at low energies. In this paper, we describe a particular way in which any given tensor network can be viewed as a representation of two different quantum many-body states. The two quantum many-body states are said to correspond to each other by means of the tensor network. We apply this "tensor network state correspondence"—a correspondence between quantum many-body states mediated by tensor networks as we describe—to the multi-scale entanglement renormalization ansatz (MERA) representation of ground states of one dimensional (1D) quantum many-body systems. Since the MERA is a 2D hyperbolic tensor network (the extra dimension is identified as the length scale of the 1D system), the two quantum many-body states obtained from the MERA, via tensor network state correspondence, are seen to live in the bulk and on the boundary of a discrete hyperbolic geometry. The bulk state so obtained from a MERA exhibits interesting features, some of which caricature known features of the holographic correspondence of String theory. We show how (i) the bulk state admits a description in terms of "holographic screens", (ii) the conformal field theory data associated with a critical ground state can be obtained from the corresponding bulk state, in particular, how pointlike boundary operators are identified with extended bulk operators. (iii) We also present numerical results to illustrate that bulk states, dual to ground states of several critical spin chains, have exponentially decaying correlations, and that the bulk correlation length generally decreases with increase in central charge for these spin chains.
12. Susceptibility tensor imaging (STI) of the brain.
Science.gov (United States) Li, Wei; Liu, Chunlei; Duong, Timothy Q; van Zijl, Peter C M; Li, Xu 2017-04-01 Susceptibility tensor imaging (STI) is a recently developed MRI technique that allows quantitative determination of orientation-independent magnetic susceptibility parameters from the dependence of gradient echo signal phase on the orientation of biological tissues with respect to the main magnetic field. By modeling the magnetic susceptibility of each voxel as a symmetric rank-2 tensor, individual magnetic susceptibility tensor elements as well as the mean magnetic susceptibility and magnetic susceptibility anisotropy can be determined for brain tissues that would still show orientation dependence after conventional scalar-based quantitative susceptibility mapping to remove such dependence. Similar to diffusion tensor imaging, STI allows mapping of brain white matter fiber orientations and reconstruction of 3D white matter pathways using the principal eigenvectors of the susceptibility tensor. In contrast to diffusion anisotropy, the main determinant factor of the susceptibility anisotropy in brain white matter is myelin. Another unique feature of the susceptibility anisotropy of white matter is its sensitivity to gadolinium-based contrast agents. Mechanistically, MRI-observed susceptibility anisotropy is mainly attributed to the highly ordered lipid molecules in the myelin sheath. STI provides a consistent interpretation of the dependence of phase and susceptibility on orientation at multiple scales. This article reviews the key experimental findings and physical theories that led to the development of STI, its practical implementations, and its applications for brain research. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd. 13. Primordial tensor modes of the early Universe Science.gov (United States) Martínez, Florencia Benítez; Olmedo, Javier 2016-06-01 We study cosmological tensor perturbations on a quantized background within the hybrid quantization approach. In particular, we consider a flat, homogeneous and isotropic spacetime and small tensor inhomogeneities on it. We truncate the action to second order in the perturbations. The dynamics is ruled by a homogeneous scalar constraint. We carry out a canonical transformation in the system where the Hamiltonian for the tensor perturbations takes a canonical form. The new tensor modes now admit a standard Fock quantization with a unitary dynamics. We then combine this representation with a generic quantum scheme for the homogeneous sector. We adopt a Born-Oppenheimer ansatz for the solutions to the constraint operator, previously employed to study the dynamics of scalar inhomogeneities. We analyze the approximations that allow us to recover, on the one hand, a Schrödinger equation similar to the one emerging in the dressed metric approach and, on the other hand, the ones necessary for the effective evolution equations of these primordial tensor modes within the hybrid approach to be valid. Finally, we consider loop quantum cosmology as an example where these quantization techniques can be applied and compare with other approaches. 14. 
TensorPack: a Maple-based software package for the manipulation of algebraic expressions of tensors in general relativity International Nuclear Information System (INIS) Huf, P A; Carminati, J 2015-01-01 In this paper we: (1) introduce TensorPack, a software package for the algebraic manipulation of tensors in covariant index format in Maple; (2) briefly demonstrate the use of the package with an orthonormal tensor proof of the shearfree conjecture for dust. TensorPack is based on the Riemann and Canon tensor software packages and uses their functions to express tensors in an indexed covariant format. TensorPack uses a string representation as input and provides functions for output in index form. It extends the functionality to basic algebra of tensors, substitution, covariant differentiation, contraction, raising/lowering indices, symmetry functions and other accessory functions. The output can be merged with text in the Maple environment to create a full working document with embedded dynamic functionality. The package offers potential for manipulation of indexed algebraic tensor expressions in a flexible software environment. (paper) 15. (Ln-bar, g)-spaces. Ordinary and tensor differentials International Nuclear Information System (INIS) Manoff, S.; Dimitrov, B. 1998-01-01 Different types of differentials as special cases of differential operators acting on tensor fields over (L n bar, g)-spaces are considered. The ordinary differential, the covariant differential as a special case of the covariant differential operator, and the Lie differential as a special case of the Lie differential operator are investigated. The tensor differential and its special types (Covariant tensor differential, and Lie tensor differential) are determined and their properties are discussed. Covariant symmetric and antisymmetric (external) tensor differentials, Lie symmetric, and Lie antisymmetric (external) tensor differentials are determined and considered over (L n bar, g)-spaces 16. Energy-momentum tensor in the fermion-pairing model International Nuclear Information System (INIS) Kawati, S.; Miyata, H. 1980-01-01 The symmetric energy-momentum tensor for the self-interacting fermion theory (psi-barpsi) 2 is expressed in terms of the collective mode within the Hartree approximation. The divergent part of the energy-momentum tensor for the fermion theory induces an effective energy-momentum tensor for the collective mode, and this effective energy-momentum tensor automatically has the Callan-Coleman-Jackiw improved form. The renormalized energy-momentum tensor is structurally equivalent to the Callan-Coleman-Jackiw improved tensor for the Yukawa theory 17. Radiation shielding activities at IDOM International Nuclear Information System (INIS) Ordóñez, César Hueso; Gurpegui, Unai Cano; Valiente, Yelko Chento; Poveda, Imanol Zamora 2017-01-01 When human activities have to be performed under ionising radiation environments the safety of the workers must be guaranteed. Usually three principles are used to accomplish with ALARA (As Low As Reasonably Achievable) requirements: the more distance between the source term and the worker, the better; the less time spent to arrange any task, the better; and, once the previous principles are optimized should the exposure of the workers continues being above the regulatory limits, shielding has to be implemented. Through this paper some different examples of IDOM's shielding design activities are presented. 
Beginning with the gamma collimators for the Jules Horowitz Reactor, nuclear fuel's behaviour researching facility, where the beam path crosses the reactor's containment walls and is steered up to a gamma detector where the fuel spectrum is analysed and where the beam has to be attenuated several orders of magnitude in a short distance. Later it is shown IDOM’s approach for the shielding of the Emergency Control Management Center of Asociación Nuclear Ascó-Vandellòs-II NPPs, a bunker designed to withstand severe accident conditions and to support the involved staff during 30 days, considering the outside radioactive cloud and the inside source term that filtering units become as they filter the incoming air. And finally, a general approach to this kind of problems is presented, since the study of the source term considering all the possible contributions, passing through the material selection and the thicknesses calculation until the optimization of the materials. (author) 18. Shielded room measurements, Final report Energy Technology Data Exchange (ETDEWEB) Stanton, J.S. 1949-02-22 The attenuation of electro-statically and electro-magnetically shielded rooms in the E, R, I, and T Buildings was measured so that corrective measure could be taken if the attenuation was found to be low. If remedial measures could not be taken, the shortcomings of the rooms would be known. Also, the men making the measurements should oversee construction and correct errors at the time. The work was performed by measuring the attenuation at spot frequencies over the range of from 150 kilocycles to 1280 megacycles with suitable equipment mounted in small rubber-tried trucks. The attenuation was determined by before and after shielding and/or door open and door closed measurements after installation of copper shielding. In general, attenuation in the frequency range of approximately 10 to 150 mc. was good and was of the order expected. At frequencies in the range of 150 mc. to 1280 mc., the attenuation curve was more erratic; that is, at certain frequencies a severe loss of attenuation was noted, while at others, the attenuation was very good. This was mainly due to poor or faulty seals around doors and pass windows. These poor seals existed in the T, E, and I Buildings because the doors were fitted improperly and somewhat inferior material was used. By experience from these difficulties, both causes were corrected in the R Building, which resulted in the improvement of the very high frequency (v.h.f.) range in this building. In some specific cases, however, the results were about the same. For the range of frequencies below approximately 10 mc., the attenuation, in almost all cases, gradually decreased as the frequency decreased and reached a minimum at .3 to 1.0 mc. This loss of attenuation was attributed to multiple grounding caused by moisture in the insulating timbers and will gradually decrease as the wood dries out. 19. Radiation shielding activities at IDOM Energy Technology Data Exchange (ETDEWEB) Ordóñez, César Hueso; Gurpegui, Unai Cano; Valiente, Yelko Chento; Poveda, Imanol Zamora, E-mail: [email protected] [IDOM, Consulting, Engineering and Architecture, S.A.U, Vizcaya (Spain) 2017-07-01 When human activities have to be performed under ionising radiation environments the safety of the workers must be guaranteed. 
Usually three principles are used to accomplish with ALARA (As Low As Reasonably Achievable) requirements: the more distance between the source term and the worker, the better; the less time spent to arrange any task, the better; and, once the previous principles are optimized should the exposure of the workers continues being above the regulatory limits, shielding has to be implemented. Through this paper some different examples of IDOM's shielding design activities are presented. Beginning with the gamma collimators for the Jules Horowitz Reactor, nuclear fuel's behaviour researching facility, where the beam path crosses the reactor's containment walls and is steered up to a gamma detector where the fuel spectrum is analysed and where the beam has to be attenuated several orders of magnitude in a short distance. Later it is shown IDOM’s approach for the shielding of the Emergency Control Management Center of Asociación Nuclear Ascó-Vandellòs-II NPPs, a bunker designed to withstand severe accident conditions and to support the involved staff during 30 days, considering the outside radioactive cloud and the inside source term that filtering units become as they filter the incoming air. And finally, a general approach to this kind of problems is presented, since the study of the source term considering all the possible contributions, passing through the material selection and the thicknesses calculation until the optimization of the materials. (author) 20. Magnetic shielding of a limiter International Nuclear Information System (INIS) Brevnov, N.N.; Stepanov, S.B.; Khimchenko, L.N.; Matthews, G.F.; Goodal, D.H.J. 1991-01-01 Localization of plasma interaction with material surfaces in a separate chamber, from where the escape of impurities is hardly realized, i.e. application of magnetic divertors or pump limiters, is the main technique for reduction of the impurity content in a plasma. In this case, the production of a divertor configuration requires a considerable power consumption and results in a less effective utilization of the magnetic field volume. Utilization of a pump limiter, for example the ICL-type, under tokamak-reactor conditions would result in the extremely high and forbidden local heat loadings onto the limiter surface. Moreover, the magnetically-shielded pump limiter (MSL) was proposed to combine positive properties of the divertor and the pump limiter. The idea of magnetic shielding is to locate the winding with current inside the limiter head so that the field lines of the resultant magnetic field do not intercept the limiter surface. In this case the plasma flows around the limiter leading edges and penetrates into the space under the limiter. The shielding magnetic field can be directed either counter the toroidal field or counter the poloidal one of a tokamak, dependent on the concrete diagram of the device. Such a limiter has a number of advantages: -opportunity to control over the particle and impurity recycling without practical influence upon the plasma column geometry, - perturbation of a plasma column magnetic configuration from the side of such a limiter is less than that from the side of the divertor coils. The main deficiency is the necessity to locate active windings inside the discharge chamber. (author) 5 refs., 3 figs 1. 
Dynamic rotating-shield brachytherapy Energy Technology Data Exchange (ETDEWEB) Liu, Yunlong [Department of Electrical and Computer Engineering, University of Iowa, 4016 Seamans Center, Iowa City, Iowa 52242 (United States); Flynn, Ryan T.; Kim, Yusung [Department of Radiation Oncology, University of Iowa, 200 Hawkins Drive, Iowa City, Iowa 52242 (United States); Yang, Wenjun [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Wu, Xiaodong [Department of Electrical and Computer Engineering, University of Iowa, 4016 Seamans Center, Iowa City, Iowa 52242 and Department of Radiation Oncology, University of Iowa, 200 Hawkins Drive, Iowa City, Iowa 52242 (United States) 2013-12-15 Purpose: To present dynamic rotating shield brachytherapy (D-RSBT), a novel form of high-dose-rate brachytherapy (HDR-BT) with an electronic brachytherapy source, where the radiation shield is capable of changing emission angles during the radiation delivery process. Methods: A D-RSBT system uses two layers of independently rotating tungsten alloy shields, each with a 180° azimuthal emission angle. The D-RSBT planning is separated into two stages: anchor plan optimization and optimal sequencing. In the anchor plan optimization, anchor plans are generated by maximizing the D90 for the high-risk clinical-tumor-volume (HR-CTV) assuming a fixed azimuthal emission angle of 11.25°. In the optimal sequencing, treatment plans that most closely approximate the anchor plans under the delivery-time constraint will be efficiently computed. Treatment plans for five cervical cancer patients were generated for D-RSBT, single-shield RSBT (S-RSBT), and 192Ir-based intracavitary brachytherapy with supplementary interstitial brachytherapy (IS + ICBT) assuming five treatment fractions. External beam radiotherapy doses of 45 Gy in 25 fractions of 1.8 Gy each were accounted for. The high-risk clinical target volume (HR-CTV) doses were escalated such that the D2cc of the rectum, sigmoid colon, or bladder reached its tolerance equivalent dose in 2 Gy fractions (EQD2 with α/β = 3 Gy) of 75 Gy, 75 Gy, or 90 Gy, respectively. Results: For the patients considered, IS + ICBT had an average total dwell time of 5.7 minutes/fraction (min/fx) assuming a 10 Ci 192Ir source, and the average HR-CTV D90 was 78.9 Gy. In order to match the HR-CTV D90 of IS + ICBT, D-RSBT required an average of 10.1 min/fx more delivery time, and S-RSBT required 6.7 min/fx more. If an additional 20 min/fx of delivery time is allowed beyond that of the IS + ICBT case, D-RSBT and S-RSBT increased the HR-CTV D90 above IS + ICBT by an average of 16.3 Gy and 9.1 Gy, respectively. 2. Irrigoscopy - irrigography method, dosimetry and radiation shielding International Nuclear Information System (INIS) Zubanov, Z.; Kolarevic, G. 1999-01-01 Use of patient's radiation shielding during radiology diagnostic procedures in our country is insufficiently represented, so patients needlessly receive very high entrance skin doses in body areas which are not in the direct x-ray beam. During irrigoscopy, patient's radiation shielding is a very complex problem, because of the organs' position. In the future that problem must be solved. We hope that some of our suggestions about patient's radiation shielding during irrigoscopy can be a small step in that way. (author) 3. 
Encoding !-tensors as !-graphs with neighbourhood orders Directory of Open Access Journals (Sweden) David Quick 2015-11-01 Full Text Available Diagrammatic reasoning using string diagrams provides an intuitive language for reasoning about morphisms in a symmetric monoidal category. To allow working with infinite families of string diagrams, !-graphs were introduced as a method to mark repeated structure inside a diagram. This led to !-graphs being implemented in the diagrammatic proof assistant Quantomatic. Having a partially automated program for rewriting diagrams has proven very useful, but being based on !-graphs, only commutative theories are allowed. An enriched abstract tensor notation, called !-tensors, has been used to formalise the notion of !-boxes in non-commutative structures. This work-in-progress paper presents a method to encode !-tensors as !-graphs with some additional structure. This will allow us to leverage the existing code from Quantomatic and quickly provide various tools for non-commutative diagrammatic reasoning. 4. Federated Tensor Factorization for Computational Phenotyping Science.gov (United States) Kim, Yejin; Sun, Jimeng; Yu, Hwanjo; Jiang, Xiaoqian 2017-01-01 Tensor factorization models offer an effective approach to convert massive electronic health records into meaningful clinical concepts (phenotypes) for data analysis. These models need a large amount of diverse samples to avoid population bias. An open challenge is how to derive phenotypes jointly across multiple hospitals, in which direct patient-level data sharing is not possible (e.g., due to institutional policies). In this paper, we developed a novel solution to enable federated tensor factorization for computational phenotyping without sharing patient-level data. We developed secure data harmonization and federated computation procedures based on alternating direction method of multipliers (ADMM). Using this method, the multiple hospitals iteratively update tensors and transfer secure summarized information to a central server, and the server aggregates the information to generate phenotypes. We demonstrated with real medical datasets that our method resembles the centralized training model (based on combined datasets) in terms of accuracy and phenotypes discovery while respecting privacy. PMID:29071165 5. Exploring extra dimensions through inflationary tensor modes Science.gov (United States) Im, Sang Hui; Nilles, Hans Peter; Trautner, Andreas 2018-03-01 Predictions of inflationary schemes can be influenced by the presence of extra dimensions. This could be of particular relevance for the spectrum of gravitational waves in models where the extra dimensions provide a brane-world solution to the hierarchy problem. Apart from models of large as well as exponentially warped extra dimensions, we analyze the size of tensor modes in the Linear Dilaton scheme recently revived in the discussion of the "clockwork mechanism". The results are model dependent, significantly enhanced tensor modes on one side and a suppression on the other. In some cases we are led to a scheme of "remote inflation", where the expansion is driven by energies at a hidden brane. In all cases where tensor modes are enhanced, the requirement of perturbativity of gravity leads to a stringent upper limit on the allowed Hubble rate during inflation. 6. 
Permittivity and permeability tensors for cloaking applications CERN Document Server Choudhury, Balamati; Jha, Rakesh Mohan 2016-01-01 This book is focused on derivations of analytical expressions for stealth and cloaking applications. An optimal version of electromagnetic (EM) stealth is the design of invisibility cloak of arbitrary shapes in which the EM waves can be controlled within the cloaking shell by introducing a prescribed spatial variation in the constitutive parameters. The promising challenge in design of invisibility cloaks lies in the determination of permittivity and permeability tensors for all the layers. This book provides the detailed derivation of analytical expressions of the permittivity and permeability tensors for various quadric surfaces within the eleven Eisenhart co-ordinate systems. These include the cylinders and the surfaces of revolutions. The analytical modeling and spatial metric for each of these surfaces are provided along with their tensors. This mathematical formulation will help the EM designers to analyze and design of various quadratics and their hybrids, which can eventually lead to design of cloakin... 7. Tensor calculus for engineers and physicists CERN Document Server de Souza Sánchez Filho, Emil 2016-01-01 This textbook provides a rigorous approach to tensor manifolds in several aspects relevant for Engineers and Physicists working in industry or academia. With a thorough, comprehensive, and unified presentation, this book offers insights into several topics of tensor analysis, which covers all aspects of N dimensional spaces. The main purpose of this book is to give a self-contained yet simple, correct and comprehensive mathematical explanation of tensor calculus for undergraduate and graduate students and for professionals. In addition to many worked problems, this book features a selection of examples, solved step by step. Although no emphasis is placed on special and particular problems of Engineering or Physics, the text covers the fundamentals of these fields of science. The book makes a brief introduction into the basic concept of the tensorial formalism so as to allow the reader to make a quick and easy review of the essential topics that enable having the grounds for the subsequent themes, without need... 8. Tensor pressure tokamak equilibrium and stability Energy Technology Data Exchange (ETDEWEB) Cooper, W.A. 1981-03-01 We investigate the equilibrium and magnetohydrodynamic (MHD) stability of tokamaks with tensor pressure and examine, in particular, the effects of anisotropies induced by neutral beam injection. Perpendicular and parallel beam pressure components are evaluated by taking moments of a distribution function obtained from the solution of a Fokker-Planck equation that models the injection of high-energy neutral beams into a tokamak. We numerically generate D-shaped beam-induced tensor pressure equilibria. A double adiabatic energy principle is derived from a modified version of the guiding center plasma energy principle. Finally, we apply the tensor pressure ballooning mode equation to computed equilibria that model experimentally determined ISX-B discharge profiles with high-power neutral beam injection. We predict that the plasma is unstable to flutelike modes in the central core of the discharge as a result of the pressure profile peakedness induced by the beams. 9. 
Isotopic age of enderbites of Ukrainian shield International Nuclear Information System (INIS) Bartnitskij, E.N.; Bojko, V.L.; Levkovskaya, N.Yu.; Lesnaya, I.M.; Siroshtan, R.I.; Sharkin, O.P. 1987-01-01 Results of determining U-Pb isotopic age of accessory zircons from enderbites of the Azov, Dniestrovo-Bug and Ingulo-Ingultsk regions of the Ukrainian shield are presented. The isotopic age values obtained range from 3400 million years for enderbites of the Novo-Pavlovsk complex of the Ukrainian shield down to 2100 million years for enderbites and charnockites of the Berdichev complex. Thus, enderbites of both Archean and Proterozoic age are found in the Ukrainian shield area, which points to the diverse manifestation of granulite metamorphism in various blocks of the Ukrainian shield. 10. Problems of the power plant shield optimization International Nuclear Information System (INIS) Abagyan, A.A.; Dubinin, A.A.; Zhuravlev, V.I.; Kurachenko, Yu.A.; Petrov, Eh.E. 1981-01-01 General approaches to the solution of problems on the nuclear power plant radiation shield optimization are considered. The requirements on the shield parameters are formulated in the form of restrictions on a number of functionals, determined by the solution of γ quantum and neutron transport equations or by dimensional and weight characteristics of shield components. Functionals determined by weight and dimensional parameters (shield cost, mass and thickness) and functionals determined by radiation fields (equivalent dose rate produced by neutrons and γ quanta, activation functional, radiation functional, heat flux, integral heat flux in a particular part of the shield volume, total energy flux through a particular shield surface) are considered. The following methods of numerical solution of simplified optimization problems are discussed: semiempirical methods using radiation transport physical leaks, numerical solution of approximate transport equations, numerical solution of transport equations for the simplest configurations making it possible to essentially decrease the number of variables in the problem. The conclusion is drawn that the attained level of investigations on the problem of nuclear power plant shield optimization makes it possible at present to pass on to the solution of problems with a more detailed account of the real shield operating conditions (shield temperature field account, its strength and other characteristics) [ru 11. Radiation dose reduction by water shield International Nuclear Information System (INIS) Zeb, J.; Arshed, W.; Ahmad, S.S. 2007-06-01 This report is an operational manual of the shielding software W-Shielder, developed at Health Physics Division (HPD), Pakistan Institute of Nuclear Science and Technology (PINSTECH), Pakistan Atomic Energy Commission. The software estimates shielding thickness for photons having their energy in the range 0.5 to 10 MeV. To compute the shield thickness, self absorption in the source has been neglected and the source has been assumed as a point source. Water is used as a shielding material in this software. The software is helpful in estimating the water thickness for safe handling and storage of gamma-emitting radionuclides. (author) 12. Optimization of the CMS forward shielding CERN Document Server Huhtinen, Mika 2000-01-01 A first realistic version of the CMS forward shielding was presented in the 1999 Engineering Design Review. 
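The W-Shielder entry above estimates a water thickness for gamma sources treated as point sources with self-absorption neglected. A minimal sketch of that kind of estimate follows; it uses narrow-beam exponential attenuation with an assumed attenuation coefficient for water at about 1 MeV, and it ignores buildup, so it is illustrative only and not a reproduction of the code described in the report.

```python
import math

def water_thickness_cm(unshielded_dose_rate, dose_limit, mu_water_per_cm=0.071):
    """Water thickness x so that unshielded_dose_rate * exp(-mu * x) drops below dose_limit.

    Both dose rates share any consistent unit (e.g. microSv/h).
    mu_water_per_cm ~ 0.071 /cm is an assumed value for ~1 MeV photons in water.
    """
    if unshielded_dose_rate <= dose_limit:
        return 0.0
    return math.log(unshielded_dose_rate / dose_limit) / mu_water_per_cm

# Example: reduce 500 uSv/h at the point of interest down to 2.5 uSv/h.
print(f"{water_thickness_cm(500.0, 2.5):.0f} cm of water")  # roughly 75 cm with these assumptions
```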
It was discovered that the background increased by a factor of 2 with respect to the TDR, where an idealized shielding had been assumed. This note describes the optimizations implemented in the realistic shielding with the aim of recovering the TDR performance. Optimization of the shielding geometry and the beam pipe, together with a filling of major cracks, has made it possible to achieve the goal. Although the differences to the TDR are very minor, these new calculations should be understood as an update to those presented in the TDR. 13. Quantum-Chemical Approach to NMR Chemical Shifts in Paramagnetic Solids Applied to LiFePO4 and LiCoPO4. Science.gov (United States) Mondal, Arobendo; Kaupp, Martin 2018-03-09 A novel protocol to compute and analyze NMR chemical shifts for extended paramagnetic solids, accounting comprehensively for Fermi-contact (FC), pseudocontact (PC), and orbital shifts, is reported and applied to the important lithium ion battery cathode materials LiFePO4 and LiCoPO4. Using an EPR-parameter-based ansatz, the approach combines periodic (hybrid) DFT computation of hyperfine and orbital-shielding tensors with an incremental cluster model for g- and zero-field-splitting (ZFS) D-tensors. The cluster model allows the use of advanced multireference wave function methods (such as CASSCF or NEVPT2). Application of this protocol shows that the 7Li shifts in the high-voltage cathode material LiCoPO4 are dominated by spin-orbit-induced PC contributions, in contrast with previous assumptions, fundamentally changing interpretations of the shifts in terms of covalency. PC contributions are smaller for the 7Li shifts of the related LiFePO4, where FC and orbital shifts dominate. The 31P shifts of both materials, finally, are almost pure FC shifts. Nevertheless, large ZFS contributions can give rise to non-Curie temperature dependences for both 7Li and 31P shifts. 14. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery. Science.gov (United States) Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben 2017-08-02 It is well known that the sparsity/low-rank of a vector/matrix can be rationally measured by the number of nonzero entries ($l_0$ norm) / the number of nonzero singular values (rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high order tensor is expected to provide more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high order sparsity measure for a tensor is a relatively harder task. To this aim, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR briefly), which encodes both sparsity insights delivered by Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. Then we study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods beyond state-of-the-art methods. 15. 
Diffusion tensor smoothing through weighted Karcher means Science.gov (United States) Carmichael, Owen; Chen, Jun; Paul, Debashis; Peng, Jie 2014-01-01 Diffusion tensor magnetic resonance imaging (MRI) quantifies the spatial distribution of water diffusion at each voxel on a regular grid of locations in a biological specimen by diffusion tensors, 3 × 3 positive definite matrices. Removal of noise from DTI is an important problem due to the high scientific relevance of DTI and relatively low signal to noise ratio it provides. Leading approaches to this problem amount to estimation of weighted Karcher means of diffusion tensors within spatial neighborhoods, under various metrics imposed on the space of tensors. However, it is unclear how the behavior of these estimators varies with the magnitude of DTI sensor noise (the noise resulting from the thermal effects of MRI scanning) as well as the geometric structure of the underlying diffusion tensor neighborhoods. In this paper, we combine theoretical analysis, empirical analysis of simulated DTI data, and empirical analysis of real DTI scans to compare the noise removal performance of three kernel-based DTI smoothers that are based on Euclidean, log-Euclidean, and affine-invariant metrics. The results suggest, contrary to conventional wisdom, that imposing a simplistic Euclidean metric may in fact provide comparable or superior noise removal, especially in relatively unstructured regions and/or in the presence of moderate to high levels of sensor noise. On the contrary, log-Euclidean and affine-invariant metrics may lead to better noise removal in highly structured anatomical regions, especially when the sensor noise is of low magnitude. These findings emphasize the importance of considering the interplay of sensor noise magnitude and tensor field geometric structure when assessing diffusion tensor smoothing options. They also point to the necessity for continued development of smoothing methods that perform well across a large range of scenarios. PMID:25419264 16. Diffusion tensor imaging in spinal cord compression International Nuclear Information System (INIS) Wang, Wei; Qin, Wen; Hao, Nanxin; Wang, Yibin; Zong, Genlin 2012-01-01 Background Although diffusion tensor imaging has been successfully applied in brain research for decades, several main difficulties have hindered its extended utilization in spinal cord imaging. Purpose To assess the feasibility and clinical value of diffusion tensor imaging and tractography for evaluating chronic spinal cord compression. Material and Methods Single-shot spin-echo echo-planar DT sequences were scanned in 42 spinal cord compression patients and 49 healthy volunteers. The mean values of the apparent diffusion coefficient and fractional anisotropy were measured in region of interest at the cervical and lower thoracic spinal cord. The patients were divided into two groups according to the high signal on T2WI (the SCC-HI group and the SCC-nHI group for with or without high signal). A one-way ANOVA was used. Diffusion tensor tractography was used to visualize the morphological features of normal and impaired white matter. Results There were no statistically significant differences in the apparent diffusion coefficient and fractional anisotropy values between the different spinal cord segments of the normal subjects. All of the patients in the SCC-HI group had increased apparent diffusion coefficient values and decreased fractional anisotropy values at the lesion level compared to the normal controls. 
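The Karcher-mean entry above compares Euclidean, log-Euclidean and affine-invariant metrics for averaging diffusion tensors within a neighborhood. As a hedged illustration of just the log-Euclidean case, which has a closed form unlike the affine-invariant mean, here is a small NumPy sketch; the weights and tensors are invented and the function names are mine.

```python
import numpy as np

def spd_log(D):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(D)
    return V @ np.diag(np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_mean(tensors, weights):
    """Weighted log-Euclidean mean of SPD diffusion tensors: exp(sum_i w_i log D_i)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return spd_exp(sum(wi * spd_log(D) for wi, D in zip(w, tensors)))

# Toy neighbourhood: one isotropic and one prolate tensor (values in mm^2/s, invented).
D1 = np.diag([1.0, 1.0, 1.0]) * 1e-3
D2 = np.diag([1.7, 0.3, 0.3]) * 1e-3
print(np.round(log_euclidean_mean([D1, D2], [0.5, 0.5]), 6))  # SPD by construction
```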
However, there were no statistically significant diffusion index differences between the SCC-nHI group and the normal controls. In the diffusion tensor imaging maps, the normal spinal cord sections were depicted as fiber tracts that were color-encoded to a cephalocaudal orientation. The diffusion tensor images were compressed to different degrees in all of the patients. Conclusion Diffusion tensor imaging and tractography are promising methods for visualizing spinal cord tracts and can provide additional information in clinical studies in spinal cord compression 17. Field maintenance of radiation-shielding windows at HFEF International Nuclear Information System (INIS) Tobias, D.A. 1983-01-01 The achievement of excellent viewing through hot-cell shielding windows does not occur by chance. Instead, it requires a well planned and executed program of field maintenance. The lack of such a program is a major factor when a hot-cell facility has poor window viewing. At HFEF, all preventive maintenance is performed by one group of trained technical-support personnel under the immediate direction of a Systems Engineer, who has responsibility for the shielding windows. Window maintenance is prescheduled and recorded by being incorporated into the computerized Maintenance Data System (MDS). Measurements of window light transmission are scheduled annually to determine glass browning or oil cloudiness conditions within the window tank. The tank oil is sampled and chemically analyzed annually to determine the moisture content, the acidity, and the probable deterioration rate caused by irradiation 18. Tensor network models of multiboundary wormholes Science.gov (United States) Peach, Alex; Ross, Simon F. 2017-05-01 We study the entanglement structure of states dual to multiboundary wormhole geometries using tensor network models. Perfect and random tensor networks tiling the hyperbolic plane have been shown to provide good models of the entanglement structure in holography. We extend this by quotienting the plane by discrete isometries to obtain models of the multiboundary states. We show that there are networks where the entanglement structure is purely bipartite, extending results obtained in the large temperature limit. We analyse the entanglement structure in a range of examples. 19. Tensor modes in pure natural inflation Science.gov (United States) Nomura, Yasunori; Yamazaki, Masahito 2018-05-01 We study tensor modes in pure natural inflation [1], a recently-proposed inflationary model in which an axionic inflaton couples to pure Yang-Mills gauge fields. We find that the tensor-to-scalar ratio r is naturally bounded from below. This bound originates from the finiteness of the number of metastable branches of vacua in pure Yang-Mills theories. Details of the model can be probed by future cosmic microwave background experiments and improved lattice gauge theory calculations of the θ-angle dependence of the vacuum energy. 20. Reconstruction of convex bodies from surface tensors DEFF Research Database (Denmark) Kousholt, Astrid; Kiderlen, Markus We present two algorithms for reconstruction of the shape of convex bodies in the two-dimensional Euclidean space. The first reconstruction algorithm requires knowledge of the exact surface tensors of a convex body up to rank s for some natural number s. The second algorithm uses harmonic intrinsic...... volumes which are certain values of the surface tensors and allows for noisy measurements. 
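The spinal-cord DTI entry above reports apparent diffusion coefficient and fractional anisotropy values in regions of interest. Both quantities are simple functions of the tensor eigenvalues; the snippet below uses the standard definitions of mean diffusivity and fractional anisotropy, with example eigenvalues that are invented but of a typical order of magnitude.

```python
import numpy as np

def md_and_fa(eigvals):
    """Mean diffusivity and fractional anisotropy from the three diffusion tensor eigenvalues."""
    lam = np.asarray(eigvals, dtype=float)
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# Example eigenvalues in mm^2/s, roughly what anisotropic white matter might give.
md, fa = md_and_fa([1.6e-3, 0.4e-3, 0.3e-3])
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")  # FA close to 0.75 for these values
```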
From a generalized version of Wirtinger's inequality, we derive stability results that are utilized to ensure consistency of both reconstruction procedures. Consistency of the reconstruction procedure based... 1. Improving Tensor Based Recommenders with Clustering DEFF Research Database (Denmark) Leginus, Martin; Dolog, Peter; Zemaitis, Valdas 2012-01-01 Social tagging systems (STS) model three types of entities (i.e. tag-user-item) and relationships between them are encoded into a 3-order tensor. Latent relationships and patterns can be discovered by applying tensor factorization techniques like Higher Order Singular Value Decomposition (HOSVD...... of the recommendations and execution time are improved and memory requirements are decreased. The clustering is motivated by the fact that many tags in a tag space are semantically similar thus the tags can be grouped. Finally, promising experimental results are presented... 2. TPX remote maintenance and shielding International Nuclear Information System (INIS) Rennich, M.J.; Nelson, B.E. 1994-01-01 The Tokamak Physics Experiment machine design incorporates comprehensive planning for efficient and safe component maintenance. Three programmatic decisions have been made to insure the successful implementation of this objective. First, the tokamak incorporates radiation shielding to reduce activation of components and limit the dose rate to personnel working on the outside of the machine. This allows most of the ex-vessel equipment to be maintained through conventional ''hands-on'' procedures. Second, to the maximum extent possible, low activation materials will be used inside the shielding volume. This resulted in the selection of Titanium (Ti-6Al-4V) for the vacuum vessel and PFC structures. The third decision stipulated that the primary in-vessel components will be replaced or repaired via remote maintenance tools specifically provided for the task. The component designers have been given the responsibility of incorporating maintenance design and for proving the maintainability of the design concepts in full-scale mockup tests prior to the initiation of final fabrication. Remote maintenance of the TPX machine is facilitated by general purpose tools provided by a special purpose design team. Major tools will include an in-vessel transporter, a vessel transfer system and a large component transfer container. In addition, tools such as manipulators and remotely operable impact wrenches will be made available to the component designers by this group. Maintenance systems will also provide the necessary controls for this equipment 3. Channels in tokamak reactor shields International Nuclear Information System (INIS) Shchipakin, O.L. 1981-01-01 The results of calculations of neutron transport through the channels in the tokamak reactor radiation shields, obtained by the Monte Carlo method and by the method of discrete ordinates, are considered. The given data show that the structural materials of the channel and that of the blanket and shields in the regions close to it are subjected to almost the same irradiation as the first wall and therefore they should satisfy the technical requirements. The radiation energy release in the injector channel wall, caused by neutron shooting, substantially depends on the channel dimensions. At the channel large diameter (0.7-10 m) this dependence noticeably decreases. 
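Entry 1 above mentions Higher Order Singular Value Decomposition (HOSVD) of the tag-user-item tensor. A compact NumPy sketch of a truncated HOSVD on a 3-order tensor is given below; it follows the standard unfold, SVD and project recipe and is not code from the cited work. The random array simply stands in for a tag-user-item tensor.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization: move `mode` to the front and flatten the remaining modes."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd(X, ranks):
    """Truncated HOSVD: factor matrices from SVDs of the unfoldings, then project to get the core."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

X = np.random.rand(5, 6, 7)                      # stand-in for a small tag-user-item tensor
core, factors = hosvd(X, ranks=(3, 3, 3))
print(core.shape, [U.shape for U in factors])    # (3, 3, 3) [(5, 3), (6, 3), (7, 3)]
```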
The investigation of the effect of the injector channel cross section form on the neutron flux density through the channel, testifies to weak dependence of shooting radiation intensity on the form of the channel cross section. It is concluded that measures to decrease unfavourable effect of the channels on the safety of the power tokamak reactor operation and maintenance cause substantial changes in reactor design due to which the channel protection must be developed at first stages. The Monte Carlo method is recommended to be used for variant calculations and when calculating the neutron flux functionals in specific points of the system the discrete ordinate method is preferred [ru 4. Isotope effects on nuclear shielding International Nuclear Information System (INIS) Hansen, P.E. 1983-01-01 This review concentrates upon empirical trends and practical uses of mostly secondary isotope effects, both of the intrinsic and equilibrium types. The text and the tables are arranged in the following fashion. The most 'popular' isotope effect is treated first, deuterium isotope effects on 13 C nuclear shielding, followed by deuterium on 1 H nuclear shieldings, etc. Focus is thus on the isotopes producing the effect rather than on the nuclei suffering the effect. After a brief treatment of each type of isotope effect, general trends are dealt with. Basic trends of intrinsic isotope effects such as additivity, solvent effects, temperature effects, steric effects, substituent effects and hyperconjugation are discussed. Uses of isotope effects for assignment purposes, in stereochemical studies, in hydrogen bonding and in isotopic tracer studies are dealt with. Kinetic studies, especially of phosphates, are frequently performed by utilizing isotope effects. In addition, equilibrium isotope effects are treated in great detail as these are felt to be new and very important and may lead to new uses of isotope effects. Techniques used to obtain isotope effects are briefly surveyed at the end of the chapter. (author) 5. Observations About the Projective Tensor Product of Banach Spaces African Journals Online (AJOL) , 46B, 46E, 47B. Keywords: tensor, Banach, banach space, tensor product, projective norm, greatest crossnorm, semi-embedding, Radon-Nikodym property, absolutely p-summable sequence, strongly p-summable sequence, topological linear ... 6. Tensor completion for PDEs with uncertain coefficients and Bayesian Update KAUST Repository Litvinenko, Alexander 2017-03-05 In this work, we tried to show connections between Bayesian update and tensor completion techniques. Usually, only a small/sparse vector/tensor of measurements is available. The typical measurement is a function of the solution. The solution of a stochastic PDE is a tensor, the measurement as well. The idea is to use completion techniques to compute all "missing" values of the measurement tensor and only then apply the Bayesian technique. 7. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping. Energy Technology Data Exchange (ETDEWEB) Bader, Brett William; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA) 2004-07-01 We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' 
We have further added two classes for representing tensors in decomposed format: cp{_}tensor and tucker{_}tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature. 8. Effect of neutrons scattered from boundary of neutron field on shielding experiment International Nuclear Information System (INIS) Ogawa, Tatsuhiko; Abe, Takuya; Kosako, Toshiso; Iimoto, Takeshi 2009-01-01 Neutron shielding experiment with 49 cm-thick ordinary concrete was carried out at the reactor 'Yayoi' The University of Tokyo. System of this experiment is enclosed by heavy concrete where neutrons backscattered from heavy concrete likely affected neutron flux on the back surface of shielding concrete. Reaction rate of 197 Au(n, γ), cadmium covered 197 Au(n, γ) and 115 In(n, n') in the shielding concrete was measured using foil activation method. Neutron transport calculation was carried out in order to simulate reaction rate by calculating neutron spectra and convoluting with neutron capture cross-section in neutron shielding concrete. Comparison was made between calculated reaction rate and experimental one, and almost satisfactory agreement was found except for the back surface of shielding. To compose adequate simulation model, description of heavy concrete behind the shielding was thought to be of importance. For example, disregarding neutrons backscattered from heavy concrete, calculation underestimated reaction rate by the factor of 10. In another example, assuming that chemical composition of heavy concrete is equal to the composition adopted from a literature, the reaction rate was overestimated by factor of 5. By making the composition of heavy concrete equal to that based on facility design, overestimation was found to be the factor of 2. Therefore, adequate description of chemical composition of heavy concrete is found to be of importance in order to simulate neutron induced reaction rate on the back surface of neutron shielding concrete in shielding experiment performed in a system enclosed by heavy concrete. (author) 9. Collineations of the curvature tensor in general relativity Indian Academy of Sciences (India) physics pp. 43–48. Collineations of the curvature tensor in general relativity. RISHI KUMAR TIWARI. Department of Mathematics and Computer Application, ... and kinematical properties of the models. Keywords. Collineation; Killing vectors; Ricci tensor; Riemannian curvature tensor. PACS No. 98.80. 1. Introduction. 10. Efficient MATLAB computations with sparse and factored tensors. Energy Technology Data Exchange (ETDEWEB) Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA) 2006-12-01 In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. 
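The sparse-tensor discussion just above stores nonzeros in coordinate format. A minimal Python version of that storage scheme, together with one representative operation (a mode-n tensor-times-vector product computed from the nonzeros alone), is sketched below; the class and method names are my own and do not mirror the Tensor Toolbox API.

```python
import numpy as np

class CooTensor:
    """Sparse N-way tensor in coordinate format: one row of indices per nonzero value."""
    def __init__(self, indices, values, shape):
        self.indices = np.asarray(indices, dtype=int)   # (nnz, ndim)
        self.values = np.asarray(values, dtype=float)   # (nnz,)
        self.shape = tuple(shape)

    def norm(self):
        return float(np.sqrt(np.sum(self.values ** 2)))

    def ttv(self, vector, mode):
        """Contract `mode` with `vector`; returns a dense array over the remaining modes."""
        kept = [m for m in range(len(self.shape)) if m != mode]
        out = np.zeros(tuple(self.shape[m] for m in kept))
        for idx, val in zip(self.indices, self.values):
            out[tuple(idx[kept])] += val * vector[idx[mode]]
        return out

X = CooTensor(indices=[(0, 1, 2), (3, 0, 1)], values=[2.0, -1.0], shape=(4, 2, 3))
print(X.norm())                     # 2.236..., computed from the two nonzeros only
print(X.ttv(np.ones(3), mode=2))    # dense 4 x 2 result of contracting the third mode
```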
We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB. 11. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging]. Science.gov (United States) Xu, Yonghong; Gao, Shangce; Hao, Xiaofei 2016-04-01 Diffusion tensor imaging(DTI)is a rapid development technology in recent years of magnetic resonance imaging.The diffusion tensor interpolation is a very important procedure in DTI image processing.The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensors anisotropy,but the method does not revise the size of tensors.The present study puts forward an improved spectral quaternion interpolation method on the basis of traditional spectral quaternion interpolation.Firstly,we decomposed diffusion tensors with the direction of tensors being represented by quaternion.Then we revised the size and direction of the tensor respectively according to different situations.Finally,we acquired the tensor of interpolation point by calculating the weighted average.We compared the improved method with the spectral quaternion method and the Log-Euclidean method by the simulation data and the real data.The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy(FA)and the determinant of tensors,but also preserve the tensor anisotropy at the same time.In conclusion,the improved method provides a kind of important interpolation method for diffusion tensor image processing. 12. Tensor based structure estimation in multi-channel images DEFF Research Database (Denmark) Schou, Jesper; Dierking, Wolfgang; Skriver, Henning 2000-01-01 . In the second part tensors are used for representing the structure information. This approach has the advantage, that tensors can be averaged either spatially or by applying several images, and the resulting tensor provides information of the average strength as well as orientation of the structure... 13. The nonabelian tensor square of a bieberbach group with ... African Journals Online (AJOL) The main objective of this paper is to compute the nonabelian tensor square of one Bieberbach group with elementary abelian 2-group point group of dimension three by using the computational method of the nonabelian tensor square for polycyclic groups. The finding of the computation showed that the nonabelian tensor ... 14. Relativistic particles with spin and antisymmetric tensor fields International Nuclear Information System (INIS) Sandoval Junior, L. 1990-09-01 A study is made on antisymmetric tensor fields particularly on second order tensor field as far as his equivalence to other fields and quantization through the path integral are concerned. Also, a particle model is studied which has been recently proposed and reveals to be equivalent to antisymmetric tensor fields of any order. (L.C.J.A.) 15. 
Magnetic hydrodynamics with asymmetric stress tensor Science.gov (United States) Billig, Yuly 2005-04-01 In this paper we study equations of magnetic hydrodynamics with a stress tensor. We interpret this system as the generalized Euler equation associated with an Abelian extension of the Lie algebra of vector fields with a nontrivial 2-cocycle. We use the Lie algebra approach to prove the energy conservation law and the conservation of cross-helicity. 16. Magnetic hydrodynamics with asymmetric stress tensor OpenAIRE Billig, Yuly 2004-01-01 In this paper we study equations of magnetic hydrodynamics with a stress tensor. We interpret this system as the generalized Euler equation associated with an abelian extension of the Lie algebra of vector fields with a non-trivial 2-cocycle. We use the Lie algebra approach to prove the energy conservation law and the conservation of cross-helicity. 17. Superstrings with tensor degrees of freedom Energy Technology Data Exchange (ETDEWEB) Amorim, R. (Inst. de Fisica, Univ. Federal do Rio de Janeiro, Rio de Janeiro, RJ (Brazil)); Barcelos-Neto, J. (Inst. de Fisica, Univ. Federal do Rio de Janeiro, Rio de Janeiro, RJ (Brazil)) 1994-10-01 We add antisymmetric tensor degrees of freedom to the usual superstring coordinates. We show that super and kappa symmetries are only achieved for the spacetime dimension D = 4. We also address problems related to the quantization of the model and discuss the influences of this extended spacetime in the usual quantum field theory. (orig.) 18. Norm of the Riemannian Curvature Tensor Indian Academy of Sciences (India) We consider the Riemannian functional $\mathcal{R}_p(g) = \int_M |R(g)|^p \, dv_g$ defined on the space of Riemannian metrics with unit volume on a closed smooth manifold, where $R(g)$ and $dv_g$ denote the corresponding Riemannian curvature tensor and volume form and $p \in (0, \infty)$. First we prove that the Riemannian metrics ... 19. Abelian tensor models on the lattice Science.gov (United States) Chaudhuri, Soumyadeep; Giraldo-Rivera, Victor I.; Joseph, Anosh; Loganayagam, R.; Yoon, Junggi 2018-04-01 We consider a chain of Abelian Klebanov-Tarnopolsky fermionic tensor models coupled through quartic nearest-neighbor interactions. We characterize the gauge-singlet spectrum for small chains (L = 2, 3, 4, 5) and observe that the spectral statistics exhibits strong evidence in favor of quasi-many-body localization. 20. Primordial tensor modes from quantum corrected inflation DEFF Research Database (Denmark) Joergensen, Jakob; Sannino, Francesco; Svendsen, Ole 2014-01-01 . Finally we confront these theories with the Planck and BICEP2 data. We demonstrate that the discovery of primordial tensor modes by BICEP2 requires the presence of sizable quantum departures from the $\phi^4$-Inflaton model for the non-minimally coupled scenario which we parametrize and quantify. We... 1. Magnetotelluric impedance tensor analysis for identification of ... Indian Academy of Sciences (India) G Pavan Kumar 2017-07-18 Jul 18, 2017 ... Magnetotelluric impedance tensor analysis for identification of transverse tectonic feature in the Wagad uplift, Kachchh, northwest India. G Pavan Kumar*, Virender Kumar, Mehul Nagar, Dilip Singh,. E Mahendar, Pruthul Patel and P Mahesh. Institute of Seismological Research (ISR), Raisan, Gandhinagar ... 2. Dark energy in scalar-tensor theories International Nuclear Information System (INIS) Moeller, J. 
2007-12-01 We investigate several aspects of dynamical dark energy in the framework of scalar-tensor theories of gravity. We provide a classification of scalar-tensor coupling functions admitting cosmological scaling solutions. In particular, we recover that Brans-Dicke theory with inverse power-law potential allows for a sequence of background dominated scaling regime and scalar field dominated, accelerated expansion. Furthermore, we compare minimally and non-minimally coupled models, with respect to the small redshift evolution of the dark energy equation of state. We discuss the possibility to discriminate between different models by a reconstruction of the equation-of-state parameter from available observational data. The non-minimal coupling characterizing scalar-tensor models can - in specific cases - alleviate fine tuning problems, which appear if (minimally coupled) quintessence is required to mimic a cosmological constant. Finally, we perform a phase-space analysis of a family of biscalar-tensor models characterized by a specific type of σ-model metric, including two examples from recent literature. In particular, we generalize an axion-dilaton model of Sonner and Townsend, incorporating a perfect fluid background consisting of (dark) matter and radiation. (orig.) 3. Dark energy in scalar-tensor theories Energy Technology Data Exchange (ETDEWEB) Moeller, J. 2007-12-15 We investigate several aspects of dynamical dark energy in the framework of scalar-tensor theories of gravity. We provide a classification of scalar-tensor coupling functions admitting cosmological scaling solutions. In particular, we recover that Brans-Dicke theory with inverse power-law potential allows for a sequence of background dominated scaling regime and scalar field dominated, accelerated expansion. Furthermore, we compare minimally and non-minimally coupled models, with respect to the small redshift evolution of the dark energy equation of state. We discuss the possibility to discriminate between different models by a reconstruction of the equation-of-state parameter from available observational data. The non-minimal coupling characterizing scalar-tensor models can - in specific cases - alleviate fine tuning problems, which appear if (minimally coupled) quintessence is required to mimic a cosmological constant. Finally, we perform a phase-space analysis of a family of biscalar-tensor models characterized by a specific type of {sigma}-model metric, including two examples from recent literature. In particular, we generalize an axion-dilaton model of Sonner and Townsend, incorporating a perfect fluid background consisting of (dark) matter and radiation. (orig.) 4. Tensor network methods for invariant theory Science.gov (United States) Biamonte, Jacob; Bergholm, Ville; Lanzagorta, Marco 2013-11-01 Invariant theory is concerned with functions that do not change under the action of a given group. Here we communicate an approach based on tensor networks to represent polynomial local unitary invariants of quantum states. This graphical approach provides an alternative to the polynomial equations that describe invariants, which often contain a large number of terms with coefficients raised to high powers. This approach also enables one to use known methods from tensor network theory (such as the matrix product state (MPS) factorization) when studying polynomial invariants. As our main example, we consider invariants of MPSs. 
We generate a family of tensor contractions resulting in a complete set of local unitary invariants that can be used to express the Rényi entropies. We find that the graphical approach to representing invariants can provide structural insight into the invariants being contracted, as well as an alternative, and sometimes much simpler, means to study polynomial invariants of quantum states. In addition, many tensor network methods, such as MPSs, contain excellent tools that can be applied in the study of invariants. 5. Seamless warping of diffusion tensor fields DEFF Research Database (Denmark) Xu, Dongrong; Hao, Xuejun; Bansal, Ravi 2008-01-01 transfer that information to the template space. To combine the advantages of forward and backward mapping, we propose a novel method for the spatial normalization of diffusion tensor (DT) fields that uses a bijection (a bidirectional mapping with one-to-one correspondences between image spaces) to warp DT... 6. Visualization and processing of tensor fields CERN Document Server Weickert, Joachim 2007-01-01 Presents information on the visualization and processing of tensor fields. This book serves as an overview for the inquiring scientist, as a basic foundation for developers and practitioners, and as a textbook for specialized classes and seminars for graduate and doctoral students. 7. Magnetotelluric impedance tensor analysis for identification of ... Indian Academy of Sciences (India) We present the results of magnetotelluric (MT) impedance tensors analyses of 18 sites located along a profile cutting various faults in the uplifted Wagad block of the Kachchh basin. The MT time series of 4–5 days recording duration have been processed and the earth response functions are estimated in broad frequency ... 8. Radiation Forces and Torques without Stress (Tensors) Science.gov (United States) Bohren, Craig F. 2011-01-01 To understand radiation forces and torques or to calculate them does not require invoking photon or electromagnetic field momentum transfer or stress tensors. According to continuum electromagnetic theory, forces and torques exerted by radiation are a consequence of electric and magnetic fields acting on charges and currents that the fields induce… 9. Fermionic topological quantum states as tensor networks Science.gov (United States) Wille, C.; Buerschaper, O.; Eisert, J. 2017-06-01 Tensor network states, and in particular projected entangled pair states, play an important role in the description of strongly correlated quantum lattice systems. They do not only serve as variational states in numerical simulation methods, but also provide a framework for classifying phases of quantum matter and capture notions of topological order in a stringent and rigorous language. The rapid development in this field for spin models and bosonic systems has not yet been mirrored by an analogous development for fermionic models. In this work, we introduce a tensor network formalism capable of capturing notions of topological order for quantum systems with fermionic components. At the heart of the formalism are axioms of fermionic matrix-product operator injectivity, stable under concatenation. Building upon that, we formulate a Grassmann number tensor network ansatz for the ground state of fermionic twisted quantum double models. A specific focus is put on the paradigmatic example of the fermionic toric code. This work shows that the program of describing topologically ordered systems using tensor networks carries over to fermionic models. 10. 
Tensor B mode and stochastic Faraday mixing CERN Document Server Giovannini, Massimo 2014-01-01 This paper investigates the Faraday effect as a different source of B mode polarization. The E mode polarization is Faraday rotated provided a stochastic large-scale magnetic field is present prior to photon decoupling. In the first part of the paper we discuss the case where the tensor modes of the geometry are absent and we argue that the B mode recently detected by the Bicep2 collaboration cannot be explained by a large-scale magnetic field rotating, through the Faraday effect, the well established E mode polarization. In this case, the observed temperature autocorrelations would be excessively distorted by the magnetic field. In the second part of the paper the formation of Faraday rotation is treated as a stationary, random and Markovian process with the aim of generalizing a set of scaling laws originally derived in the absence of the tensor modes of the geometry. We show that the scalar, vector and tensor modes of the brightness perturbations can all be Faraday rotated even if the vector and tensor par... 11. Introduction to vector and tensor analysis CERN Document Server Wrede, Robert C 1972-01-01 A broad introductory treatment, this volume examines general Cartesian coordinates, the cross product, Einstein's special theory of relativity, bases in general coordinate systems, maxima and minima of functions of two variables, line integrals, integral theorems, fundamental notions in n-space, Riemannian geometry, algebraic properties of the curvature tensor, and more. 1963 edition. 12. Tensor algebra and tensor analysis for engineers with applications to continuum mechanics CERN Document Server Itskov, Mikhail 2015-01-01 This is the fourth and revised edition of a well-received book that aims at bridging the gap between the engineering course of tensor algebra on the one side and the mathematical course of classical linear algebra on the other side. In accordance with the contemporary way of scientific publications, a modern absolute tensor notation is preferred throughout. The book provides a comprehensible exposition of the fundamental mathematical concepts of tensor calculus and enriches the presented material with many illustrative examples. In addition, the book also includes advanced chapters dealing with recent developments in the theory of isotropic and anisotropic tensor functions and their applications to continuum mechanics. Hence, this monograph addresses graduate students as well as scientists working in this field. In each chapter numerous exercises are included, allowing for self-study and intense practice. Solutions to the exercises are also provided. 13. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach. Science.gov (United States) Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir 2016-08-01 In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors, and then denoising them, we define a suitable TV type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. 
To approach this functional, we propose generalized forward- backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup. 14. Optically-transparent radiation-shielding composition International Nuclear Information System (INIS) Bolles, T.F.; Fleming, P.B. 1976-01-01 An optically transparent, essentially colorless radiation shielding material for high energy radiation contains a combination of lead or thallium salts of C 1 to C 5 organic acids and may contain lead or thallium salts of mineral acids. Shields of complex shapes are easily constructed 15. Improvements in or relating to nuclear shields International Nuclear Information System (INIS) Hawkins, R.J.; Riley, K.; Powell, C. 1981-01-01 A nuclear radiation shield comprises two pieces of steel held together edge to edge by a weld, the depth of which is less than the thickness of either of the edges. As the radiaion shielding effect of the weld will be less than the steel, an insert is bolted or welded over the weld. (U.K.) 16. Several problems in accelerator shielding study International Nuclear Information System (INIS) Nakamura, Takashi; Hirayama, Hideo; Ban, Shuichi. 1980-01-01 Recently, the utilization of accelerators has increased rapidly, and the increase of accelerating energy and beam intensity is also remarkable. The studies on accelerator shielding have become important, because the amount of radiation emitted from accelerators increased, the regulation of the dose of environmental radiation was tightened, and the cost of constructing shielding rose. As the plans of constructing large accelerators have been made successively, the survey on the present state and the problems of the studies on accelerator shielding was carried out. Accelerators are classified into electron accelerators and proton accelerators in view of the studies on shielding. In order to start the studies on accelerator shielding, first, the preparation of the cross section data is indispensable. The cross sections for generating Bremsstrahlung, photonuclear reactions generating neutrons, generation of neutrons by hadrons, nuclear reaction of neutrons and generation of gamma-ray by hadrons are described. The generation of neutrons and gamma-ray as the problems of thick targets is explained. The shielding problems are complex and diversified, but in this paper, the studies on the shielding, by which basic data are obtainable, are taken up, such as beam damping and side wall shielding. As for residual radioactivity, main nuclides and the difference of residual radioactivity according to substances have been studied. (J.P.N.) 17. Actively shielded low level gamma - spectrometric system International Nuclear Information System (INIS) Mrdja, D.; Bikit, I.; Forkapic, S.; Slivka, J.; Veskovic, M. 2005-01-01 The results of the adjusting and testing of the actively shielded low level gamma-spectrometry system are presented. The veto action of the shield reduces the background in the energy region of 50 keV to the 2800 keV for about 3 times. (author) [sr 18. 
Fundamental Parameters of the SHIELD II Galaxies Science.gov (United States) Cannon, John 2014-10-01 The "Survey of HI in Extremely Low-mass Dwarfs" ("SHIELD") is a multiwavelength, legacy-class observational campaign that is facilitating the study of both internal and global evolutionary processes in 12 low-mass dwarf galaxies discovered in early Arecibo Legacy Fast ALFA (ALFALFA) survey data products. Cycle 19 HST observations of the 12 SHIELD galaxies have allowed us to determine their TRGB distances, thus anchoring the physical scales on which our ongoing analysis is based. Since the inception of SHIELD, the ALFALFA survey has completed data acquisition, thereby populating the faint end of the HI mass function with dozens of SHIELD analogs. In this proposal we request ACS imaging of 18 of these "SHIELD II" galaxies that have already been imaged in the HI spectral line with the WSRT. These data will enable a holistic HST imaging study of the fundamental parameters and characteristics of a statistically robust sample of 30 extremely low-mass galaxies (including 12 SHIELD and 18 SHIELD II systems). The primary science goal is the derivation of TRGB distances; the distance dependence of many fundamental parameters makes HST observations critical for the success of SHIELD II. Additional science goals include an accurate census of the dark matter contents of these galaxies, a spatial and temporal study of star formation within them, and a characterization of the fundamental parameters that change as galaxy masses range from "mini-halo" to star-forming dwarf. 19. Shielding effectiveness of superconductive particles in plastics International Nuclear Information System (INIS) Pienkowski, T.; Kincaid, J.; Lanagan, M.T.; Poeppel, R.B.; Dusek, J.T.; Shi, D.; Goretta, K.C. 1988-09-01 The ability to cool superconductors with liquid nitrogen instead of liquid helium has opened the door to a wide range of research. The well known Meissner effect, which states superconductors are perfectly diamagnetic, suggests shielding applications. One of the drawbacks to the new ceramic superconductors is the brittleness of the finished material. Because of this drawback, any application which required flexibility (e.g., wire and cable) would be impractical. Therefore, this paper presents the results of a preliminary investigation into the shielding effectiveness of YBa 2 Cu 3 O/sub 7-x/ both as a composite and as a monolithic material. Shielding effectiveness was measured using two separate test methods. One tested the magnetic (near field) shielding, and the other tested the electromagnetic (far field) shielding. No shielding was seen in the near field measurements on the composite samples, and only one heavily loaded sample showed some shielding in the far field. The monolithic samples showed a large amount of magnetic shielding. 5 refs., 5 figs 20. Infrared shield facilitates optical pyrometer measurements Science.gov (United States) Eichenbrenner, F. F.; Illg, W. 1965-01-01 Water-cooled shield facilitates optical pyrometer high temperature measurements of small sheet metal specimens subjected to tensile stress in fatigue tests. The shield excludes direct or reflected radiation from one face of the specimen and permits viewing of the infrared radiation only. 1. 
Generalized Tensor-Based Morphometry of HIV/AIDS Using Multivariate Statistics on Deformation Tensors OpenAIRE Lepore, Natasha; Brun, Caroline; Chou, Yi-Yu; Chiang, Ming-Chang; Dutton, Rebecca A.; Hayashi, Kiralee M.; Luders, Eileen; Lopez, Oscar L.; Aizenstein, Howard J.; Toga, Arthur W.; Becker, James T.; Thompson, Paul M. 2008-01-01 This paper investigates the performance of a new multivariate method for tensor-based morphometry (TBM). Statistics on Riemannian manifolds are developed that exploit the full information in deformation tensor fields. In TBM, multiple brain images are warped to a common neuroanatomical template via 3-D nonlinear registration; the resulting deformation fields are analyzed statistically to identify group differences in anatomy. Rather than study the Jacobian determinant (volume expansion factor... 2. PC based temporary shielding administrative procedure (TSAP) Energy Technology Data Exchange (ETDEWEB) Olsen, D.E.; Pederson, G.E. [Sargent & Lundy, Chicago, IL (United States); Hamby, P.N. [Commonwealth Edison Co., Downers Grove, IL (United States) 1995-03-01 A completely new Administrative Procedure for temporary shielding was developed for use at Commonwealth Edison`s six nuclear stations. This procedure promotes the use of shielding, and addresses industry requirements for the use and control of temporary shielding. The importance of an effective procedure has increased since more temporary shielding is being used as ALARA goals become more ambitious. To help implement the administrative procedure, a personal computer software program was written to incorporate the procedural requirements. This software incorporates the useability of a Windows graphical user interface with extensive help and database features. This combination of a comprehensive administrative procedure and user friendly software promotes the effective use and management of temporary shielding while ensuring that industry requirements are met. 3. Technology development for radiation shielding analysis International Nuclear Information System (INIS) Ha, Jung Woo; Lee, Jae Kee; Kim, Jong Kyung 1986-12-01 Radiation shielding analysis in nuclear engineering fields is an important technology which is needed for the calculation of reactor shielding as well as radiation related safety problems in nuclear facilities. Moreover, the design technology required in high level radioactive waste management and disposal facilities is faced on serious problems with rapidly glowing nuclear industry development, and more advanced technology has to be developed for tomorrow. The main purpose of this study is therefore to build up the self supporting ability of technology development for the radiation shielding analysis in order to achieve successive development of nuclear industry. It is concluded that basic shielding calculations are possible to handle and analyze by using our current technology, but more advanced technology is still needed and has to be learned for the degree of accuracy in two-dimensional shielding calculation. (Author) 4. GFR Sub-Assembly Shielding Design Studies Energy Technology Data Exchange (ETDEWEB) J. R. Parry 2006-01-01 This report presents the methodology and results for a preliminary study for Gas-Cooled Fast Reactor (GFR) subassembly fast neutron shielding configurations. The purpose of the shielding in the subassembly is to protect reactor components from fast (E>0.1 MeV) neutrons. The subassembly is modeled in MCNP version 5 release 1.30. 
Parametric studies were performed varying the thickness of the shielding and calculating the fast neutron flux at the vessel head and the core grid plate. This data was used to determine the minimum thickness needed to protect the vessel head and the core grid plate. These thicknesses were used to analyze different shielding configurations incorporating coolant passages and also to estimate the neutron and photon energy deposition in the shielding material. 5. PC based temporary shielding administrative procedure (TSAP) International Nuclear Information System (INIS) Olsen, D.E.; Pederson, G.E.; Hamby, P.N. 1995-01-01 A completely new Administrative Procedure for temporary shielding was developed for use at Commonwealth Edison's six nuclear stations. This procedure promotes the use of shielding, and addresses industry requirements for the use and control of temporary shielding. The importance of an effective procedure has increased since more temporary shielding is being used as ALARA goals become more ambitious. To help implement the administrative procedure, a personal computer software program was written to incorporate the procedural requirements. This software incorporates the useability of a Windows graphical user interface with extensive help and database features. This combination of a comprehensive administrative procedure and user friendly software promotes the effective use and management of temporary shielding while ensuring that industry requirements are met 6. Practical radiation shielding for biomedical research International Nuclear Information System (INIS) Klein, R.C.; Reginatto, M.; Party, E.; Gershey, E.L. 1990-01-01 This paper reports on calculations which exist for estimating shielding required for radioactivity; however, they are often not applicable for the radionuclides and activities common in biomedical research. A variety of commercially available Lucite shields are being marketed to the biomedical community. Their advertisements may lead laboratory workers to expect better radiation protection than these shields can provide or to assume erroneously that very weak beta emitters require extensive shielding. The authors have conducted a series of shielding experiments designed to simulate exposures from the amounts of 32 P, 51 Cr and 125 I typically used in biomedical laboratories. For most routine work, ≥0.64 cm of Lucite covered with various thicknesses of lead will reduce whole-body occupational exposure rates of < 1mR/hr at the point of contact 7. Radiation Shielding Systems Using Nanotechnology Science.gov (United States) Chen, Bin (Inventor); McKay, Christoper P. (Inventor) 2011-01-01 A system for shielding personnel and/or equipment from radiation particles. In one embodiment, a first substrate is connected to a first array or perpendicularly oriented metal-like fingers, and a second, electrically conducting substrate has an array of carbon nanostructure (CNS) fingers, coated with an electro-active polymer extending toward, but spaced apart from, the first substrate fingers. An electric current and electric charge discharge and dissipation system, connected to the second substrate, receives a current and/or voltage pulse initially generated when the first substrate receives incident radiation. In another embodiment, an array of CNSs is immersed in a first layer of hydrogen-rich polymers and in a second layer of metal-like material. 
In another embodiment, a one- or two-dimensional assembly of fibers containing CNSs embedded in a metal-like matrix serves as a radiation-protective fabric or body covering. 8. Neutron shielding heat insulation material International Nuclear Information System (INIS) Aoki, Susumu; Asaumi, Hiroshi; Take, Shigeo; Miyakoshi, Jun-ichi; Takemoto, Hiroshi. 1979-01-01 Purpose: To improve decceleration and absorption of neutrons by incorporating neutron moderators and neutron absorbers in asbestos to thereby increase hydrogen concentration. Constitution: A mixture consisting of crysotile asbestos, surface active agent and water is well stirred and compounded to open the crysotile asbestos filaments and prepare a high viscosity slurry. After adding hydroxides such as magnesium hydroxide, hydrated salts such as magnesium borate hydrate or water containing minerals such as alumina cement hydrate, or boron compound to the slurry, the slurry is charged in a predetermined die, and dried and compressed to prepare shielding heat insulation products. The crysotile asbestos has 18 - 15 wt.% of water of crystallinity in the structure and contains a considerably high hydrogen concentration that acts as neutron moderators. (Kawakami, Y.) 9. The use of nipple shields: A review Directory of Open Access Journals (Sweden) Selina Chow 2016-11-01 Full Text Available A nipple shield is a breastfeeding aid with a nipple-shaped shield that is positioned over the nipple and areola prior to nursing. Nipple shields are usually recommended to mothers with flat nipples or in cases in which there is a failure of the baby to effectively latch onto the breast within the first two days postpartum. The use of nipple shields is a controversial topic in the field of lactation. Its use has been an issue in the clinical literature since some older studies discovered reduced breast milk transfer when using nipple shields, while more recent studies reported successful breastfeeding outcomes. The purpose of this review was to examine the evidence and outcomes with nipple shield use. Methods: A literature search was conducted in Ovid MEDLINE, OLDMEDLINE, EMBASE Classic, EMBASE, Cochrane Central Register of Controlled Trials and CINAHL. The primary endpoint was any breastfeeding outcome following nipple shield use. Secondary endpoints included the reasons for nipple shield use and the average/median length of use. For the analysis, we examined the effect of nipple shield use on physiological responses, premature infants, mothers’ experiences, and health professionals’ experiences. Results: The literature search yielded 261 articles, 14 of which were included in this review. Of these 14 articles, three reported on physiological responses, two reported on premature infants, eight reported on mothers’ experiences, and one reported on health professionals’ experiences. Conclusion: Through examining the use of nipple shields, further insight is provided on the advantages and disadvantages of this practice, thus allowing clinicians and researchers to address improvements on areas that will benefit mothers and infants the most. 10. Diffusion Tensor Tractography Reveals Disrupted Structural Connectivity during Brain Aging Science.gov (United States) Lin, Lan; Tian, Miao; Wang, Qi; Wu, Shuicai 2017-10-01 Brain aging is one of the most crucial biological processes that entail many physical, biological, chemical, and psychological changes, and also a major risk factor for most common neurodegenerative diseases. 
To improve the quality of life for the elderly, it is important to understand how the brain is changed during the normal aging process. We compared diffusion tensor imaging (DTI)-based brain networks in a cohort of 75 healthy old subjects by using graph theory metrics to describe the anatomical networks and connectivity patterns, and network-based statistic (NBS) analysis was used to identify pairs of regions with altered structural connectivity. The NBS analysis revealed a significant network comprising nine distinct fiber bundles linking 10 different brain regions showed altered white matter structures in young-old group compare with middle-aged group (p < .05, family-wise error-corrected). Our results might guide future studies and help to gain a better understanding of brain aging. 11. Massless and massive quanta resulting from a mediumlike metric tensor International Nuclear Information System (INIS) Soln, J. 1985-01-01 A simple model of the ''primordial'' scalar field theory is presented in which the metric tensor is a generalization of the metric tensor from electrodynamics in a medium. The radiation signal corresponding to the scalar field propagates with a velocity that is generally less than c. This signal can be associated simultaneously with imaginary and real effective (momentum-dependent) masses. The requirement that the imaginary effective mass vanishes, which we take to be the prerequisite for the vacuumlike signal propagation, leads to the ''spontaneous'' splitting of the metric tensor into two distinct metric tensors: one metric tensor gives rise to masslesslike radiation and the other to a massive particle. (author) 12. CHESS upgrade 1995: Improved radiation shielding International Nuclear Information System (INIS) Finkelstein, K. 1996-01-01 The Cornell Electron Storage Ring (CESR) stores electrons and positrons at 5.3 GeV for the production and study of B mesons, and, in addition, it supplies synchrotron radiation for CHESS. The machine has been upgraded for 300 mA operation. It is planned that each beam will be injected in about 5 minutes and that particle beam lifetimes will be several hours. In a cooperative effort, staff members at CHESS and LNS have studied sources in CESR that produce radiation in the user areas. The group has been responsible for the development and realization of new tunnel shielding walls that provide a level of radiation protection from 20 to approx-gt 100 times what was previously available. Our experience has indicated that a major contribution to the environmental radiation is not from photons, but results from neutrons that are generated by particle beam loss in the ring. Neutrons are stopped by inelastic scattering and absorption in thick materials such as heavy concrete. The design for the upgraded walls, the development of a mix for our heavy concrete, and all the concrete casting was done by CHESS and LNS personnel. The concrete incorporates a new material for this application, one that has yielded a significant cost saving in the production of over 200 tons of new wall sections. The material is an artificially enriched iron oxide pellet manufactured in vast quantities from hematite ore for the steel-making industry. Its material and chemical properties (iron and impurity content, strength, size and uniformity) make it an excellent substitute for high grade Brazilian ore, which is commonly used as heavy aggregate in radiation shielding. Its cost is about a third that of the natural ore. 
The concrete has excellent workability, a 28 day compressive strength exceeding 6000 psi and a density of 220 lbs/cu.ft (3.5 gr/cc). The density is limited by an interesting property of the pellets that is motivated by efficiency in the steel-making application. (Abstract Truncated) 13. Improved Metal-Polymeric Laminate Radiation Shielding, Phase I Data.gov (United States) National Aeronautics and Space Administration — In this proposed Phase I program, a multifunctional lightweight radiation shield composite will be developed and fabricated. This structural radiation shielding will... 14. Foam-Reinforced Polymer Matrix Composite Radiation Shields Project Data.gov (United States) National Aeronautics and Space Administration — New and innovative lightweight radiation shielding materials are needed to protect humans in future manned exploration vehicles. Radiation shielding materials are... 15. Foam-Reinforced Polymer Matrix Composite Radiation Shields, Phase I Data.gov (United States) National Aeronautics and Space Administration — New and innovative lightweight radiation shielding materials are needed to protect humans in future manned exploration vehicles. Radiation shielding materials are... 16. Feature Surfaces in Symmetric Tensor Fields Based on Eigenvalue Manifold. Science.gov (United States) Palacios, Jonathan; Yeh, Harry; Wang, Wenping; Zhang, Yue; Laramee, Robert S; Sharma, Ritesh; Schultz, Thomas; Zhang, Eugene 2016-03-01 Three-dimensional symmetric tensor fields have a wide range of applications in solid and fluid mechanics. Recent advances in the (topological) analysis of 3D symmetric tensor fields focus on degenerate tensors which form curves. In this paper, we introduce a number of feature surfaces, such as neutral surfaces and traceless surfaces, into tensor field analysis, based on the notion of eigenvalue manifold. Neutral surfaces are the boundary between linear tensors and planar tensors, and the traceless surfaces are the boundary between tensors of positive traces and those of negative traces. Degenerate curves, neutral surfaces, and traceless surfaces together form a partition of the eigenvalue manifold, which provides a more complete tensor field analysis than degenerate curves alone. We also extract and visualize the isosurfaces of tensor modes, tensor isotropy, and tensor magnitude, which we have found useful for domain applications in fluid and solid mechanics. Extracting neutral and traceless surfaces using the Marching Tetrahedra method can cause the loss of geometric and topological details, which can lead to false physical interpretation. To robustly extract neutral surfaces and traceless surfaces, we develop a polynomial description of them which enables us to borrow techniques from algebraic surface extraction, a topic well-researched by the computer-aided design (CAD) community as well as the algebraic geometry community. In addition, we adapt the surface extraction technique, called A-patches, to improve the speed of finding degenerate curves. Finally, we apply our analysis to data from solid and fluid mechanics as well as scalar field analysis. 17. Glyph-Based Comparative Visualization for Diffusion Tensor Fields. Science.gov (United States) Zhang, Changgong; Schultz, Thomas; Lawonn, Kai; Eisemann, Elmar; Vilanova, Anna 2016-01-01 Diffusion Tensor Imaging (DTI) is a magnetic resonance imaging modality that enables the in-vivo reconstruction and visualization of fibrous structures. 
To inspect the local and individual diffusion tensors, glyph-based visualizations are commonly used since they are able to effectively convey full aspects of the diffusion tensor. For several applications it is necessary to compare tensor fields, e.g., to study the effects of acquisition parameters, or to investigate the influence of pathologies on white matter structures. This comparison is commonly done by extracting scalar information out of the tensor fields and then comparing these scalar fields, which leads to a loss of information. If the glyph representation is kept, simple juxtaposition or superposition can be used. However, neither facilitates the identification and interpretation of the differences between the tensor fields. Inspired by the checkerboard style visualization and the superquadric tensor glyph, we design a new glyph to locally visualize differences between two diffusion tensors by combining juxtaposition and explicit encoding. Because tensor scale, anisotropy type, and orientation are related to anatomical information relevant for DTI applications, we focus on visualizing tensor differences in these three aspects. As demonstrated in a user study, our new glyph design allows users to efficiently and effectively identify the tensor differences. We also apply our new glyphs to investigate the differences between DTI datasets of the human brain in two different contexts using different b-values, and to compare datasets from a healthy and HIV-infected subject. 18. Tensoral for post-processing users and simulation authors Science.gov (United States) Dresselhaus, Eliot 1993-01-01 The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide. 19. Energy-momentum tensor of the electromagnetic field International Nuclear Information System (INIS) Horndeski, G.W.; Wainwright, J. 
1977-01-01 In this paper we investigate the energy-momentum tensor of the most general second-order vector-tensor theory of gravitation and electromagnetism which has field equations which are (i) derivable from a variational principle, (ii) consistent with the notion of conservation of charge, and (iii) compatible with Maxwell's equations in a flat space. This energy-momentum tensor turns out to be quadratic in the first partial derivatives of the electromagnetic field tensor and depends upon the curvature tensor. The asymptotic behavior of this energy-momentum tensor is examined for solutions to Maxwell's equations in Minkowski space, and it is demonstrated that this energy-momentum tensor predicts regions of negative energy density in the vicinity of point sources 20. Quantum mechanics of Yano tensors: Dirac equation in curved spacetime International Nuclear Information System (INIS) Cariglia, Marco 2004-01-01 In spacetimes admitting Yano tensors, the classical theory of the spinning particle possesses enhanced worldline supersymmetry. Quantum mechanically generators of extra supersymmetries correspond to operators that in the classical limit commute with the Dirac operator and generate conserved quantities. We show that the result is preserved in the full quantum theory, that is, Yano symmetries are not anomalous. This was known for Yano tensors of rank 2, but our main result is to show that it extends to Yano tensors of arbitrary rank. We also describe the conformal Yano equation and show that is invariant under Hodge duality. There is a natural relationship between Yano tensors and supergravity theories. As the simplest possible example, we show that when the spacetime admits a Killing spinor then this generates Yano and conformal Yano tensors. As an application, we construct Yano tensors on maximally symmetric spaces: they are spanned by tensor products of Killing vectors 1. Algebraic and computational aspects of real tensor ranks CERN Document Server Sakata, Toshio; Miyazaki, Mitsuhiro 2016-01-01 This book provides comprehensive summaries of theoretical (algebraic) and computational aspects of tensor ranks, maximal ranks, and typical ranks, over the real number field. Although tensor ranks have been often argued in the complex number field, it should be emphasized that this book treats real tensor ranks, which have direct applications in statistics. The book provides several interesting ideas, including determinant polynomials, determinantal ideals, absolutely nonsingular tensors, absolutely full column rank tensors, and their connection to bilinear maps and Hurwitz-Radon numbers. In addition to reviews of methods to determine real tensor ranks in details, global theories such as the Jacobian method are also reviewed in details. The book includes as well an accessible and comprehensive introduction of mathematical backgrounds, with basics of positive polynomials and calculations by using the Groebner basis. Furthermore, this book provides insights into numerical methods of finding tensor ranks through... 2. Development of neutron shielding material for cask International Nuclear Information System (INIS) Najima, K.; Ohta, H.; Ishihara, N.; Matsuoka, T.; Kuri, S.; Ohsono, K.; Hode, S. 2001-01-01 Since 1980's Mitsubishi Heavy Industries, Ltd (MHI) has established transport and storage cask design 'MSF series' which makes higher payload and reliability for long term storage. MSF series transport and storage cask uses new-developed neutron shielding material. 
This neutron shielding material has been developed for improving durability under high condition for long term. Since epoxy resin contains a lot of hydrogen and is comparatively resistant to heat, many casks employ epoxy base neutron shielding material. However, if the epoxy base neutron shielding material is used under high temperature condition for a long time, the material deteriorates and the moisture contained in it is released. The loss of moisture is in the range of several percents under more than 150 C. For this reason, our purpose was to develop a high durability epoxy base neutron shielding material which has the same self-fire-extinction property, high hydrogen content and so on as conventional. According to the long-time heating test, the weight loss of this new neutron shielding material after 5000 hours heating has been lower than 0.04% at 150 C and 0.35% at 170 C. A thermal test was also performed: a specimen of neutron shielding material covered with stainless steel was inserted in a furnace under condition of 800 C temperature for 30 minutes then was left to cool down in ambient conditions. The external view of the test piece shows that only a thin layer was carbonized 3. Thermal testing of solid neutron shielding materials International Nuclear Information System (INIS) Boonstra, R.H. 1992-09-01 Two legal-weight truck casks the GA-4 and GA-9, will carry four PWR and nine BWR spent fuel assemblies, respectively. Each cask has a solid neutron shielding material separating the steel body and the outer steel skin. In the thermal accident specified by NRC regulations in 10CFR Part 71, the cask is subjected to an 800 degree C environment for 30 minutes. The neutron shield need not perform any shielding function during or after the thermal accident, but its behavior must not compromise the ability of the cask to contain the radioactive contents. In May-June 1989 the first series of full-scale thermal tests was performed on three shielding materials: Bisco Products NS-4-FR, and Reactor Experiments RX-201 and RX-207. The tests are described in Thermal Testing of Solid Neutron Shielding Materials, GA-AL 9897, R. H. Boonstra, General Atomics (1990), and demonstrated the acceptability of these materials in a thermal accident. Subsequent design changes to the cask rendered these materials unattractive in terms of weight or adequate service temperature margin. For the second test series, a material specification was developed for a polypropylene based neutron shield with a softening point of at least 280 degree F. The neutron shield materials tested were boronated (0.8--4.5%) polymers (polypropylene, HDPE, NS-4). The Envirotech and Bisco materials are not polypropylene, but were tested as potential backup materials in the event that a satisfactory polypropylene could not be found 4. Shielding concerns at a spallation source International Nuclear Information System (INIS) Russell, G.J.; Robinson, H.; Legate, G.L.; Woods, R. 1989-01-01 Neutrons produced by 800-MeV proton reactions at the Los Alamos Neutron Scattering Center spallation neutron source cause a variety of challenging shielding problems. We identify several characteristics distinctly different from reactor shielding and compute the dose attenuation through an infinite slab/shield composed of iron (100 cm) and borated polyethylene (15 cm). 
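As context for the slab attenuation mentioned above, a deliberately crude sketch follows: it treats each layer as a simple exponential attenuator. The layer thicknesses match the text, but the attenuation lengths, density-independent treatment, and function names are our own placeholder assumptions; the actual study used transport calculations, not this kind of estimate.

```python
import math

# Crude exponential-removal estimate of dose attenuation through a layered slab.
# Attenuation lengths are placeholder values for illustration only; a real
# assessment would come from a transport calculation (e.g. Monte Carlo).
layers = [
    # (name, thickness in cm, assumed attenuation length in cm)
    ("iron",                 100.0, 16.0),
    ("borated polyethylene",  15.0,  6.0),
]

def transmitted_fraction(layers):
    """Product of exp(-t / lambda) over all layers."""
    total = 1.0
    for _name, thickness_cm, atten_length_cm in layers:
        total *= math.exp(-thickness_cm / atten_length_cm)
    return total

print(f"transmitted dose fraction ~ {transmitted_fraction(layers):.2e}")
```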
Our calculations show that (for an incident spallation spectrum characteristic of neutrons leaking from a tungsten target at 90/degree/) the dose through the shield is a complex mixture of neutrons and gamma rays. High-energy (> 20 MeV) neutron production from the target is ≅5% of the total, yet causes ≅68% of the dose at the shield surface. Primary low-energy (< 20 MeV) neutrons from the target contribute negligibly (≅0.5%) to the dose at the shield surface yet cause gamma rays, which contribute ≅31% to the total dose at the shield surface. Low-energy neutrons from spallation reactions behave similarly to neutrons with a fission spectrum distribution. 6 refs., 8 figs., 1 tab 5. Integrated Solar Concentrator and Shielded Radiator Science.gov (United States) Clark, David Larry 2010-01-01 A shielded radiator is integrated within a solar concentrator for applications that require protection from high ambient temperatures with little convective heat transfer. This innovation uses a reflective surface to deflect ambient thermal radiation, shielding the radiator. The interior of the shield is also reflective to provide a view factor to deep space. A key feature of the shield is the parabolic shape that focuses incoming solar radiation to a line above the radiator along the length of the trough. This keeps the solar energy from adding to the radiator load. By placing solar cells along this focal line, the concentration of solar energy reduces the number and mass of required cells. By shielding the radiator, the effective reject temperature is much lower, allowing lower radiator temperatures. This is particularly important for lower-temperature processes, like habitat heat rejection and fuel cell operations where a high radiator temperature is not feasible. Adding the solar cells in the focal line uses the concentrating effect of the shield to advantage to accomplish two processes with a single device. This shield can be a deployable, lightweight Mylar structure for compact transport. 6. PWR upper/lower internals shield Energy Technology Data Exchange (ETDEWEB) Homyk, W.A. [Indian Point Station, Buchanan, NY (United States) 1995-03-01 During refueling of a nuclear power plant, the reactor upper internals must be removed from the reactor vessel to permit transfer of the fuel. The upper internals are stored in the flooded reactor cavity. Refueling personnel working in containment at a number of nuclear stations typically receive radiation exposure from a portion of the highly contaminated upper intervals package which extends above the normal water level of the refueling pool. This same issue exists with reactor lower internals withdrawn for inservice inspection activities. One solution to this problem is to provide adequate shielding of the unimmersed portion. The use of lead sheets or blankets for shielding of the protruding components would be time consuming and require more effort for installation since the shielding mass would need to be transported to a support structure over the refueling pool. A preferable approach is to use the existing shielding mass of the refueling pool water. A method of shielding was devised which would use a vacuum pump to draw refueling pool water into an inverted canister suspended over the upper internals to provide shielding from the normally exposed components. During the Spring 1993 refueling of Indian Point 2 (IP2), a prototype shield device was demonstrated. 
This shield consists of a cylindrical tank, open at the bottom, that is suspended over the refueling pool with I-beams. The lower lip of the tank is two feet below normal pool level. After installation, the air is pumped out of the tank, drawing pool water up inside it and increasing the width of the natural shielding provided by the existing pool water. This paper describes the design, development, testing and demonstration of the prototype device. 7. Seismic proof test of shielding block walls International Nuclear Information System (INIS) Ohte, Yukio; Watanabe, Takahide; Watanabe, Hiroyuki; Maruyama, Kazuhide 1989-01-01 Most of the shielding block walls used for building nuclear facilities are built by a dry process. When a nuclear facility is designed, seismic waves specific to each site are set as input seismic motions and adopted in the design. Therefore, it is necessary to assure the safety of the shielding block walls against earthquakes by performing anti-seismic experiments under the conditions at each site. In order to establish a normal form that can be applied to various seismic conditions in various areas, Shimizu Corp. made actual-size test samples of the shielding block wall and confirmed their safety against earthquakes and the validity of the normalization. (author) 8. Tensor coupling and pseudospin symmetry in nuclei International Nuclear Information System (INIS) Alberto, P.; Castro, A.S. de; Lisboa, R.; Malheiro, M. 2005-01-01 In this work we study the contribution of the isoscalar tensor coupling to the realization of pseudospin symmetry in nuclei. Using realistic values for the tensor coupling strength, we show that this coupling reduces noticeably the pseudospin splittings, especially for single-particle levels near the Fermi surface. By using an energy decomposition of the pseudospin energy splittings, we show that the changes in these splittings come mainly through the changes induced in the lower radial wave function for the low-lying pseudospin partners and through changes in the expectation value of the pseudospin-orbit coupling term for surface partners. This allows us to confirm the conclusion already reached in previous studies, namely that the pseudospin symmetry in nuclei is of a dynamical nature. 9. Tensor modes on the string theory landscape Energy Technology Data Exchange (ETDEWEB) Westphal, Alexander 2012-06-15 We attempt an estimate for the distribution of the tensor mode fraction r over the landscape of vacua in string theory. The dynamics of eternal inflation and quantum tunneling lead to a kind of democracy on the landscape, providing no bias towards large-field or small-field inflation regardless of the class of measure. The tensor mode fraction then follows the number frequency distributions of inflationary mechanisms of string theory over the landscape. We show that an estimate of the relative number frequencies for small-field vs large-field inflation, while unattainable on the whole landscape, may be within reach as a regional answer for warped Calabi-Yau flux compactifications of type IIB string theory. 10. On the SU2 unit tensor International Nuclear Information System (INIS) Kibler, M.; Grenet, G. 1979-07-01 The SU 2 unit tensor operators tsub(k,α) are studied. In the case where the spinor point group G* coincides with U 1 , then tsub(k α) reduces up to a constant to the Wigner-Racah-Schwinger tensor operator tsub(kqα), an operator which produces an angular momentum state. One first investigates those general properties of tsub(kα) which are independent of their realization.
The tsub(kα) in terms of two pairs of boson creation and annihilation operators are realized. This leads to look at the Schwinger calculus relative to one angular momentum of two coupled angular momenta. As a by-product, a procedure is given for producing recursion relationships between SU 2 Wigner coefficients. Finally, some of the properties of the Wigner and Racah operators for an arbitrary compact group and the SU 2 coupling coefficients are studied 11. Proton hyperfine tensors in nitroxide radicals Energy Technology Data Exchange (ETDEWEB) Brustolon, M.; Maniero, A.L.; Segre, U. (Universita di Padova (Italy)); Ottaviani, M.F. (Universita di Firenze (Italy)); Romanelli, M. (Universita della Basilicata (Italy)) 1990-08-23 The proton hyperfine tensors of five nitroxide radicals have been obtained by ENDOR spectroscopy in frozen solution. The spectra are interpreted by computing the dipolar hyperfine interaction and simulating the spectra. EPR spectra in solution of the same radicals have been simulated by taking into account the effects of the proton hyperfine tensors. We have been able to reproduce accurately the line broadening effects of the proton hyperfine structures inside each nitrogen hyperfine component and we have determined the correlation times for the rotational motion. In the case of the radical Tempol, our analysis allows discrimination between the effects due to the protons of the axial and equatorial methyl groups. On the basis of experimental evidence we can attribute the larger isotropic hyperfine coupling constant to the axial methyl protons. The possible use of the present results for interpreting the spectra of other nitroxide radicals is discussed. 12. Tensor modes on the string theory landscape International Nuclear Information System (INIS) Westphal, Alexander 2012-06-01 We attempt an estimate for the distribution of the tensor mode fraction r over the landscape of vacua in string theory. The dynamics of eternal inflation and quantum tunneling lead to a kind of democracy on the landscape, providing no bias towards large-field or small-field inflation regardless of the class of measure. The tensor mode fraction then follows the number frequency distributions of inflationary mechanisms of string theory over the landscape. We show that an estimate of the relative number frequencies for small-field vs large-field inflation, while unattainable on the whole landscape, may be within reach as a regional answer for warped Calabi-Yau flux compactifications of type IIB string theory. 13. Sasakian manifolds with purely transversal Bach tensor Science.gov (United States) Ghosh, Amalendu; Sharma, Ramesh 2017-10-01 We show that a (2n + 1)-dimensional Sasakian manifold (M, g) with a purely transversal Bach tensor has constant scalar curvature ≥2 n (2 n +1 ) , equality holding if and only if (M, g) is Einstein. For dimension 3, M is locally isometric to the unit sphere S3. For dimension 5, if in addition (M, g) is complete, then it has positive Ricci curvature and is compact with finite fundamental group π1(M). 14. Anisotropic diffusion tensor applied to temporal mammograms DEFF Research Database (Denmark) Karemore, Gopal; Brandt, Sami; Sporring, Jon 2010-01-01 changes related to specific effects like Hormonal Replacement Therapy (HRT) and aging. Given effect-grouped patient data, we demonstrated how anisotropic diffusion tensor and its coherence features computed in an anatomically oriented breast coordinate system followed by statistical learning... 15. 
Tensor Networks and Quantum Error Correction Science.gov (United States) Ferris, Andrew J.; Poulin, David 2014-07-01 We establish several relations between quantum error correction (QEC) and tensor network (TN) methods of quantum many-body physics. We exhibit correspondences between well-known families of QEC codes and TNs, and demonstrate a formal equivalence between decoding a QEC code and contracting a TN. We build on this equivalence to propose a new family of quantum codes and decoding algorithms that generalize and improve upon quantum polar codes and successive cancellation decoding in a natural way. 16. Numerical CP Decomposition of Some Difficult Tensors Czech Academy of Sciences Publication Activity Database Tichavský, Petr; Phan, A. H.; Cichocki, A. 2017-01-01 Roč. 317, č. 1 (2017), s. 362-370 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA14-13713S Institutional support: RVO:67985556 Keywords : Small matrix multiplication * Canonical polyadic tensor decomposition * Levenberg-Marquardt method Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/tichavsky-0468385.pdf 17. Bayesian approach to magnetotelluric tensor decomposition Czech Academy of Sciences Publication Activity Database Červ, Václav; Pek, Josef; Menvielle, M. 2010-01-01 Roč. 53, č. 2 (2010), s. 21-32 ISSN 1593-5213 R&D Projects: GA AV ČR IAA200120701; GA ČR GA205/04/0746; GA ČR GA205/07/0292 Institutional research plan: CEZ:AV0Z30120515 Keywords : galvanic distortion * telluric distortion * impedance tensor * basic procedure * inversion * noise Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.336, year: 2010 18. Monte Carlo Volcano Seismic Moment Tensors Science.gov (United States) Waite, G. P.; Brill, K. A.; Lanza, F. 2015-12-01 Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogenous velocity and density model is justified by long wavelength of VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. 
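To illustrate the kind of randomization described in the Monte Carlo scheme above, the sketch below draws random moment tensors by rotating a fixed set of principal moments with uniformly distributed rotations. The principal-moment values and function names are our own illustrative assumptions, not taken from the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(rng):
    """Uniformly distributed 3-D rotation from the QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q @ np.diag(np.sign(np.diag(r)))  # fix column signs so the distribution is uniform
    if np.linalg.det(q) < 0:              # enforce a proper rotation (determinant +1)
        q[:, -1] *= -1.0
    return q

def random_moment_tensor(principal_moments, rng):
    """Rotate a fixed set of principal moments into a random orientation."""
    R = random_rotation(rng)
    return R @ np.diag(principal_moments) @ R.T

# Illustrative principal moments (arbitrary units); a real study would sample these too
principal_moments = np.array([1.0, 0.2, -0.8])
samples = [random_moment_tensor(principal_moments, rng) for _ in range(1000)]
print(samples[0])
```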
This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment. 19. FABRIC TENSOR FOR DISCONTINUOUS GEOLOGICAL MATERIALS OpenAIRE 小田, 匡寛 1982-01-01 Geometrical property (fabric) of discontinuity in geological materials is discussed in terms of (1) position and density, (2) shape and dimension and (3) orientation of related discontinuities such as joint, fault and discrete particle. By taking into account these geometrical elements, a unique measure called fabric tensor F_ is definitely introduced to embody the fabric concept without loss of generality.The first invariant of F_ is important as an index measure to evaluate the crack intens... 20. User-transparent Distributed TensorFlow OpenAIRE Vishnu, Abhinav; Manzano, Joseph; Siegel, Charles; Daily, Jeff 2017-01-01 Deep Learning (DL) algorithms have become the {\\em de facto} choice for data analysis. Several DL implementations -- primarily limited to a single compute node -- such as Caffe, TensorFlow, Theano and Torch have become readily available. Distributed DL implementations capable of execution on large scale systems are becoming important to address the computational needs of large data produced by scientific simulations and experiments. Yet, the adoption of distributed DL implementations faces si... 1. Tensor Fusion Network for Multimodal Sentiment Analysis OpenAIRE Zadeh, Amir; Chen, Minghai; Poria, Soujanya; Cambria, Erik; Morency, Louis-Philippe 2017-01-01 Multimodal sentiment analysis is an increasingly popular research area, which extends the conventional language-based definition of sentiment analysis to a multimodal setup where other relevant modalities accompany language. In this paper, we pose the problem of multimodal sentiment analysis as modeling intra-modality and inter-modality dynamics. We introduce a novel model, termed Tensor Fusion Network, which learns both such dynamics end-to-end. The proposed approach is tailored for the vola... 2. Probing white-matter microstructure with higher-order diffusion tensors and susceptibility tensor MRI Science.gov (United States) Liu, Chunlei; Murphy, Nicole E.; Li, Wei 2012-01-01 Diffusion MRI has become an invaluable tool for studying white matter microstructure and brain connectivity. The emergence of quantitative susceptibility mapping and susceptibility tensor imaging (STI) has provided another unique tool for assessing the structure of white matter. In the highly ordered white matter structure, diffusion MRI measures hindered water mobility induced by various tissue and cell membranes, while susceptibility sensitizes to the molecular composition and axonal arrangement. Integrating these two methods may produce new insights into the complex physiology of white matter. In this study, we investigated the relationship between diffusion and magnetic susceptibility in the white matter. Experiments were conducted on phantoms and human brains in vivo. Diffusion properties were quantified with the diffusion tensor model and also with the higher order tensor model based on the cumulant expansion. Frequency shift and susceptibility tensor were measured with quantitative susceptibility mapping and susceptibility tensor imaging. These diffusion and susceptibility quantities were compared and correlated in regions of single fiber bundles and regions of multiple fiber orientations. Relationships were established with similarities and differences identified. 
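For reference, the scalar measures most often extracted from a diffusion tensor in studies like the one above are the mean diffusivity and fractional anisotropy. The minimal sketch below uses the standard formulas with an illustrative tensor; the values and function name are assumptions, not data from the cited work.

```python
import numpy as np

def fa_and_md(D):
    """Fractional anisotropy and mean diffusivity of a 3x3 symmetric diffusion tensor."""
    lam = np.linalg.eigvalsh(D)            # eigenvalues of the tensor
    md = lam.mean()                        # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md

# Illustrative single-fiber tensor (mm^2/s)
D = np.diag([1.6e-3, 0.35e-3, 0.35e-3])
fa, md = fa_and_md(D)
print(f"FA = {fa:.2f}, MD = {md:.2e} mm^2/s")
```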
It is believed that diffusion MRI and susceptibility MRI provide complementary information of the microstructure of white matter. Together, they allow a more complete assessment of healthy and diseased brains. PMID:23507987 3. Particle Tracing Modeling with SHIELDS Science.gov (United States) Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K. 2017-12-01 The near-Earth inner magnetosphere, where most of the nation's civilian and military space assets operate, is an extremely hazardous region of the space environment which poses major risks to our space infrastructure. Failure of satellite subsystems or even total failure of a spacecraft can arise for a variety of reasons, some of which are related to the space environment: space weather events like single-event-upsets and deep dielectric charging caused by high energy particles, or surface charging caused by low to medium energy particles; other space hazards are collisions with natural or man-made space debris, or intentional hostile acts. A recently funded project through the Los Alamos National Laboratory (LANL) Directed Research and Development (LDRD) program aims at developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms, the SHIELDS framework. The project goals are to understand the dynamics of the surface charging environment (SCE), the hot (keV) electrons on both macro- and microscale. These challenging problems are addressed using a team of world-class experts and state-of-the-art physics-based models and computational facilities. We present first results of a coupled BATS-R-US/RAM-SCB/Particle Tracing Model to evaluate particle fluxes in the inner magnetosphere. We demonstrate that this setup is capable of capturing the earthward particle acceleration process resulting from dipolarization events in the tail region of the magnetosphere. 4. REPOSITORY RADIATION SHIELDING DESIGN GUIDE International Nuclear Information System (INIS) M. Haas; E.M. Fortsch 1997-01-01 The scope of this document includes radiation safety considerations used in the design of facilities for the Yucca Mountain Site Characterization Project (YMP). The purpose of the Repository Radiation Shielding Design Guide is to document the approach used in the radiological design of the Mined Geologic Disposal System (MGDS) surface and subsurface facilities for the protection of workers, the public, and the environment. This document is intended to ensure that a common methodology is used by all groups that may be involved with Radiological Design. This document will also assist in ensuring the long term survivability of the information basis used for radiological safety design and will assist in satisfying the documentation requirements of the licensing body, the Nuclear Regulatory Commission (NRC). This design guide provides referenceable information that is current and maintained under the YMP Quality Assurance (QA) Program. Furthermore, this approach is consistent with maintaining continuity in spite of a changing design environment. This approach also serves to ensure common inter-disciplinary interpretation and application of data 5. REPOSITORY RADIATION SHIELDING DESIGN GUIDE Energy Technology Data Exchange (ETDEWEB) M. Haas; E.M. Fortsch 1997-09-12 The scope of this document includes radiation safety considerations used in the design of facilities for the Yucca Mountain Site Characterization Project (YMP). 
The purpose of the Repository Radiation Shielding Design Guide is to document the approach used in the radiological design of the Mined Geologic Disposal System (MGDS) surface and subsurface facilities for the protection of workers, the public, and the environment. This document is intended to ensure that a common methodology is used by all groups that may be involved with Radiological Design. This document will also assist in ensuring the long term survivability of the information basis used for radiological safety design and will assist in satisfying the documentation requirements of the licensing body, the Nuclear Regulatory Commission (NRC). This design guide provides referenceable information that is current and maintained under the YMP Quality Assurance (QA) Program. Furthermore, this approach is consistent with maintaining continuity in spite of a changing design environment. This approach also serves to ensure common inter-disciplinary interpretation and application of data. 6. Shielding Calculations for PUSPATI TRIGA Reactor (RTP) Fuel Transfer Cask with Micro shield International Nuclear Information System (INIS) Nurhayati Ramli; Ahmad Nabil Abdul Rahim; Ariff Shah Ismail 2011-01-01 The shielding calculations for the RTP fuel transfer cask were performed using the computer code Micro shield 7.02. Micro shield is a computer code designed to provide a model to be used for shielding calculations. The results of the calculations can be obtained quickly, but the code is not suitable for complex geometries with a shielding composed of more than one material. Nevertheless, the program is sufficient for As Low As Reasonably Achievable (ALARA) optimization calculations. In this calculation, a geometry based on the conceptual design of the RTP fuel transfer cask was modeled. The shielding materials used in the calculations were lead (Pb) and stainless steel 304 (SS304). The results obtained from these calculations are discussed in this paper. (author) 7. Direct tensor rendering using a bidirectional reflectance model Science.gov (United States) Nagasawa, Mikio; Suzuki, Yoshio 2000-02-01 For multivariable volumetric tensor field visualization, an efficient direct rendering technique that does not use geometrical primitives is proposed. The bidirectional reflectance shading model is used to map the anisotropy stress shear tensor components in direct volume rendering. We model a sub-pixel-sized microfacet at each tensor sampling point. The nine components of the 3D tensor field are mapped onto grid deformation, opacity mapping, color specification, and the normal directions of these microfacets. The ray integration is executed through this irregular distribution of infinitesimal microfacets. This direct tensor rendering was applied to at-a-glance tensor visualization of an earthquake simulation. It realized a view of deformed structure, stress distribution, local shear discontinuity and the shock front, integrated in a single image. The characteristic P- and S-wave modes are distinguished in the rendered earthquake simulations. Compared with the glyph representation of tensor features, direct tensor rendering gives a general and total image of the tensor field even for low resolution pixel planes, because the sampling object is assumed to be infinitesimally small. The computational cost of direct tensor rendering is not much higher than that of scalar volume rendering because the modifications are only in the shading calculation, not in the ray integration. 8. Tensor interaction in heavy-ion scattering. Pt.
1 International Nuclear Information System (INIS) Nishioka, H.; Johnson, R.C. 1985-01-01 The Heidelberg shape-effect model for heavy-ion tensor interactions is reformulated and generalized using the Hooton-Johnson formulation. The generalized semiclassical model (the turning-point model) predicts that the components of the tensor analysing power anti Tsub(2q) have certain relations with each other for each type of tensor interaction (Tsub(R), Tsub(P) and Tsub(L) types). The predicted relations between the anti Tsub(2q) are very simple and have a direct connection with the properties of the tensor interaction at the turning point. The model predictions are satisfied in quantum-mechanical calculations for 7 Li and 23 Na elastic scattering from 58 Ni in the Fresnel-diffraction energy region. As a consequence of this model, it becomes possible to single out effects from a Tsub(P)- or Tsub(L)-type tensor interaction in polarized heavy-ion scattering. The presence of a Tsub(P)-type tensor interaction is suggested by measured anti T 20 /anti T 22 ratios for 7 Li + 58 Ni scattering. In the turning-point model the three types of tensor operator are not independent, and this is found to be true also in a quantum-mechanical calculation. The model also predicts relations between the components of higher-rank tensor analysing power in the presence of a higher-rank tensor interaction. The rank-3 tensor case is discussed in detail. (orig.) 9. Shielding properties of fibre cement wallboard. Science.gov (United States) Thiele, D L; Godwin, G A; Coakley, K S 1998-09-01 Transmission data for a fibre cement wallboard (villaboard) are determined for use in diagnostic shielding designs. Villaboard is found to be more attenuating than plasterboard, e.g. 9 mm of villaboard is equivalent to 16 mm of plasterboard. 10. Thermal testing of solid neutron shielding materials International Nuclear Information System (INIS) Boonstra, R.H. 1990-03-01 The GA-4 and GA-9 spent fuel shipping casks employ a solid neutron shielding material. During a hypothetical thermal accident, any combustion of the neutron shield must not compromise the ability of the cask to contain the radioactive contents. A two-phase thermal testing program was carried out to assist in selecting satisfactory shielding materials. In the first phase, small-scale screening tests were performed on nine candidate materials using ASTM procedures. From these initial results, three of the nine candidates were chosen for inclusion in the second phase of testing. These materials were Bisco Products NS-4-FR, Reactor Experiments 201-1, and Reactor Experiments 207. In the second phase, each selected material was fabricated into a test article which simulated a full-scale section of the neutron shield from the cask. The test article was heated in an environment prescribed by NRC regulations. Results of this second testing phase showed that all three materials are thermally acceptable. 11. Multifunctional BHL Radiation Shield, Phase I Data.gov (United States) National Aeronautics and Space Administration — Advances in radiation shielding technology remain an important challenge for NASA in order to protect their astronauts, particularly as NASA grows closer to manned... 12. Thermal testing of solid neutron shielding materials International Nuclear Information System (INIS) Boonstra, R.N. 1990-01-01 The GA-4 and GA-9 spent fuel shipping casks employ a solid neutron shielding material.
During a hypothetical thermal accident, any combustion of the neutron shield must not compromise the ability of the cask to contain the radioactive contents. A two-phase thermal testing program was carried out to assist in selecting satisfactory shielding materials. In the first phase, small-scale screening tests were performed on nine candidate materials using ASTM procedures. From these initial results, three of the nine candidates were chosen for inclusion in the second phase of testing. These materials were Bisco Products NS-4-FR, Reactor Experiments 201-1, and Reactor Experiments 207. In the second phase, each selected material was fabricated into a test article which simulated a full-scale section of neutron shield from the cask. The test article was heated in an environment prescribed by NRC regulations. Results of this second testing phase show that all three materials are thermally acceptable 13. Long Duration Space Shelter Shielding, Phase I Data.gov (United States) National Aeronautics and Space Administration — Physical Sciences Inc. (PSI) has developed fiber reinforced ceramic composites for radiation shielding that can be used for external walls in long duration manned... 14. Radiation shielding method for pipes, etc International Nuclear Information System (INIS) Nagao, Tetsuya; Takahashi, Shuichi. 1988-01-01 Purpose: To constitute shielding walls of a dense structure around pipes and enable to reduce the wall thickness thereof upon periodical inspection, etc. for nuclear power plants. Constitution: For those portions of pipes requring shieldings, cylindrical vessels surrounding the portions are disposed and connected to a mercury supply system, a mercury discharge system and a freezing system for solidifying mercury. After charging mercury in a tank by way of a supply hose to the cylindrical vessels, the temperature of the mercury is lowered below the freezing point thereof to solidify the mercury while circulating cooling medium, to thereby form dense cylindrical radioactive-ray shielding walls. The specific gravity of mercury is greater than that of lead and, accordingly, the thickness of the shielding walls can be reduced as compared with the conventional wall thickness of the entire laminates. (Takahashi, M.) 15. Long Duration Space Shelter Shielding, Phase II Data.gov (United States) National Aeronautics and Space Administration — Physical Sciences Inc. (PSI) has developed a ceramic composite material system that is more effective for shielding both GCR and SPE than aluminum. The composite... 16. Shielding design for better plant availability International Nuclear Information System (INIS) Biro, G.G. 1975-01-01 Design methods are described for providing a shield system for nuclear power plants that will facilitate maintenance and inspection, increase overall plant availability, and ensure that man-rem exposures are as low as practicable 17. Radiation shielding structure for concrete structure International Nuclear Information System (INIS) Oya, Hiroshi 1998-01-01 Crack inducing members for inducing cracks in a predetermined manner are buried in a concrete structure. Namely, a crack-inducing member comprises integrally a shielding plate and extended plates situated at the center of a wall and inducing plates vertically disposed to the boundary portion between them with the inducing plates being disposed each in a direction perforating the wall. 
There are disposed integrally a pair of the inducing plate spaced at a predetermined horizontal distance on both sides of the shielding plate so as to form a substantially crank-shaped cross section and extended plates formed in the extending direction of the shielding plate, and the inducing plates are disposed each in a direction perforating the wall. Then, cracks generated when stresses are exerted can be controlled, and generation of cracks passing through the concrete structure can be prevented reliably. The reliability of a radiation shielding effect can be enhanced remarkably. (N.H.) 18. Technical specifications for the bulk shielding reactor International Nuclear Information System (INIS) 1986-05-01 This report provides information concerning the technical specifications for the Bulk Shielding Reactor. Areas covered include: safety limits and limiting safety settings; limiting conditions for operation; surveillance requirements; design features; administrative controls; and monitoring of airborne effluents. 10 refs 19. Passive Shielding for Low Frequency Magnetic Films National Research Council Canada - National Science Library Damaskos, Nickander 1997-01-01 Report developed under SBIR Contract. An approach to low frequency shielding is shown with application to suppression of electromagnetic fields emanating from rail gun barrels and power cable busses. Damaskos, Inc... 20. Discriminating induced seismicity from natural earthquakes using moment tensors and source spectra Science.gov (United States) Zhang, Hongliang; Eaton, David W.; Li, Ge; Liu, Yajing; Harrington, Rebecca M. 2016-02-01 Earthquake source mechanisms and spectra can provide important clues to aid in discriminating between natural and induced events. In this study, we calculate moment tensors and stress drop values for eight recent induced earthquakes in the Western Canadian Sedimentary Basin with magnitudes between 3.2 and 4.4, as well as a nearby magnitude 5.3 event that is interpreted as a natural earthquake. We calculate full moment tensor solutions by performing a waveform-fitting procedure based on a 1-D transversely isotropic velocity model. In addition to a dominant double-couple (DC) signature that is common to nearly all events, most induced events exhibit significant non-double-couple components. A parameter sensitivity analysis indicates that spurious non-DC components are negligible if the signal to noise ratio (SNR) exceeds 10 and if the 1-D model differs from the true velocity structure by less than 5%. Estimated focal depths of induced events are significantly shallower than the typical range of focal depths for intraplate earthquakes in the Canadian Shield. Stress drops of the eight induced events were estimated using a generalized spectral-fitting method and fall within the typical range of 2 to 90 MPa for tectonic earthquakes. Elastic moduli changes due to the brittle damage production at the source, presence of multiple intersecting fractures, dilatant jogs created at the overlapping areas of multiple fractures, or non-planar pre-existing faults may explain the non-DC components for induced events. 1. Preparation of polymers suitable for radiation shielding and studying its properties (polyester composites with heavy metals salts) International Nuclear Information System (INIS) Kharita, M. H.; Al-Ajji, Z.; Yousef, S. 2010-12-01 Four composites were prepared in this work, based on polyester and heavy metals oxides and salts. 
The attenuation properties, as well as mechanical properties were studied, and the chemical stability was evaluated. It has been shown, that these composites can be used in radiation shielding for X-rays successfully, and the exact composition of these composites can be optimized according to the radiation energy to prepare the lightest possible shield. (author) 2. Method to produce a neutron shielding International Nuclear Information System (INIS) Merkle, H.J. 1978-01-01 The neutron shielding for armoured vehicles consists of preshaped plastic plates which are coated on the armoured vehicle walls by conversion of the thermoplast. Suitable plastics or thermoplasts are PVC, PVC acetate, or mixtures of these, into which more than 50% B, B 4 C, or BN is embedded. The colour of the shielding may be determined by the choice of the neutron absorber, e.g. a white colour for BN. The plates are produced using an extruder or calender. (DG) [de 3. Shield structure for a nuclear reactor International Nuclear Information System (INIS) Rouse, C.A.; Simnad, M.T. 1979-01-01 An improved nuclear reactor shield structure is described for use where there are significant amounts of fast neutron flux above an energy level of approximately 70 keV. The shield includes structural supports and neutron moderator and absorber systems. A portion at least of the neutron moderator material is magnesium oxide either alone or in combination with other moderator materials such as graphite and iron. (U.K.) 4. ANALISIS KESELAMATAN TERMOHIDROLIK BULK SHIELDING REAKTOR KARTINI Directory of Open Access Journals (Sweden) Azizul Khakim 2015-10-01 Full Text Available ABSTRAK ANALISIS KESELAMATAN TERMOHIDROLIK BULK SHIELDING REAKTOR KARTINI. Bulk shielding merupakan fasilitas yang terintegrasi dengan reaktor Kartini yang berfungsi sebagai penyimpanan sementara bahan bakar bekas. Fasilitas ini merupakan fasilitas yang termasuk dalam struktur, sistem dan komponen (SSK yang penting bagi keselamatan. Salah satu fungsi keselamatan dari sistem penanganan dan penyimpanan bahan bakar adalah mencegah kecelakaan kekritisan yang tak terkendali dan membatasi naiknya temperatur bahan bakar. Analisis keselamatan paling kurang harus mencakup analisis keselamatan dari sisi neutronik dan termo hidrolik Bulk shielding. Analisis termo hidrolik ditujukan untuk memastikan perpindahan panas dan proses pendinginan bahan bakar bekas berjalan baik dan tidak terjadi akumulasi panas yang mengancam integritas bahan bakar. Code tervalidasi PARET/ANL digunakan untuk analisis pendinginan dengan mode konveksi alam. Hasil perhitungan menunjukkan bahwa mode pendinginan konvekasi alam cukup memadai dalam mendinginkan panas sisa tanpa mengakibatkan kenaikan temperatur bahan bakar yang signifikan. Kata kunci: Bulk shielding, bahan bakar bekas, konveksi alam, PARET. ABSTRACT THERMAL HYDRAULIC SAFETY ANALYSIS OF BULK SHIELDING KARTINI REACTOR. Bulk shielding is an integrated facility to Kartini reactor which is used for temporary spent fuels storage. The facility is one of the structures, systems and components (SSCs important to safety. Among the safety functions of fuel handling and storage are to prevent any uncontrolable criticality accidents and to limit the fuel temperature increase. Safety analyses should, at least, cover neutronic and thermal hydraulic calculations of the bulk shielding. 
Thermal hydraulic analyses were intended to ensure that heat removal and the process of the spent fuels cooling takes place adequately and no heat accumulation that challenges the fuel integrity. Validated code, PARET/ANL was used for analysing the 5. Radiation shielding of the main injector International Nuclear Information System (INIS) Bhat, C.M.; Martin, P.S. 1995-05-01 The radiation shielding in the Fermilab Main Injector (FMI) complex has been carried out by adopting a number of prescribed stringent guidelines established by a previous safety analysis. Determination of the required amount of radiation shielding at various locations of the FMI has been done using Monte Carlo computations. A three dimensional ray tracing code as well as a code based upon empirical observations have been employed in certain cases 6. The Absolute Shielding Constants of Heavy Nuclei: Resolving the Enigma of the (119)Sn Absolute Shielding. Science.gov (United States) Malkin, Elena; Komorovsky, Stanislav; Repisky, Michal; Demissie, Taye B; Ruud, Kenneth 2013-02-07 We demonstrate that the apparent disagreement between experimental determinations and four-component relativistic calculations of the absolute shielding constants of heavy nuclei is due to the breakdown of the commonly assumed relation between the electronic contribution to the nuclear spin-rotation constants and the paramagnetic contribution to the NMR shielding constants. We demonstrate that this breakdown has significant consequences for the absolute shielding constant of (119)Sn, leading to errors of about 1000 ppm. As a consequence, we expect that many absolute shielding constants of heavy nuclei will be in need of revision. 7. Reliability Methods for Shield Design Process Science.gov (United States) Tripathi, R. K.; Wilson, J. W. 2002-01-01 Providing protection against the hazards of space radiation is a major challenge to the exploration and development of space. The great cost of added radiation shielding is a potential limiting factor in deep space operations. In this enabling technology, we have developed methods for optimized shield design over multi-segmented missions involving multiple work and living areas in the transport and duty phase of space missions. The total shield mass over all pieces of equipment and habitats is optimized subject to career dose and dose rate constraints. An important component of this technology is the estimation of two most commonly identified uncertainties in radiation shield design, the shielding properties of materials used and the understanding of the biological response of the astronaut to the radiation leaking through the materials into the living space. The largest uncertainty, of course, is in the biological response to especially high charge and energy (HZE) ions of the galactic cosmic rays. These uncertainties are blended with the optimization design procedure to formulate reliability-based methods for shield design processes. The details of the methods will be discussed. 8. Reactor shielding. Report of a panel International Nuclear Information System (INIS) 1964-01-01 Reactor shielding is necessary that people may work and live in the vicinity of reactors without receiving detrimental biological effects and that the necessary materials and instrumentation for reactor operation may function properly. Much of the necessary theoretical work and experimental measurement has been accomplished in recent years. 
Scientists have developed some very sophisticated methods which have contributed to a more thorough understanding of the problems involved and have produced some very reliable results leading to significant reductions in shield configurations. A panel of experts was convened from 9 to 13 March 1964 in Vienna at the Headquarters of the International Atomic Energy Agency to discuss the present status of reactor shielding. The participants were prominent shielding experts from most of the laboratories engaged in this field throughout the world. They presented status reports describing the past history and plans for further development of reactor shielding in their countries and much valuable discussion took place on some of the most relevant aspects of reactor shielding. All this material is presented in this report, together with abstracts of the supporting papers read to the Panel 9. Shielding requirements for particle bed propulsion systems Science.gov (United States) Gruneisen, S. J. 1991-06-01 Nuclear Thermal Propulsion systems present unique challenges in reliability and safety. Due to the radiation incident upon all components of the propulsion system, shielding must be used to keep nuclear heating in the materials within limits; in addition, electronic control systems must be protected. This report analyzes the nuclear heating due to the radiation and the shielding required to meet the established criteria while also minimizing the shield mass. Heating rates were determined in a 2000 MWt Particle Bed Reactor (PBR) system for all materials in the interstage region, between the reactor vessel and the propellant tank, with special emphasis on meeting the silicon dose criteria. Using a Lithium Hydride/Tungsten shield, the optimum shield design was found to be: 50 cm LiH/2 cm W on the axial reflector in the reactor vessel and 50 cm LiH/2 cm W in a collar extension of the inside shield outside of the pressure vessel. Within these parameters, the radiation doses in all of the components in the interstage and lower tank regions would be within acceptable limits for mission requirements. 10. Innovative technologies for Faraday shield cooling International Nuclear Information System (INIS) Rosenfeld, J.H.; Lindemuth, J.E.; North, M.T.; Goulding, R.H. 1995-01-01 Alternative advanced technologies are being evaluated for use in cooling the Faraday shields used for protection of ion cyclotron range of frequencies (ICR) antennae in Tokamaks. Two approaches currently under evaluation include heat pipe cooling and gas cooling. A Monel/water heat pipe cooled Faraday shield has been successfully demonstrated. Heat pipe cooling offers the advantage of reducing the amount of water discharged into the Tokamak in the event of a tube weld failure. The device was recently tested on an antenna at Oak Ridge National Laboratory. The heat pipe design uses inclined water heat pipes with warm water condensers located outside of the plasma chamber. This approach can passively remove absorbed heat fluxes in excess of 200 W/cm 2 ;. Helium-cooled Faraday shields are also being evaluated. This approach offers the advantage of no liquid discharge into the Tokamak in the event of a tube failure. Innovative internal cooling structures based on porous metal cooling are being used to develop a helium-cooled Faraday shield structure. This approach can dissipate the high heat fluxes typical of Faraday shield applications while minimizing the required helium blower power. 
Preliminary analysis shows that nominal helium flow and pressure drop can sufficiently cool a Faraday shield in typical applications. Plans are in progress to fabricate and test prototype hardware based on this approach 11. Potential of Nanocellulose Composite for Electromagnetic Shielding Directory of Open Access Journals (Sweden) Nabila Yah Nurul Fatihah 2017-01-01 Full Text Available Nowadays, most people rely on the electronic devices for work, communicating with friends and family, school and personal enjoyment. As a result, more new equipment or devices operates in higher frequency were rapidly developed to accommodate the consumers need. However, the demand of using wireless technology and higher frequency in new devices also brings the need to shield the unwanted electromagnetic signals from those devices for both proper operation and human health concerns. This paper highlights the potential of nanocellulose for electromagnetic shielding using the organic environmental nanocellulose composite materials. In addition, the theory of electromagnetic shielding and recent development of green and organic material in electromagnetic shielding application has also been reviewed in this paper. The use of the natural fibers which is nanocelllose instead of traditional reinforcement materials provides several advantages including the natural fibers are renewable, abundant and low cost. Furthermore, added with other advantages such as lightweight and high electromagnetic shielding ability, nanocellulose has a great potential as an alternative material for electromagnetic shielding application. 12. Diffusion tensor imaging tensor shape analysis for assessment of regional white matter differences. Science.gov (United States) Middleton, Dana M; Li, Jonathan Y; Lee, Hui J; Chen, Steven; Dickson, Patricia I; Ellinwood, N Matthew; White, Leonard E; Provenzale, James M 2017-08-01 Purpose The purpose of this study was to investigate a novel tensor shape plot analysis technique of diffusion tensor imaging data as a means to assess microstructural differences in brain tissue. We hypothesized that this technique could distinguish white matter regions with different microstructural compositions. Methods Three normal canines were euthanized at seven weeks old. Their brains were imaged using identical diffusion tensor imaging protocols on a 7T small-animal magnetic resonance imaging system. We examined two white matter regions, the internal capsule and the centrum semiovale, each subdivided into an anterior and posterior region. We placed 100 regions of interest in each of the four brain regions. Eigenvalues for each region of interest triangulated onto tensor shape plots as the weighted average of three shape metrics at the plot's vertices: CS, CL, and CP. Results The distribution of data on the plots for the internal capsule differed markedly from the centrum semiovale data, thus confirming our hypothesis. Furthermore, data for the internal capsule were distributed in a relatively tight cluster, possibly reflecting the compact and parallel nature of its fibers, while data for the centrum semiovale were more widely distributed, consistent with the less compact and often crossing pattern of its fibers. This indicates that the tensor shape plot technique can depict data in similar regions as being alike. Conclusion Tensor shape plots successfully depicted differences in tissue microstructure and reflected the microstructure of individual brain regions. 
This proof of principle study suggests that if our findings are reproduced in larger samples, including abnormal white matter states, the technique may be useful in assessment of white matter diseases. 13. An introduction to tensors and group theory for physicists CERN Document Server Jeevanjee, Nadir 2011-01-01 An Introduction to Tensors and Group Theory for Physicists provides both an intuitive and rigorous approach to tensors and groups and their role in theoretical physics and applied mathematics. A particular aim is to demystify tensors and provide a unified framework for understanding them in the context of classical and quantum physics. Connecting the component formalism prevalent in physics calculations with the abstract but more conceptual formulation found in many mathematical texts, the work will be a welcome addition to the literature on tensors and group theory. Part I of the text begins with linear algebraic foundations, follows with the modern component-free definition of tensors, and concludes with applications to classical and quantum physics through the use of tensor products. Part II introduces abstract groups along with matrix Lie groups and Lie algebras, then intertwines this material with that of Part I by introducing representation theory. Exercises and examples are provided throughout for go... 14. Radiative corrections in a vector-tensor model International Nuclear Information System (INIS) Chishtie, F.; Gagne-Portelance, M.; Hanif, T.; Homayouni, S.; McKeon, D.G.C. 2006-01-01 In a recently proposed model in which a vector non-Abelian gauge field interacts with an antisymmetric tensor field, it has been shown that the tensor field possesses no physical degrees of freedom. This formal demonstration is tested by computing the one-loop contributions of the tensor field to the self-energy of the vector field. It is shown that despite the large number of Feynman diagrams in which the tensor field contributes, the sum of these diagrams vanishes, confirming that it is not physical. Furthermore, if the tensor field were to couple with a spinor field, it is shown at one-loop order that the spinor self-energy is not renormalizable, and hence this coupling must be excluded. In principle though, this tensor field does couple to the gravitational field 15. Numerical Models for the Study of Electromagnetic Shielding Directory of Open Access Journals (Sweden) POPA Monica 2012-10-01 Full Text Available The paper presents 2D and 3D models for the study of electromagnetic shielding of a coil. The magnetic fields are computed for defining the shielding effectiveness. Parametrized numerical studies were performed in order to established the influence of shield thickness and height on magnetic field in certain points located in the exterior of coil – shield setup and on induced power within the shield. 16. Space Shielding Materials for Prometheus Application Energy Technology Data Exchange (ETDEWEB) R. Lewis 2006-01-20 At the time of Prometheus program restructuring, shield material and design screening efforts had progressed to the point where a down-selection from approximately eighty-eight materials to a set of five ''primary'' materials was in process. The primary materials were beryllium (Be), boron carbide (B{sub 4}C), tungsten (W), lithium hydride (LiH), and water (H{sub 2}O). The primary materials were judged to be sufficient to design a Prometheus shield--excluding structural and insulating materials, that had not been studied in detail. 
The foremost preconceptual shield concepts included: (1) a Be/B{sub 4}C/W/LiH shield; (2) a Be/B{sub 4}C/W shield; and (3) a Be/B{sub 4}C/H{sub 2}O shield. Since the shield design and materials studies were still preliminary, alternative materials (e.g., {sup nat}B or {sup 10}B metal) were still being screened, but at a low level of effort. Two competing low mass neutron shielding materials are included in the primary materials due to significant materials uncertainties in both. For LiH, irradiation-induced swelling was the key issue, whereas for H{sub 2}O, containment corrosion without active chemistry control was key. Although detailed design studies are required to accurately estimate the mass of shields based on either hydrogenous material, both are expected to be similar in mass, and lower mass than virtually any alternative. Unlike Be, W, and B{sub 4}C, which are not expected to have restrictive temperature limits, shield temperature limits and design accommodations are likely to be needed for either LiH or H{sub 2}O. The NRPCT focused efforts on understanding swelling of LiH, and observed, from approximately fifty prior irradiation tests, that either casting or thorough out-gassing should reduce swelling. A potential contributor to LiH swelling appears to be LiOH contamination due to exposure to humid air, which can be eliminated by careful processing. To better understand LiH irradiation performance and 17. Comparison of two global digital algorithms for Minkowski tensor estimation DEFF Research Database (Denmark) The geometry of real world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties...... are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions.... 18. Energy-momentum tensor in the quantum field theory International Nuclear Information System (INIS) Azakov, S.I. 1977-01-01 An energy-momentum tensor in the scalar field theory is built. The tensor must satisfy the finiteness requirement of the Green function. The Green functions can always be made finite by renormalizations in the S-matrix by introducing counter terms into the Hamiltonian (or Lagrangian) of the interaction. Such a renormalization leads to divergencies in the Green functions. Elimination of these divergencies requires the introduction of new counter terms, which must be taken into account in the energy-momentum tensor 19. Introduction to Tensor Decompositions and their Applications in Machine Learning OpenAIRE Rabanser, Stephan; Shchur, Oleksandr; Günnemann, Stephan 2017-01-01 Tensors are multidimensional arrays of numerical values and therefore generalize matrices to multiple dimensions. While tensors first emerged in the psychometrics community in the $20^{\text{th}}$ century, they have since then spread to numerous other disciplines, including machine learning. Tensors and their decompositions are especially beneficial in unsupervised learning settings, but are gaining popularity in other sub-disciplines like temporal and multi-relational data analysis, too. The... 20. 
Scattering of charged tensor bosons in gauge and superstring theories CERN Document Server Antoniadis, Ignatios 2010-01-01 We calculate the leading-order scattering amplitude of one vector and two tensor gauge bosons in a recently proposed non-Abelian tensor gauge field theory and open superstring theory. The linear in momenta part of the superstring amplitude has identical Lorentz structure with the gauge theory, while its cubic in momenta part can be identified with an effective Lagrangian which is constructed using generalized non-Abelian field strength tensors. 1. Supergravity tensor calculus in 5D from 6D International Nuclear Information System (INIS) Kugo, Taichiro; Ohashi, Keisuke 2000-01-01 Supergravity tensor calculus in five spacetime dimensions is derived by dimensional reduction from the d=6 superconformal tensor calculus. In particular, we obtain an off-shell hypermultiplet in 5D from the on-shell hypermultiplet in 6D. Our tensor calculus retains the dilatation gauge symmetry, so that it is a trivial gauge fixing to make the Einstein term canonical in a general matter-Yang-Mills-supergravity coupled system. (author) 2. As polar ozone mends, UV shield closer to equator thins Science.gov (United States) Reese, April 2018-02-01 Thirty years after nations banded together to phase out chemicals that destroy stratospheric ozone, the gaping hole in Earth's ultraviolet radiation shield above Antarctica is shrinking. But new findings suggest that at midlatitudes, where most people live, the ozone layer in the lower stratosphere is growing more tenuous—for reasons that scientists are struggling to fathom. In an analysis published this week, researchers found that from 1998 to 2016, ozone in the lower stratosphere ebbed by 2.2 Dobson units—a measure of ozone thickness—even as concentrations in the upper stratosphere rose by about 0.8 Dobson units. The culprit may be ozone-eating chemicals such as dichloromethane that break down within 6 months after escaping into the air. 3. Shielding technology for high energy radiation production facility International Nuclear Information System (INIS) Lee, Byung Chul; Kim, Heon Il 2004-06-01 In order to develop shielding technology for high energy radiation production facility, references and data for high energy neutron shielding are searched and collected, and calculations to obtain the characteristics of neutron shield materials are performed. For the evaluation of characteristics of neutron shield material, it is chosen not only general shield materials such as concrete, polyethylene, etc., but also KAERI developed neutron shields of High Density PolyEthylene (HDPE) mixed with boron compound (B 2 O 3 , H 2 BO 3 , Borax). Neutron attenuation coefficients for these materials are obtained for later use in shielding design. The effect of source shape and source angular distribution on the shielding characteristics for several shield materials is examined. This effect can contribute to create shielding concept in case of no detail source information. It is also evaluated the effect of the arrangement of shield materials using current shield materials. With these results, conceptual shielding design for PET cyclotron is performed. The shielding composite using HDPE and concrete is selected to meet the target dose rate outside the composite, and the dose evaluation is performed by configuring the facility room conceptually. From the result, the proper shield configuration for this PET cyclotron is proposed 4. 
The classification of the Ricci tensor in the general theory of relativity International Nuclear Information System (INIS) Cormack, W.J. 1979-10-01 A comprehensive classification of the Ricci tensor in General Relativity using several techniques is given, and its connection with existing classifications is studied under the headings: canonical forms for the Ricci tensor, invariant 2-spaces in the classification of the Ricci tensor, Riemannian curvature and the classification of the Riemann and Ricci tensors, and spinor classifications of the Ricci tensor. (U.K.) 5. CONSTRUCTION A CORING FROM TENSOR PRODUCT OF BIALGEBRA Directory of Open Access Journals (Sweden) Nikken Prima Puspita 2015-01-01 Full Text Available In this paper a coring constructed from the tensor product of a bialgebra is introduced. An algebra with a compatible coalgebra structure is known as a bialgebra. For any bialgebra B we can obtain the tensor product of B with itself. We define right and left B-actions on this tensor product such that the tensor product of B with itself is a bimodule over B. In this note we expect that the tensor product of B with itself becomes a B-coring with a comultiplication and counit. Keywords: action, algebra, coalgebra, coring. 6. Airborne LIDAR Points Classification Based on Tensor Sparse Representation Science.gov (United States) Li, N.; Pfeifer, N.; Liu, C. 2017-09-01 The common statistical methods for supervised classification usually require a large amount of training data to achieve reasonable results, which is time consuming and inefficient. This paper proposes a tensor sparse representation classification (SRC) method for airborne LiDAR points. The LiDAR points are represented as tensors to keep their attributes in spatial space. Then only a small amount of training data is used for dictionary learning, and the sparse tensor is calculated based on a tensor OMP algorithm. The point label is determined by the minimal reconstruction residuals. Experiments are carried out on real LiDAR points, and the results show that objects can be distinguished successfully by this algorithm. 7. p-Norm SDD tensors and eigenvalue localization Directory of Open Access Journals (Sweden) Qilong Liu 2016-07-01 Full Text Available Abstract We present a new class of nonsingular tensors (p-norm strictly diagonally dominant tensors), which is a subclass of strong $\mathcal{H}$-tensors. As applications of the results, we give a new eigenvalue inclusion set, which is tighter than those provided by Li et al. (Linear Multilinear Algebra 64:727-736, 2016) in some cases. Based on this set, we give a checkable sufficient condition for the positive (semi)definiteness of an even-order symmetric tensor.
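As background for the eigenvalue localization result summarized above, the sketch below (an editorial illustration, not code from the paper) evaluates the classical Geršgorin-type inclusion intervals for a higher-order tensor, of which the p-norm SDD set is a tighter refinement; the example tensor is hypothetical.

```python
import itertools
import numpy as np

def gershgorin_intervals(A):
    """Geršgorin-type inclusion intervals |lambda - a_{i...i}| <= r_i(A)
    for an order-m, dimension-n tensor stored as an m-way NumPy array."""
    n, m = A.shape[0], A.ndim
    intervals = []
    for i in range(n):
        diag = A[(i,) * m]
        # off-diagonal row sum over all index tuples (i, i2, ..., im) != (i, ..., i)
        r_i = sum(abs(A[(i,) + idx])
                  for idx in itertools.product(range(n), repeat=m - 1)
                  if idx != (i,) * (m - 1))
        intervals.append((diag - r_i, diag + r_i))
    return intervals

# toy order-4, dimension-3 symmetric tensor with a strictly dominant diagonal
A = np.full((3, 3, 3, 3), 0.01)
for i in range(3):
    A[i, i, i, i] = 1.0
print(gershgorin_intervals(A))  # every interval stays to the right of zero
```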
8. Joint Tensor Feature Analysis For Visual Object Recognition.
Science.gov (United States)
Wong, Wai Keung; Lai, Zhihui; Xu, Yong; Wen, Jiajun; Ho, Chu Po
2015-11-01
Tensor-based object recognition has been widely studied in the past several years. This paper focuses on the issue of joint feature selection from the tensor data and proposes a novel method called joint tensor feature analysis (JTFA) for tensor feature extraction and recognition. In order to obtain a set of jointly sparse projections for tensor feature extraction, we define the modified within-class tensor scatter value and the modified between-class tensor scatter value for regression. The k-mode optimization technique and the L(2,1)-norm jointly sparse regression are combined together to compute the optimal solutions. The convergent analysis, computational complexity analysis and the essence of the proposed method/model are also presented. It is interesting to show that the proposed method is very similar to singular value decomposition on the scatter matrix but with sparsity constraint on the right singular value matrix or eigen-decomposition on the scatter matrix with sparse manner. Experimental results on some tensor datasets indicate that JTFA outperforms some well-known tensor feature extraction and selection algorithms.
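The L(2,1)-norm joint sparsity that the abstract relies on can be made concrete with a small editorial sketch (not the JTFA algorithm itself); the projection matrix below is hypothetical, and features are ranked by the joint row energy that the norm penalizes.

```python
import numpy as np

def l21_norm(W):
    # ||W||_{2,1} = sum of the Euclidean norms of the rows of W
    return float(np.sum(np.linalg.norm(W, axis=1)))

def jointly_selected_features(W, k):
    # rows with the largest 2-norm carry the jointly selected features
    row_energy = np.linalg.norm(W, axis=1)
    return np.argsort(row_energy)[::-1][:k]

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 10))   # hypothetical projection matrix (features x components)
W[np.arange(5)] *= 10.0          # make five rows dominant
print(l21_norm(W), jointly_selected_features(W, 5))
```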
9. TENSOR MODELING BASED FOR AIRBORNE LiDAR DATA CLASSIFICATION
Directory of Open Access Journals (Sweden)
N. Li
2016-06-01
Full Text Available Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the “raw” data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation could keep the initial spatial structure and ensure the consideration of the neighborhood. Based on a small number of component features, a k-nearest-neighbor classification is applied.
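The pipeline sketched in the abstract (stack feature rasters into a tensor, decompose, classify on a few components) can be illustrated with a truncated HOSVD in NumPy; the raster sizes, ranks and data below are assumed for illustration and are not the authors' implementation.

```python
import numpy as np

def mode_unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    # leading left singular vectors of each mode unfolding, then the projected core
    factors = [np.linalg.svd(mode_unfold(T, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(ranks)]
    core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
    return core, factors

# hypothetical stack of 5 feature rasters (e.g. height, intensity, ...) of size 64 x 64
T = np.random.default_rng(1).random((64, 64, 5))
core, (U_row, U_col, U_feat) = truncated_hosvd(T, ranks=(10, 10, 3))
print(core.shape)   # (10, 10, 3): compressed component features fed to a k-NN classifier
```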
10. 3D Inversion of SQUID Magnetic Tensor Data
DEFF Research Database (Denmark)
Zhdanov, Michael; Cai, Hongzhu; Wilson, Glenn
2012-01-01
Developments in SQUID-based technology have enabled direct measurement of magnetic tensor data for geophysical exploration. For quantitative interpretation, we introduce 3D regularized inversion for magnetic tensor data. For mineral exploration-scale targets, our model studies show that magnetic...... tensor data have significantly improved resolution compared to magnetic vector data for the same model. We present a case study for the 3D regularized inversion of magnetic tensor data acquired over a magnetite skarn at Tallawang, Australia. The results obtained from our 3D regularized inversion agree...
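For readers unfamiliar with regularized inversion, a zeroth-order Tikhonov least-squares solve captures the core idea; the sensitivity matrix and model below are random stand-ins, not magnetic tensor kernels from the paper.

```python
import numpy as np

def tikhonov_inversion(G, d, lam):
    """Minimizer of ||G m - d||^2 + lam * ||m||^2."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

rng = np.random.default_rng(0)
G = rng.normal(size=(200, 50))             # hypothetical sensitivity matrix (data x model cells)
m_true = np.zeros(50)
m_true[20:25] = 1.0                        # a compact "body" in the model
d = G @ m_true + 0.01 * rng.normal(size=200)
m_est = tikhonov_inversion(G, d, lam=1.0)
print(np.round(m_est[18:27], 2))           # recovered values peak over the true body
```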
11. Many-particle quantum hydrodynamics: Exact equations and pressure tensors
Science.gov (United States)
Renziehausen, Klaus; Barth, Ingo
2018-01-01
In the first part of this paper, the many-particle quantum hydrodynamics equations for a system containing many particles of different sorts are derived exactly from the many-particle Schrödinger equation, including the derivation of the many-particle continuity equations, many-particle Ehrenfest equations of motion, and many-particle quantum Cauchy equations for any of the different particle sorts and for the total particle ensemble. The new point in our analysis is that we consider a set of arbitrary particles of different sorts in the system. In the many-particle quantum Cauchy equations, there appears a quantity called the pressure tensor. In the second part of this paper, we analyze two versions of this tensor in depth: the Wyatt pressure tensor and the Kuzmenkov pressure tensor. There are different versions because there is a gauge freedom for the pressure tensor similar to that for potentials. We find that the interpretation of all the quantities contributing to the Wyatt pressure tensor is understandable, but for the Kuzmenkov tensor it is difficult. Furthermore, the transformation from Cartesian coordinates to cylindrical coordinates for the Wyatt tensor can be done in a clear way, but for the Kuzmenkov tensor it is rather cumbersome.
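For orientation, the single-sort continuity and quantum Cauchy (momentum-balance) equations referred to above have the schematic form below; the exact many-particle, multi-sort expressions and the gauge-dependent pressure tensors are what the paper derives.

\[ \partial_t n_s + \nabla\cdot\left(n_s\,\mathbf{u}_s\right) = 0, \qquad m_s n_s\left(\partial_t + \mathbf{u}_s\cdot\nabla\right)\mathbf{u}_s = -\nabla\cdot\mathbf{P}_s + n_s\,\mathbf{F}_s , \]

where $n_s$ and $\mathbf{u}_s$ are the density and velocity field of particle sort $s$, $\mathbf{P}_s$ is the pressure tensor (fixed only up to a gauge freedom, which is why the Wyatt and Kuzmenkov versions differ), and $\mathbf{F}_s$ collects the external and interparticle forces.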
12. On the magnetic polarizability tensor of US coinage
Science.gov (United States)
Davidson, John L.; Abdel-Rehim, Omar A.; Hu, Peipei; Marsh, Liam A.; O’Toole, Michael D.; Peyton, Anthony J.
2018-03-01
The magnetic dipole polarizability tensor of a metallic object gives unique information about the size, shape and electromagnetic properties of the object. In this paper, we present a novel method of coin characterization based on the spectroscopic response of the absolute tensor. The experimental measurements are validated using a combination of tests with a small set of bespoke coin surrogates and simulated data. The method is applied to an uncirculated set of US coins. Measured and simulated spectroscopic tensor responses of the coins show significant differences between different coin denominations. The presented results are encouraging as they strongly demonstrate the ability to characterize coins using an absolute tensor approach.
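As background that is standard in the magnetic induction literature rather than specific to this paper, the measured response is governed by the induced dipole moment, which the rank-2 complex polarizability tensor maps from the exciting field:

\[ \mathbf{m}(\omega) = \mathcal{M}(\omega)\,\mathbf{H}_0, \qquad m_i = \mathcal{M}_{ij}(\omega)\,H_{0j}, \]

where $\mathcal{M}(\omega)$ is a complex, frequency-dependent 3x3 tensor (commonly taken to be symmetric) whose real and imaginary spectra encode the size, shape, conductivity and permeability of the object, which is what allows coins of different denominations to be told apart.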
13. A local potential for the Weyl tensor in all dimensions
International Nuclear Information System (INIS)
Edgar, S Brian; Senovilla, Jose M M
2004-01-01
In all dimensions n ≥ 4 and arbitrary signature, we demonstrate the existence of a new local potential, a double (2, 3)-form $P_{ab\,cde}$, for the Weyl curvature tensor $C_{abcd}$, and more generally for all tensors $W_{abcd}$ with the symmetry properties of the Weyl tensor. The classical four-dimensional Lanczos potential for a Weyl tensor, a double (2, 1)-form $H_{ab\,c}$, is proven to be a particular case of the new potential: its double dual. (letter to the editor)
14. Shielding design for positron emission tomography facility
International Nuclear Information System (INIS)
Abdallah, I.I.
2007-01-01
15. Application of ceramics for neutron shielding. Proposal of multi-functional shielding materials
Energy Technology Data Exchange (ETDEWEB)
Senda, Tetsuya; Akiyama, Shigeru; Matsuoka, Kazuyoshi; Ueki, Kohtaro; Ohashi, Atsuto [Ship Research Inst., Mitaka, Tokyo (Japan); Amada, Shigeyasu [Gunma Univ., Maebashi (Japan)
1999-09-01
Radiation shielding is one of the fundamental technologies to ensure the safety of the nuclear plants. Particularly for the nuclear systems as the power plants of ships and undersea vehicles, radiation shielding should be achieved within limited space and weight. Ceramics are of great interest as shielding components, because they can be composed with a wide variation of elements that have different shielding specifications. They are also known as good structural materials at high temperatures. Therefore, ceramics may be promising as 'multi-functional' shielding materials. In the present study, neutron shielding effects are first investigated by a series of the experiments using a {sup 252}Cf neutron source and simulated by using Monte Carlo Code MCNP 4A. The role of each ceramics is discussed particularly in terms of the 'enhancement effect' by medium-heavy elements, such as chromium and titanium. As an advanced technique to evaluate the thermal shock resistance of the materials, a laser irradiation method is proposed and applied to those ceramics that are expected to be neutron shielding components. Detailed discussion is made on the effects of porosity and multiple irradiation resulting in a fatigue-like behavior. Based on the results of these experiments and simulations, a three-layered arrangement, consisting of chromium carbide, titanium boride and boron nitride, is proposed as a multi-functional shielding material that minimizes the dose-equivalent rate and also exhibits good thermal shock resistance. (author)
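To make the attenuation arithmetic behind such layered arrangements concrete, the sketch below applies a simple removal-cross-section model to a three-layer stack; the cross-section values and thicknesses are assumed for illustration and are not the authors' measured data.

```python
import numpy as np

# assumed fast-neutron macroscopic removal cross sections (cm^-1), illustrative only
sigma_r = {"chromium carbide": 0.12, "titanium boride": 0.14, "boron nitride": 0.10}
layers = [("chromium carbide", 3.0), ("titanium boride", 3.0), ("boron nitride", 4.0)]  # cm

def transmitted_fraction(layers, sigma_r):
    # point-kernel removal approximation: D/D0 = exp(-sum_i Sigma_i * t_i)
    return float(np.exp(-sum(sigma_r[m] * t for m, t in layers)))

print(f"fast-neutron dose transmission ~ {transmitted_fraction(layers, sigma_r):.3f}")
```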
16. Symmetric Topological Phases and Tensor Network States
Science.gov (United States)
Jiang, Shenghan
Classification and simulation of quantum phases are one of the main themes in condensed matter physics. Quantum phases can be distinguished by their symmetry and topological properties. The interplay between symmetry and topology in condensed matter physics often leads to exotic quantum phases and rich phase diagrams. Famous examples include quantum Hall phases, spin liquids and topological insulators. In this thesis, I present our work toward a more systematic understanding of symmetric topological quantum phases in bosonic systems. In the absence of global symmetries, gapped quantum phases are characterized by topological orders. Topological orders in 2+1D are well studied, while a systematic understanding of topological orders in 3+1D is still lacking. By studying a family of exactly solvable models, we find that at least some topological orders in 3+1D can be distinguished by the braiding phases of loop excitations. In the presence of both global symmetries and topological orders, the interplay between them leads to new phases termed symmetry enriched topological (SET) phases. We develop a framework to classify a large class of SET phases using tensor networks. For each tensor class, we can write down generic variational wavefunctions. We apply our method to study gapped spin liquids on the kagome lattice, which can be viewed as SET phases of on-site symmetries as well as lattice symmetries. In the absence of topological order, symmetry could protect different topological phases, which are often referred to as symmetry protected topological (SPT) phases. We present systematic constructions of tensor network wavefunctions for bosonic symmetry protected topological (SPT) phases respecting both onsite and spatial symmetries.
17. Scalar-tensor cosmology with cosmological constant
International Nuclear Information System (INIS)
Maslanka, K.
1983-01-01
The equations of the scalar-tensor theory of gravitation with a cosmological constant, in the case of a homogeneous and isotropic cosmological model, can be reduced to a dynamical system of three differential equations with unknown functions $H=\dot{R}/R$, $\Theta=\dot{\phi}/\phi$, $S=e/\phi$. When new variables are introduced the system becomes more symmetrical, and cosmological solutions $R(t)$, $\phi(t)$, $e(t)$ are found. It is shown that when a cosmological constant is introduced, a large class of solutions which also depend on the Dicke-Brans parameter can be obtained. Investigations of these solutions give general limits for the cosmological constant and the mean density of matter in the plane model. (author)
18. Tensor glueball-meson mixing phenomenology
International Nuclear Information System (INIS)
Burakovsky, L.; Page, P.R.
2000-01-01
The overpopulated isoscalar tensor states are sifted using Schwinger-type mass relations. Two solutions are found: one where the glueball is the $f_J(2220)$, and one where the glueball is more distributed, with $f_2(1820)$ having the largest component. The $f_2(1565)$ and $f_J(1710)$ cannot be accommodated as glueball-(hybrid) meson mixtures in the absence of significant coupling to decay channels. $f_2'(1525)\to\pi\pi$ is in agreement with experiment. The $f_J(2220)$ decays neither flavour democratically nor is narrow. (orig.)
19. Tensor Network Wavefunctions for Topological Phases
Science.gov (United States)
Ware, Brayden Alexander
The combination of quantum effects and interactions in quantum many-body systems can result in exotic phases with fundamentally entangled ground state wavefunctions--topological phases. Topological phases come in two types, both of which will be studied in this thesis. In topologically ordered phases, the pattern of entanglement in the ground state wavefunction encodes the statistics of exotic emergent excitations, a universal indicator of a phase that is robust to all types of perturbations. In symmetry protected topological phases, the entanglement instead encodes a universal response of the system to symmetry defects, an indicator that is robust only to perturbations respecting the protecting symmetry. Finding and creating these phases in physical systems is a motivating challenge that tests all aspects--analytical, numerical, and experimental--of our understanding of the quantum many-body problem. Nearly three decades ago, the creation of simple ansatz wavefunctions--such as the Laughlin fractional quantum hall state, the AKLT state, and the resonating valence bond state--spurred analytical understanding of both the role of entanglement in topological physics and physical mechanisms by which it can arise. However, quantitative understanding of the relevant phase diagrams is still challenging. For this purpose, tensor networks provide a toolbox for systematically improving wavefunction ansatz while still capturing the relevant entanglement properties. In this thesis, we use the tools of entanglement and tensor networks to analyze ansatz states for several proposed new phases. In the first part, we study a featureless phase of bosons on the honeycomb lattice and argue that this phase can be topologically protected under any one of several distinct subsets of the crystalline lattice symmetries. We discuss methods of detecting such phases with entanglement and without. In the second part, we consider the problem of constructing fixed-point wavefunctions for
20. A Case of Tensor Fasciae Suralis Muscle
OpenAIRE
Miyauchi, Ryosuke; Kurihara, Kazushige; Tachibana, Gen
1985-01-01
An anomalous muscle was found on the dorsum of the right lower limb of a 67-year-old Japanese male. It originated by two heads from the semitendinosus and long head of the biceps femoris and ran distally to insert into the deep surface of the sural fascia. The origin, insertion and location of the muscle were compared with those of the various supernumerary muscles hitherto published. The muscle is consequently regarded as being the tensor fasciae suralis. This is the fifth case in Japan.
1. Holographic duality from random tensor networks
Energy Technology Data Exchange (ETDEWEB)
Hayden, Patrick; Nezami, Sepehr; Qi, Xiao-Liang; Thomas, Nathaniel; Walter, Michael; Yang, Zhao [Stanford Institute for Theoretical Physics, Department of Physics, Stanford University,382 Via Pueblo, Stanford, CA 94305 (United States)
2016-11-02
Tensor networks provide a natural framework for exploring holographic duality because they obey entanglement area laws. They have been used to construct explicit toy models realizing many of the interesting structural features of the AdS/CFT correspondence, including the non-uniqueness of bulk operator reconstruction in the boundary theory. In this article, we explore the holographic properties of networks of random tensors. We find that our models naturally incorporate many features that are analogous to those of the AdS/CFT correspondence. When the bond dimension of the tensors is large, we show that the entanglement entropy of all boundary regions, whether connected or not, obey the Ryu-Takayanagi entropy formula, a fact closely related to known properties of the multipartite entanglement of assistance. We also discuss the behavior of Rényi entropies in our models and contrast it with AdS/CFT. Moreover, we find that each boundary region faithfully encodes the physics of the entire bulk entanglement wedge, i.e., the bulk region enclosed by the boundary region and the minimal surface. Our method is to interpret the average over random tensors as the partition function of a classical ferromagnetic Ising model, so that the minimal surfaces of Ryu-Takayanagi appear as domain walls. Upon including the analog of a bulk field, we find that our model reproduces the expected corrections to the Ryu-Takayanagi formula: the bulk minimal surface is displaced and the entropy is augmented by the entanglement of the bulk field. Increasing the entanglement of the bulk field ultimately changes the minimal surface behavior topologically, in a way similar to the effect of creating a black hole. Extrapolating bulk correlation functions to the boundary permits the calculation of the scaling dimensions of boundary operators, which exhibit a large gap between a small number of low-dimension operators and the rest. While we are primarily motivated by the AdS/CFT duality, the main
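A minimal numerical illustration (an editorial toy, not the paper's construction) of the ingredient behind the Ryu-Takayanagi behaviour at large bond dimension: a Haar-random pure state is close to maximally entangled across any bipartition, as a quick entropy computation shows.

```python
import numpy as np

def entanglement_entropy(psi, dim_A, dim_B):
    """Von Neumann entropy of subsystem A for a pure state on H_A (x) H_B."""
    s = np.linalg.svd(psi.reshape(dim_A, dim_B), compute_uv=False)
    p = s**2 / np.sum(s**2)
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
d_A, d_B = 2**4, 2**6                     # 10 qubits, bipartitioned 4 | 6
psi = rng.normal(size=d_A * d_B) + 1j * rng.normal(size=d_A * d_B)
psi /= np.linalg.norm(psi)
print(entanglement_entropy(psi, d_A, d_B), np.log(d_A))   # close to the maximal value ln(dim A)
```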
2. Application of modern tensor calculus to engineered domain structures. 2. Tensor distinction of domain states
Czech Academy of Sciences Publication Activity Database
Kopský, Vojtěch
2006-01-01
Roč. 62, - (2006), s. 65-76 ISSN 0108-7673 R&D Projects: GA ČR GA202/04/0992 Institutional research plan: CEZ:AV0Z10100520 Keywords: tensorial covariants * domain states * stability spaces Subject RIV: BE - Theoretical Physics Impact factor: 1.676, year: 2006
3. Comparison of eye shields in radiotherapeutic beams
International Nuclear Information System (INIS)
Currie, B.E.; Wellington Hospital, Wellington; Johnson, A.D.
2004-01-01
4. Study and installation of concrete shielding in the civil engineering of nuclear construction (1960)
International Nuclear Information System (INIS)
Dubois, F.
1960-01-01
The object of this report is to give technical information about high density concretes, which have become very important for radiation biological shielding. The most generally used heavy aggregates (barytes, ilmenite, ferrophosphorus, limonite, magnetite and iron punchings) for making these concretes are investigated from the point of view of prospecting and of their physical and chemical characteristics. First, a general survey of shielding concretes is made, covering the components and the mixing and placing methods; then a detailed investigation of some high density concretes follows: barytes concrete, with incorporation of iron punchings or iron shot, ferrophosphorus concrete, ilmenite concrete and magnetite concrete, more particularly with regard to grading, mix proportions and testing procedures. To put this survey in concrete form, two practical designs are described as they were carried out at the Saclay Nuclear Station. Specifications are given for the various concretes and for making the proton-synchrotron 'Saturne' shielding blocks. (author) [fr
5. Gamma-ray mass attenuation coefficient and half value layer factor of some oxide glass shielding materials
International Nuclear Information System (INIS)
Waly, El-Sayed A.; Fusco, Michael A.; Bourham, Mohamed A.
2016-01-01
The variation in dosimetric parameters such as mass attenuation coefficient, half value layer factor, exposure buildup factor, and the photon mean free path for different oxide glasses for the incident gamma energy range 0.015–15 MeV has been studied using MicroShield code. It has been inferred that the addition of PbO and Bi 2 O 3 improves the gamma ray shielding properties. Thus, the effect of chemical composition on these parameters is investigated in the form of six different glass compositions, which are compared with specialty concrete for nuclear radiation shielding. The composition termed ‘Glass 6’ in this paper has the highest mass attenuation and the smallest half value layer and may have potential applications in radiation shielding. An example dry storage cask utilizing an additional layer of Glass 6 as an intermediate shielding layer, simulated in MicroShield, is capable of reducing the exposure rate at the cask surface by over 20 orders of magnitude compared to the case without a glass layer. Based on this study, Glass 6 shows promise as a gamma-ray shielding material, particularly for dry cask storage.
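The dosimetric quantities listed above are linked by simple relations, sketched here with assumed (illustrative, not published) values of the mass attenuation coefficient and glass density:

```python
import numpy as np

def shielding_parameters(mu_mass, density):
    """Linear attenuation coefficient (1/cm), half-value layer (cm) and photon
    mean free path (cm) from a mass attenuation coefficient (cm^2/g) and density (g/cm^3)."""
    mu = mu_mass * density
    return mu, np.log(2.0) / mu, 1.0 / mu

# assumed values for a dense lead-bismuth glass at an intermediate photon energy
mu, hvl, mfp = shielding_parameters(mu_mass=0.08, density=5.5)
print(f"mu = {mu:.3f} 1/cm, HVL = {hvl:.2f} cm, mean free path = {mfp:.2f} cm")
```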
6. Radiation shielding fiber and its manufacturing method
International Nuclear Information System (INIS)
Tanaka, Koji; Ono, Hiroshi.
1988-01-01
Purpose: To manufacture radiation shielding fibers of excellent shielding effects. Method: Fibers containing more than 1 mmol/g of carboxyl groups are bonded with heavy metals, or they are impregnated with an aqueous solution containing water-soluble heavy metal salts dissolved therein. Fibers as the substrate may be any of forms such as short fibers, long fibers, fiber tows, webs, threads, knitting or woven products, non-woven fabrics, etc. It is however necessary that fibers contain more than 1 mmol/g, preferably, from 2 to 7 mmol/g of carboxylic groups. Since heavy metals having radiation shielding performance are bonded to the outer layer of the fibers and the inherent performance of the fibers per se is possessed, excellent radiation shielding performance can be obtained, as well as they can be applied with spinning, knitting or weaving, stitching, etc. thus can be used for secondary fiber products such as clothings, caps, masks, curtains, carpets, cloths, etc. for use in radiation shieldings. (Kamimura, M.)
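As a rough worked example of what the stated carboxyl content can imply (the numbers are illustrative assumptions, not figures from the patent): a fibre with 2 mmol/g of -COOH binding Pb2+ at two carboxyl groups per ion takes up about 1 mmol of lead per gram of fibre, i.e.

\[ m_{\mathrm{Pb}} = 0.001\ \mathrm{mol}\times 207\ \mathrm{g/mol} \approx 0.21\ \mathrm{g\ per\ gram\ of\ fibre}, \qquad w_{\mathrm{Pb}} = \frac{0.21}{1 + 0.21} \approx 17\ \mathrm{wt\%\ lead}. \]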
7. Hydrogen Induced Cracking of Drip Shield
Energy Technology Data Exchange (ETDEWEB)
G. De
2003-02-24
One potential failure mechanism for titanium and its alloys under repository conditions is via the absorption of atomic hydrogen in the metal crystal lattice. The resulting decreased ductility and fracture toughness may lead to brittle mechanical fracture called hydrogen-induced cracking (HIC) or hydrogen embrittlement. For the current design of the engineered barrier without backfill, HIC may be a problem since the titanium drip shield can be galvanically coupled to rock bolts (or wire mesh), which may fall onto the drip shield, thereby creating conditions for hydrogen production by electrochemical reaction. The purpose of this scientific analysis and modeling activity is to evaluate whether the drip shield will fail by HIC or not under repository conditions within 10,000 years of emplacement. This Analysis and Model Report (AMR) addresses features, events, and processes related to hydrogen induced cracking of the drip shield. REV 00 of this AMR served as a feed to ''Waste Package Degradation Process Model Report'' and was developed in accordance with the activity section ''Hydrogen Induced Cracking of Drip Shield'' of the development plan entitled ''Analysis and Model Reports to Support Waste Package PMR'' (CRWMS M&O 1999a). This AMR, prepared according to ''Technical Work Plan for: Waste Package Materials Data Analyses and Modeling'' (BSC 2002), is to feed the License Application.
8. Cosmic Ray Interactions in Shielding Materials
Energy Technology Data Exchange (ETDEWEB)
Aguayo Navarrete, Estanislao; Kouzes, Richard T.; Ankney, Austin S.; Orrell, John L.; Berguson, Timothy J.; Troy, Meredith D.
2011-09-08
This document provides a detailed study of materials used to shield against the hadronic particles from cosmic ray showers at Earth’s surface. This work was motivated by the need for a shield that minimizes activation of the enriched germanium during transport for the MAJORANA collaboration. The materials suitable for cosmic-ray shield design are materials such as lead and iron that will stop the primary protons, and materials like polyethylene, borated polyethylene, concrete and water that will stop the induced neutrons. The interaction of the different cosmic-ray components at ground level (protons, neutrons, muons) with their wide energy range (from kilo-electron volts to giga-electron volts) is a complex calculation. Monte Carlo calculations have proven to be a suitable tool for the simulation of nucleon transport, including hadron interactions and radioactive isotope production. The industry standard Monte Carlo simulation tool, Geant4, was used for this study. The result of this study is the assertion that activation at Earth’s surface is a result of the neutronic and protonic components of the cosmic-ray shower. The best material to shield against these cosmic-ray components is iron, which has the best combination of primary shielding and minimal secondary neutron production.
9. Correlated Uncertainties in Radiation Shielding Effectiveness
Science.gov (United States)
Werneth, Charles M.; Maung, Khin Maung; Blattnig, Steve R.; Clowdsley, Martha S.; Townsend, Lawrence W.
2013-01-01
The space radiation environment is composed of energetic particles which can deliver harmful doses of radiation that may lead to acute radiation sickness, cancer, and even death for insufficiently shielded crew members. Spacecraft shielding must provide structural integrity and minimize the risk associated with radiation exposure. The risk of radiation exposure induced death (REID) is a measure of the risk of dying from cancer induced by radiation exposure. Uncertainties in the risk projection model, quality factor, and spectral fluence are folded into the calculation of the REID by sampling from probability distribution functions. Consequently, determining optimal shielding materials that reduce the REID in a statistically significant manner has been found to be difficult. In this work, the difference of the REID distributions for different materials is used to study the effect of composition on shielding effectiveness. It is shown that the use of correlated uncertainties allows for the determination of statistically significant differences between materials despite the large uncertainties in the quality factor. This is in contrast to previous methods where uncertainties have been generally treated as uncorrelated. It is concluded that the use of correlated quality factor uncertainties greatly reduces the uncertainty in the assessment of shielding effectiveness for the mitigation of radiation exposure.
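The key statistical point can be illustrated with a toy calculation (the numbers and distributions below are made up for illustration and are not taken from the risk model): if the same sampled quality-factor values are reused for both materials, the variance of the difference between the two risk estimates is strongly reduced, since Var(X - Y) = Var(X) + Var(Y) - 2 Cov(X, Y).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical quality-factor samples; a lognormal spread stands in for the large QF uncertainty
q = rng.lognormal(mean=0.0, sigma=0.5, size=n)

# Hypothetical dose factors for two shielding materials (placeholder values)
dose_a, dose_b = 1.00, 0.95
risk_a = dose_a * q                                  # material A
risk_b_corr = dose_b * q                             # material B, sharing the same QF samples
risk_b_uncorr = dose_b * rng.lognormal(0.0, 0.5, n)  # material B, independent QF samples

print("std of difference, correlated QF:  ", np.std(risk_a - risk_b_corr))
print("std of difference, uncorrelated QF:", np.std(risk_a - risk_b_uncorr))
```

With the shared samples, the difference between the two materials stands out despite the broad quality-factor distribution, which is the effect the abstract describes.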
10. Shielding Development for Nuclear Thermal Propulsion
Science.gov (United States)
Caffrey, Jarvis A.; Gomez, Carlos F.; Scharber, Luke L.
2015-01-01
Radiation shielding analysis and development for the Nuclear Cryogenic Propulsion Stage (NCPS) effort is currently in progress and preliminary results have enabled consideration for critical interfaces in the reactor and propulsion stage systems. Early analyses have highlighted a number of engineering constraints, challenges, and possible mitigating solutions. Performance constraints include permissible crew dose rates (shared with expected cosmic ray dose), radiation heating flux into cryogenic propellant, and material radiation damage in critical components. Design strategies in staging can serve to reduce radiation scatter and enhance the effectiveness of inherent shielding within the spacecraft while minimizing the required mass of shielding in the reactor system. Within the reactor system, shield design is further constrained by the need for active cooling with minimal radiation streaming through flow channels. Material selection and thermal design must maximize the reliability of the shield to survive the extreme environment through a long-duration mission with multiple engine restarts. A discussion of these challenges and relevant design strategies is provided for the mitigation of radiation in nuclear thermal propulsion.
11. Analysis of Shield Construction in Spherical Weathered Granite Development Area
Science.gov (United States)
Cao, Quan; Li, Peigang; Gong, Shuhua
2018-01-01
The distribution of spherical weathered bodies (commonly known as "boulders") in granite development areas directly affects shield construction for urban rail transit engineering. Based on a case of shield construction in a spherically weathered granite area in Southern China, this paper analyzes parameter control in shield machine selection and shield advance during tunneling in this special geological environment. It is suggested that the shield machine be selected specifically for construction in granite spherically weathered zones, and that driving speed, cutter torque, shield machine thrust, penetration, and cutter head speed be controlled when driving through boulder-bearing formations, in order to achieve smooth excavation and reduce disturbance to the formation.
12. Evaluation of a permanent reactor vessel head shield
International Nuclear Information System (INIS)
Wagner, D.S.; Johnson, T.G.; Tipswork, S.R.
1988-01-01
This paper reports that Virginia Power recently completed installing permanent reactor vessel head shields at all four of its nuclear units: Surry 1 and 2 (781-MWe Westinghouse PWRs) and North Anna 1 and 2 (893-MWe Westinghouse PWRs). Permanent shields were chosen over the use of temporary shielding based on a cost/benefit analysis. Factors that were taken into account in the analysis included the cost of the shields, the one-time dose commitment for installation of permanent shields, dose and manpower commitments for installation and removal of temporary shielding during each outage, decontamination and storage of temporary shielding between outages, and projected dose savings for both types of shields. Basically, permanent shields were found to be more cost-effective because each required only a one-time dose commitment for installation.
13. Passive magnetic shielding in MRI-Linac systems
Science.gov (United States)
Whelan, Brendan; Kolling, Stefan; Oborn, Brad M.; Keall, Paul
2018-04-01
Passive magnetic shielding refers to the use of ferromagnetic materials to redirect magnetic field lines away from vulnerable regions. An application of particular interest to the medical physics community is shielding in MRI systems, especially integrated MRI-linear accelerator (MRI-Linac) systems. In these systems, the goal is not only to minimize the magnetic field in some volume, but also to minimize the impact of the shield on the magnetic fields within the imaging volume of the MRI scanner. In this work, finite element modelling was used to assess the shielding of a side coupled 6 MV linac and resultant heterogeneity induced within the 30 cm diameter of spherical volume (DSV) of a novel 1 Tesla split bore MRI magnet. A number of different shield parameters were investigated; distance between shield and magnet, shield shape, shield thickness, shield length, openings in the shield, number of concentric layers, spacing between each layer, and shield material. Both the in-line and perpendicular MRI-Linac configurations were studied. By modifying the shield shape around the linac from the starting design of an open ended cylinder, the shielding effect was boosted by approximately 70% whilst the impact on the magnet was simultaneously reduced by approximately 10%. Openings in the shield for the RF port and beam exit were substantial sources of field leakage; however it was demonstrated that shielding could be added around these openings to compensate for this leakage. Layering multiple concentric shield shells was highly effective in the perpendicular configuration, but less so for the in-line configuration. Cautious use of high permeability materials such as Mu-metal can greatly increase the shielding performance in some scenarios. In the perpendicular configuration, magnetic shielding was more effective and the impact on the magnet lower compared with the in-line configuration.
14. Interactive Volume Rendering of Diffusion Tensor Data
Energy Technology Data Exchange (ETDEWEB)
Hlawitschka, Mario; Weber, Gunther; Anwander, Alfred; Carmichael, Owen; Hamann, Bernd; Scheuermann, Gerik
2007-03-30
As 3D volumetric images of the human body become an increasingly crucial source of information for the diagnosis and treatment of a broad variety of medical conditions, advanced techniques that allow clinicians to efficiently and clearly visualize volumetric images become increasingly important. Interaction has proven to be a key concept in analysis of medical images because static images of 3D data are prone to artifacts and misunderstanding of depth. Furthermore, fading out clinically irrelevant aspects of the image while preserving contextual anatomical landmarks helps medical doctors to focus on important parts of the images without becoming disoriented. Our goal was to develop a tool that unifies interactive manipulation and context preserving visualization of medical images with a special focus on diffusion tensor imaging (DTI) data. At each image voxel, DTI provides a 3 x 3 tensor whose entries represent the 3D statistical properties of water diffusion locally. Water motion that is preferential to specific spatial directions suggests structural organization of the underlying biological tissue; in particular, in the human brain, the naturally occurring diffusion of water in the axon portion of neurons is predominantly anisotropic along the longitudinal direction of the elongated, fiber-like axons [MMM+02]. This property has made DTI an emerging source of information about the structural integrity of axons and axonal connectivity between brain regions, both of which are thought to be disrupted in a broad range of medical disorders including multiple sclerosis, cerebrovascular disease, and autism [Mos02, FCI+01, JLH+99, BGKM+04, BJB+03].
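As a small illustration of how such a per-voxel 3 x 3 diffusion tensor is typically summarized, the sketch below computes the standard fractional anisotropy (FA) from the tensor's eigenvalues; the example tensor is invented for illustration and is not taken from the paper.

```python
import numpy as np

def fractional_anisotropy(D):
    """Fractional anisotropy of a symmetric 3x3 diffusion tensor D."""
    lam = np.linalg.eigvalsh(D)               # eigenvalues of the symmetric tensor
    md = lam.mean()                            # mean diffusivity
    num = np.sqrt(np.sum((lam - md) ** 2))
    den = np.sqrt(np.sum(lam ** 2))
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

# Invented tensor with strong diffusion along one axis, as along a fiber bundle
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])          # units: mm^2/s
print(fractional_anisotropy(D))                # high FA indicates an anisotropic voxel
```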
15. Black holes in vector-tensor theories
Energy Technology Data Exchange (ETDEWEB)
Heisenberg, Lavinia [Institute for Theoretical Studies, ETH Zurich, Clausiusstrasse 47, 8092 Zurich (Switzerland); Kase, Ryotaro; Tsujikawa, Shinji [Department of Physics, Faculty of Science, Tokyo University of Science, 1-3, Kagurazaka, Shinjuku-ku, Tokyo 162-8601 (Japan); Minamitsuji, Masato, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Centro Multidisciplinar de Astrofisica—CENTRA, Departamento de Fisica, Instituto Superior Tecnico—IST, Universidade de Lisboa—UL, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal)
2017-08-01
We study static and spherically symmetric black hole (BH) solutions in second-order generalized Proca theories with nonminimal vector field derivative couplings to the Ricci scalar, the Einstein tensor, and the double dual Riemann tensor. We find concrete Lagrangians which give rise to exact BH solutions by imposing two conditions of the two identical metric components and the constant norm of the vector field. These exact solutions are described by either Reissner-Nordström (RN), stealth Schwarzschild, or extremal RN solutions with a non-trivial longitudinal mode of the vector field. We then numerically construct BH solutions without imposing these conditions. For cubic and quartic Lagrangians with power-law couplings which encompass vector Galileons as the specific cases, we show the existence of BH solutions with the difference between two non-trivial metric components. The quintic-order power-law couplings do not give rise to non-trivial BH solutions regular throughout the horizon exterior. The sixth-order and intrinsic vector-mode couplings can lead to BH solutions with a secondary hair. For all the solutions, the vector field is regular at least at the future or past horizon. The deviation from General Relativity induced by the Proca hair can be potentially tested by future measurements of gravitational waves in the nonlinear regime of gravity.
16. Quantum chaos and holographic tensor models
International Nuclear Information System (INIS)
Krishnan, Chethan; Sanyal, Sambuddha; Subramanian, P.N. Bala
2017-01-01
A class of tensor models was recently outlined as potentially calculable examples of holography: their perturbative large-N behavior is similar to the Sachdev-Ye-Kitaev (SYK) model, but they are fully quantum mechanical (in the sense that there is no quenched disorder averaging). These facts make them intriguing tentative models for quantum black holes. In this note, we explicitly diagonalize the simplest non-trivial Gurau-Witten tensor model and study its spectral and late-time properties. We find parallels to (a single sample of) SYK where some of these features were recently attributed to random matrix behavior and quantum chaos. In particular, the spectral form factor exhibits a dip-ramp-plateau structure after a running time average, in qualitative agreement with SYK. But we also observe that even though the spectrum has a unique ground state, it has a huge (quasi-?)degeneracy of intermediate energy states, not seen in SYK. If one ignores the delta function due to the degeneracies, however, there is level repulsion in the unfolded spacing distribution, hinting at chaos. Furthermore, there are gaps in the spectrum. The system also has a spectral mirror symmetry which we trace back to the presence of a unitary operator with which the Hamiltonian anticommutes. We use it to argue that to the extent that the model exhibits random matrix behavior, it is controlled not by the Dyson ensembles, but by the BDI (chiral orthogonal) class in the Altland-Zirnbauer classification.
18. On Inverse Coefficient Heat-Conduction Problems on Reconstruction of Nonlinear Components of the Thermal-Conductivity Tensor of Anisotropic Bodies
Science.gov (United States)
Formalev, V. F.; Kolesnik, S. A.
2017-11-01
The authors are the first to present a closed procedure for numerical solution of inverse coefficient problems of heat conduction in anisotropic materials used as heat-shielding ones in rocket and space equipment. The reconstructed components of the thermal-conductivity tensor depend on temperature (are nonlinear). The procedure includes the formation of experimental data, the implicit gradient-descent method, the economical absolutely stable method of numerical solution of parabolic problems containing mixed derivatives, the parametric identification, construction, and numerical solution of the problem for elements of sensitivity matrices, the development of a quadratic residual functional and regularizing functionals, and also the development of algorithms and software systems. The implicit gradient-descent method permits expanding the quadratic functional in a Taylor series with retention of the linear terms for the increments of the sought functions. This substantially improves the exactness and stability of solution of the inverse problems. Software systems are developed with account taken of the errors in experimental data and disregarding them. On the basis of a priori assumptions of the qualitative behavior of the functional dependences of the components of the thermal-conductivity tensor on temperature, regularizing functionals are constructed by means of which one can reconstruct the components of the thermal-conductivity tensor with an error no higher than the error of the experimental data. Results of the numerical solution of the inverse coefficient problems on reconstruction of nonlinear components of the thermal-conductivity tensor have been obtained and are discussed.
19. Calculated shielding factors for selected European houses
International Nuclear Information System (INIS)
Hedemann Jensen, P.
1984-12-01
Shielding factors for gamma radiation from activity deposited on structures and ground surfaces have been calculated with the computer model DEPSHIELD for single-family and multi-storey buildings in France, United Kingdom and Denmark. For all three countries it was found that the shielding factors for single-family houses are approximately a factor of 2 - 10 higher than those for buildings with five or more storeys. Away from doors and windows the shielding factors for French, British, and Danish single-family houses are in the range 0.03 - 0.1, 0.06 - 0.4, and 0.07 - 0.3, respectively. The uncertainties of the calculations are discussed and DEPSHIELD results are compared with other methods as well as with experimental results. (author)
20. Radiation safety shield for a syringe
International Nuclear Information System (INIS)
Tipton, H.W.
1976-01-01
Safety apparatus for use in administering radioactive serums by a syringe, without endangering the health and safety of the medical operators is described. The apparatus consists of a sheath and a shield which can be retracted into the sheath to assay the radioactive serum in an assay well. The shield can be moved from the retracted position into an extended position when the serum is to be injected into the patient. To protect the operator, the shield can be constructed of tantalum or any like high density substance to attenuate the radiation, emanating from the radioactive serums contained in the syringe, from passing to the atmosphere. A lead glass window is provided so that the operator can determine the exact quantity of the radioactive serum which is contained in the syringe
1. Slipforming of reinforced concrete shield building
International Nuclear Information System (INIS)
Hsieh, M.C.; King, J.R.
1982-01-01
The unique design and construction features of slipforming the heavily reinforced concrete cylindrical shield walls at the Satsop nuclear plant site in Washington State are presented. The shield walls were designed in compliance with seismic requirements, which resulted in the need for reinforcing steel averaging 326 kg/m³. A 7.6 m high, three-deck moving platform was designed to permit easy installation of the reinforcing steel, embedments, and blockouts, and to facilitate concrete placement and finishing. Two circular box trusses, one on each side of the shield wall, were used in combination with a spider truss to meet both the tolerance and strength requirements for the slipform assembly
2. Final design of ITER thermal shield manifold
Energy Technology Data Exchange (ETDEWEB)
Kim, Kyung-Kyu [Mecha T& S, Jinju-si 52811 (Korea, Republic of); Noh, Chang Hyun, E-mail: [email protected] [National Fusion Research Institute, Daejeon 34133 (Korea, Republic of); Kim, Yun-Kyu; Park, Sungwoo [Mecha T& S, Jinju-si 52811 (Korea, Republic of); Nam, Kwanwoo [National Fusion Research Institute, Daejeon 34133 (Korea, Republic of); Chung, Wooho [Mecha T& S, Jinju-si 52811 (Korea, Republic of); Kang, Dongkwon; Kang, Kyung-O. [National Fusion Research Institute, Daejeon 34133 (Korea, Republic of); Park, Sungmun [SFA Engineering Corporation, Hwaseong-si 10060 (Korea, Republic of); Bae, Jing Do [Korea Marine Equipment Research Institute, Busan 49111 (Korea, Republic of)
2016-11-01
Highlights: • Engineering design of thermal shield manifold is finalized. • Pipe routing, support design and flow balance are verified by analysis. • Mock-ups are fabricated to verify the design. - Abstract: The ITER thermal shield is actively cooled by 80 K pressurized helium gas. The helium coolant flows from the cold valve box to the cooling tubes on the TS panels via manifold piping. This paper describes the final design of thermal shield manifold. Pipe design to accommodate the thermal contraction considering interface with adjacent components and detailed design of support structure are presented. R&D for the pipe branch connection is carried out to find a feasible manufacturing method. Global structural behavior and structural integrity of the manifold including pipe supports are investigated by a finite element analysis based on ASME B31.3 code. Flow analyses are performed to check the flow distribution.
3. Progress of the ITER Thermal Shields
Energy Technology Data Exchange (ETDEWEB)
Her, Namil, E-mail: [email protected] [ITER Organisation, Route de Vinon-sur-Verdon – CS 90046, 13067 St Paul-lez-Durance Cedex (France); Hick, Robby; Le Barbier, Robin; Arzoumanian, Terenig; Choi, Chang-Ho; Sborchia, Carlo [ITER Organisation, Route de Vinon-sur-Verdon – CS 90046, 13067 St Paul-lez-Durance Cedex (France); Chung, Wooho; Nam, Kwanwoo; Noh, Chang Hyun; Kang, Dong Kwon; Kang, Gyoung-O. [ITER Korea, National Fusion Research Institute, Daejeon 34133 (Korea, Republic of); Kang, Youngkil; Lim, Kisuk [SFA Engineering Corporation, Hwaseong-si, Gyeonggi-do 10060 (Korea, Republic of)
2016-11-01
Highlights: • Design improvement of the ITER Thermal Shields was introduced. • Design of TS manifold and TS instrumentation were summarized. • Produced main material of the TS (SS304LN) was summarized. • Status of the VVTS manufacturing and the inspection requirements were summarized. - Abstract: The role of the ITER Thermal Shields (TS) is to minimize the radiation heat load from the warm components such as vacuum vessel and cryostat to magnet operating at 4.5 K. The final design of TS was completed in 2013 and manufacturing of the vacuum vessel thermal shield (VVTS) is now on-going. This paper describes the development status of the TS in particular the design improvements, the fabrication and the requirements.
4. Electromagnetic Shielding Efficiency Measurement of Composite Materials
Science.gov (United States)
Dřínovský, J.; Kejík, Z.
2009-01-01
This paper deals with the theoretical and practical aspects of shielding efficiency measurements of construction composite materials. This contribution describes an alternative test method for these measurements using a circular measurement flange. The measured results and parameters of the coaxial test flange are also discussed. The circular measurement flange is described by measured scattering parameters in the frequency range from 9 kHz up to 1 GHz. The accuracy of the shielding efficiency measurement method used was checked with a brass calibration ring. The suitability of the coaxial test setup was also checked by measurements on an EMC test chamber, and these data were compared with the data measured on the real EMC chamber. The whole shielding efficiency measurement was controlled by a program running on a personal computer. This program was created in the VEE Pro environment from Agilent Technologies.
5. Radiation shielding performance of some concrete
International Nuclear Information System (INIS)
Akkurt, I.; Akyildirim, H.; Mavi, B.; Kilincarslan, S.; Basyigit, C.
2007-01-01
Energy consumption is increasing with the growing population of the world, and thus new energy sources such as nuclear energy have been developed. Since, besides nuclear power generation, nuclear techniques are used in a variety of fields such as medicine, industry, agriculture and military applications, radiation protection has become an important research field. The main rules of radiation protection are time, distance and shielding. The most effective radiation shields are materials with a high density and high atomic number, such as lead and tungsten, which are expensive. Alternatively, concrete produced with different aggregates can be used. The effectiveness of radiation shielding is frequently described in terms of the half value layer (HVL) or the tenth value layer (TVL). These are the thicknesses of an absorber that will reduce the radiation to one half and one tenth of its intensity, respectively. In this study the radiation protection properties of different types of concrete will be discussed
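For reference, both quantities follow directly from the exponential attenuation law I(x) = I0 exp(-mu x), so HVL = ln(2)/mu and TVL = ln(10)/mu. A minimal sketch (the attenuation coefficient below is an arbitrary placeholder, not a measured value for any particular concrete):

```python
import numpy as np

def hvl_tvl(mu):
    """Half-value and tenth-value layers for a linear attenuation coefficient mu."""
    return np.log(2.0) / mu, np.log(10.0) / mu

mu = 0.15                      # placeholder linear attenuation coefficient, 1/cm
hvl, tvl = hvl_tvl(mu)
print(f"HVL = {hvl:.2f} cm, TVL = {tvl:.2f} cm")
```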
6. Accelerator shielding experts meet at CERN
CERN Multimedia
CERN Bulletin
2010-01-01
Fifteen years after its first CERN edition, the Shielding Aspects of Accelerator, Targets and Irradiation Facility (SATIF) conference was held again here from 2-4 June. Now at its 10th edition, SATIF10 brought together experts from all over the world to discuss issues related to the shielding techniques. They set out the scene for an improved collaboration and discussed novel shielding solutions. This was the most attended meeting of the series with more than 65 participants from 34 institutions and 14 countries. “We welcomed experts from many different laboratories around the world. We come from different contexts but we face similar problems. In this year’s session, among other things, we discussed ways for improving the effectiveness of calculations versus real data, as well as experimental solutions to investigate the damage that radiation produces on various materials and the electronics”, says Marco Silari, Chair of the conference and member of the DGS/RP gro...
7. Heating profiles on ICRF antenna Faraday shields
International Nuclear Information System (INIS)
Taylor, D.J.; Baity, F.W.; Hahs, C.L.; Riemer, B.W.; Ryan, P.M.; Williamson, D.E.
1991-01-01
A conceptual design for an uncooled Faraday shield for the BPX ion cyclotron resonance heating (ICRH) antenna, which should withstand the proposed long-pulse operation, has been completed. A high-heat-flux, uncooled Faraday shield has also been designed for the fast-wave current drive (FWCD) antenna on D3-D. For both components, the improved understanding of the heating profiles made it possible to design for heat fluxes that would otherwise have been too close to mechanically established limits. The analytical effort is described in detail, with emphasis on the design work for the BPX ICRH antenna conceptual design and for the replacement Faraday shield for the D3-D FWCD antenna. Results of analyses are shown, and configuration issues involved in component modeling are discussed. 3 refs., 6 figs., 2 tabs
8. First Wall, Blanket, Shield Engineering Technology Program
International Nuclear Information System (INIS)
Nygren, R.E.
1982-01-01
The First Wall/Blanket/Shield Engineering Technology Program sponsored by the Office of Fusion Energy of DOE has the overall objective of providing engineering data that will define performance parameters for nuclear systems in advanced fusion reactors. The program comprises testing and the development of computational tools in four areas: (1) thermomechanical and thermal-hydraulic performance of first-wall component facsimiles with emphasis on surface heat loads; (2) thermomechanical and thermal-hydraulic performance of blanket and shield component facsimiles with emphasis on bulk heating; (3) electromagnetic effects in first wall, blanket, and shield component facsimiles with emphasis on transient field penetration and eddy-current effects; (4) assembly, maintenance and repair with emphasis on remote-handling techniques. This paper will focus on elements 2 and 4 above and, in keeping with the conference participation from both fusion and fission programs, will emphasize potential interfaces between fusion technology and experience in the fission industry
9. Self-Shielding Of Transmission Lines
Energy Technology Data Exchange (ETDEWEB)
Christodoulou, Christos [Univ. of New Mexico, Albuquerque, NM (United States)
2017-03-01
The use of shielding to contend with noise or harmful EMI/EMR energy is not a new concept. An inevitable trade that must be made for shielding is physical space and weight. Space was often not as painful a design trade in older, larger systems as it is in today's smaller systems. Today we are packing an exponentially growing amount of functionality into the same or smaller volumes. As systems become smaller and space within systems becomes more restricted, the implementation of shielding becomes more problematic. Often, space that would have been used to design a more mechanically robust component must be used for shielding. As the system gets smaller and space is at more of a premium, these trades start to result in defects, designs with inadequate margin in other performance areas, and designs that are sensitive to manufacturing variability. With these challenges in mind, it would be ideal to maximize attenuation of harmful fields as they inevitably couple onto transmission lines without the use of traditional shielding. Dr. Tom Van Doren proposed a design concept for transmission lines to a class of engineers while visiting New Mexico. This design concept works by maximizing electric field (E) and magnetic field (H) containment between operating transmission lines to achieve what he called "Self-Shielding". By making the geometric centroid of the outgoing current coincident with that of the return current, maximum field containment is achieved. The reciprocal should be true as well, resulting in greater attenuation of incident fields. Figures 1(a)-1(b) are examples of designs where the current centroids are coincident. Coax cables are good examples of transmission lines with co-located centroids, but they demonstrate excellent field attenuation for other reasons and can't be used to test this design concept. Figure 1(b) is a flex circuit design that demonstrates the implementation of self-shielding versus a standard conductor layout.
10. A study on the shielding element using Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Kim, Ki Jeong [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Shim, Jae Goo [Dept. of Radiologic Technology, Daegu Health College, Daegu (Korea, Republic of)
2017-06-15
In this research, we simulated the elemental shielding ability using Monte Carlo simulation, with a view to medical radiation shielding sheets that could replace the lead currently in use. Twenty-one elements were selected, mainly metallic elements with large atomic numbers, which are known to have high shielding performance, also taking into account that various composite materials have recently improved shielding performance and considering weight reduction, processability, activation, etc. The simulation tool used was the Monte Carlo method. As a result of simulating the shielding performance of each element, the shielding ratio was estimated to be highest for tungsten and gold, at 98.82% and 98.44%, respectively.
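The abstract does not spell out the simulation setup, so the following is only a rough sketch of the underlying idea: a Monte Carlo estimate of a shielding ratio can be obtained by sampling photon free paths against a slab of material. The attenuation coefficients and thickness below are placeholders, not values from the study.

```python
import numpy as np

def shielding_ratio(mu, thickness, n_photons=1_000_000, seed=0):
    """Fraction of photons stopped in a slab, assuming simple exponential attenuation."""
    rng = np.random.default_rng(seed)
    free_paths = rng.exponential(scale=1.0 / mu, size=n_photons)
    return float(np.mean(free_paths < thickness))

# Placeholder attenuation coefficients (1/cm) for two hypothetical elements
for name, mu in [("element A", 1.2), ("element B", 0.9)]:
    print(name, shielding_ratio(mu, thickness=3.0))
```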
11. Anisotropic cosmological models and generalized scalar tensor theory
Anisotropic cosmological models and generalized scalar tensor theory. Subenoy Chakraborty and Batul Chandra Santra, pp. 669-673. Keywords: anisotropic cosmological models; general scalar tensor theory; inflation. PACS Nos 98.80.Hw; 04.50.+h; 98.80.Cq.
12. The ultrarelativistic Kerr geometry and its energy-momentum tensor
Science.gov (United States)
Balasin, Herbert; Nachbagauer, Herbert
1995-03-01
The ultrarelativistic limit of the Schwarzschild and the Kerr-geometry together with their respective energy-momentum tensors is derived. The approach is based on tensor-distributions making use of the underlying Kerr-Schild structure, which remains stable under the ultrarelativistic boost.
13. Exploring the tensor networks/AdS correspondence
Energy Technology Data Exchange (ETDEWEB)
Bhattacharyya, Arpan [Department of Physics and Center for Field Theory and Particle Physics, Fudan University,220 Handan Road, 200433 Shanghai (China); Centre For High Energy Physics, Indian Institute of Science,560012 Bangalore (India); Gao, Zhe-Shen [Department of Physics and Center for Field Theory and Particle Physics, Fudan University,220 Handan Road, 200433 Shanghai (China); Hung, Ling-Yan [Department of Physics and Center for Field Theory and Particle Physics, Fudan University,220 Handan Road, 200433 Shanghai (China); State Key Laboratory of Surface Physics and Department of Physics, Fudan University,220 Handan Road, 200433 Shanghai (China); Collaborative Innovation Center of Advanced Microstructures, Nanjing University,Nanjing, 210093 (China); Liu, Si-Nong [Department of Physics and Center for Field Theory and Particle Physics, Fudan University,220 Handan Road, 200433 Shanghai (China)
2016-08-11
In this paper we study the recently proposed tensor networks/AdS correspondence. We found that the Coxeter group is a useful tool to describe tensor networks in a negatively curved space. Studying a generic tensor network populated by perfect tensors, we find that the physical wave function generically does not admit any connected correlation functions of local operators. To remedy the problem, we assume that wavefunctions admitting such a semi-classical gravitational interpretation are composed of tensors close to, but not exactly, perfect tensors. Computing corrections to the connected two point correlation functions, we find that the leading contribution is given by structures related to geodesics connecting the operators inserted at the boundary physical dofs. Such considerations admit generalizations at least to three point functions. This is highly suggestive of the emergence of the analogues of Witten diagrams in the tensor network. The perturbations alone however do not give the right entanglement spectrum. Using the Coxeter construction, we also constructed the tensor network counterpart of the BTZ black hole, by orbifolding the discrete lattice on which the network resides. We found that the construction naturally reproduces some of the salient features of the BTZ black hole, such as the appearance of RT surfaces that could wrap the horizon, depending on the size of the entanglement region A.
14. Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery
Science.gov (United States)
2013-08-16
In the experiments, random orthogonal factors are drawn uniformly at random (by the command orth(randn(·, ·)) in MATLAB), and the observed entries are chosen uniformly with ratio ρ.
15. Tensor estimation for double-pulsed diffusional kurtosis imaging.
Science.gov (United States)
Shaw, Calvin B; Hui, Edward S; Helpern, Joseph A; Jensen, Jens H
2017-07-01
Double-pulsed diffusional kurtosis imaging (DP-DKI) represents the double diffusion encoding (DDE) MRI signal in terms of six-dimensional (6D) diffusion and kurtosis tensors. Here a method for estimating these tensors from experimental data is described. A standard numerical algorithm for tensor estimation from conventional (i.e. single diffusion encoding) diffusional kurtosis imaging (DKI) data is generalized to DP-DKI. This algorithm is based on a weighted least squares (WLS) fit of the signal model to the data combined with constraints designed to minimize unphysical parameter estimates. The numerical algorithm then takes the form of a quadratic programming problem. The principal change required to adapt the conventional DKI fitting algorithm to DP-DKI is replacing the three-dimensional diffusion and kurtosis tensors with the 6D tensors needed for DP-DKI. In this way, the 6D diffusion and kurtosis tensors for DP-DKI can be conveniently estimated from DDE data by using constrained WLS, providing a practical means for condensing DDE measurements into well-defined mathematical constructs that may be useful for interpreting and applying DDE MRI. Data from healthy volunteers for brain are used to demonstrate the DP-DKI tensor estimation algorithm. In particular, representative parametric maps of selected tensor-derived rotational invariants are presented. Copyright © 2017 John Wiley & Sons, Ltd.
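A minimal sketch of the unconstrained core of such a fit is ordinary weighted least squares on a linearized signal model; the design matrix, weights, and problem size below are placeholders, and the physical constraints described in the abstract would turn this into a quadratic programming problem rather than a closed-form solve.

```python
import numpy as np

def weighted_least_squares(A, y, w):
    """Solve min_x || W^(1/2) (A x - y) ||^2 for diagonal weights w."""
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 6))        # placeholder design matrix (20 measurements, 6 parameters)
x_true = rng.normal(size=6)
y = A @ x_true + 0.01 * rng.normal(size=20)
w = np.ones(20)                     # placeholder weights
print(weighted_least_squares(A, y, w))
```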
16. The Twist Tensor Nuclear Norm for Video Completion.
Science.gov (United States)
Hu, Wenrui; Tao, Dacheng; Zhang, Wensheng; Xie, Yuan; Yang, Yehui
2017-12-01
In this paper, we propose a new low-rank tensor model based on the circulant algebra, namely, twist tensor nuclear norm (t-TNN). The twist tensor denotes a three-way tensor representation to laterally store 2-D data slices in order. On one hand, t-TNN convexly relaxes the tensor multirank of the twist tensor in the Fourier domain, which allows an efficient computation using fast Fourier transform. On the other hand, t-TNN is equal to the nuclear norm of the block circulant matricization of the twist tensor in the original domain, which extends the traditional matrix nuclear norm in a block circulant way. We test the t-TNN model on a video completion application that aims to fill in missing values, and the experimental results validate its effectiveness, especially when dealing with video recorded by a nonstationary panning camera. The block circulant matricization of the twist tensor can be transformed into a circulant block representation with nuclear norm invariance. This representation, after transformation, exploits the horizontal translation relationship between the frames in a video, and endows the t-TNN model with a more powerful ability to reconstruct panning videos than the existing state-of-the-art low-rank models.
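A sketch of the norm itself (not of the completion algorithm): in this circulant framework the tensor nuclear norm of a three-way array can be evaluated by taking an FFT along the third mode and summing the nuclear norms of the resulting frontal slices; conventions in the literature differ by a 1/n3 normalization. The array below is random and purely illustrative.

```python
import numpy as np

def tensor_nuclear_norm(T):
    """Tensor nuclear norm of a 3-way array via an FFT along the third mode."""
    T_hat = np.fft.fft(T, axis=2)
    total = sum(np.linalg.norm(T_hat[:, :, k], ord='nuc') for k in range(T.shape[2]))
    return total / T.shape[2]      # some papers omit this 1/n3 factor

T = np.random.default_rng(0).normal(size=(8, 8, 5))
print(tensor_nuclear_norm(T))
```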
17. Multiple M2-branes and the embedding tensor
NARCIS (Netherlands)
Bergshoeff, Eric A.; de Roo, Mees; Hohm, Olaf
2008-01-01
We show that the Bagger-Lambert theory of multiple M2-branes fits into the general construction of maximally supersymmetric gauge theories using the embedding tensor technique. We apply the embedding tensor technique in order to systematically obtain the consistent gaugings of N = 8 superconformal
18. Subtracting a best rank-1 approximation may increase tensor rank
NARCIS (Netherlands)
Stegeman, Alwin; Comon, Pierre
2010-01-01
It has been shown that a best rank-R approximation of an order-k tensor may not exist when R >= 2 and k >= 3. This poses a serious problem to data analysts using tensor decompositions. It has been observed numerically that, generally, this issue cannot be solved by consecutively computing and subtracting best rank-1 approximations.
19. (2, 0) tensor multiplets and conformal supergravity in D = 6
NARCIS (Netherlands)
Bergshoeff, Eric; Sezgin, Ergin; Proeyen, Antoine Van
1999-01-01
We construct the supercurrent multiplet that contains the energy–momentum tensor of the (2, 0) tensor multiplet. By coupling this multiplet of currents to the fields of conformal supergravity, we first construct the linearized superconformal transformation rules of the (2, 0) Weyl multiplet.
20. Data fusion in metabolomics using coupled matrix and tensor factorizations
DEFF Research Database (Denmark)
Evrim, Acar Ataman; Bro, Rasmus; Smilde, Age Klaas
2015-01-01
of heterogeneous (i.e., in the form of higher order tensors and matrices) data sets with shared/unshared factors. In order to jointly analyze such heterogeneous data sets, we formulate data fusion as a coupled matrix and tensor factorization (CMTF) problem, which has already proved useful in many data mining... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8171769976615906, "perplexity": 2708.8900867205034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864019.29/warc/CC-MAIN-20180621020632-20180621040632-00590.warc.gz"} |
http://tex.stackexchange.com/questions/28274/is-it-safe-to-temporarily-redefine-and-or-and-not | # Is it safe to temporarily redefine \and, \or, and \not?
I'm writing lots of logic expressions in LaTeX and I'd much rather write $p \and \not q \or r$ than $p \land \lnot q \lor r$. I was thinking of doing something like this:
\newenvironment{logic}{%
\renewcommand\and\land%
\renewcommand\or\lor%
\renewcommand\not\lnot%
}{}
Then I could simply
\begin{logic}
p \and \not q \or r
\end{logic}
Are the commands \and, \or, and \not built into TeX? Is it safe to temporarily redefine them? If not, can you recommend any alternatives? Thanks!
-
Another option is to use upper case letters for your commands: \AND, \OR, and \NOT. – Aditya Sep 13 '11 at 0:28
Ew! But yes, I guess that would work. – jtbandes Sep 13 '11 at 0:29
If all you're after is a faster way to write those symbols you might be better off using an editor's features for this, e.g. snippet management or autocompletion features. – N.N. Sep 13 '11 at 7:34
I also want to be able to read the source in the future, after not having look at it for a while. – jtbandes Sep 13 '11 at 23:50
They are not all built into TeX (a command built into TeX is called a primitive). For example, \and is defined by LaTeX as \end {tabular}\hskip 1em \@plus .17fil\begin {tabular}[t]{c}. However \or, unlike \and and \not, is not a macro but a TeX primitive. It is not a good idea to redefine it.
I think you can overwrite them if you like, as long as you either enclose them within a group or will never need their original meaning. For example, you are not likely to use \and after \author in most cases.
-
Thanks for the info. Could you describe what they do, or at least refer me to somewhere I could find out? – jtbandes Sep 13 '11 at 0:28
\and is used within \author. The standard classes define \maketitle such that before \@author is typeset, a \begin{tabular}... is inserted, and after that \end{tabular}, sort of. \not is just a macro for typesetting a math char. \or is used in \ifcase, another primitive. For reference on primitives, The TeXbook is the ultimate source. tug.org/utilities/plain/cseq.html is also a good place to find things about that. For other latex or plain tex macros, there is no one-stop place to learn. – Yan Zhou Sep 13 '11 at 3:44
Slight problem: some documentclasses need their specific definition of \and in the headers. (Journal styles with authors in header, mostly.) – Ulrich Schwarz Sep 13 '11 at 6:15
The macros \and and \not are predefined in LaTeX, but the former is generally used in the \author{} command to separate authors, while the latter is used (generally) to form the negation of already-existing symbols, as in \not\perp (not orthogonal).
The MWE below uses the \xspace macro to automate good spacing following the redefined commands in non-math mode.
\documentclass{article}
\usepackage{xspace}
\newenvironment{logic}
{\renewcommand{\and}{\ensuremath{\land}\xspace}
\renewcommand{\or}{\ensuremath{\lor}\xspace}
\renewcommand{\not}{\ensuremath{\lnot}}}
{}
\begin{document}
\begin{logic}
$p$ \and \not $q$ \or $r$
\end{logic}
\end{document}
-
You don't need to restore the original meanings of the commands since they get restored automatically when you leave the group where the new meanings are defined. Of course saving them is probably the smart thing to do since you might want to use the original command(s) in your environment for something. PS: It should probably be \renewcommand{\and}{\ensuremath{\land}\xspace} - the two symbols might not be the same in all fonts – kahen Sep 13 '11 at 2:31
This is wrong in many respects: the spacing in the formula is completely fouled up; and if you happen to use \begin{logic}...\foo...\end{logic}, where \foo is based on the primitive \or, you'll understand why a primitive should never be redefined unless in a very controlled environment. – egreg May 9 '15 at 23:06
https://www.physicsforums.com/threads/continuous-symmetries-as-particles.772133/ | # Continuous symmetries as particles
1. Sep 21, 2014
### arivero
I am not sure if I recall all the ways for a symmetry to appear as some particle in a Quantum Field Theory.
- The Lagrangian and the vacuum are invariant under the generators of a global symmetry/gauge group. Then the particles in the theory are classified according to representations of such a group, with all the elements in the same multiplet having equal mass, but... a) Are all the representations expected to appear, and b) is the representation of the generators, the fundamental representation, expected to appear?
- The Lagrangian is invariant under a global gauge group but the vacuum is not. The broken generators then appear as Goldstone bosons, of spin zero. Are they always spin zero? And what about the unbroken generators? Is there no clue about them?
- The Lagrangian is invariant under a local gauge group. Then the generators appear as massless spin 1 bosons, under the fundamental representation of the group.
- The Lagrangian is invariant under a local gauge group and the vacuum is not. The unbroken generators appear as massless spin 1, the broken generators as massive spin 1, all of them join in a fundamental representation of the group that nevertheless has different masses.
2. Sep 21, 2014
### ChrisVer
-I think all the irreducible representations are expected to appear (fundamental,antifundamental etc). I am not sure if there can be representations which don't appear...I don't understand b.
-Yes... as for whether the spin is zero, in general I think that's true. Otherwise they would have to be vector bosons, and thus the vacuum which you "break" would have to have a vector field's vev. The unbroken generators/fields still exist afterwards (just like the Higgs boson, which exists as 1 dof after the 4 dof of the field are gauged out).
-The Lagrangian invariant under a local gauge group brings the massless spin-1 generators existing in the adjoint representation. Or at least that's what I have understood from SU(N) theories.
- For the same reason as Higgs... They don't join the fundamental representation though. For example the fundamental representation of SU(2)xU(1) is a (2,1). The gauge bosons exist in the adjoint representation which comes from the tensorial product of the fundamental and antifundamental reprs. As for the masses, I think it depends on the way you are breaking the symmetry.
3. Sep 21, 2014
### arivero
Sorry, my fault. I always make a mess of the naming. I meant the adjoint. b) Are the unbroken generators expected to appear as particles in the adjoint representation, in a Lagrangian with global unbroken symmetries?
All this stuff can be mathematically consistent. In fact it is. But it is also very confusing when comparing the local gauge case with the global case. Not to speak of approximate global symmetries.
Generically, it seems that in the global case the symmetry itself lives outside of the world, and that it is only in the local gauge case that the generators also become particles. But then we have the Goldstone bosons, breaking this intuition and proving that intuition is not a good guide here.
http://www.math.snu.ac.kr/board/index.php?mid=seminars&page=11&document_srl=804187&l=en&sort_index=date&order_type=desc | Ramanujan's conjecture on the tau function, the Fourier coefficients of the discriminant function, led to the development of Hecke Theory. Many divisibility property of Fourier coefficients of modular functions were proved using the theory. Like the canonical basis of the space of modular functions form a Hecke system, we show that the Niebur-Poincare basis of the space of Harmonic Maass forms also form a Hecke system. As consequences, several arithmetic properties of Fourier coefficients of modular functions on the higher genus modular curves and mock modular functions are established. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9163070917129517, "perplexity": 234.35862075643232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335362.18/warc/CC-MAIN-20220929163117-20220929193117-00662.warc.gz"} |
http://mathhelpforum.com/math-topics/153304-conversions.html | 1. ## conversions
convert 47.3 miles per hour to meters per second
1mile =1,609 m
2. Perform a series of multiplications, keeping in mind your units, so that the units "cancel out" and you are left with the unit you are trying to convert to.
47.3 miles per hour can be written as
$\left( \dfrac{47.3 \:\text{mi}}{1 \:\text{hr}}\right)$
You've given us the conversion factor 1mile =1,609 m. So we'll write that as a ratio and multiply it by the first fraction above:
$\left( \dfrac{47.3 \:\text{mi}}{1 \:\text{hr}}\right)\left( \dfrac{1609 \:\text{m}}{1 \:\text{mi}}\right)$
I put the meters on top and miles on bottom so that the miles units "cancel out."
Now we have to convert from hours to seconds. There are 3600 seconds in 1 hour, so we have another multiplication:
$\left( \dfrac{47.3 \:\text{mi}}{1 \:\text{hr}}\right)\left( \dfrac{1609 \:\text{m}}{1 \:\text{mi}}\right)\left( \dfrac{1 \:\text{hr}}{3600 \:\text{sec}}\right)$
I put the hours on top and seconds on bottom so that the hours units "cancel out"
When all the units "cancel out" I will be left with meters on top and seconds on the bottom -- meters per second, which is what we want. All that's left is to multiply/divide the numbers. I'll let you do that.
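(For reference, carrying out that arithmetic gives $\left( \dfrac{47.3 \times 1609}{3600}\right) \dfrac{\text{m}}{\text{s}} \approx 21.14 \dfrac{\text{m}}{\text{s}}$, which matches the 21.140 that appears in the next post.)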
Thanks eumyang, you've been like a tutor for me today!
One more question: how do I write 21.140 in correct scientific notation?
4. $2.114 \times 10^1$
Remember that a number in scientific notation must have a single digit between 1 and 9 (inclusive) to the left of the decimal point. So an answer like this:
$0.2114 \times 10^2$
would be wrong, because you can't have just a 0 to the left of the decimal point. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9687065482139587, "perplexity": 720.1670324673314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00162-ip-10-171-10-70.ec2.internal.warc.gz"} |
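A quick programmatic double-check of the same normalization, as a one-line illustration in Python:

```python
print(f"{21.140:.3e}")   # prints 2.114e+01, i.e. 2.114 x 10^1
```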
https://www.physicsforums.com/threads/singularities-at-end-point-in-integration.108908/ | # Singularities at end point in integration
1. Jan 31, 2006
### ashesh
singularities at end point in integration....
Hi,
I need to perform an integration with poles and zeros in the integrand. Please let me know if there is a MATLAB routine/program that can handle the definite integral
sqrt((x-a)*(x-b)/((x-c)*(x-d)))
between the limits (c,d), (a,d), (a,b) or (b,c).
I have read about the routine in quadpack called "dqawse.f" which can perform "integration of functions having algebraico-logarithmic end point singularities".
I need an equivalent MATLAB program that can perform this type of integration. I have checked the gausscc.m file (http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=2905&objectType=file), which does integration by Clenshaw-Curtis quadrature, but that seems to be no good at handling singularities.
Hope someone can give some leads to solve the above problem.
Ashesh
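One possible lead, offered as a cross-check rather than a ready-made MATLAB routine: newer MATLAB releases include quadgk, which is documented to handle integrable endpoint singularities, and SciPy's quad exposes the same QUADPACK algebraic-endpoint-singularity integrator (QAWS, the family that dqawse belongs to) through its weight='alg' option. A sketch for the (c, d) interval, assuming the ordering a < b < c < d and taking the absolute value under the square root so the integrand is real:

```python
import numpy as np
from scipy.integrate import quad

a, b, c, d = 0.0, 1.0, 2.0, 3.0          # placeholder endpoints with a < b < c < d

def g(x):
    # smooth part of the integrand; the (x-c)**(-1/2) * (d-x)**(-1/2) endpoint
    # behaviour is handled by the weight function instead
    return np.sqrt(abs((x - a) * (x - b)))

# weight='alg' integrates g(x) * (x-c)**alpha * (d-x)**beta over (c, d)
val, err = quad(g, c, d, weight='alg', wvar=(-0.5, -0.5))
print(val, err)
```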
http://math.stackexchange.com/questions/165577/some-doubt-on-linear-diophantine-equation | # Some doubt on Linear Diophantine equation
We know $ax+by=c$ is solvable iff $(a,b)|c$ where $a,b,c,x,y$ are integers. If $x=2$, $a=\dfrac{k(k+5)}{2}$, $y=k$, $b=k+3$ and $c=2k$, where $k$ is any integer, then $$2 \frac{k(k+5)}{2} - k(k+3) = 2k.$$ So, $\left(\dfrac{k(k+5)}{2}, (k+3)\right) | 2$ for all integral values of $k$. But $k+3$ cannot be even unless $k$ is odd. Where am I going wrong?
-
A newly found article on Diophantine equation iosrjournals.org/iosr-jm/pages/v7i4.html – user90946 Aug 19 '13 at 8:27
$2k(k+5)/2 - k(k+3)=2k$ with $a=k(k+5)/2$, $x=k$, $y=k$, would mean $b=k+3$, not $k(k+3)$. The conclusion is that $(k(k+5)/2,k+3) | 2k$, not $2$. For example, with $k=3$, $(24,6)=6$.
-
+1: Good eye! ${}$ – Cameron Buie Jul 2 '12 at 5:14
Good observation, I missed it – lab bhattacharjee Jul 2 '12 at 5:17
Why does $k+3$ need to be even for the gcd of $k(k+5)/2$ and $k+3$ to divide $2$? For instance, consider the $k=2$ case.
Now, if $2$ had to divide the gcd of those numbers, then we could conclude that $k+3$ must be even.
-
Here, the gcd does not divide 2. But, we have derived that gcd will divide 2 for all integral value of k. – lab bhattacharjee Jul 2 '12 at 5:07
"So, (k(k+5)/2, (k+3)) | 2 for all integral value of k. =>k+3 is even for all integral value of k" is what you said. This implication fails to hold, with the counterexample $k=2$. I'm simply pointing out that fact. – Cameron Buie Jul 2 '12 at 5:12
Let $\rm\:j = (k\!+\!3,\, k(k\!+\!5)/2).\,$ The solvability criterion is $\rm\,j\:|\:2k,\,$ not $\rm\:j\:|\:2.\,$ The two are equivalent only when $\rm\:(j,k) = 1\iff (k,3) = 1.$ Otherwise $\rm\:3\:|\:k\:|\:j\:$ thus $\rm \:j\nmid 2.$
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9818065166473389, "perplexity": 490.3499162955976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207931085.38/warc/CC-MAIN-20150521113211-00243-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://www.arxiv-vanity.com/papers/0709.3216/ | # Curvature-coupling dependence of membrane protein diffusion coefficients
Stefan M. Leitenberger, Ellen Reister-Gottfried, and Udo Seifert
II. Institut für Theoretische Physik, Universität Stuttgart, 70550 Stuttgart, Germany
###### Abstract
We consider the lateral diffusion of a protein interacting with the curvature of the membrane. The interaction energy is minimized if the particle is at a membrane position with a certain curvature that agrees with the spontaneous curvature of the particle. We employ stochastic simulations that take into account both the thermal fluctuations of the membrane and the diffusive behavior of the particle. In this study we neglect the influence of the particle on the membrane dynamics, thus the membrane dynamics agrees with that of a freely fluctuating membrane. Overall, we find that this curvature-coupling substantially enhances the diffusion coefficient. We compare the ratio of the projected or measured diffusion coefficient and the free intramembrane diffusion coefficient, which is a parameter of the simulations, with analytical results that rely on several approximations. We find that the simulations always lead to a somewhat smaller diffusion coefficient than our analytical approach. A detailed study of the correlations of the forces acting on the particle indicates that the diffusing inclusion tries to follow favorable positions on the membrane, such that forces along the trajectory are on average smaller than they would be for random particle positions.
## 1 Introduction
During the last decade it has become more and more apparent that lateral diffusion of proteins in membranes plays a crucial role in cellular functioning [1, 2]. Therefore, a whole range of experimental techniques has been developed, which is constantly being improved in order to determine accurate values of lateral protein diffusion coefficients [3, 4]. The most important methods include fluorescence recovery after photo bleaching [5, 6], fluorescence correlation spectroscopy [7, 8], or single particle tracking [9, 10]. While a large amount of data has been collected with these techniques the interpretation of results always depends on reliable models for the diffusive process. In some situations, like restricted diffusion due to corrals [10, 11, 12], a certain, often rather crude, qualitative explanation is easily found, in other situations, however, this is by no means the case and a reliable quantitative interpretation can only be achieved, if corresponding theoretical calculations or simulations are performed.
Only recently an increased interest in lateral diffusion has emerged from a theoretical viewpoint. In order to compare theoretical results with experiments it is necessary to take into account various aspects of the particular system. These include the nature of the membrane the particle is diffusing in, the properties of the diffusing particle, or the experimental method with which diffusion coefficients are determined. A very important aspect in both theoretical calculations and the analysis of experimental results is that the membrane must not be regarded as a flat plane but is often structured such that regions with higher and lower curvatures appear. For example, this must be accounted for in the study of diffusion in the membranes of the endoplasmic reticulum. Neglecting the influence of the membrane shape leads to considerable errors in the determination of diffusion coefficients [13]. Several analytical and simulational studies have been performed that regard diffusion on various fixed curved surfaces [14, 15, 16, 17, 18, 19, 20].
But even if a membrane appears to be flat on average, it is subject to thermal fluctuations that lead to rapid shape changes around the flat configuration [21, 22]. Neglecting the influence of the membrane on the movement of the particle these fluctuations that depend on properties like bending rigidity, surface tension, proximity to substrates or other membranes, etc., have an influence on the measured values for lateral diffusion coefficients because experiments usually regard the path of the inclusion projected on a flat reference plane instead of the actual path along the membrane. Compared to the intramembrane diffusion coefficient the measured diffusion coefficient will be the smaller the stronger the fluctuations. This was initially pointed out by Gustafsson and Halle [23]; the quantitative evaluation of this effect for free intramembrane diffusion was performed more recently in independent work by us and two other groups using analytical calculations [24, 25] and simulations [26, 20].
All studies mentioned in the last two paragraphs take into account the influence of the shape of the membrane on measured diffusion coefficients, but otherwise neglect any interaction between membrane and protein. Considering that a protein also has certain physical properties it must be assumed that interactions between the protein and the membrane exist that influence the diffusion coefficient. This assumption is corroborated by experimental findings: for example after photoactivation of bacteriorhodopsin (BR) in model membranes the lateral diffusion coefficient is reduced by a factor of five [27]. This reduction is attributed to oligomerization of BR upon activation caused by structural changes that influence protein-lipid interactions. There are many other experimental examples that indicate that the interactions between membrane and protein have a significant influence on lateral diffusion, see for example refs. [28, 29]. An important property of a membrane compared to a flat surface is the curvature. A variety of studies mainly using particle based simulations are concerned with the influence of inclusions with a certain intrinsic curvature on membrane shape and lateral diffusion [30, 31, 32, 33, 34, 35, 36, 37, 38]. In earlier work we calculated effective diffusion coefficients for particles with a bending rigidity and a spontaneous curvature [24]. These calculations revealed that the additional interaction that tries to move the particle to positions on the membrane where the curvature agrees with the particle’s spontaneous curvature, leads to an increase in the diffusion coefficient.
Our previous work on curvature-coupled diffusion effectively describes diffusion of a point-like particle and relies on several approximations [24, 26]. One of these, the so-called pre-averaging approximation, assumes that membrane fluctuations on all possible length scales have a much shorter relaxation time than the time it takes the particle to diffuse the corresponding distance. In this paper we introduce a scheme to simulate the diffusion of a particle with a certain extension, a bending rigidity and a spontaneous curvature in a thermally fluctuating membrane. A sketch of the considered system with the relevant physical parameters is given in fig. 1.
The present method is no longer restricted to certain relative timescales of diffusion and membrane fluctuations. The additional energy of the inclusion is introduced by replacing a patch of membrane that is described by the Helfrich Hamiltonian with a new Helfrich-like term with a different bending rigidity and a spontaneous curvature. The extension of the particle is modeled by a Gaussian weighting function in order to have a smooth crossover from the bare membrane to the particle. Obviously the system is dominated by two dynamic processes: the shape fluctuations of the membrane and the particle diffusion. Two Langevin-equations, one for the membrane, the other for the particle, are derived from the energy of the system. Our simulation scheme consists of the numerical integration in time of these coupled equations. Apart from performing simulations we also analytically evaluate the coupled dynamic equations by use of a perturbation theory that neglects the influence of the particle on the membrane and assumes that membrane relaxation times are much smaller than corresponding diffusive time scales. In order to compare with these analytical calculations and to reduce the computational effort, we restrict our simulations in the current study such that the membrane dynamics is also not influenced by the diffusing particle. The influence on membrane movement will be considered in future work. The main quantity of interest is the ratio of the curvature-coupled and the intramembrane diffusion coefficient as a function of the membrane parameters bending rigidity and surface tension. The latter coefficient is a parameter of our scheme and resembles the free diffusion coefficient of the particle if no additional force were acting on it. The application of both our approaches shows that curvature-coupling leads to increased diffusion. However, the comparison of our analytic calculations with the simulation results reveals a systematic difference. In order to gain insight into the reason for these discrepancies we study force correlation functions that are the main contributions to the diffusion constant.
The paper is organized as follows: In the following section we explain the model for the membrane dynamics and the Langevin-equation for the inclusion. In sec. 3, the method and the approximations of the analytical calculation of the curvature-coupled diffusion coefficient are presented while sec. 4 introduces the used simulation scheme. Sec. 5 discusses the choice of parameters used in both the calculations and the simulations. The presentation of the results in sec. 6 is followed by a detailed discussion in sec. 7, why simulations lead to smaller diffusion constants and a possible interpretation of our findings. The paper finishes with some conclusions and an outlook for future work.
## 2 Model
### 2.1 Membrane dynamics
We consider a model membrane in a fluid environment. The membrane is given in Monge-representation where is the position in the -plane with the deviation out of this plane. For such a membrane Helfrich [39] derived the free energy that to lowest order has the following form in the Monge-gauge [40]
H0[h(r,t)]=∫L2dr[κ2(∇2rh(r,t))2+σ2(∇rh(r,t))2], (1)
with the bending rigidity , the effective surface tension and the area in the -plane. The dynamics of a membrane is given by [22]
∂th(r,t)=−∫L2dr′Λ(r−r′)δH0δh(r′,t)+ξ(r,t) (2)
with the Onsager coefficient that takes into account hydrodynamic interactions with the fluid background. Using the Fourier-transformation
h(k,t) = (3) h(r,t) = 1L2∑kh(k,t)exp{ir⋅k} (4)
one obtains (2) in Fourier-space
∂th(k,t)=−Λ(k)E(k)h(k,t)+ξ(k,t) (5)
with and which is the Fourier-transformed Onsager coefficient for a free membrane in a fluid with viscosity [22]. The stochastic force obeys the fluctuation-dissipation theorem
⟨ξ(k,t)⟩ = 0, (6) ⟨ξ(k,t)ξ∗(k′,t′)⟩ = 2Λ(k)L2βδ(t−t′)δk,k′, (7)
where is the inverse temperature. Later on, we need the correlations of which are given by
⟨h(k,t)⟩ = 0, (8) ⟨h(k,t)h∗(k′,t′)⟩ = L2βE(k)exp[−Λ(k)E(k)|t−t′|]δk,k′. (9)
### 2.2 Diffusion
Now we place an inclusion into the membrane that diffuses freely along the membrane. The dynamics of the inclusion may be described by a Fokker-Planck-equation (FP-eq.). However, since the diffusive motion takes place on a curved surface the Laplace-operator needs to be replaced by the Laplace-Beltrami-operator. This leads to a new FP-eq. [24, 26]
∂tP(r,t)=D∑i,j∂i√ggij∂j1√gP(r,t), (10)
with the diffusion coefficient , the metric and the inverse metric tensor . In the Monge gauge the metric has the form , while the inverse metric tensor is
gij≡(1+h2y−hxhy−hxhy1+h2x). (11)
The subscripts denote partial derivatives, e.g. . The probability of finding the projection of the inclusion at a position is normalized to . In the simulations we make use of a Langevin-equation to describe the motion of the projected particle position . Using the above FP-eq. a projected Langevin-eq. is derived within the Stratonovich calculus [41, 42]:
∂tX(t) = D1g(√g+1)(hYhXY−hXhYY) +√D1g−1[(h2X√g+h2Y)ζX(t)+ hXhY(1√g−1)ζY(t)], ∂tY(t) = D1g(√g+1)(hXhXY−hYhXX) +√D1g−1[hXhY(1√g−1)ζX(t)+ (h2Y√g+h2X)ζY(t)].
The upper case subscripts express that the partial derivatives at the particle position have to be used. The stochastic force has zero mean and is delta-correlated:
⟨ζi⟩ = 0, (13) ⟨ζi(t)ζj(t)⟩ = 2δijδ(t−t′). (14)
Equation (2.2) comprises a drift that is caused by the membrane curvature and diffusive terms. The consequences of such a drift term for a freely diffusing inclusion have been introduced in ref. [26].
### 2.3 Curvature-coupled model
The equations derived so far apply to a freely diffusing point-like inclusion. If one is interested in the diffusion of a more realistic inclusion, one has to take the physical parameters of the inclusion into account. First the inclusion has a non vanishing area which is set to . Furthermore, the inclusion has possibly its own bending rigidity and maybe a spontaneous curvature . As indicated by “into the membrane” the inclusion completely replaces the membrane at its position. To consider this in the free energy of the system, one has to add a new energy term for the inclusion and remove the part of the membrane which is replaced. The additional term caused by the inclusion leads to the new free energy
H=H0+H1, (15)
where
H1[h(r,t),R(t)]=∫L2drG(r−R)××[m2(∇2rh(r,t)−Cp)2−κ2(∇2rh(r,t))2] (16)
is the correction to Helfrich’s free energy . is a weighting function for the extension of the particle that we set to be a Gaussian such that the crossover from particle to membrane is smooth. Taking into account the area constraint it is given by
G(r−R)=exp{−(r−R)2a2p}. (17)
The altered free energy (16) induces additional forces on the inclusion and the membrane. The membrane dynamics is obtained by replacing in (2) with such that
(18)
The forces that influence the diffusive behavior of the inclusion can be calculated by . Taking into account the curvature of the membrane in the force term, which needs to be added to the right hand side of (2.2), we get the complete equation of motion for the inclusion
∂tRi(t)=∂tRproj,i−μ1g∑jgij∂jH1, (19)
with the mobility that is related to the intramembrane diffusion coefficient via the Einstein relation .
With eqs. (18) and (19) the dynamics of the system is fully determined. Note, that these equations are coupled since the particle diffusion depends on the shape of the membrane via the partial derivatives of at the particle position, and the membrane dynamics on the position of the particle through the additional energy.
## 3 Analytical calculations
In order to calculate a new curvature-coupling affected diffusion coefficient defined as
Dcc≡limt→∞⟨ΔR2(t)⟩4t, (20)
one has to determine the mean square displacement
(21)
by integrating eq. (19) in time and performing the thermal average. Since the explicit calculation of the mean square displacement using the exact equation of motion (19) and the full membrane dynamics (18) is not possible analytically, it is necessary to introduce several approximations.
In order to simplify eq. (19) we perform a pre-averaging approximation. This approximation is applicable if for all modes the membrane relaxation times , see eq. (9), are considerably shorter than the time it takes a particle to diffuse the distance given by the corresponding wave length of the mode. For typical experimental values for bending rigidity , tension , diffusion coefficients , and system sizes , this condition is very often fulfilled. If membrane fluctuations are “faster” than the diffusion of the particle it is assumed that the particle only feels average membrane fluctuations. The applicability of this approximation for free lateral diffusion is discussed in ref. [26]. In our current work the pre-averaging approximation results in the replacement of in eq. (19) by where the projected free mobility is defined by [24, 26].
We, furthermore, assume that the additional energy caused by the insertion of a single particle is small, in order to justify a perturbation expansion to first order in the particle energy. The consequence of this approximation is that the dynamics of the membrane is not influenced by the presence of the inclusion. Thus the membrane dynamics is expressed by eq. (5).
Another approximation needs to be employed so as to make analytical calculations possible. Inserting eq. (19) into eq. (21) we see that the mean square displacement becomes a function of the height correlations . These correlations decay with increasing time difference due to two reasons: during the time interval the membrane shape changes and the particle position advances. Since we assume diffusion to be much slower than membrane shape changes we neglect the effect caused by the particle movement.
Note, that this approximative analytical calculation cannot include a possible correlation between the particle position and the membrane shape in the vicinity of the inclusion. In other words, we assume a constant probability for finding the particle at any point in the system relative to a given membrane configuration. This aspect becomes important when we compare with simulation results, as will be discussed in sec. 7.
Using the inverse Fourier-transform given in eq. (4) and applying the previously explained approximations we find for the mean square displacement
(22)
with the height correlation function given in eq. (9).
Inserting the resulting equation for into (20) and performing the long time limit one gets
$$D_{cc}=D_{\mathrm{proj}}+\mu_{\mathrm{proj}}^2\,m^2 C_p^2\,(\pi a_p^2)^2\,\frac{1}{L^2}\sum_{\mathbf{k}} k^6\exp\left\{-\frac{k^2 a_p^2}{2}\right\}\frac{1}{2\beta E^2(\mathbf{k})\Lambda(\mathbf{k})}. \qquad (23)$$
In this equation it is interesting that there is no need to set a cut-off for the wavenumber since the exponential function damps higher values. In ref. [24] a similar calculation for the curvature coupled diffusion coefficient is performed. There the area function has the form and a cut-off is necessary. The resulting diffusion coefficient agrees with the above diffusion coefficient of equation (23) in the limit of vanishing variance of the Gaussian.
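The mode sum in eq. (23) can be evaluated numerically along the following lines (an illustrative sketch, not the calculation of the paper; the parameters are placeholders and, for simplicity, $D_{\mathrm{proj}}$ and $\mu_{\mathrm{proj}}$ are crudely approximated by $D$ and $\beta D$):

```matlab
% Rough numerical evaluation of the lattice sum in eq. (23).
kappa = 10; sigma = 0; beta = 1; eta = 1;      % k_B T = 1 units
L = 140; Nmax = 64;                            % system size, mode cut-off index
m = 20; Cp = 0.1; ap = 2; D = 1;
E   = @(k) kappa*k.^4 + sigma*k.^2;
Lam = @(k) 1./(4*eta*k);

[nx, ny] = meshgrid(-Nmax:Nmax);
k = 2*pi/L * sqrt(nx.^2 + ny.^2);
k = k(k > 0);                                   % exclude the k = 0 mode
S = sum( k.^6 .* exp(-k.^2*ap^2/2) ./ (2*beta*E(k).^2.*Lam(k)) );
Dcc = D + (beta*D)^2 * m^2 * Cp^2 * (pi*ap^2)^2 / L^2 * S;
fprintf('D_cc / D = %.3f\n', Dcc/D);
```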
## 4 Simulation method
To probe the applicability of our analytical calculations that depend on several approximations we set up simulations that numerically integrate the coupled equations of motion for the membrane and the diffusing particle. We use a square, periodic lattice with lattice points and the lattice spacing to map a model membrane with size , see fig 1. To simulate the shape fluctuations of the membrane we numerically integrate the appropriate equation of motion. In order to reduce the computational effort and to compare with the analytical calculations introduced in the previous section we evaluate the unperturbed Langevin-equation (5) discretely in time. These calculations are performed in Fourier-space since the equations of motion for the height function modes decouple. Due to the periodic boundary conditions the wave vectors are of the form with and being integers. Since the hight function is a real function defined on a lattice the relation applies leading to the restriction .
Regarding the fluctuation-dissipation theorem (7) it is obvious that fluctuations of would diverge for the Onsager-coefficient . The dynamics of the height function mode corresponds to the center of mass movement of the whole membrane. Due to the irrelevance of this movement in the determination of the lateral diffusion coefficient we set and keep fixed at all times.
In order to choose an appropriate discrete time step that ensures that numerical errors are small it is necessary to point out that the largest -vector possible in the given lattice determines the smallest time scale of the membrane as can be seen in eq. (9). The used time step in the simulation should be smaller than this smallest time scale.
The dynamics of the inclusion is given by a discrete version of eq. (19) that is also numerically integrated in time. Since it is coupled to the membrane equation (5) via the derivatives , , etc., of the membrane configuration at the position of the inclusion, the temporal evolution during a discrete time step consists of an update of the membrane shape and the particle position. In contrast to the membrane dynamics the motion of the inclusion is calculated in real space. The required derivatives of are, therefore, determined in Fourier-space and then transformed to real space using routines of the FFTW-libraries [43]. We allow off-lattice diffusion for the inclusion which is necessary if the average time it takes a particle to diffuse a distance is much larger than the corresponding membrane relaxation time. Thus the derivatives at the position of the inclusion are determined by a distance weighted linear extrapolation of the four nearest lattice sites. In order to minimize the numerical error caused by not following the membrane surface correctly the displacement per time step is to be set to a small fraction of the lattice spacing . With the above explained restriction for the time step in order to describe the membrane shape evolution with sufficient accuracy there are overall two conditions that must be fulfilled in the choice of .
Apart from obvious computational limits the determination of how long simulation runs should at least be is again dictated by two time scales. On the one hand the length of the simulation should be several times the longest relaxation time of the membrane, which is given by the smallest possible wave vector, to ensure that the membrane shape has passed through an adequate amount of independent configurations. On the other hand it is preferable that the inclusion has enough time on average to cover a distance of several lattice sites.
A more detailed description of the simulation method is given in ref. [26], where the corresponding scheme for free particle diffusion has been introduced.
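A sketch of the two constraints on the time step and of the run-length requirement, with illustrative parameter values (not those of the paper):

```matlab
% Time scales that bound the discrete time step and the total run length.
kappa = 10; sigma = 0; eta = 1; L = 140; N = 64; D = 1;
a   = L/N;                                  % lattice spacing
E   = @(k) kappa*k.^4 + sigma*k.^2;
Lam = @(k) 1./(4*eta*k);

kmin = 2*pi/L;  kmax = sqrt(2)*pi/a;        % smallest / largest wave numbers
t_slow = 1/(Lam(kmin)*E(kmin));             % longest membrane relaxation time
t_fast = 1/(Lam(kmax)*E(kmax));             % shortest membrane relaxation time
t_diff = a^2/(4*D);                         % time to diffuse one lattice spacing

dt    = 0.1*min(t_fast, t_diff);            % resolve both fast processes
t_tot = 10*max(t_slow, (L/4)^2/(4*D));      % sample many slow configurations
fprintf('dt = %.3g   total = %.3g   steps = %.3g\n', dt, t_tot, t_tot/dt);
```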
## 5 Parameters
Before we present our results we will introduce the used parameters. These are the same for the analytical calculations and the simulations. As already mentioned in the description of the simulation method we use a discrete membrane with a lattice spacing of that we set to . This choice reflects a compromise between the wish to simulate reasonably sized systems and the computational limitation in the number of lattice sites. A decrease in the lattice spacing for a constant system size, i.e. an increase in the number of lattice sites, introduces additional large -values that contribute only weakly to membrane fluctuations, see eq. (9), or the curvature-coupled diffusion coefficient (23). is one of the basic units. The others are the time given in seconds and the thermal energy that is at room temperature. All parameters of the system are given in units of , , and . For the determination of the membrane parameters we look at typical experiments and extract a range of to for the bending rigidity and to for the effective tension that corresponds to to for . At around rupture of the membrane occurs. In the experiments it is, furthermore, common to use water as a surrounding medium with a viscosity of . The last parameter of the membrane is the size that is related to the number of lattice points in each direction via . Since a sufficient number of wave vectors are considered for a lattice we use a system size of that corresponds to . For the parameters of the inclusion we choose , and as in our previous calculations [24]. The bare diffusion coefficient , however, has to be chosen carefully. Since the analytical calculations rely on a pre-averaging approximation we have to set sufficiently small. Therefore, the time scale of the diffusion must be much longer than the largest membrane time. This time is given by with the absolute value of the smallest wave vector which corresponds to the longest length in the system. The comparison of the two time scales leads to . We choose . This choice is good for all values of and in the selected range. For the total length of the simulations one has to keep in mind that it has to be several times longer than the time scale of the longest wave vector. The time step however has to be so small that it is smaller than the shortest membrane time and that the inclusion diffuses only a short distance. We set the total length to and such that each simulation run comprises time steps. This choice of the time step is applicable for small bending rigidities .
Note that an increase of $\kappa$ and $\sigma$ leads to smaller time scales of the membrane. In order not to increase computing time we keep the time step fixed for all considered $\kappa$ and $\sigma$. This, however, will lead to a slight increase in numerical errors for the modes with very large wave vectors in membranes with large $\kappa$ and $\sigma$. Since fluctuations of these modes, see eq. (9), are rather small these errors are negligible in the determination of $D_{cc}$.
## 6 Results
The analytically derived curvature-coupled diffusion coefficient is calculated by numerical summation of equation (23) using Mathematica for the whole range of parameters given in the previous section. The ratio of the curvature-coupled and the free intramembrane diffusion coefficient is plotted as a function of the bending rigidity and the effective surface tension in fig. 2. The ratio increases for brighter colors. In the figure one can see that for small the ratio increases very strongly for . With a further increase of the bending rigidity the ratio reaches a plateau. Here the strengthening of the forces caused by an increasing bending rigidity of the inclusion is compensated by the fact that thermal fluctuations become weaker for increasing . The increase of by about three orders of magnitude leads to no significant effect. However, for even larger values of the ratio decreases fast to one for small . This happens since a large surface tension also damps the thermal fluctuations of the membrane and, therefore, the additional force on the inclusion is small. Increasing the ratio also increases for large but does not reach the same hight as for small tensions. In this case the increase of for high values of is also compensated by the damping caused by the surface tension.
Overall, our calculations show that the curvature-coupling, which leads to an additional force on the inclusion, enhances the inclusion’s diffusion rate in the investigated parameter range. It is noteworthy that the ratio is always bigger than one despite the fact that the projection alone would lead to a ratio smaller than one [24].
Results from variations of the other parameters, like , , etc., are not plotted but the effect on can easily be obtained from eq. (23). However, one has to keep in mind that the effect of the inclusion on the membrane has to be small in order for the perturbation theory to be applicable.
In the simulations, we also use the unperturbed membrane equation of motion (5) and the same parameter sets as for the numerical summation just discussed. As we are interested in results in thermal equilibrium each membrane starts in a random configuration and has ms to equilibrate before the inclusion is placed in its center. ms is about five times the longest membrane relaxation time so we can be sure that the membrane is in thermal equilibrium. Since we obtain only one particle trajectory per independent simulation run we average over 500 simulations with the same set of parameters to get the mean square displacement . An example for as a function of time is plotted in figure 3. The slope of the resulting straight line at late times corresponds to , see eq. (20). The determined values for from the simulations (+) are plotted with the analytical curve (line) in fig. 4 as a function of for a constant and in fig. 5 as a function of for a constant bending rigidity . We see that the simulations follow qualitatively the analytical curve but the values are about smaller than expected.
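For illustration, a diffusion coefficient can be read off from the late-time slope of the mean square displacement as follows (synthetic data stand in for the averaged simulation output; this is not the authors' analysis code):

```matlab
% Extract D from the late-time slope of <dR^2(t)> = 4*D*t.
D_true = 1.0;  t = linspace(0, 50, 501);
msd = 4*D_true*t + 0.05*randn(size(t));   % stand-in for the measured MSD

late  = t > 0.5*max(t);                   % fit only the late-time part
p     = polyfit(t(late), msd(late), 1);
D_est = p(1)/4;
fprintf('estimated D = %.3f\n', D_est);
```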
## 7 Discussion
Both of our approaches demonstrate that curvature-coupling enhances diffusion. This result is plausible for the following reason. Due to the membrane fluctuations the positions that are favorable for the diffusing particle are constantly changing. Thus the particle is subject to changing forces leading to enhanced movement of the particle, which in turn leads to a higher diffusion coefficient. The resulting enhanced diffusion coefficient is caused by forces, which are still thermal. Therefore, the system is still in equilibrium and the fluctuation-dissipation-theorem is applicable, such that an increased effective mobility or a reduced effective friction of the particle can be determined.
On the quantitative side, the analysis of the simulation data reveals a diffusion coefficient that is about smaller than we expect from the analytical calculations. In these calculations several approximations that we have explained in sec. 3 are applied to calculate the mean square displacement (21). The dominant contribution to is with the force , see eqs. (19), (21). Hence we investigate this force correlation function.
First we consider the averaged quadratic force for along the trajectory of the inclusion to see whether differences in the strength of the correlations occur. Then we will regard the dependence of the correlations on the time interval . To examine the strength of the correlations for the quadratic force acting on the inclusion is determined at each time step of the simulation and then averaged over all values. For several sets of parameters we receive the values that are plotted in fig. 6 as a function of for a constant and in fig. 7 as a function of for a constant . The values resulting from the analytical calculations are also plotted in these figures. We observe a difference between analytical and simulation results that is on the same order of magnitude as the difference in the diffusion coefficients. In the analytical calculations we assume that the probability of finding the inclusion at a particular position is the same for any point of the membrane. If we calculate, using the simulation data, the mean square of the force the inclusion would be exposed to if it were fixed at some arbitrary position on the membrane we obtain values very close to the mean squared force resulting from the analytical calculations. These values are also plotted in figs. 6 and 7. The differences in fig. 6 between the analytical calculations and the average for a fixed point are caused by the time step of the simulations that induces bigger numerical errors for higher values of as previously explained in section 5. Overall, averaging over the whole lattice in the calculations leads to seemingly higher forces than along the actual particle trajectory. A possible explanation for this reduced force is that most of the time the inclusion is close to a local minimum of the free energy. The extrema of the energy are created by the membrane shape, which is constantly changing due to thermal fluctuations. Therefore, the positions of the extrema will also move along the membrane. As the negative gradient of the energy is always pointing towards the nearest local minimum the inclusion will predominantly move in the direction of the nearest local minimum. For a fast enough diffusion rate the inclusion is capable of following a local minimum.
Due to the fact that for each simulation run a new thermally equilibrated membrane is used and the particle is always placed in the center of the membrane, it is very unlikely that the inclusion is initially close to an energy minimum. Therefore, the inclusion is exposed to higher forces at the beginning than at later times. Since higher forces go along with a higher diffusion rate we expect to observe two diffusion coefficients from the simulation data: a smaller one for late and a larger one for early times. Indeed such a behavior occurs as one can see in fig. 3 where the linear fits to the mean square displacement for early and late times are plotted. In this example the crossover is at about ms. Comparing the resulting diffusion coefficients with the analytical values, the one determined at the beginning of the simulation agrees reasonably well with the analytically determined diffusion coefficient. The inclusion starts with a higher diffusion rate and then, after a variable time period, finds a local minimum, which it tries to follow.
Now that we have found that the force acting on the inclusion is reduced in the simulations we consider the time correlations of the force . We expect to see a reduction of the correlations caused by the inclusion following an energy minimum but in addition the time dependency will be influenced by the movement of the particle position . The obtained correlation function along the trajectory is plotted in fig. 8 together with the analytical result for and vanishing tension. For a better representation the functions are normalised to one and plotted for small in the inset of the figure. It is obvious that in addition to the altered start values the time dependence is also different. The decay of the correlations along the trajectory is slower than for the analytical curve. To demonstrate that this altered decay is caused by the motion of the inclusion we choose five fixed points of the lattice and determine, from the simulation data, the time correlation function of the force that would act on the inclusion if it were fixed at these points. The average over these points is also plotted in fig. 8 and agrees well with the analytical curve. The altered decay along the trajectory is an effect caused by the motion of the particle and indicates that the force correlations are stronger along the trajectory. This fact does not only lead to a slightly higher diffusion rate but also shows that the forces for a series of time steps point in similar directions which corroborates our interpretation that the inclusion tries to follow a local minimum. Although a larger decay time of correlations leads to enhanced diffusion the observed diffusion rate is smaller than the analytically calculated value since the effect that forces close to energy minima are reduced dominates.
## 8 Conclusions
In this paper, we have derived a model for the interaction of an inclusion with a model membrane and have investigated the influence of this interaction on the diffusion of the inclusion. In the model the inclusion is a physical object with an area, a spontaneous curvature and a bending rigidity. In analogy to Helfrich’s free energy a new free energy is derived and with this coupled equations of motion for the inclusion and the membrane dynamics. Using these stochastic equations and employing several approximations we calculate the curvature-coupled diffusion coefficient . To assess the quality of the approximations in this analytical calculation that assumes a weak perturbation of the membrane we set up simulations. These simulations that numerically integrate the coupled equations of motion for the membrane and the inclusion are also based on the unperturbed membrane equation.
Both, the simulations and our analytical approach, clearly display that the additional force on the inclusion caused by the interaction between particle and membrane leads to a significant increase in the diffusion coefficient compared to bare intramembrane diffusion. However, comparing the analytical calculations with the simulation results one finds that the curvature-coupled diffusion coefficient in the simulations is about smaller than analytically expected, but follows the behavior qualitatively. A closer look at the forces on the inclusion shows that the averaged forces along the trajectory of the inclusion are smaller than the averaged notional forces acting at any arbitrary fixed point of the lattice. The average over the latter forces, however, has a good agreement with the values from the analytical calculations. Since smaller forces correspond to local extrema of the free energy, and the gradient, i.e. the force, always points to the nearest local minimum it is likely that the inclusion is in the vicinity of such a local minimum of the energy most of the time. If the mobility of the inclusion is big enough the inclusion is able to follow such a local minimum. Another point for this interpretation is that we place the inclusion in a new thermally equilibrated membrane for each simulation run. Hence, the inclusion is not necessarily close to a local minimum at the beginning and the diffusion rate should be faster for early times of the simulations. Considering the simulation data we find indeed that the diffusion for early times is about the same as the diffusion rate expected from the analytical calculation. After a short time the diffusion rate decreases and then remains constant. This corroborates the assumption that the inclusion meets a local minimum after some time and then tries to follow it.
Our study leads to the conclusion that the analytical calculations provide qualitative values for the curvature-coupled diffusion coefficient for a given set of parameters. For a quantitative value of the curvature-coupled diffusion coefficient simulations are necessary that take the effects of the movement of the inclusion into account.
In this paper we use the unperturbed membrane equation of motion. In ongoing work we are investigating the influence of the inclusion on the membrane dynamics and the diffusion by use of simulations that incorporate the perturbed membrane equation of motion. Then, the particle does not only adapt to the membrane but the membrane also adjusts to the particle. This will possibly lead to a further reduction of the diffusion coefficient compared to the analytical calculation. By comparing these results with those of the present paper it will be possible to determine the parameter range in which the perturbation may be neglected. We intend to also study the diffusion of several inclusions in a membrane in order to investigate possible cluster formation induced by membrane mediated interactions between the inclusions. Aside from this one may also be interested in other forms of interactions between the membrane and the inclusion or additional interactions between the inclusions.
The investigation of several possible interactions and the resulting effects on the diffusion coefficient of inclusions as a function of membrane parameters may help to understand experimental data better and should finally lead to a deeper insight of the diffusion processes in biological membranes. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9327751994132996, "perplexity": 351.1870604392405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531702.36/warc/CC-MAIN-20210123001629-20210123031629-00068.warc.gz"} |
http://www.mathworks.com/examples/statistics/mw/stats-ex64248497-find-nearest-neighbors-using-a-custom-distance-metric | MATLAB Examples
# Find Nearest Neighbors Using a Custom Distance Metric
This example shows how to find the indices of the three nearest observations in X to each observation in Y with respect to the chi-square distance. This distance metric is used in correspondence analysis, particularly in ecological applications.
Randomly generate normally distributed data into two matrices. The number of rows can vary, but the number of columns must be equal. This example uses 2-D data for plotting.
rng(1); % For reproducibility
X = randn(50,2);
Y = randn(4,2);
h = zeros(3,1);
figure;
h(1) = plot(X(:,1),X(:,2),'bx');
hold on;
h(2) = plot(Y(:,1),Y(:,2),'rs','MarkerSize',10);
title('Heterogenous Data')
The rows of X and Y correspond to observations, and the columns are, in general, dimensions (for example, predictors).
The chi-square distance between $J$-dimensional points $x$ and $z$ is
$$d(x,z)=\sqrt{\sum_{j=1}^{J}w_j\,(x_j-z_j)^2},$$
where $w_j$ is the weight associated with dimension $j$.
Choose weights for each dimension, and specify the chi-square distance function. The distance function must:
• Take as input arguments one row of X, e.g., x, and the matrix Z.
• Compare x to each row of Z.
• Return a vector D of length n, where n is the number of rows of Z. Each element of D is the distance between the observation corresponding to x and the observations corresponding to each row of Z.
w = [0.4; 0.6]; chiSqrDist = @(x,Z)sqrt((bsxfun(@minus,x,Z).^2)*w);
This example uses arbitrary weights for illustration.
Find the indices of the three nearest observations in X to each observation in Y.
k = 3; [Idx,D] = knnsearch(X,Y,'Distance',chiSqrDist,'k',k);
Idx and D are 4-by-3 matrices.
• Idx(j,1) is the row index of the closest observation in X to observation j of Y, and D(j,1) is their distance.
• Idx(j,2) is the row index of the next closest observation in X to observation j of Y, and D(j,2) is their distance.
• And so on.
Identify the nearest observations in the plot.
for j = 1:k;
    h(3) = plot(X(Idx(:,j),1),X(Idx(:,j),2),'ko','MarkerSize',10);
end
legend(h,{'\texttt{X}','\texttt{Y}','Nearest Neighbor'},'Interpreter','latex');
title('Heterogenous Data and Nearest Neighbors')
hold off;
Several observations of Y share nearest neighbors.
Verify that the chi-square distance metric is equivalent to the Euclidean distance metric, but with an optional scaling parameter.
[IdxE,DE] = knnsearch(X,Y,'Distance','seuclidean','k',k,...
    'Scale',1./(sqrt(w)));
AreDiffIdx = sum(sum(Idx ~= IdxE))
AreDiffDist = sum(sum(abs(D - DE) > eps))
AreDiffIdx = 0
AreDiffDist = 0
The indices and distances between the two implementations of three nearest neighbors are practically equivalent. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9515376687049866, "perplexity": 2213.405288509016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806832.87/warc/CC-MAIN-20171123123458-20171123143458-00307.warc.gz"} |
http://mathhelpforum.com/math-topics/113492-physics-11-questions-halfway-solved.html | # Thread: Physics 11 Questions - Halfway Solved
1. ## Physics 11 Questions - Halfway Solved
The Latex in the Physics forums are a bit messed, so I'm crossing my fingers and hoping this works.
In a brake test on dry asphalt, a Chevvy Camaro, travelling with an initial speed of 26.8 m/s, can stop without skidding after moving 39.3m. The mass of the Camaro, including the driver, is 1580 kg.
a. Determine the magnitudde of the average acceleration of the car during the non-skidding braking.
b. Calculate the magnitude of the average stopping friction force.
c. Assume that the test is now done with skidding on dry asphalt. Determine the magnitude of the kinetic friction, the magnitude of the average acceleration, and stopping distance during the skid. Compare this situation with the non-skid test.
d. Repeat (c) for the car skidding on ice.
e. In the skidding tests, does the mass of the car have an effect on the average acceleration? Explain, using examples.
a.
${v_f}^2={v_i}^2+2a\Delta{d}$
$0=(26.8m/s)^2+2(a)(39.3m)$
$0=718.24+78.6a$
$a=9.14m/s^2$
b.
Using Newton's law...
$F=ma$
$1580kg*9.14m/s^2=14441.2\approx1.44*10^4N$
c.
Magnitude of kinetic friction...
$\mu{K}$ of rubber on dry asphalt is 1.07.
$1.07*1580kg*9.8N/kg\approx1.66*10^4N$
What is the magnitude of average acceleration? I cannot seem to derive a formula for it...and I seem to be stuck with the distance.
d.
Magnitude of kinetic friction...
$\mu{K}$ of rubber on ice is 0.005.
$0.005*1580kg*9.8N/kg\approx8.0*10^1N$
I'm having the same problem as the previous for magnitude of average acceleration and stopping distance...
e.
I'm pretty positive it does, but I can't be sure--I have yet to find the average acceleration.
Any help is GREATLY appreciated! Thanks!
-Nate
2. Originally Posted by nathan02079
The Latex in the Physics forums are a bit messed, so I'm crossing my fingers and hoping this works.
a.
${v_f}^2={v_i}^2+2a\Delta{d}$
$0=(26.8m/s)^2+2(a)(39.3m)$
$0=718.24+78.6a$
$a=9.14m/s^2$
ok
b.
Using Newton's law...
$F=ma$
$1580kg*9.14m/s^2=14441.2\approx1.44*10^4N$
ok
c.
Magnitude of kinetic friction...
$\mu{K}$ of rubber on dry asphalt is 1.07.
$1.07*1580kg*9.8N/kg\approx1.66*10^4N$
What is the magnitude of average acceleration? I cannot seem to derive a formula for it...and I seem to be stuck with the distance.
$\textcolor{red}{ma = f_k}$
$\textcolor{red}{a = \frac{\mu mg}{m}}$
$\textcolor{red}{a = \mu \cdot g}$
d.
Magnitude of kinetic friction...
$\mu{K}$ of rubber on ice is 0.005.
$0.005*1580kg*9.8N/kg\approx8.0*10^1N$
I'm having the same problem as the previous for magnitude of average acceleration and stopping distance...
e.
I'm pretty positive it does, but I can't be sure--I have yet to find the average acceleration.
Any help is GREATLY appreciated! Thanks!
-Nate
...
3. Thank you! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9503033757209778, "perplexity": 1703.9272175384108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721268.95/warc/CC-MAIN-20161020183841-00498-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://quant.stackexchange.com/questions/8594/derivation-of-the-tangency-maximum-sharpe-ratio-portfolio-in-markowitz-portfol/8617 | # Derivation of the tangency (maximum Sharpe Ratio) portfolio in Markowitz Portfolio Theory?
I have seen the following formula for the tangency portfolio in Markowitz portfolio theory but couldn't find a reference for derivation, and failed to derive myself. If expected excess returns of $N$ securities is the vector $\mu$ and the covariance of returns is $\Sigma$, then the tangent portfolio (maximum Sharpe Ratio portfolio) is:
$$w^* = (\iota \Sigma^{-1} \mu)^{-1} \Sigma^{-1} \mu$$
Where $\iota$ is a vector of ones. Anyone know a source of the derivation?
• Hi, would you also elaborate a bit on why such a portfolio is called max Sharpe portfolio? Does it maxmise $w^T r / \sqrt{w^T\Sigma w}$? – Vim Feb 12 at 10:01
The unconstrained mean-variance problem $$w_{mv,unc}\equiv argmax\left\{ w'\mu-\frac{1}{2}\lambda w'\Sigma w\right\}$$ can easily be found by taking the derivative $$\frac{\partial}{\partial w}\left(w'\mu-\frac{1}{2}\lambda w'\Sigma w\right)=\mu-\lambda\Sigma w$$ setting it to zero, and solving for $w$. This gives $$w_{mv,unc}\equiv\frac{1}{\lambda}\Sigma^{-1}\mu$$ To find the portfolio constraining all the weights to sum to $1$, it is as simple as dividing by the sum of the portfolio weights $$w_{mv,c}\equiv\frac{w_{mv,unc}}{1'w_{mv,unc}}=\frac{\Sigma^{-1}\mu}{1'\Sigma^{-1}\mu}$$which after canceling out the risk aversion variables gives what you have above.
For more general constraints, such that $Aw=b$, the formula is more complex. I often refer to the derivation in this paper for the formula.
• Thank you so very much. I never thought it would be so simple. I think everyone is familiar with the unconstrained optimal portfolio, but for some reason I never understood how to put the constraint in. Thanks again! – Slow Learner Aug 3 '13 at 4:45
• I know it's late, but why is the tangency optimization problem $argmax\{w'\mu - \frac{ \lambda w' \Sigma w}{2} \}$ instead of $argmax \frac{w'\mu}{\sqrt{w'\Sigma w}}$? We are trying to find the portfolio on the efficient frontier that maximizes the sharpe ratio, the ratio of return to standard deviation, are we not? – Marie. P. Jan 29 '17 at 17:25
• @Marie.P. If you want to maximize the Sharpe ratio, then that's generally the formula you would use. It's more difficult than standard mean variance. Under some assumptions, the optimal mean variance portfolio fully invested will equal the maximum Sharpe ratio portfolio. I just wanted to give a simple derivation of the formula the OP was asking about. I'm sure it would be useful to post other derivations here, if you want to add another. – John Jan 30 '17 at 17:32
Check out following link. In page 23 you'll find the derivation. http://faculty.washington.edu/ezivot/econ424/portfolioTheoryMatrix.pdf
• It is advisable that you also quote the relevant part instead of simply referring to an external link. External references are not permanent and have a tendency to become unreachable as time passes. – Karol J. Piczak Feb 10 '14 at 22:24
Merton, Robert, 1972, An Analytic Derivation of the Efficient Portfolio Frontier, Journal of Financial and Quantitative Analysis
• This is not a proper answer. Please make it complete. – SRKX Aug 1 '13 at 13:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9079762101173401, "perplexity": 433.94824728014777}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000610.35/warc/CC-MAIN-20190627015143-20190627041143-00163.warc.gz"} |
https://palatine.org.uk/expectation-and-variance/ | After a couple of successful bets, many beginners and hobby players begin to misjudge their own strengths, considering themselves to be betting professionals.
#### Contents
An indicator such as the mathematical expectation helps to assess success in the long term.
Under the mat. Expectation in betting is understood to be such a value that the bettor can win or lose when regularly making a bet with the same odds. There is a formula for calculating the indicator:
M = (Probability of winning) x (amount of potential winnings for the current bet) - (probability of losing) x (amount of potential loss for the current bet).
When the mathematical expectation is greater than zero, the player remains in positive territory. If M <0, then at a loss.
Determining the variance is especially important for bettors using the catch-up strategy and its derivatives.
Dispersion - deviation of game results from average indicators, mathematical expectation at a distance. It characterizes the uneven distribution of negative or positive totals in the rates. As stated in 22 bet app article, it is considered as follows:
D = (1 - 1 / K) ^ S, where
S - the number of negative (positive) outcomes in a row
K is a factor.
## Completion
Of course, a win-win mathematical strategy for 22 bet download has not yet been created, but without a competent checkmate. analysis it is impossible to beat the bookmaker at a distance. It is the mathematical approach to betting that significantly increases the bettor's chances in this confrontation, helping to choose underestimated bookmaker outcomes, competently manage bankroll, reduce risks and simply exclude emotions.
Read more in Wikipedia: Probability Theory | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8055042624473572, "perplexity": 2028.1678036045857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573908.30/warc/CC-MAIN-20220820043108-20220820073108-00596.warc.gz"} |
http://www.cs.ubc.ca/cgi-bin/tr/2001/full | Technical Reports
2001 UBC CS Technical Report Abstracts
TR-2001-01 Quantum Signal Propagation in Depolarizing Channels, March 28, 2001 Nicholas Pippenger, 7 pages
Quantum Signal Propagation in Depolarizing Channels Nicholas Pippenger Abstract: Let X be an unbiassed random bit, let Y be a qubit whose mixed state depends on X, and let the qubit Z be the result of passing Y through a depolarizing channel, which replaces Y with a completely random qubit with probability p. We measure the quantum mutual information between X and Y by T(X; Y) = S(X) + S(Y) - S(X,Y), where S(...) denotes von Neumann's entropy. (Since X is a classical bit, the quantity T(X; Y) agrees with Holevo's bound chi(X; Y) to the classical mutual information between X and the outcome of any measurement of Y.) We show that T(X;Z) <= (1-p)^2 T(X;Y). This generalizes an analogous bound for classical mutual information due to Evans and Schulman, and provides a new proof of their result.
TR-2001-02 Analysis of Carry Propagation in Addition: An Elementary Approach, March 28, 2001 Nicholas Pippenger, 23 pages
Analysis of Carry Propagation in Addition: An Elementary Approach Nicholas Pippenger Abstract: Our goal in this paper is to analyze carry propagation in addition using only elementary methods (that is, those not involving residues, contour integration, or methods of complex analysis). Our results concern the length of the longest carry chain when two independent uniformly distributed n-bit numbers are added. First, we show using just first- and second-moment arguments that the expected length C_n of the longest carry chain satisfies C_n = log_2 n + O(1). Second, we use a sieve (inclusion-exclusion) argument to give an exact formula for C_n. Third, we give an elementary derivation of an asymptotic formula due to Knuth, C_n = log_2 n + Phi(log_2 n) + O((log n)^4 / n), where Phi(x) is a bounded periodic function of x, with period 1, for which we give both a simple integral expression and a Fourier series. Fourth, we give an analogous asymptotic formula for the variance V_n of the length of the longest carry chain: V_n = Psi(log_2 n) + O((log n)^5 / n), where Psi(x) is another bounded periodic function of x, with period 1. Our approach can be adapted to addition with the "end-around" carry that occurs in the sign-magnitude and 1s-complement representations. Finally, our approach can be adapted to give elementary derivations of some asymptotic formulas arising in connection with radix-exchange sorting and collision-resolution algorithms, which have previously been derived using contour integration and residues.
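(Aside, not part of the original abstract: a quick Monte Carlo sketch of the quantity being analysed. The code and parameters are my own, and the exact convention for measuring the length of a carry chain only shifts the O(1) term.)

```python
import random
from math import log2

def longest_carry_chain(a, b, n):
    """Longest run of consecutive bit positions producing a carry when adding a + b."""
    carry, run, best = 0, 0, 0
    for i in range(n):
        bit_sum = ((a >> i) & 1) + ((b >> i) & 1) + carry
        carry = bit_sum >> 1
        run = run + 1 if carry else 0
        best = max(best, run)
    return best

random.seed(0)
n, trials = 256, 1000
avg = sum(longest_carry_chain(random.getrandbits(n), random.getrandbits(n), n)
          for _ in range(trials)) / trials
print(avg, log2(n))   # the empirical mean sits within a small constant of log2(n)
```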
TR-2001-03 Proving Sequential Consistency by Model Checking, May 17, 2001 Tim Braun, Anne Condon, Alan J. Hu, Kai S. Juse, Marius Laza, Michael Leslie and Rita Sharma, 23 pages
Sequential consistency is a multiprocessor memory model of both practical and theoretical importance. The general problem of deciding whether a finite-state protocol implements sequential consistency is undecidable. In this paper, however, we show that for the protocols that arise in practice, proving sequential consistency can be done automatically in theory and can be reduced to regular language inclusion via a small amount of manual effort. In particular, we introduce an approach to construct finite-state ``observers'' that guarantee that a protocol is sequentially consistent. We have developed possible observers for several cache coherence protocols and present our experimental model checking results on a substantial directory-based cache coherence protocol. From a theoretical perspective, our work characterizes a class of protocols, which we believe encompasses all real protocols, for which sequential consistency can be decided. From a practical perspective, we are presenting a methodology for designing memory protocols such that sequential consistency may be proven automatically via model checking.
TR-2001-05 Separating Crosscutting Concerns Across the Lifecycle:, August 08, 2001 Siobhan Clarke and Robert J. Walker, 13 pages
From Composition Patterns to AspectJ and Hyper/J. Crosscutting requirements (such as distribution or persistence) present many problems for software development that manifest themselves throughout the lifecycle. Inherent properties of crosscutting requirements, such as scattering (where their support is scattered across multiple classes) and tangling (where their support is tangled with elements supporting other requirements), reduce the reusability, extensibility, and traceability of the affected software artefacts. Scattering and tangling exist both in designs and code and must therefore be addressed in both. To remove scattering and tangling properties, a means to separate the designs and code of crosscutting behaviour into independent models or programs is required. This paper discusses approaches that achieve exactly that in either designs or code, and presents an investigation into a means to maintain this separation of crosscutting behaviour seamlessly across the lifecycle. To achieve this, we work with composition patterns at the design level, AspectJ and Hyper/J at the code level, and investigate a mapping between the two levels. Composition patterns are a means to separate the design of crosscutting requirements in an encapsulated, independent, reusable, and extensible way. AspectJ and Hyper/J are technologies that provide similar levels of separation for Java code. We discuss each approach, and map the constructs from composition patterns to those of AspectJ and Hyper/J. We first illustrate composition patterns with the design of the Observer pattern, and then map that design to the appropriate code. As this is achieved with varying levels of success, the exercise also serves as a case study in using those implementation techniques.
TR-2001-06 Aspect-Oriented Incremental Customization of Middleware Services, May 28, 2001 Alex Brodsky, Dima Brodsky, Ida Chan, Yvonne Coady, Jody Pomkoski and Gregor Kiczales, 12 pages
As distributed applications evolve, incremental customization of middleware services is often required; these customizations should be unpluggable, modular, and efficient. This is difficult to achieve because the customizations depend on both application-specific needs and the services provided. Although middleware allows programmers to separate application-specific functionality from lower-level details, traditional methods of customization do not allow efficient modularization. Currently, making even minor changes to customize middleware is complicated by the lack of locality. Programmers may have to compromise between the two extremes: to interpose a simple, well-localized layer of functionality between the application and middleware, or to make a large number of small, poorly localized, invasive changes to all execution points which interact with middleware services. Although the invasive approach allows a more efficient customization, it is harder to ensure consistency, more tedious to implement, and exceedingly difficult to unplug. Thus, a common approach is to add an extra layer for systemic concerns such as robustness, caching, filtering, and security. Aspect-oriented programming (AOP) offers a potential alternative between the interposition and invasive approaches by providing modular support for the implementation of crosscutting concerns. AOP enables the implementation of efficient customizations in a structured and unpluggable manner. We demonstrate this approach by comparing traditional and AOP customizations of fault tolerance in a distributed file system model, JNFS. Our results show that using AOP can reduce the amount of invasive code to almost zero, improve efficiency by leveraging the existing application behaviour, and facilitate incremental customization and extension of middleware services.
TR-2001-07 Using Versioning to Simplify the Implementation of a Highly-Available File System, January 23, 2001 Dima Brodsky, Jody Pomkoski, Mike Feely, Norm Hutchinson and Alex Brodsky, 5 pages
(Abstract not available on-line)
TR-2001-08 Image-Based Measurement of Light Sources With Correct Filtering, July 30, 2002 Wolfgang Heidrich and Michael Goesele, 9 pages
In this document we explore the theory and potential experimental setups for measuring the near field of a complex luminary. This work extends on near field photometry by taking filtering issues into account. The physical measurement setups described here have not been tested at the time of writing this document, we simply describe several possibilities here. Once actual tests have been performed, the results will be published elsewhere.
TR-2001-09 Constraint-Based Agents: A Formal Model for Agent Design, May 25, 2001 Alan K. Mackworth and Ying Zhang, 20 pages
Formal models for agent design are important for both practical and theoretical reasons. The Constraint-Based Agent (CBA) model includes a set of tools and methods for specifying, designing, simulating, building, verifying, optimizing, learning and debugging controllers for agents embedded in an active environment. The agent and the environment are modelled symmetrically as, possibly hybrid, dynamical systems in Constraint Nets. This paper is an integrated presentation of the development and application of the CBA framework, emphasizing the important special case where the agent is an online constraint-satisfying device. Using formal modeling and specification, it is often possible to verify complex agents as obeying real-time temporal constraint specifications and, sometimes, to synthesize controllers automatically. In this paper, we take an engineering point of view, using requirements specification and system verification as measurement tools for intelligent systems. Since most intelligent systems are real-time dynamic systems, the requirements specification must be able to represent timed properties. We have developed timed ∀-automata for this purpose. We present this formal specification, examples of specifying requirements and a general procedure for verification. The CBA model demonstrates the power of viewing constraint programming as the creation of online constraint-solvers for dynamic constraints.
TR-2001-10 The Shortest Disjunctive Normal Form of a Random Boolean Function, June 08, 2001 Nicholas Pippenger, 28 pages
The Shortest Disjunctive Normal Form of a Random Boolean Function Nicholas Pippenger This paper gives a new upper bound for the average length l(n) of the shortest disjunctive normal form for a random Boolean function of n arguments, as well as new proofs of two old results related to this quantity. We consider a random Boolean function of n arguments to be uniformly distributed over all 2^{2^n} such functions. (This is equivalent to considering each entry in the truth-table to be 0 or 1 independently and with equal probabilities.) We measure the length of a disjunctive normal form by the number of terms. (Measuring it by the number of literals would simply introduce a factor of n into all our asymptotic results.) We give a short proof using martingales of Nigmatullin's result that almost all Boolean functions have the length of their shortest disjunctive normal form asymptotic to the average length l(n). We also give a short information-theoretic proof of Kuznetsov's lower bound l(n) >= (1+o(1)) 2^n / log n log log n. (Here log denotes the logarithm to base 2.) Our main result is a new upper bound l(n) <= (1+o(1)) H(n) 2^n / log n log log n, where H(n) is a function that oscillates between 1.38826... and 1.54169.... The best previous upper bound, due to Korshunov, had a similar form, but with a function oscillating between 1.581411... and 2.621132.... The main ideas in our new bound are (1) the use of Rödl's "nibble" technique for solving packing and covering problems, (2) the use of correlation inequalities due to Harris and Janson to bound the effects of weakly dependent random variables, and (3) the solution of an optimization problem that determines the sizes of "nibbles" and larger "bites" to be taken at various stages of the construction.
TR-2001-11 Characterizations of Random Set-Walks, August 1, 2001 Joseph H. T. Wong, 58 pages
In this thesis, we introduce a new class of set-valued random processes called random set-walk, which is an extension of the classical random walk that takes into account both the nonhomogeneity of the walk's environment, and the additional factor of nondeterminism in the choices of such environments. We also lay down the basic framework for studying random set-walks. We define the notion of a characteristic tuple as a 4-tuple of first-exit probabilities which characterizes the behaviour of a random walk in a nonhomogeneous environment, and a characteristic tuple set as its analogue for a random set-walk. We prove several properties of random set-walks and characteristic tuples, from which we derive our main result: the long-run behaviour of a sequence of random set-walks, relative to the endpoints of the walks, converges as the length of the walks tend to infinity.
TR-2001-12 Enumeration of Matchings in the Incidence Graphs of Complete and Complete Bipartite Graphs, September 10, 2001 Nicholas Pippenger, 23 pages
Enumeration of Matchings in the Incidence Graphs of Complete and Complete Bipartite Graphs Nicholas Pippenger If G = (V, E) is a graph, the incidence graph I(G) is the graph with vertices the union of V and E and an edge joining v in V and e in E when and only when v is incident with e in G. For G equal to K_n (the complete graph on n vertices) or K_{n,n} (the complete bipartite graph on n + n vertices), we enumerate the matchings (sets of edges, no two having a vertex in common) in I(G), both exactly (in terms of generating functions) and asymptotically. We also enumerate the equivalence classes of matchings (where two matchings are considered equivalent if there is an automorphism of G that induces an automorphism of I(G) that takes one to the other).
TR-2001-13 Concern Graphs: Finding and Describing Concerns Using Structural Program Dependencies, September 10, 2001 Martin P. Robillard and Gail C. Murphy, 11 pages
Many maintenance tasks address concerns, or features, that are not well modularized in the source code comprising a system. Existing approaches available to help software developers locate and manage scattered concerns use a representation based on lines of source code, complicating the analysis of the concerns. In this paper, we introduce the Concern Graph representation that abstracts the implementation details of a concern and makes explicit the relationships between different parts of the concern. The abstraction used in a Concern Graph has been designed to allow an obvious and inexpensive mapping back to the corresponding source code. To investigate the practical tradeoffs related to this approach, we have built the Feature Exploration and Analysis tool (FEAT) that allows a developer to manipulate a concern representation extracted from a Java system, and to analyze the relationships of that concern to the code base. We have used this tool to find and describe concerns related to software change tasks. We have performed case studies to evaluate the feasibility, usability, and scalability of the approach. Our results indicate that Concern Graphs can be used to document a concern for change, that developers unfamiliar with Concern Graphs can use them effectively, and that the underlying technology scales to industrial-sized programs.
TR-2001-14 Loosely Coupled Optimistic Replication for Highly Available, Scalable Storage, September 13, 2001 Dima Brodsky, Jody Pomkoski, Michael J. Feeley, Norman Hutchinson and Alex Brodsky, 12 pages
People are becoming increasingly reliant on computing devices and are trusting increasingly important data to persistent storage. These systems should protect this data from failure and ensure that it is available anytime, from anywhere. Unfortunately, traditional mechanisms for ensuring high availability suffer from the complexity of maintaining consistent, distributed replicas of data. This paper describes Mammoth, a novel file system that uses a loosely-connected set of nodes to replicate data and maintain consistency. The key idea of Mammoth is that files and directories are stored as histories of immutable versions and that all meta-data is stored in append-only change logs. Users specify availability policies for their files and the system uses these policies to replicate certain, but not necessarily all, versions to remote nodes to protect them from a variety of failures. Because file data is immutable, it can be freely replicated without complicating the file's consistency. File and directory meta-data is replicated using an optimistic policy that allows partitioned nodes to read and write whatever file versions are currently accessible. When network partitions heal, inconsistent meta-data is reconciled by merging the meta-data updates made in each partition; conflicting updates manifest as branches in the file's or directory's history and can thus be further resolved by higher-level software or users. We describe our design and the implementation and performance of an early prototype.
TR-2001-15 Bayesian Latent Semantic Analysis of Multimedia Databases, October 11, 2001 Nando de Freitas and Kobus Barnard, 35 pages
We present a Bayesian mixture model for probabilistic latent semantic analysis of documents with images and text. The Bayesian perspective allows us to perform automatic regularisation to obtain sparser and more coherent clustering models. It also enables us to encode a priori knowledge, such as word and image preferences. The learnt model can be used for browsing digital databases, information retrieval with image and/or text queries, image annotation (adding words to an image) and text illustration (adding images to a text).
TR-2001-17 Clustering Facial Displays in Context, November 13, 2001 Jesse Hoey, 16 pages
A computer user's facial displays will be context dependent, especially in the presence of an embodied agent. Furthermore, each interactant will use their face in different ways, for different purposes. These two hypotheses motivate a method for clustering patterns of motion in the human face. Facial motion is described using optical flow over the entire face, projected to the complete orthogonal basis of Zernike polynomials. A context-dependent mixture of hidden Markov models (cmHMM) clusters the resulting temporal sequences of feature vectors into facial display classes. We apply the clustering technique to sequences of continuous video, in which a single face is tracked and spatially segmented. We discuss the classes of patterns uncovered for a number of subjects.
TR-2001-18 The Optimized Segment Support Map for the Mining of Frequent Patterns, November 15, 2001 Carson Kai-Sang Leung, Raymond T. Ng and Heikki Mannila, 25 pages
Computing the frequency of a pattern is a key operation in data mining algorithms. We describe a simple, yet powerful, way of speeding up any form of frequency counting satisfying the monotonicity condition. Our method, the optimized segment support map (OSSM), is based on a simple observation about data: Real life data sets are not necessarily uniformly distributed. The OSSM is a light-weight structure that partitions the collection of transactions into m segments, so as to reduce the number of candidate patterns that require frequency counting. We study the following problems: (i) What is the optimal value of m, the number of segments to be used (the segment minimization problem)? (ii) Given a user-determined m, what is the best segmentation/composition of the m segments (the constrained segmentation problem)? For the segment minimization problem, we provide a thorough analysis and a theorem establishing the minimum value of m for which there is no accuracy lost in using the OSSM. For the constrained segmentation problem, we develop various algorithms and heuristics to help facilitate segmentation. Our experimental results on both real and synthetic data sets show that our segmentation algorithms and heuristics can efficiently generate OSSMs that are compact and effective.
TR-2001-19 Animation of Fish Swimming, January 30, 2002 William F. Gates, 8 pages
We present a simple, two-part model of the locomotion of slender-bodied aquatic animals designed specifically for the needs of computer animation. The first part of the model is kinematic and addresses body deformations for three swimming modes: steady swimming, rapid starting, and turning. The second part of the model is dynamic and addresses the resulting propulsion of the aquatic animal. While this approach is not as general as a fully dynamic model, it provides the animator with a small set of intuitive parameters that directly control how the fish model moves and is more efficient to simulate.
TR-2001-20 Free-Surface Conditions in the Realistic Animation of Liquids, January 30, 2002 William F. Gates, 13 pages
The realistic animation of liquids based on the dynamic simulation of free-surface flow requires appropriate conditions on the liquid-gas interface. These conditions can be painstaking to implement and are in general not unique. We present the conditions we use in our implementation of a fluid animation system and discuss our rationale behind them.
TR-2001-22 Controlling Fluid Flow Simulation, January 30, 2002 William F. Gates and Alain Fournier, 3 pages
Simulating fluid dynamics can be a powerful approach to animating liquids and gases, but it is often difficult to ``direct'' the simulation to ``perform'' as desired. We introduce a simple yet powerful technique of controlling incompressible flow simulation for computer animation purposes that works for any simulation method using a projection scheme for numerically solving the Navier-Stokes equations. In our technique, an abstract vector field representing the desired influence over the simulated flow is modelled using simple primitives. This technique allows an arbitrary degree of control over the simulated flow at every point while still conserving mass, momentum, and energy.
https://mathsci.kaist.ac.kr/~sangil/seminar/20090327/ | ## Jack Koolen, Some topics in spectral graph theory
Some topics in spectral graph theory
Jack Koolen
Department of Mathematics, POSTECH, Pohang, Korea
2009/03/27 Fri 5PM-6PM (Room 2411)
In spectral graph theory one studies the eigenvalues (and spectrum) of the adjacency matrix and how they are related with combinatorial properties of the underlying graph.
In this talk, I will discuss several topics in spectral graph theory.
https://www.physicsforums.com/threads/time-period-of-shm.540090/ | # Homework Help: Time Period of SHM
1. Oct 13, 2011
### Asphyx820
1. The problem statement, all variables and given/known data
Two springs are present (one just in front of the other). The spring towards the left has a +Q charge and the one towards the right a -Q charge (at their ends). The distance between the two charges is d. The springs are of length l. Find the time period of the simple harmonic motion if the charges are of the same mass. (l > d)
Diagram
(Wall)-->(Spring)-->+Q -Q<--(Spring)<--(Wall)
2. Relevant equations
F(elec) = (k Q^2) / (d^2), where k = 1/(4πε0)
F(Spring)=( Kl )
3. The attempt at a solution
I know the above two equations, but can't proceed. Is there any other force too? I can't figure out why the charges will move back again. I'm having two confusions:
1) The charges are opposite so they will attract each other. When they reach a certain point they will collide (as l > d) and move back. Is this the reason why they move back? What other equation do I have to use?
2) Is it the spring force that pulls the charges back before they collide? But that shouldn't be true, as (l > d) and electrostatic forces are very strong, so the spring force cannot overcome them. Am I right? So how should I proceed?
Pls help me....
2. Oct 14, 2011
### ehild
You can assume that the system is in equilibrium initially, and then you give a little push to the masses. For a simple harmonic motion, the displacement of the charged masses from their equilibrium positions must be small with respect to the distance between them. Find the time period of small oscillations with this assumption. Do not forget that the springs are connected to two opposite walls, so the sum of the spring lengths and the distance between the masses is constant.
ehild
3. Oct 15, 2011
### Asphyx820
But what about the electrostatic force? It constantly changes as the distance between the charges changes. How do I incorporate that? I know it involves integrating the force, but then what do I do?
4. Oct 15, 2011
### ehild
Do Taylor expansion of the Coulomb force around the equilibrium position and keep the constant and first-order terms.
ehild
5. Oct 15, 2011
### Asphyx820
I haven't yet learnt Taylor expansion. So is there any other method?
I actually found this question in a magazine. The answer page was torn, so I don't know the answer either.
I have tried solving it 8-10 times but with no success!! (can't reach the final expression)
I found this problem interesting, so I picked it up.
It would be helpful if you could solve it or give me the equations to be solved.
I would be learning things both ways.
6. Oct 15, 2011
### ehild
Sorry, I am not allowed to solve problems. I can only help.
It is useful to learn how to calculate with small quantities.
Suppose you have to calculate (1+a)^2, where a << 1. Decomposing the square, (1+a)^2 = 1 + 2a + a^2. If a << 1 you can ignore the square term and approximate (1+a)^2 ≈ 1 + 2a. Calculate 1.001^2 and compare it with 1 + 2*0.001.
Suppose you have a fraction, 1/(1+q) and q << 1.
1/(1+q) is equal to the sum of the geometric series: 1 - q + q^2 - q^3 + ... = 1/(1+q)
If q<<1 you can ignore the terms with second or higher power, and use the approximation 1/(1+q)=1-q.
Try to calculate 1/1.001 and compare it with 1-0.001.
Here you have the Coulomb force of the form A/(R+Δr)^2.
Factor out R: you get (A/R^2) · 1/(1+Δr/R)^2.
Assume that Δr/R<<1. Try to apply the approximations above.
ehild | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8956589698791504, "perplexity": 728.817567888511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510867.6/warc/CC-MAIN-20181016201314-20181016222814-00121.warc.gz"} |
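(Not part of the original thread, but a quick numerical check of the approximations suggested above, with arbitrary illustrative values.)

```python
# Small-quantity approximations from the post above.
a = 0.001
print((1 + a) ** 2, 1 + 2 * a)   # 1.002001 vs 1.002
print(1 / (1 + a), 1 - a)        # 0.999001 vs 0.999

# The same idea applied to a Coulomb-type force A/(R + dr)^2 with dr << R:
A, R, dr = 1.0, 1.0, 0.001
exact = A / (R + dr) ** 2
linearised = (A / R ** 2) * (1 - 2 * dr / R)
print(exact, linearised)         # agree up to terms of order (dr/R)^2
```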
https://chemistry.stackexchange.com/questions/27636/does-sodium-form-complexes-like-transition-metal-ions | # Does sodium form complexes like transition metal ions?
I realise that there is a similar question here, Difference between sodium ion and a transition metal ion dissolving in water?, and it seems to answer my question. However, I was reading about how aluminium ions form complexes in water, and aluminium is not a transition metal. So how can aluminium form complexes like transition metal ions, but sodium does not?
Aqua complexes are formed in aqueous solution, the most common being $\ce{[Na(H2O)6]+}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9232339262962341, "perplexity": 1390.854036268309}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833766.94/warc/CC-MAIN-20191023122219-20191023145719-00083.warc.gz"} |
https://brilliant.org/problems/brnstedlowry-acidbase-theory/ | # Brønsted–Lowry acid–base theory
In chemistry, the Brønsted–Lowry theory is an acid–base reaction theory, proposed independently by Johannes Nicolaus Brønsted and Thomas Martin Lowry in 1923. The fundamental concept of this theory is that an acid (or Brønsted acid) is defined as being able to lose, or "donate", a proton (the hydrogen cation, or $$\ce{H+}$$), while a base (or Brønsted base) is defined as a species with the ability to gain, or "accept", a proton. The above shows the reaction model in which an ammonia molecule $$(\ce{NH3})$$ and a hydrogen chloride molecule $$(\ce{HCl})$$ react to produce an ammonium ion $$(\ce{NH4+})$$ and a chloride ion $$(\ce{Cl-})$$. Which of the following correctly lists all, if any, of the Brønsted acids and Brønsted bases?
https://tex.stackexchange.com/questions/140458/biblatex-chicago-in-texshop-error-cannot-find-bib | # Biblatex-Chicago in TexShop: Error: Cannot find .bib
I'm a doctoral candidate in the humanities who can claim only the osmotic tech savy that comes from living with people who know what they're doing. I'm just beginning my dissertation and hoping to set up BibDesk as a reference management system that integrates with TexShop. So far, I've not been successful in typesetting references or in producing a bibliography. My field requires formatting according to the Chicago Manual of Style, for which I am trying to use Biblatex-Chicago.
I have a file which begins:
\documentclass[letterpaper,12pt]{article}
%\documentclass[letterpaper,11pt]{article}
\usepackage{fullpage}
\usepackage{fancyhdr}
\usepackage{nameref}
\usepackage{marginnote}
\usepackage[top=1in, bottom=1in, inner=0.25in, outer=2.75in, marginparwidth=2.50in]{geometry}
\usepackage{setspace}
\usepackage[
notes,
backend=biber,
hyperref=true
]{biblatex-chicago}
\title{Half-Draft}
\author{Dissertation Chapter 2}
\date{6 November, 2013} % Activate to display a given date or no date
\begin{document}
When typeset, I still see only a cite key in the footnote. I see no error when typesetting in LaTeX. When I switch to BibTex (to do the Latex+Bibtex+Latex+Latex typesetting sequence I've seen recommended) I get the following error:
INFO - This is Biber 1.6
INFO - Logfile is '20131021-Hist934-SeminarPaper-SFD-HalfDraft.blg'
INFO - Found 1 citekeys in bib section 0
INFO - Processing section 0
INFO - Looking for bibtex format file 'DissCh2-Bibliography' for section 0
ERROR - Cannot find 'DissCh2-Bibliography'!
INFO - ERRORS: 1
I have the .bib file stored in the same folder as the .tex and other files for this document.
I also tried adding a bibliography using:
\bibliographystyle{plain}
\bibliography{DissCh2-Bibliography}
\end{document}
This produced errors when trying to typeset in LaTeX:
Package biblatex Warning: Missing 'hyperref' package.
(biblatex) Setting hyperref=false
(./20131021-Hist934-SeminarPaper-SFD-HalfDraft.aux)
*geometry* drive: auto-detecting
*geometry* detected driver: pdftex
No file 20131021-Hist934-SeminarPaper-SFD-HalfDraft.bbl.
(/usr/local/texlive/2013/texmf-dist/tex/latex/base/omscmr.fd) [1{/usr/local/texlive/2013/texmf-var/fonts/map/pdftex/updmap/pdftex.map}] [2]
LaTeX Warning: Citation 'Logan:1677fk' on page 3 undefined on input line 114.
[3] [4] [5]
! Package biblatex Error: '\bibliographystyle' invalid.
See the biblatex package documentation for explanation.
Type H <return> for immediate help
. . .
l.152 \bibliographystyle{plain}
?
When I type H <return> as prompted, it tells me Your command was ignored. Can anyone help?
EDIT*** Thank you to everyone for the insight. It does appear that I was confused about the differences between \bibliography{} and addbibresource{}. Adding .bib when using the latter has fixed the problem, and I discovered that the former will work too when I handle it correctly. Thank you all!
• Have you tried \addbibresource{DissCh2-Bibliography.bib}? Biblatex needs the file extension. – DG' Oct 24 '13 at 14:54
• you say you are using (the older) bibtex program to generate the bibliography but your log says INFO - This is Biber 1.6 which suggests you are using the (newer, and default for biblatex) biber system. That confusion is part of the problem (and for example why biblatex is complaining about \bibliographystyle which is a command for controlling bibtex) – David Carlisle Oct 24 '13 at 14:55
As others have remarked, biber cannot find your .bib file because you didn't tell it its full name, namely DissCh2-Bibliography.bib. Giving the full name and getting rid of the \bibliographystyle and \bibliography commands should get you up and running again.
... However, a few remarks are in order given that you are just getting your dissertation up and running (too long for a comment). Trust me, it is better to make the change(s) now rather than part-way through or right before the end.
First, although it may be tempting to typeset an individual thesis chapter in the documentclass article, this is not really recommended. Eventually, you will have a whole thesis composed of several chapters (and perhaps appendices and other things), and the article class is completely unsuitable for it. You should switch your workflow ASAP to either the report or book class, or (my recommendation) one of the more feature-rich classes like memoir or the KOMA-script classes (scrreprt or scrbook), or perhaps classicthesis. These feature-rich classes give you a lot of functionality 'out of the box'.
Let's pretend, however, that you are using the simple report class (nothing wrong with it, after all: it just means you'll need to load more packages to do the tweaking you want/need/desire).
\documentclass[letterpaper,12pt]{report}
% \usepackage{fullpage} % <-- probably not needed; use geometry
\usepackage{fancyhdr}
\usepackage{marginnote}
\usepackage[top=1in, bottom=1in, inner=0.25in, outer=2.75in, marginparwidth=2.50in]{geometry}
\usepackage{setspace}
So far so good. I'd recommend loading fontenc and inputenc is you plan on using pdflatex to compile your documents (or fontspec if using xelatex or lualatex):
\usepackage[T1]{fontenc} % strongly recommended
\usepackage[utf8]{inputenc} % if in the humanities, you will probably need more than just ASCII
Now the real issue, biblatex; I strongly recommend you load babel and csquotes for quotations and multi-lingual support. For now, we'll keep it simple (only English):
\usepackage[american]{babel} % or option 'british'
\usepackage{csquotes} % many options skipped for now
\usepackage[
notes,
backend=biber,
hyperref=true % <-- note how you are asking for hyperref integration
]{biblatex-chicago}
\addbibresource{DissCh2-Bibliography.bib} % biber can do more than just .bib files, so you need to specify the extension.
Normally, you should load hyperref as the last package (or nearly last; see this question). We'll keep it simple again:
\usepackage{xcolor} % <-- for coloured links
And that should be enough to get you started. Putting it all together, you could do the following:
\documentclass[letterpaper,12pt]{report}
% the following filecontents is just
% for the sake of getting a self-contained example file
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@Article{aaa,
author = {Smith, John},
title = {Article Title},
journal = {Journal Title},
date = 2000,
volume = 30,
number = 2,
pages = {100--120},
}
\end{filecontents}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{fancyhdr}
\usepackage{marginnote}
\usepackage[top=1in, bottom=1in, inner=0.25in, outer=2.75in, marginparwidth=2.50in]{geometry}
\usepackage{setspace}
\usepackage[american]{babel}
\usepackage{csquotes}
\usepackage[
notes,
backend=biber,
hyperref=true
]{biblatex-chicago}
\addbibresource{\jobname.bib} % <-- 'name' of bibliography above
\usepackage{xcolor}
\begin{document}
\chapter{DissCh2}
A citation.\autocite{aaa}
\printbibliography
\end{document}
Now, the reason this structure makes sense is because it will allow you to break up your chapters and then include them in this master file via \include{DissCh2} (say). This will make editing your chapters a little easier since things are more compartmentalized:
% greatly simplified masterfile
\documentclass{report}
... preamble stuff goes here
\begin{document}
\include{abstract}
\include{acknowledgements}
\tableofcontents
... other front matter
\include{chapter01}
\include{DissCh2}
... back matter stuff
\printbibliography
\end{document}
• Jon, thank you so much! I am very aware that the structure and formatting I choose now will be crucial throughout the process. I really appreciate your insight about better document formats for the humanities and for the ability to compile a dissertation out of component parts. I'll definitely look into all of these options, thank you for the guidance! – user38869 Oct 25 '13 at 15:20
• @user38869 -- well, I hope the advice proves useful! – jon Oct 25 '13 at 15:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8668219447135925, "perplexity": 3874.172839825016}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526210.32/warc/CC-MAIN-20190719095313-20190719121313-00418.warc.gz"} |
https://research.tudelft.nl/en/publications/the-integral-as-accumulation-function-approach-a-proposal-of-a-le | # The integral as accumulation function approach: A proposal of a learning sequence for collaborative reasoning
Sonia Palha, Jeroen Spandaw
Research output: Contribution to journal › Article › Scientific › peer-review
## Abstract
Learning mathematical thinking and reasoning is a main goal in mathematical education. Instructional tasks have an important role in fostering this learning. We introduce a learning sequence to approach the topic of integrals in secondary education to support students' mathematical reasoning while participating in collaborative dialogue about the integral-as-accumulation-function. This is based on the notion of accumulation in general and the notion of accumulative distance function in particular. Through a case-study methodology we investigate how this approach elicits 11th grade students' mathematical thinking and reasoning. The results show that the integral-as-accumulation-function has potential, since the notions of accumulation and accumulative function can provide a strong intuition for mathematical reasoning and engage students in mathematical dialogue. Implications of these results for task design and further research are discussed.
Original language: English. Pages: 109–136 (28 pages). Journal: European Journal of Science and Mathematics Education, volume 7, issue 3. Published: 2019.
## Keywords
• mathematical reasoning
• collaborative reasoning
• secondary education
• integral
• accumulation function
https://experts.illinois.edu/en/publications/acoustic-diode-wave-non-reciprocity-in-nonlinearly-coupled-wavegu | # Acoustic diode: Wave non-reciprocity in nonlinearly coupled waveguides
Itay Grinberg, Alexander F. Vakakis, Oleg V. Gendelman
Research output: Contribution to journal › Article › peer-review
## Abstract
The paper describes a passive time-independent setting for non-reciprocal wave transmission in mechanical and acoustic systems with strong nonlinearities. In the proposed system vibro-impact elements with pre-defined clearances are used to couple two non-dispersive waveguides. The asymmetry necessary for the non-reciprocal behavior is realized through unequal grounding springs of the vibro-impact elements. We show that, for appropriate selection of the parameters, the proposed system acts as a mechanical diode, allowing the transmission of acoustic waves in one direction and completely preventing reverse transmission. Two different designs of the coupling elements are suggested, with the possibility of single-sided or double-sided impacts. A unique feature of the proposed non-reciprocal acoustic system is that minimal distortion of the harmonic content of the transmitted wave occurs, in contrast to current designs where nonlinear non-reciprocity is achieved at the expense of a rather strong distortion of the transmitted signals. For both designs, we derive exact solutions for propagation and reflection of the harmonic waves, and demonstrate the possibility for strong non-reciprocity. Stability properties of the observed solutions in the space of parameters are also explored.
Original language: English (US). Pages: 49–66 (18 pages). Journal: Wave Motion, volume 83. DOI: https://doi.org/10.1016/j.wavemoti.2018.08.005. Published: December 2018.
## Keywords
• Acoustic diode
• Non-reciprocity
• Phonon diode
• Vibro-impact
## ASJC Scopus subject areas
• Modeling and Simulation
• Physics and Astronomy(all)
• Computational Mathematics
• Applied Mathematics
https://marozols.wordpress.com/category/math/quantum-math/quantum-algorithms-quantum/ | # Quantum walks can find a marked element on any graph
This post is about my paper Quantum walks can find a marked element on any graph with Hari Krovi, Frédéric Magniez, and Jérémie Roland. We wrote it in 2010, but after spotting a subtle mistake in the original version, we have recently substantially revised it. It went from 15 to 50 pages after we added much more details and background material, as well as corrected some small bugs and addressed one major bug.
Problem statement
Imagine a huge graph with many vertices, some of which are marked. You are able to move around this graph and query one vertex at a time to figure out if it is marked or not. Your goal is to find any of the marked vertices.
Given an instance of such problem, a typical way to solve it is by setting up a random walk on the graph. You can imagine some probabilistic procedure that systematically moves from one vertex to another and looks for a marked one.
More formally, you would define a stochastic matrix $P$ whose entry $P_{xy}$ describes the probability to move from vertex $x$ to $y$. Starting from some randomly chosen initial vertex, your goal is to end up in one of the marked vertices in the set $M$.
Our result
We show that any classical algorithm that is based on such random walk can be turned into a quantum walk algorithm that finds a marked vertex quadratically faster. To state this more formally, let us define the hitting time of a random walk:
Definition. The hitting time of $P$ with respect to a set of marked vertices $M$ is the expected number of steps the walk $P$ takes to find a marked vertex, starting from a random unmarked vertex chosen from a distribution which is proportional to the stationary distribution of $P$. Let us denote this quantity by $\mathrm{HT}(P,M)$.
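To make this definition concrete, here is a small, purely classical numpy sketch (my own illustration, not from the paper; the function name and the example graph are made up) that computes $\mathrm{HT}(P,M)$ from its defining linear system:

```python
import numpy as np

def hitting_time(P, marked):
    """HT(P, M) for a row-stochastic, ergodic walk P (P[x, y] = prob. of x -> y)."""
    n = P.shape[0]
    marked = set(marked)
    unmarked = [x for x in range(n) if x not in marked]

    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi /= pi.sum()

    # Expected steps to reach M from each unmarked vertex: t = 1 + P_UU t.
    P_UU = P[np.ix_(unmarked, unmarked)]
    t = np.linalg.solve(np.eye(len(unmarked)) - P_UU, np.ones(len(unmarked)))

    sigma = pi[unmarked] / pi[unmarked].sum()   # initial distribution over unmarked vertices
    return sigma @ t

# Example: the walk behind Grover search, a complete graph K_4 with one marked vertex.
P = (np.ones((4, 4)) - np.eye(4)) / 3
print(hitting_time(P, marked=[0]))   # 3.0, i.e. n - 1 for K_n with a single marked vertex
```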
Our main result is as follows:
Theorem. Quantum walk can find a marked vertex in $\sqrt{\mathrm{HT}^+(P,M)}$ steps.
As you can see, this is a very general result—no matter how cleverly the transition probabilities for the random walk $P$ are chosen, we can always cook up a related quantum walk that beats the classical walk quadratically!
Note that this is much more general than what is achieved by Grover’s algorithm. Grover’s algorithm corresponds to the special case, where the underlying graph is complete and the random walk moves from any vertex to any other with equal probability.
It’s weaker than before…
You may have noticed a little “+” that appeared next to $\mathrm{HT}$ in the statement of the theorem. Indeed, this has to do with the subtle mistake we spotted in our earlier proof. It turns out that if $|M| = 1$ (i.e., there is a unique marked vertex) then
$\mathrm{HT}^+(P,M) = \mathrm{HT}(P,M)$
and we indeed achieve a quadratic quantum speedup. Unfortunately, when $|M| > 1$ it can happen that
$\mathrm{HT}^+(P,M) > \mathrm{HT}(P,M)$
and thus we don’t get a fully quadratic speedup. Hence our result is weaker than what we claimed previously. On the other hand, the new version is more correct!
Proof idea
Our algorithm is based on Szegedy’s method for turning random walks into quantum walks. It constructs two unitary matrices, each corresponding to a reflection with respect to a certain subspace. These subspaces are defined by
1. the standard basis vectors corresponding to marked vertices,
2. the unit vectors obtained by taking entry-wise square roots of the rows of $P$.
Together these two reflections define one step of the quantum walk.
Our contribution consists in modifying the original walk $P$ before we quantize it. We define a semi-absorbing walk $P(s)$ that leaves a marked vertex with probability $1-s$ even when one is found. This might seem like a bad idea, but one can check that at least classically it does not make things worse by too much. In fact, the $s \to 1$ limit of $P(s)$ corresponds to the classical algorithm that never leaves a marked vertex once it is found.
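One convenient way to write this explicitly (my own notation, consistent with the verbal description above) is

$P(s) = (1-s) \, P + s \, P'$

where $P'$ is the absorbing walk obtained from $P$ by replacing each row corresponding to a marked vertex with the same row of the identity matrix. Then $P(0) = P$ is the original walk, $P(1) = P'$ never leaves a marked vertex, and intermediate values of $s$ interpolate between the two.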
Glitch in the previous proof
Our proof makes extensive use of a certain quantity $\mathrm{HT}(s)$ which we associate to $P(s)$ and call interpolated hitting time. Then $\mathrm{HT}^+(P,M)$, the extended hitting time that appears in the above theorem, is defined as the limit
$\mathrm{HT}^+(P,M) := \lim_{s \to 1} \mathrm{HT}(s)$
When $|M| = 1$, taking this limit is straightforward and one can easily see that it gives $\mathrm{HT}(P,M)$, the regular hitting time.
It is tempting to guess that the same happens also when $|M| > 1$. Indeed, it is far from obvious why in this case the answer does not come out the same way as in the $|M| = 1$ case. This is exactly what was overlooked in the earlier version of our paper. Computing the limit properly when $|M| > 1$ is much harder (the expression contains inverse of some matrix that is singular at $s = 1$). This is done in detail in the final appendix of our paper.
Open problems
Here are some open questions:
• Why is it that our algorithm has a harder time finding a needle in a haystack when there are several needles rather than just one?
• What is the operational interpretation of the interpolated hitting time $\mathrm{HT}(s)$?
• Can quadratic speedup for finding be achieved also when there are multiple marked vertices?
• How can we efficiently prepare the initial state on which the walk is applied?
One might get some insight into the first two questions by observing that our algorithm actually solves a slightly harder problem than just finding a marked vertex—it samples the marked vertices according to a specific distribution (proportional to $P(s^*)$ for some $s^*$). When there is only one marked vertex, finding it is the same as sampling it. However, for multiple marked vertices this equivalence does not hold and in general it should be harder to sample.
# Exact quantum query algorithms
Andris Ambainis, Jānis Iraids, and Juris Smotrovs recently have obtained some interesting quantum query algorithms [AIS13]. In this blog post I will explain my understanding of their result.
Throughout the post I will consider a specific type of quantum query algorithms which I will refer to as MCQ algorithms (the origin of this name will become clear shortly). They have the following two defining features:
• they are exact (i.e., find answer with certainty)
• they measure after each query
Quantum effects in an MCQ algorithm can take place only for a very short time — during the query. After the query the state is measured and becomes classical. Thus, answers obtained from two different queries do not interfere quantumly. This is very similar to deterministic classical algorithms that also find answer with certainty and whose state is deterministic after each query.
Basics of quantum query complexity
Our goal is to evaluate some (total) Boolean function $f(x)$ on an unknown input string $x \in \{1,-1\}^n$ (we assume for convenience that binary variables take values +1 and -1). We can access $x$ only by applying the oracle matrix
$Q_x = \begin{pmatrix} x_1 & & & \\ & x_2 & & \\ & & \ddots & \\ & & & x_n \end{pmatrix}$
to some quantum state. The minimum number of queries needed to determine the value of $f(x)$ with certainty is called the exact quantum query complexity of $f$.
Each interaction with the oracle in an MCQ algorithm can be described as follows:
1. prepare some state $|\psi\rangle$
2. apply query matrix $Q_x$
3. apply some unitary $U$
4. measure in the standard basis
Intuitively, this interaction is a quantum question (specified by $|\psi\rangle$ and $U$) which produces a classical answer (measurement outcome $i$ that appears with probability $|\langle i|UQ_x|\psi\rangle|^2$).
Since each of the answers reveal some property of $x$, it is convenient to identify the collection of these properties with the question itself (I think of it as a “quantum Multiple Choice Question”, hence MCQ). Of course, not every collection of properties constitutes a valid quantum question — only those for which there exists a corresponding $|\psi\rangle$ and $U$. (We will see some examples soon.)
MCQ algorithms
Simply put, an MCQ algorithm is a decision tree: each of its leaves contains either 0 or 1 (the value of $f(x)$ for the corresponding $x$), and each of the remaining nodes contains a quantum question, with children corresponding to its answers.
Classical deterministic decision trees are very similar to MCQ algorithms, except that their nodes contain classical questions — at each node we can only ask one of the $n$ variables $x_i$. Quantumly, we have a larger variety of questions — for example, we can ask XOR of two variables (as in Deutsch’s algorithm). Another difference is that a quantum question does not have a unique answer: if several answers are consistent with the input string $x$, we will get one of them at random.
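To make the XOR trick concrete (this is the standard example and not specific to [AIS13]): with a single query one can ask a question whose only possible answers are "$x_1 = x_2$" and "$x_1 \neq x_2$". Prepare

$|\psi\rangle = \tfrac{1}{\sqrt{2}} (|1\rangle + |2\rangle)$

apply $Q_x$ to get $\tfrac{1}{\sqrt{2}} (x_1 |1\rangle + x_2 |2\rangle)$, and let $U$ map $\tfrac{1}{\sqrt{2}}(|1\rangle + |2\rangle) \mapsto |1\rangle$ and $\tfrac{1}{\sqrt{2}}(|1\rangle - |2\rangle) \mapsto |2\rangle$. The pre-measurement state is $\pm|1\rangle$ if $x_1 = x_2$ and $\pm|2\rangle$ if $x_1 \neq x_2$, so the measurement outcome reveals the product $x_1 x_2$ with certainty using one query.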
An obvious question regarding MCQ algorithms is this:
How can we exploit the quantum oracle to find $f(x)$ with less queries than classically?
Surprisingly, until recently essentially no other way of exploiting the quantum oracle was known, other than Deutsch’s XOR trick (see [MJM11] by Ashley Montanaro, Richard Jozsa, and Graeme Mitchison for more details). What is interesting about the [AIS13] paper is that it provides a new trick!
Query, measure, recurse!
All algorithms discussed in [AIS13] are MCQ and recursive. They proceed as follows:
1. query
2. measure
3. recurse
In the last step, depending on the measurement outcome, either $f(x)$ is found or the problem is reduced to a smaller instance and we proceed recursively. Let me explain how this works for two functions which I will call BALANCED and MAJORITY (they are special cases of EXACT and THRESHOLD discussed in [AIS13]).
BALANCED
$\mathrm{BALANCED}_{2k} (x_1, x_2, \dotsc, x_{2k}) = 1$ iff exactly $k$ of the variables $x_i$ are equal to 1. An MCQ algorithm asks a quantum question that can reveal the following properties of the input string $x$:
1. $x$ is not balanced (the number of +1s and -1s is not equal)
2. $x_i$ is not equal to $x_j$ (for some $i \neq j$)
If we get the first answer then $\mathrm{BALANCED}_{2k}(x) = 0$ and we are done. If we get the second answer for some $i \neq j$, we can ignore the variables $x_i$ and $x_j$ and recursively evaluate $\mathrm{BALANCED}_{2(k-1)}$ on the remaining variables. In total, we need at most $k$ queries (which can be shown to be optimal).
It remains to argue that the above is a valid quantum question. Alternatively, we can show how to prepare the following (unnormalized) quantum state:
$\sum_{i=1}^{2k} x_i \, |0\rangle + \sum_{i<j} (x_i - x_j) \, |ij\rangle$
(It lives in the space spanned by $|0\rangle$ and $|ij\rangle$ for all $i < j$.) This can be easily done by taking
$|\psi\rangle = \sum_{i=1}^{2k} |i\rangle$
and $U$ that acts as
$U |i\rangle = \frac{1}{\sqrt{2k}} \Bigl( |0\rangle + \sum_{j>i} |ij\rangle - \sum_{j<i} |ji\rangle \Bigr)$
To check that $U$ is a valid isometry, notice that it maps $|i_1\rangle$ and $|i_2\rangle$ (say $i_1 < i_2$) to "paths" that overlap in exactly one cell, namely $|i_1 i_2\rangle$, where the two images carry opposite signs (rows are labeled by $i$ and columns by $j$).
Since both states also have $|0\rangle$ in common with the same sign, the two overlapping contributions cancel and the images are orthogonal (each also has unit norm).
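To see the whole construction in action, here is a short sketch (the indexing conventions and variable names are my own) that builds this $U$ for a small $k$, confirms it is an isometry, and checks that on a balanced input only the "$x_i \neq x_j$" answers can occur:

```python
import numpy as np
from itertools import combinations

def balanced_isometry(k):
    """Columns U|i> for BALANCED_{2k}; basis: |0>, then |ij> for i < j."""
    n = 2 * k
    pairs = list(combinations(range(1, n + 1), 2))
    U = np.zeros((1 + len(pairs), n))
    for col, i in enumerate(range(1, n + 1)):
        U[0, col] = 1.0                      # the |0> component
        for row, (a, b) in enumerate(pairs, start=1):
            if a == i:
                U[row, col] = 1.0            # + sum_{j > i} |ij>
            elif b == i:
                U[row, col] = -1.0           # - sum_{j < i} |ji>
    return U / np.sqrt(n), pairs

k = 2
U, pairs = balanced_isometry(k)
print(np.allclose(U.T @ U, np.eye(2 * k)))   # True: U is an isometry

x = np.array([1.0, -1.0, 1.0, -1.0])          # a balanced input
amps = U @ x                                  # = U Q_x |psi> with |psi> = sum_i |i>
support = [pairs[r - 1] for r in np.nonzero(np.abs(amps) > 1e-12)[0]]
print(support)                                # only pairs with x_i != x_j appear
```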
MAJORITY
$\mathrm{MAJORITY}_{2k+1} (x_1, x_2, \dotsc, x_{2k+1}) = 1$ iff at least $k+1$ of the variables $x_i$ are equal to 1. This time the quantum question has answers
1. $x$ is not balanced when $x_i$ is omitted (for some $i$)
2. $x_i$ is not equal to $x_j$ (for some $i \neq j$)
If we get the first answer for some $i$, we omit $x_i$ and any other variable. If we get the second answer for some $i \neq j$, we ignore the variables $x_i$ and $x_j$. In both cases we proceed by recursively evaluating $\mathrm{MAJORITY}_{2k-1}$ on the remaining variables. When only one variable is left, we query it to determine the answer. This requires at most $k+1$ queries in total (which again can be shown to be optimal).
The corresponding (unnormalized) state in this case is
$\sum_{i=1}^{2k+1} \sum_{j \neq i} x_i \, |j\rangle + \sqrt{2k-1} \sum_{i<j} (x_i - x_j) \, |ij\rangle$
(It lives in the space spanned by $|j\rangle$ for all $j$ and $|ij\rangle$ for all $i < j$.) It can be obtained by choosing
$|\psi\rangle = \sum_{i=1}^{2k+1} |i\rangle$
and $U$ acting as
$U |i\rangle = \frac{1}{2k} \Bigl( \sum_{j \neq i} |j\rangle + \sqrt{2k-1} \sum_{j>i} |ij\rangle - \sqrt{2k-1} \sum_{j<i} |ji\rangle \Bigr)$
One can check that $U$ is an isometry using a similar picture as above.
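The same check can also be done numerically; a minimal sketch (again with my own indexing conventions):

```python
import numpy as np
from itertools import combinations

def majority_isometry(k):
    """Columns U|i> for MAJORITY_{2k+1}; basis: |j> for j = 1..2k+1, then |ij> for i < j."""
    n = 2 * k + 1
    pairs = list(combinations(range(1, n + 1), 2))
    w = np.sqrt(2 * k - 1)
    U = np.zeros((n + len(pairs), n))
    for col, i in enumerate(range(1, n + 1)):
        for j in range(n):
            if j != col:
                U[j, col] = 1.0              # sum_{j != i} |j>
        for row, (a, b) in enumerate(pairs, start=n):
            if a == i:
                U[row, col] = w              # + sqrt(2k-1) |ij>, j > i
            elif b == i:
                U[row, col] = -w             # - sqrt(2k-1) |ji>, j < i
    return U / (2 * k)

for k in (1, 2, 3):
    U = majority_isometry(k)
    print(k, np.allclose(U.T @ U, np.eye(2 * k + 1)))   # True for each k
```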
Open questions
The problem of finding a quantum query algorithm with a given number of queries and a given success probability can be formulated as a semi-definite program. This was shown by Howard Barnum, Michael Saks, and Mario Szegedy in [BSS03] and can be used to obtain exact quantum query algorithms numerically. Unfortunately, this approach does not necessarily give any insight of why and how the obtained algorithm works. Nevertheless, it would be interesting to know if there is a similar simple characterization of MCQ algorithms.
The algorithms from [AIS13] described above are relatively simple. However, that does not mean that they were simple to find. In fact, the SDP corresponding to $\mathrm{BALANCED}_4$ had already been solved numerically in [MJM11]. Unfortunately, it did not provide enough insight to obtain an algorithm for $\mathrm{BALANCED}_{2k}$ for any $k$. A similar situation now holds for $\mathrm{EXACT}^6_{2,4}$ (which is true iff exactly two or four out of the six variables are true). From [MJM11] we know that it has an exact 3-query algorithm. Unfortunately, we do not have enough understanding to describe it in a simple way or generalize it. Besides, I wonder if $\mathrm{EXACT}^6_{2,4}$ has a 3-query MCQ algorithm, or do we actually need interference between the queries to find the answer so fast?
Finally, it would be interesting to know if there is any connection between exact quantum query algorithms and non-local games or Kochen–Specker type theorems. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 116, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8345463275909424, "perplexity": 342.90402641720243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110578.17/warc/CC-MAIN-20170822104509-20170822124509-00192.warc.gz"} |
https://statisticalphysics.leima.is/equilibrium/most-probable-distribution.html | # Most Probable Distribution
Applications of Most Probable Distribution
Application of most probable distribution is discussed in Boltzmann Statistics.
In the Boltzmann distribution, one of the key ingredients is the most probable distribution. First things first: the most probable distribution is the distribution of energy (hereafter simply "distribution") that is most probable. On the other hand, we have also talked about the probabilities of microstates. It is crucial to understand the difference between a distribution and a microstate.
A microstate describes a configuration of the system, which is the most detailed view of the system in statistical physics. A distribution describes the number of particles on each single-particle energy level.
Why would we choose the most probable distribution?
First of all, it can be derived easily.
Second, the most probable distribution in Boltzmann theory is extremely sharp for a large number of particles.
Assume the actual probability density over distributions is $$\rho(\{\epsilon_i:a_i\})$$, where $$\{\epsilon_i:a_i\}$$ is a distribution of energies. The observable $$\langle\mathscr O\rangle$$ can then be calculated using the following integral
$\langle\mathscr O\rangle = \int \rho(\{\epsilon_i:a_i\})\, \mathscr O(\{\epsilon_i:a_i\}) \, d \{\epsilon_i:a_i\}.$
If $$\rho(\{\epsilon_i:a_i\})$$ is a delta function distribution, only the most probable distribution, $$\rho_{\text{most probable}}$$, is needed to calculate observables.
This can be demonstrated with numerical simulations.
Other Possible Distributions
Statistically speaking, the energy distribution is not the only distribution available to us. We look into the energy distribution because we would like to derive something easy to use in physics from the fundamental assumption about the probabilities of the microstates.
There are other, more granular distributions. In the Ising model, for example, we have:
1. distribution of the magnet directions, i.e., the number of magnets pointing upward and the number of magnets pointing downward, for each microstate,
2. distribution of the total energies, i.e., the number of microstates with a specific energy.
## Equal A Priori Probability
As mentioned in What is Statistical Mechanics, a theory of the distribution of the microstates shall be useful for our predictions of the macroscopic observables.
Equal A Priori Probability
For systems with an enormous number of particles, we observe macroscopic properties such as energy and pressure in experiments, but we have very limited information about the internal structure. The principle proposed by Boltzmann is that all the possible microscopic configurations are equally probable, a.k.a. the principle of equal a priori probabilities.
It should be noted that only the possible states that reproduce the observables we already know should be used for the probabilities. For example, if the total energy of the system is known, only microstates compatible with that energy are counted.
## An Example of Calculations
This is Only an Example for the Calculation
This example is not exactly a statistical physics problem, since we do not have enough particles to make it statistically significant. For example, we use Stirling's approximation, but it does not hold in this case.
The equal a priori principle can be illustrated using a two-magnet system. For simplicity, we will ignore the interactions between the magnets. In the example, we use $$a_i$$ to denote the number of spins in the different states,
$\begin{split}\begin{cases} a_1 \qquad \text{Number of spins pointing downward}, \\ a_2 \qquad \text{Number of spins pointing upward} \end{cases}\end{split}$
Fig. 9 A simple system of 2 magnets in an external magnetic field. The external magnetic field is pointing upwards. The energy of the system is labelled as $$E$$ and the distributions are labelled on the right. In principle, we could also have multiple possible distributions for the same energy of the whole system. In our case, it is simply a coincidence that we only have one distribution corresponding to each energy of the system.
With an external magnetic field, the energy of the system is determined by
$(s_1 + s_2)\,\mu B,$
where $$s_i=\pm 1$$.
Each of the possible configurations of the two magnets is considered a microstate. That being said, the equal a priori principle tells us that, for each total energy, the probabilities of the different configurations are the same if energy is our constraining observable. This is an attempt at a least-information assumption, a.k.a. Bernoulli's principle of indifference [Buck2015].
We have the following possible energy distributions.
$\begin{split}\begin{cases} a_1 = 0, & \qquad \text{particles at single particle energy state } \varepsilon_1 = -\mu B \\ a_2 = 2, & \qquad \text{particles at single particle energy state } \varepsilon_2 = \mu B \end{cases}\end{split}$
which has total energy of $$2\mu B$$ and number of microstates $$\Omega = 1$$.
$\begin{split}\begin{cases} a_1 = 1, & \qquad \text{particles at single particle energy state } \varepsilon_1 = -\mu B \\ a_2 = 1, & \qquad \text{particles at single particle energy state } \varepsilon_2 = \mu B \end{cases}\end{split}$
which has total energy of $$0$$ and number of microstates $$\Omega = 2$$.
$\begin{split}\begin{cases} a_1 = 2, & \qquad \text{particles at single particle energy state } \varepsilon_1 = -\mu B \\ a_2 = 0, & \qquad \text{particles at single particle energy state } \varepsilon_2 = \mu B \end{cases}\end{split}$
which has total energy of $$-2\mu B$$ and number of microstates $$\Omega = 1$$.
In principle, we could calculate all observables of the system using this assumption. However, it will be extremely difficult to traverse all the possible states (see How Expensive is it to Calculate the Distributions below).
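A tiny enumeration reproduces the three distributions above; a sketch (with $$\mu B$$ set to 1 for convenience):

```python
import itertools
from collections import Counter

MU_B = 1.0  # mu * B, set to 1 for illustration

counts = Counter()
for s1, s2 in itertools.product([-1, 1], repeat=2):
    counts[(s1 + s2) * MU_B] += 1   # total energy of this microstate

print(counts)
# Counter({0.0: 2, 2.0: 1, -2.0: 1}): one microstate each for E = ±2μB,
# and Ω = 2 microstates for E = 0, matching the cases listed above.
```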
## Probabilities of Distributions
Suppose we have an equilibrium system with energy 0. In the above example of the 2-magnet system, we only have one distribution and two microstates, so we do not need more granular information about the microstates. As we include more magnets, each total energy corresponds to multiple energy distributions. For example, the number of microstates associated with an energy distribution in an Ising model could be huge.
The number of microstates associated with each macrostate can be derived theoretically. Those results are presented in most textbooks. The derivation involves the following steps (a compact sketch is given after the list).
1. Find the number of microstates $$\Omega$$ associated with a given distribution $$\{a_i\}$$ over the single-particle energy levels.
2. Take the log of this count and find its maximum, subject to the constraints, using the Lagrange multiplier method.
3. The most probable distribution turns out to be the Boltzmann distribution, i.e., an exponential distribution in the energy levels.
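A compact sketch of these steps, using Stirling's approximation and Lagrange multipliers $$\alpha$$ and $$\beta$$ for the particle-number and total-energy constraints:

$\ln \Omega \approx N\ln N - \sum_i a_i \ln a_i, \qquad \frac{\partial}{\partial a_i}\Bigl[\ln\Omega - \alpha\bigl(\sum_j a_j - N\bigr) - \beta\bigl(\sum_j a_j\epsilon_j - E\bigr)\Bigr] = 0 \quad\Longrightarrow\quad a_i \propto e^{-\beta\epsilon_i},$

where $$\beta$$ is later identified with the inverse temperature $$1/k_B T$$.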
## The Magic of Equal a Priori Probabilities
Though we assume the least possible knowledge about the distribution of the microstates, we are still able to predict the observables. There are several seemingly magical aspects of this theory.
The first is the so-called "more is different": given thorough knowledge of a single particle, we still find phenomena that single-particle properties alone do not explain.
How could Equal a Priori help?
Equal a priori indicates a homogeneous distribution. How would a homogeneous distribution of microstates be useful to form complex materials?
The reason behind it is the energy degeneracies of the states. Some microstates lead to the same energy, as shown in Fig. 9. Even for the same microstates, the distribution of energies will be different with different interactions applied.
Different degeneracies lead to different observable systems.
Why is Temperature Relevant?
In this formalism, we do not consider the temperature directly. In the derivation, the Lagrange multiplier method introduces an equivalent of the temperature.
Temperature acts as a penalty on our energy distribution: it sets the base energy scale.
## How Expensive is it to Calculate the Distributions
It is very expensive to iterate through all the possible microstates when simulating large systems. To demonstrate this, I use Python to iterate through all the possible states in an Ising model, without any constraints from observables. All the results can be derived theoretically; however, we will only show the numerical results, to help build up some intuition and to understand how expensive it is to iterate through all the possible states.
### Ising Model with Self-interactions
For example, we could calculate all the configurations and energies of the configurations using brute force.
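A minimal sketch of such a brute-force enumeration (here a small 1D chain with coupling $$J=1$$ and open boundaries; the actual calculation behind Fig. 10 used a 2D grid):

```python
import itertools
from collections import Counter

def energy_degeneracies(n, J=1.0):
    """Count how many of the 2^n spin configurations share each total energy."""
    counts = Counter()
    for spins in itertools.product([-1, 1], repeat=n):
        # nearest-neighbour interaction energy, open boundary conditions
        energy = -J * sum(spins[i] * spins[i + 1] for i in range(n - 1))
        counts[energy] += 1
    return counts

print(energy_degeneracies(4))
# The loop visits all 2^n configurations, which is exactly why the run time
# scales as 2^N and a 5-by-5 grid (N = 25) already takes tens of minutes.
```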
Fig. 10 Microstate counts of energy distribution. The bars shows the number of microstates with the specific energy distributions, which indicates the probability of the corresponding distributions given the equal-a-priori principle. The line shows the corresponding energy of the distribution.
In reality, these calculations become really hard when the number of particles gets large. For benchmarking purposes, I did the calculations in serial on a MacBook Pro (15-inch, 2018) with a 2.2 GHz Intel Core i7 and 16 GB 2400 MHz DDR4 memory. It takes about 20 min to work out the 5 by 5 grid. The calculation time scales as $$2^N$$, where $$N$$ is the total number of particles, if we do not implement any parallel computations.
http://math.stackexchange.com/questions/289190/if-x-and-fx-are-independent-then-fx-is-almost-surely-constant | # If $X$ and $f(X)$ are independent, then $f(X)$ is almost surely constant
Reading some exam material, I found this property:
Let $f :\mathbb{R}\rightarrow\mathbb{R}$ a measurable function. If $X$ and $f(X)$ are independent, then $f(X)$ is almost surely constant.
Most of the properties come with a proof, but this one doesn't. So I assume that it's trivial, but I just can't see it. Any thoughts?
-
"$f$ is almost surely constant" doesn't make sense. You mean $f(X)$ is almost surely constant. – Robert Israel Jan 28 '13 at 19:25
Thank you for spotting that. – Mihai Bogdan Jan 28 '13 at 19:29
## 1 Answer
Let $A$ be the event $f(X) \le a$. Then $A$ is independent of itself (if random variables $Y$ and $Z$ are independent, then the events $Y \in B$ and $Z \in C$ are independent, for any measurable sets $B$ and $C$). Now what can you say about an event that is independent of itself?
-
Its probability is 1? – Mihai Bogdan Jan 28 '13 at 19:34
Or possibly zero. Solutions to $x^2=x$. – copper.hat Jan 28 '13 at 19:35
I get now why this implies that there is only one a such that $f(X) \le a$. But why is A independent of itself? – Mihai Bogdan Jan 28 '13 at 19:39
I mean $f(X) = a$. – Mihai Bogdan Jan 28 '13 at 19:45
Let $B=\{\omega \mid f(X(\omega)) \in (-\infty,a]\}$ and $C = (-\infty,a]$. Then $X \in B$ and $f(X) \in C$ are the same event, but by assumption of independence, $p(B \cap C) = pB\, pC$, or equivalently, $pB = (p B)^2$. – copper.hat Jan 28 '13 at 19:51
https://www.physicsforums.com/threads/moments-of-a-ladder-propped-up-against-a-wall.828787/ | # Moments of a Ladder Propped Up Against A Wall
1. Aug 22, 2015
1. The problem statement, all variables and given/known data
A uniform ladder rests against a vertical wall where there is negligible friction. The bottom of the ladder rests on rough ground where there is friction. The top of the ladder is at a height h above the ground and the foot of the ladder is at a distance 2a from the wall. The diagram shows the forces that act on the ladder.
Which equation is formed by taking moments?
(A) Wa + Fh = 2Wa
(B) Fa + Wa = F h
(C) Wa + 2Wa = Fh
(D) Wa – 2Wa = 2Fh
2. Relevant equations
moment=F x perpendicular distance
3. The attempt at a solution
The anticlockwise moments are Wa and 2Wa and the clockwise moment is Fh, so they should be equated.
But which is the pivot? Is the pivot one of the ends of the ladder or some other point on the ladder. Shouldn't the forces have to be split into components because the force and distance must be perpendicular to each other. And shouldn't the distance be taken along the length of the ladder and not the ground?
2. Aug 22, 2015
### Ellispson
You're right, C can be an option if moments are taken about the topmost point of the ladder.
About the pivot, I'm not sure what you mean.
The force should be split into components and we should then take the component which is perpendicular to the distance. But, in this situation, the forces and the distances given are perpendicular to each other already. Hence, we don't need to break up the force into components.
And if you take the distance along the ladder you will also have to take the components of the forces perpendicular to this distance. By a bit of trigonometry you'll see that this will give you the same result as the one you got earlier.
3. Aug 22, 2015
Around which point is the ladder going to rotate? Around the centre of gravity or does it rotate around one of the ends of the ladder? Where is the hinge of the turning effect located?
4. Aug 22, 2015
### haruspex
No. Those two moments are only in the same sense if you pick an axis between the two forces. But then the distances a and 2a will not both be right.
Taking an axis at the base of the wall, the ladder's weight has a clockwise moment and the reaction from the ground has an anticlockwise moment.
5. Aug 22, 2015
But why are they adding the clockwise and anticlockwise moments? How did they get 2Wa on the right hand side of the equation?
6. Aug 22, 2015
Shouldn't you equate the two moments as the ladder does not turn and is in equilibrium?
7. Aug 22, 2015
### haruspex
In equation C? To produce a deliberately incorrect equation.
You can either write $\Sigma$ clockwise moments = $\Sigma$ anticlockwise moments, or write $\Sigma$ moments = 0, where each moment is given a sign according to its sense.
8. Aug 22, 2015
No, in option (a), the clockwise moments have been added to the anticlockwise moments to get 2Wa. Wa is the clockwise moment and Fh is the anticlockwise moment, but what is 2Wa?
9. Aug 22, 2015
### haruspex
No, they haven't done that.
They have not stated what axis they are taking moments about in each case. You have to try to find an axis for which the equation is correct.
How might a force W produce a moment 2Wa?
10. Aug 22, 2015
Is the axis here then along the W force that acts upwards?
But how do we figure out where the axis is if there are no options and if it was a written question?
11. Aug 22, 2015
### haruspex
In the diagram, there are two forces magnitude F and two magnitude W.
In equation A, there are moments Wa, 2Wa and Fh.
Clearly the Fh moment must be a result of one of the two F forces and the other F force has no moment. This leaves only two possibilities for the Y coordinate of the axis. What are they?
Similarly, to get those two moments from the two W forces, there are only two possible values for the X coordinate of the axis.
Finally, see if any of those four X, Y combinations give the right signs.
Edit: That said, it's probably quicker just to pick a fairly obvious axis, work out the balance of moments equation for yourself, and see if it can be manipulated to match one of the given equations.
12. Aug 22, 2015
The F that acts along the horizontal at the base of the ladder has no moments. So the two possibilities for the Y coordinate of the axis is along the point on the ladder where its weight acts (W) or through the foot of the ladder where there is a force of W.
And the X coordinate acts either through F (the one at the upper tip of the ladder) or through the centre where its weight acts.
So the axis of rotation is through the W in the middle of the ladder?
But then where does 2Wa come from?
13. Aug 22, 2015
### haruspex
That is only true if you pick an axis somewhere along that horizontal line.
If you pick an axis at the middle of the ladder the weight will have no moment, so there will only be one W term in the equation.
Seems like you do not know how to take moments. Let's try a specific example. Suppose you pick the point where the floor meets the wall. For each of the four forces, how far is its line of action from that point? What four moments do they result in?
14. Aug 23, 2015
For the ladder, I think I understand how it works.
For the example, the forces acting is the weight of the ladder acting downwards and the normal reaction force that acts upwards, friction acts at the point and the other force is opposing this, right?
15. Aug 23, 2015
### Kushashwa
How about considering the origin (say, vertical is y axis and horizontal is x axis) as the point of rotation of the rod?
Clearly :
1. The force acting horizontally at the ground will not produce any moment.
2. The force at the height h will produce moment Fh in clockwise direction.
3. The weight acting at the center of the ladder will produce moment Wa in clockwise direction.
4. The weight acting at the end of the rod will produce moment W(2a) in anti-clockwise direction.
You can deduce the result from here, right?
And for how to solve such questions - Always prefer to imagine the situation and taking the origin as the pivot point and try to deduce equations accordingly. You can also try to work out by taking other points as the point of rotation and see the difference in the result.
16. Aug 23, 2015
### haruspex
Yes, those are the forces, but what moments do they exert about the point where the floor meets the wall?
Let's just start with one: what moment does the normal reaction from the floor have about that point, and in what direction?
17. Aug 28, 2015
The moment is zero as the normal force acts at the point, so the perpendicular distance is zero.
18. Aug 28, 2015
Thank you!
19. Aug 28, 2015
### haruspex
No, the normal force from the floor acts where the ladder meets the floor. The point I specified is where the wall meets the floor. Did you misread my question?
20. Aug 29, 2015
Are the moments Fh, Wa and 2Wa?
http://www.erikdrysdale.com/survival/ | # Introduction to survival analysis
Understanding the dynamics of survival times in clinical settings is important to both medical practitioners and patients. In statistics, time-to-event analysis models a continuous random variable $$T$$, which represents the duration of a state. If the state is "being alive", then the time to event is mortality, and we refer to it as survival analysis. If $$t$$ is a given realization of $$T$$, then we are conceptually interested in modelling $$P_\theta(T>t|X)$$, indexed by some parametric distribution $$\theta$$ and conditional on some baseline covariates $$X$$. If a patient has the state of being alive with cancer, then survival analysis answers the question: what is the probability that life expectancy will exceed $$t$$ months given your features $$X$$ (age, gender, genotype, etc) and underlying biological condition[1].
If $$T$$ is a continuous random variable, then it has a CDF and PDF of course, but in survival analysis we are also interested in two other functions (dropping the subscripts/conditional variables for notational simplicity):
1. Survival function: $$S(t)=1-F(t)$$. Probability of being alive at time $$t$$.
2. Hazard function: $$h(t)=\frac{f(t)}{S(t)}$$. The rate of change of risk, given you are alive at $$t$$.
Visualizing each term in R provides a useful way to understand what each function represents. Assume that $$T \sim \text{Exp}(\theta)$$ distribution.
We can see that the survival function rapidly approaches zero as time increases, with the probability of living longer than $$t>2$$ almost nil. However, the hazard function is flat at $$h(t)=2$$. This is not unexpected, as the exponential distribution is known to give a constant hazard function, meaning that given that you have made it to some point $$t_1$$ or $$t_2$$, the probability of mortality in the next moment is the same for both cases, which is another way of saying the exponential distribution is memoryless.
As a constant hazard rate is a strong modelling assumption, alternative distributions for the duration measure are often used in the literature including the Weibull distribution which permits an increasing, decreasing, or constant hazard rate depending on $$\alpha$$:
$$T \sim$$ Weibull distribution
1. $$F(t|\theta,\alpha)=1-\exp[-(\theta t)^\alpha]$$
2. $$S(t|\theta,\alpha)=\exp[-(\theta t)^\alpha]$$
3. $$h(t|\theta,\alpha)=\alpha\theta^\alpha t^{\alpha-1}$$
Let’s see how the Weibull hazard function looks for different parameterizations of $$\alpha$$ with $$\theta=1$$. When $$\alpha=1$$ the hazard function is constant over time, but is decreasing (increasing) if $$\alpha$$ is less (greater) than one.
## A distribution with covariates
If $$\theta$$ parameterizes our distribution, then to introduce covariates which influence survival times we can simply rewrite $$X\beta=\theta$$ such that each individual $$i$$ and $$j$$ will have a different hazard rate if their baseline covariates differ[2]. We can model the log likelihood as:
\begin{aligned} l(\boldsymbol t | \beta,\alpha) &=\sum_{i=1}^n \log f(t_i | X_i,\beta,\alpha ) \\ &= \sum_{i=1}^n \log h(t_i | X_i,\beta,\alpha ) + \sum_{i=1}^n \log S(t_i | X_i,\beta,\alpha ) \end{aligned}
We will generate a simple example with 20 individuals, using a Weibull distribution with $$\alpha=1.5$$, and an intercept and dummy covariates. The dummy variable can be thought of as receiving a treatment or not. Also note that the Weibull functions in R use a scale parameter which is $$1/\theta$$.
Let's look at the distribution of our survival times. In figure A below we see that while some people who didn't receive the treatment lived longer than those who did, on average, receiving the treatment increased survival times from around $$t=1$$ to $$t=3$$. This result was engineered by setting the coefficients to $$\beta=[1,-2/3]^T$$. Figure B, known as a Kaplan-Meier (KM) plot, shows a non-parametric estimate of survival times. Every time there is a step downwards, this represents the death of a patient. KM plots are popular due to their visual interpretability, as one can ask (1) what share of the treatment group is alive after four units of time, or (2) at what time have half of the patients died?
To find the maximum likelihood estimate of the data we can use the optim function in base R (this is easier than writing a Newton-Raphson algorithm). We see that our parameter estimates are fairly close, especially with only twenty observations to perform the inference.
Notice that one can write the hazard function for the Weibull distribution as $$h(t_i|X_i,\beta,\alpha)=g_1(X_i)g_2(t)$$, with $$g_2$$ known as the baseline hazard function, so that the ratio of the hazard functions of two persons $$i$$ and $$j$$ is independent of the survival time:
$$\frac{h(t_i,X_i)}{h(t_j,X_j)}=\Big(\frac{X_i\beta}{X_j\beta}\Big)^\alpha$$
This result, known as the proportional hazards assumption, allows the parameters contained within $$g_1$$ to be estimated independently of $$g_2$$, using a method called partial likelihood, which will not be discussed here, but which is the approach used by the Cox proportional hazards model - the default model used in survival analysis.
## Censoring
Until this point we have only seen observations where the event has occurred, meaning we know when a patient's state began (what is labelled as time zero) and when mortality occurred. However, in clinical settings we will not get to observe the event of interest for every patient, for several reasons including loss to follow-up and insufficient measurement time. Using the previously generated data, we will randomly censor half of the observations by selecting with uniform probability from their true survival time. Note that there are several types of censoring that can occur, but we will use observations which are:
1. Right censored: The eventual survival time is at least as large as the observed (censoring) time.
2. Independently censored: Survival time is independent of the censoring event.
Throughout this post, the word censoring will be used for notational simplicity instead of writing “Type II (independent) right-censoring”.
We can now visualize the data, showing what we “observe” as the statistician, but also what the true time would have been had we been able to continue observing the patients for longer. Figure A below shows the same twenty patients as before, but with half having censored observations, meaning that these patients are alive (or had an unknown status) at the time of measurement. Using the survival times of either the deceased patients or all the patients will give an underestimate of the true survival times (figure B) because the patients with censored observations will live, on average, for more time.
To perform inference with censored data, the likelihood function will need to account for both censored ($$C$$) and uncensored ($$U$$) observations. If a value is censored, then the density of its observation is not $$f(t)$$ but rather $$P(T>t)=S(t)$$.
\begin{aligned} l(\boldsymbol t | \beta,\alpha) &= \sum_{i\in U} \log f(t_i | X_i,\beta,\alpha ) + \sum_{j \in C} \log S(t_j | X_j,\beta,\alpha ) \end{aligned}
Next, we'll generate a larger data set ($$n=100$$) with 50% of the observations independently censored, and then use the log-likelihood formulation above to estimate the $$\alpha$$ and $$\beta$$ parameters.
With the parameter estimates, we can now estimate what the average survival time for patients with and without the treatment would be, noting that the mean for a Weibull distribution is $$\frac{1}{\theta}\Gamma(1+1/\alpha)$$.
While our inference shows that individuals should live longer than what we observe, the estimate seems "too high" compared to the sample mean we would have observed had the observations not been censored. This is due to finite-sample bias in maximum likelihood estimators. Correcting for this is beyond the scope of this analysis. Overall, this post has highlighted the importance of survival models in statistics: (1) they provide a way to estimate the distribution of survival times for different patients using variation in baseline covariates, and (2) they are able to extract information from both censored and uncensored observations to perform inference.
1. If there were two data sets of survival times with patient cohorts of breast and pancreatic cancer, then we would expect that probability of survival would be lower in the latter group, even if patients with breast/pancreatic cancer had the same covariate values, simply because pancreatic cancer is known to be a more aggressive cancer.
2. Note that this is only saying that $$f_i\neq f_j$$ because $$X_i\neq X_j$$ which is different than $$\beta_i \neq \beta_j$$. The former assumes baseline covariates cause differences in expected survival outcomes, whereas the latter is saying that for the same set of covariate values, survival times will differ between individuals. While simple survival models, and the type used in this post, assume that $$\beta$$ is the same between individuals, this is becoming a more reasonable assumption as the quality of biomedical data sets increases, especially with access to genomic data. For example, if one of the covariates in a breast cancer study is whether a patient received a selective estrogen receptor modulator, than we would expect $$\beta$$ to differ in its effects depending on the underlying genetic profile of tumor. Whereas if we had access to gene expression for genes such as $$her2$$ or $$brca1$$ this should control for the different efficacies of treatment across gene types.
Written on January 12, 2017 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9301250576972961, "perplexity": 593.5943948360668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00170.warc.gz"} |
https://drops.dagstuhl.de/opus/frontdoor.php?source_opus=16036 | When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.SoCG.2022.28
URN: urn:nbn:de:0030-drops-160365
URL: https://drops.dagstuhl.de/opus/volltexte/2022/16036/
### Tight Lower Bounds for Approximate & Exact k-Center in ℝ^d
### Abstract
In the discrete k-Center problem, we are given a metric space (P,dist) where |P| = n and the goal is to select a set C ⊆ P of k centers which minimizes the maximum distance of a point in P from its nearest center. For any ε > 0, Agarwal and Procopiuc [SODA '98, Algorithmica '02] designed an (1+ε)-approximation algorithm for this problem in d-dimensional Euclidean space which runs in O(dn log k) + (k/ε)^{O (k^{1-1/d})}⋅ n^{O(1)} time. In this paper we show that their algorithm is essentially optimal: if for some d ≥ 2 and some computable function f, there is an f(k)⋅(1/ε)^{o (k^{1-1/d})} ⋅ n^{o (k^{1-1/d})} time algorithm for (1+ε)-approximating the discrete k-Center on n points in d-dimensional Euclidean space then the Exponential Time Hypothesis (ETH) fails.
We obtain our lower bound by designing a gap reduction from a d-dimensional constraint satisfaction problem (CSP) to discrete d-dimensional k-Center. This reduction has the property that there is a fixed value ε (depending on the CSP) such that the optimal radius of k-Center instances corresponding to satisfiable and unsatisfiable instances of the CSP is < 1 and ≥ (1+ε) respectively. Our claimed lower bound on the running time for approximating discrete k-Center in d-dimensions then follows from the lower bound due to Marx and Sidiropoulos [SoCG '14] for checking the satisfiability of the aforementioned d-dimensional CSP.
As a byproduct of our reduction, we also obtain that the exact algorithm of Agarwal and Procopiuc [SODA '98, Algorithmica '02] which runs in n^{O (d⋅ k^{1-1/d})} time for discrete k-Center on n points in d-dimensional Euclidean space is asymptotically optimal. Formally, we show that if for some d ≥ 2 and some computable function f, there is an f(k)⋅n^{o (k^{1-1/d})} time exact algorithm for the discrete k-Center problem on n points in d-dimensional Euclidean space then the Exponential Time Hypothesis (ETH) fails. Previously, such a lower bound was only known for d = 2 and was implicit in the work of Marx [IWPEC '06].
### BibTeX - Entry
```
@InProceedings{chitnis_et_al:LIPIcs.SoCG.2022.28,
author = {Chitnis, Rajesh and Saurabh, Nitin},
title = {{Tight Lower Bounds for Approximate \& Exact k-Center in \mathbb{R}^d}},
booktitle = {38th International Symposium on Computational Geometry (SoCG 2022)},
pages = {28:1--28:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-227-3},
ISSN = {1868-8969},
year = {2022},
volume = {224},
editor = {Goaoc, Xavier and Kerber, Michael},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
}
```
https://math.stackexchange.com/questions/462104/harmonic-conjugate-in-star-domain | # Harmonic Conjugate in Star Domain
I have been given that $u(x,y)$ is a harmonic function on a star shaped domain $D$. I have to show that it has harmonic conjugate $v(x,y)$ on same domain given up to additive constant by $$v(B)=\int_A^B\left(\frac{\partial u}{\partial x}dy-\frac{\partial u}{\partial y}dx\right)$$
My Solution:
So, if they are harmonic conjugate, they satisfy Cauchy Riemann equation. So, $$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}\\\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$$ Simplifying these I get: $$v=\int\left(\frac{\partial u}{\partial x}dy-\frac{\partial u}{\partial y}dx\right)+c$$ without difficulty but the problem is I don't know if I have to show the existence of harmonic conjugate itself. My proof goes along the line of already assuming existence of Harmonic Conjugate.
Also, I don't understand why I need Star Shaped Domain, and what A and B refer to.
You shouldn't assume that there exists a harmonic conjugate - you should prove it. In order to do that, you could show that the suggested $v(B)$ has the desired partial derivatives.
Also, you don't necessarily need $D$ to be a star domain; any simply connected domain would have been just as fine, since harmonicity of $u$ makes the integrand a closed 1-form, so the integral is path-independent there.
$A$ is a fixed point in $D$, and $B$ varies. This is similar to the case from calculus: for a fixed point $a$, $F(b)=\int_a^bf(x) \mathrm{d}x$ defines a primitive function of $f(x)$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9386171698570251, "perplexity": 162.36111807203994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250613416.54/warc/CC-MAIN-20200123191130-20200123220130-00475.warc.gz"} |
https://rd.springer.com/chapter/10.1007/978-0-8176-8289-7_7 | # Convergence of Functions
Chapter
## Abstract
In many situations we have a sequence of functions $$f_n$$ that converges to some function $$f$$ and $$f$$ is not easy to study directly. Can we use the functions $$f_n$$ to get some information about $$f$$? For instance, if the $$f_n$$ are continuous, is $$f$$ necessarily continuous? Another question that often comes up is: can I compute $$\int_{a}^{b} f\,dx$$ using $$\int_{a}^{b} f_{n}\,dx$$? More precisely, is it true that
$$\lim_{n\to\infty}\int_a^b f_n\,dx=\int_a^b f\,dx?$$
We can rewrite the question as
$$\lim_{n\to\infty}\int_a^b f_n\,dx=\int_a^b \lim_{n\to\infty}f_n\,dx?$$
In other words, can we interchange the limit and the integral? We will give some partial answers to these questions in this chapter. First, we need to define what we mean by the convergence of a sequence of functions. There are many different ways a sequence of functions can converge. In this chapter we will just consider two of them.
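To preview why some care is needed, here is a standard example (not taken from the chapter itself) where the interchange fails: take $$f_n(x)=nx(1-x^2)^n$$ on $$[0,1]$$. Each $$f_n$$ is continuous and $$f_n(x)\to 0$$ for every $$x\in[0,1]$$, yet

$$\lim_{n\to\infty}\int_0^1 f_n\,dx=\lim_{n\to\infty}\frac{n}{2(n+1)}=\frac{1}{2}\neq 0=\int_0^1 \lim_{n\to\infty}f_n\,dx.$$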
## Keywords
Power Series Triangle Inequality Uniform Convergence Fundamental Theorem Function Sine | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9899155497550964, "perplexity": 182.56560971027073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646375.29/warc/CC-MAIN-20180319042634-20180319062634-00304.warc.gz"} |
https://discuss.codechef.com/t/mgame-editorial/21630 | # MGAME - EDITORIAL
Setter: Smit mandavia
Tester: Xiuhan Wang
Editorialist: Taranpreet Singh
### DIFFICULTY:
Easy
### PREREQUISITES:
Modulo operator and Basic Combinatorics.
### PROBLEM:
Given two integers N and P, let the maximum value of (((N \bmod i) \bmod j) \bmod k ) \bmod N over i, j, k \in [1, P] be M. Find the number of ways to select i, j, k \in [1, P] such that (((N \bmod i) \bmod j) \bmod k ) \bmod N equals M.
### SUPER QUICK EXPLANATION
• The maximum value of N \bmod x where x \in [1,N], if N is odd, is (N-1)/2 when x = (N+1)/2, and if N is even, is N/2-1 when x = N/2+1.
• We can achieve (((N \bmod i) \bmod j) \bmod k ) \bmod N = M in three ways. Let x = \lceil (N+1)/2 \rceil
• i = x and j,k > M.
• i > N, j = x and k > M.
• i, j > N and k = x.
Each of this case can be easily computed.
### EXPLANATION
First of all, let us find this value M. It has to be less than min(i,j,k,N), which implies M < N. Hence, if we want M > 0, we need (((N \bmod i) \bmod j) \bmod k) < N. So we know for sure that, to maximize M, we need min(i, j, k) \leq N. Hence, we need the maximum of (((N \bmod i) \bmod j) \bmod k), which is < N, and we can now ignore the last \bmod N.
So, the maximum value N \bmod x can attain for x \in [1, N] is \lfloor (N-1)/2 \rfloor. This happens when x = \lceil (N+1)/2 \rceil. It can be easily verified either by checking by hand or by writing a simple program, for example the short sketch below.
Now, try finding the number of ways (((N \bmod i) \bmod j) \bmod k) can equal M. It can be approached with a simple case-based analysis.
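A short brute-force sketch (in Python, just to check the claim for small N):

```python
def best_mod(N):
    """Return (x, N % x) maximizing N % x over x in [1, N]."""
    return max(((x, N % x) for x in range(1, N + 1)), key=lambda t: t[1])

for N in range(3, 13):
    x, val = best_mod(N)
    # matches x = ceil((N+1)/2) and the maximum floor((N-1)/2) claimed above
    print(N, x, val, x == N // 2 + 1, val == (N - 1) // 2)
```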
We can try all possible triplets of (i,j,k) and generalize them into three cases.
• When i = \lceil (N+1)/2 \rceil and j,k > M
• When i > N, j = \lceil (N+1)/2 \rceil and k > M
• When i,j > N and k = \lceil (N+1)/2 \rceil
In all three cases, we can simply count the number of triplets (i, j, k) satisfying any condition and print the answer.
Corner Case
When N \leq 2, M = \lfloor (N-1)/2 \rfloor = 0. This is because we cannot achieve (((N \bmod i) \bmod j) \bmod k ) \bmod N > 0. So, all triplets (i, j, k) are valid.
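The whole case analysis can be cross-checked against a brute force for small values (a sketch; it assumes P \geq N so that all three cases can occur):

```python
def brute(N, P):
    """Count triplets (i, j, k) in [1, P]^3 attaining the maximum value M."""
    vals = {}
    for i in range(1, P + 1):
        for j in range(1, P + 1):
            for k in range(1, P + 1):
                v = (((N % i) % j) % k) % N
                vals[v] = vals.get(v, 0) + 1
    M = max(vals)
    return M, vals[M]

def by_cases(N, P):
    """Counts from the three cases above (valid for N >= 3, P >= N)."""
    M = (N - 1) // 2
    return M, (P - M) ** 2 + (P - N) * (P - M) + (P - N) ** 2

for N in range(3, 8):
    for P in range(N, N + 4):
        assert brute(N, P) == by_cases(N, P)
print("cases match brute force")
```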
Alternate solution - read at your own risk, you have been warned
For those curious enough not to be satisfied with such solutions, there also exists a pattern-based solution, using basic math. Just use the brute-force solution to find the first terms of the series and solve using the pattern formed. Number 6 is important. Enjoy
### Time Complexity
Time complexity is O(1) per test case.
### AUTHOR’S AND TESTER’S SOLUTIONS:
Feel free to Share your approach, If it differs. Suggestions are always welcomed.
About the alternate solution, since this contest has no restriction, one could reference OEIS and came up with something like this:
#include <stdio.h>
#define sqr(x) ((x) * (x))

int main() {
    long long t, n, p, diff, u, v;
    scanf("%lld", &t);
    while (t--) {
        scanf("%lld %lld", &n, &p);
        if (n < 3) {
            printf("%lld\n", p * p * p);
            continue;
        }
        diff = p - n;
        if (diff % 2) {
            u = n / 2, v = diff / 2;
            printf("%lld\n", (u + v*3 + 2) * (u + v*3 + 3) + v * (v + 1) * 3 + 1);
        } else {
            printf("%lld\n", sqr(p/2 + diff + 1) + sqr(diff) * 3 / 4);
        }
    }
    return 0;
}
#include <bits/stdc++.h>
#define int long long int
using namespace std;

int32_t main() {
    int t;
    cin >> t;
    while (t--) {
        int n, p;
        cin >> n >> p;
        if (n == 1 || n == 2)
            cout << p*p*p << "\n";
        else {
            int i = n/2 + 1;
            int af, tf, mf;
            tf = p - n;
            af = (p - n)*2 + i;
            mf = (p - n + i) * (p - n + i);
            cout << af*tf + mf << "\n";
        }
    }
}
wow… nice solution
I tried a similar approach but failed to formulate it properly. Can you share how you found the pattern for odd numbers greater than 3?
I think my algo is same as the editorialist but why am i getting wrong answer…
https://www.codechef.com/submit/MGAME | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9220338463783264, "perplexity": 3321.1060554667383}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176922.14/warc/CC-MAIN-20201124170142-20201124200142-00664.warc.gz"} |
https://asmedigitalcollection.asme.org/turbomachinery/article-abstract/108/2/253/438081/The-Effect-of-Circumferential-Aerodynamic-Detuning?redirectedFrom=fulltext | A mathematical model is developed to predict the enhanced coupled bending-torsion unstalled supersonic flutter stability due to alternate circumferential spacing aerodynamic detuning of a turbomachine rotor. The translational and torsional unsteady aerodynamic coefficients are developed in terms of influence coefficients, with the coupled bending-torsion stability analysis developed by considering the coupled equations of motion together with the unsteady aerodynamic loading. The effect of this aerodynamic detuning on coupled bending-torsion unstalled supersonic flutter as well as the verification of the modeling are then demonstrated by considering an unstable twelve-bladed rotor, with Verdon’s uniformly spaced Cascade B flow geometry as a baseline. It was found that with the elastic axis and center of gravity at or forward of the airfoil midchord, 10 percent aerodynamic detuning results in a lower critical reduced frequency value as compared to the baseline rotor, thereby demonstrating the aerodynamic detuning stability enhancement. However, with the elastic axis and center of gravity at 60 percent of the chord, this type of aerodynamic detuning has a minimal effect on stability. For both uniform and nonuniform circumferentially spaced rotors, a single degree of freedom torsion mode analysis was shown to be appropriate for values of the bending-torsion natural frequency ratio lower than 0.6 and higher than 1.2. However, for values of this natural frequency ratio between 0.6 and 1.2, a coupled flutter stability analysis is required. When the elastic axis and center of gravity are not coincident, the effect of detuning on cascade stability was found to be very sensitive to the location of the center of gravity with respect to the elastic axis. In addition, it was determined that when the center of gravity was forward of an elastic axis located at midchord, a single degree of freedom torsion model did not accurately predict cascade stability.
https://2022.help.altair.com/2022/flux/Flux/Help/english/UserGuide/English/topics/FormulesAnalytiques.htm | # Analytical formulas
## Introduction
This section presents the analytical formulas used to model parallelepiped-shaped portions of conductors connected in parallel with the PEEC method.
To facilitate comprehension, the formulas are initially presented in the simplified case of filiform conductors (negligible cross-section area), then extended to the real case of volume conductors.
With a perfect ground plane, the method of images allows the computation of equivalent partial inductances.
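As a flavour of the filiform case, the closed form of Neumann's integral for two equal, parallel filaments with aligned ends can be evaluated directly (this particular formula is the classical textbook result, quoted here only for illustration rather than taken from the sections that follow):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def mutual_inductance_parallel_filaments(length, distance):
    """Mutual partial inductance (H) of two parallel filaments of equal length
    with aligned ends, separated by `distance` (closed form of Neumann's integral)."""
    l, d = float(length), float(distance)
    return (MU0 / (2 * np.pi)) * (l * np.arcsinh(l / d) - np.sqrt(l**2 + d**2) + d)

# Example: two 10 cm filaments spaced 1 cm apart -> roughly 42 nH
print(mutual_inductance_parallel_filaments(0.10, 0.01))
```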
## Bibliography
Additional information on the formulas is available in the following documents:
• C. Hoer, C. Love
"Exact Inductance Equations for Rectangular Conductors With Applications to More Complicated Geometries."
Journal of Research of the National Bureau of Standards C, Engineering and Instrumentation, Vol. 69C N°2, April-June 1965, pp. 127 - 137
• J. L. Schanen, C. Guerin, J. Roudet, G. Meunier
"Modelling of a Printed Circuit Board Loop Inductance."
IEEE Transactions on Magnetics, Sept. 1994
https://www.alexstephenson.me/post/2021-02-06-an-example-of-iv-estimation-in-r/ | # An Example of IV Estimation in R
There have likely been more words written about the use and misuse of instrumental variables than atoms in the universe. When I was starting in grad school, almost all of our methods education came in the context of experiments. Instrumental Variables were treated as a compliance problem. A researcher ran an experiment, but some people decided not to comply with treatment for some reason, which led to missing values. Using the random assignment as an instrument for treatment, the researcher could find the Complier Average Treatment Effect (CATE). Without doing anything other than analyzing the experiment based on the intention to treat (ITT), the researcher would get an estimate that would be smaller and noisier. The CATE can be defined as:
\begin{aligned} CATE = \frac{ITT}{Pr(compliers)} \end{aligned}
Notably, the CATE does not equal what the researcher is interested in: the Average Treatment Effect (ATE). Or at least it will not unless the researcher got very lucky and the ATE for non-compliers is identical to the CATE. Incidentally, this probability has measure zero, at least practically speaking.
Of course, most of the time, IV is considered in the case of non-experimental data. The researcher plans to use regression analysis, and there is a worry about an endogenous regressor. In this context, regression estimates will measure only the magnitude of association but not the magnitude and direction of causation needed. Not great!
In this context, IV is a common strategy to get the Local Average Treatment Effect (LATE), for which the most common estimation strategy is two-stage least squares (2SLS). However, we can still use the reduced form division estimator, and it is sometimes useful for pedagogical reasons. Consider the following example from Cameron and Trivedi (2005).
Suppose there is a data generating process (DGP) defined as follows:
\begin{aligned} Y &= 0.5X + u \end{aligned}
\begin{aligned} X &= Z + v \end{aligned}
where z is $N(2,1)$ and $(u,v)$ are jointly standard normal with a correlation between them of $\rho = 0.8$. In R, we can rely on a formula for bivariate joint normality to simulate this scenario.
set.seed(123)
makeBivariateNormalVars <- function(N, mu_x, mu_y,
sigma_x, sigma_y,
rho){
# Begin with two independent N(0,1) variables
Z1 <- rnorm(N, 0, 1)
Z2 <- rnorm(N,0,1)
u <- sigma_x*Z1 + mu_x
v <- sigma_y *(rho*Z1 + sqrt(1-rho^2)*Z2) + mu_y
return(list(u,v))
}
N <- 10000
errorTerms <- makeBivariateNormalVars(10000, 0, 0, 1, 1, .8)
u <- errorTerms[[1]]
v <- errorTerms[[2]]
z <- rnorm(N, 2,1)
x <- 0 + z + v
y <- 0 + 0.5*x + u
The true value of X is 0.5, but the estimate obtained from OLS will be much too large.
lm(y~x)$coefficient[2]
## x
## 0.9037703
Yikes. Much much too large. Now let's bring back in our Wald Estimator procedure for IV. In our first step, we run the regression of y on z, the exogenous regressor.
# IV procedure
# Step 1
num <- round(lm(y~z)$coefficient[2],3)
In our second step, we run the regression of x on z.
# Step 2
den <- round(lm(x~z)$coefficient[2],3)
In our third step, we get the Wald Estimator for the reduced form equation by dividing Step 1 by Step 2.
# Step 3
# IV estimator
num/den
## z
## 0.5197239
A substantially closer estimate, though not precisely .5. This is ok. An IV estimator is not unbiased but is consistent. That means if we increase N, the estimator will converge in probability to the parameter of interest. To prove it, you could simulate with different size N like the one coded below. I wrapped the procedure in this example into a function called simulate.
simulate <- function(N){
errorTerms <- makeBivariateNormalVars(N, 0, 0, 1, 1, .8)
u <- errorTerms[[1]]
v <- errorTerms[[2]]
z <- rnorm(N, 2,1)
x <- 0 + z + v
y <- 0 + 0.5*x + u
num <- lm(y~z)$coefficient[2]
den <- lm(x~z)\$coefficient[2]
return(num/den)
}
set.seed(1234)
Ns <- seq(1000, 100000, by = 10000)
test <-purrr::modify(Ns, simulate)
df <- data.frame(N = Ns, value = test)
p <- ggplot2::ggplot(df, ggplot2::aes(Ns, value))+
ggplot2::geom_line()+
ggplot2::ylim(0.45, 0.55)+
ggplot2::geom_hline(yintercept = .5, color = "blue")+
ggplot2::theme_bw()+
ggplot2::xlab("Sample Size")+
ggplot2::ylab("Parameter Estimate")+
ggplot2::ggtitle("Wald Estimation with different sample sizes") | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8245378136634827, "perplexity": 2093.5557938500824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178349708.2/warc/CC-MAIN-20210224223004-20210225013004-00504.warc.gz"} |
https://brilliant.org/problems/new-concept/ | # New Concept
Algebra Level 3
Let us define the concept of $$n^{!}$$.
Definition of $$n^{!}$$:
Let $$n \in \mathbb{W}$$. We define $$n^{!}$$ using the formula
$$n^{!} = n^{n-1^{n-2^{n-3^{....^{3^{2^{1}}}}}}}$$
Find $$n$$ if $$(n-1)^{!} = 1$$.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9852235913276672, "perplexity": 4045.653962628899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157503.43/warc/CC-MAIN-20180921190509-20180921210909-00061.warc.gz"} |
https://math.stackexchange.com/questions/2915572/impossible-permutations-of-the-gear-cube | # Impossible permutations of the Gear Cube
If you're familiar with the group properties of the Rubik's Cube, you will probably know that, under the action of the standard moves, all possible permutations of the (unoriented) edge pieces are possible and all possible permutations of the (unoriented) corner pieces are possible, but because the action on these two sets is not independent, not all combinations of these are possible -- it turns out that the permutations on the two sets need to have the same sign. I am interested in understanding the similar restrictions that exist on the Gear Cube.
In case you're not familiar with a Gear Cube, its structure is similar to a Rubik's cube except that it is geared in such a way that a half turn of any face elicits a quarter turn of the adjacent middle slice. A quarter turn of a face is therefore not possible, as the middle slice would be misaligned, as in the picture below.
This sounds complicated, but in fact the restriction to half turns of faces, simplifies things in lots of ways. For example,
• the 8 corners split into two sets (tetrahedrons) of 4, which never intermingle, and orientation is never an issue.
• the 12 edge pieces split into three sets (planes) of 4, which never intermingle.
I should also add that there is some gearing on the orientation of the edge pieces, which again looks complicated at first but ultimately all it means is that the orientation of the edges is trivial and we don't need to worry about it.
I've approached the problem by considering one corner (right-back-down) as fixed, and considering the actions of the group generated by a clockwise (half) turn of the left(L), front(F) and upper(U) faces. This restricts the number of moves to worry about and also fixes the planes in which the edge pieces move. The disadvantage is it allows the centre pieces to move, and ultimately it is the restriction on this movement that's causing me the most difficulty.
As with a normal Rubik's Cube, the centres are fixed in space relative to each other and as such their permutations are exactly the rotations of a solid Cube. This is $S_4$, and is normally seen to be such by considering the action on the diagonals. Note that these permutations determine the permutations of the 3 main axes, which is essentially $S_3$ and can be considered as a quotient of the $S_4$ mentioned earlier.
Various other sets of 4 seem to behave similarly. The action on one particular set of 4 edges can be thought of as acting on the vertical sides, horizontal sides and diagonals of the square they form, again giving $S_3$ as a quotient of $S_4$. Perhaps more surprisingly the permutations of the tetrahedron with the fixed corner operates as a quotient of the permutations of the tetrahedron of free corners.
It turns out the overall group action on each of the sets of 3 is the same, so for example if we fix the tetrahedron with the fixed corner we also fix the axes of centres. However the sets of 4 do not match up in the same way, for example a full rotation of a face fixes the corners, but not the centres and one of the edge planes.
Apparently, the permutations of the edges fixes the centres, but I cannot prove this. Has anyone got any ideas? I've tried looking at the $S_4$s from the point of view of $S_3 \rtimes V_4$, thinking that the fact that all the $S_3$s were the same would help, but it hasn't much. The behaviour of the $V_4$ bit still seems very complicated.
I hope all that made some sense to someone. Any help much appreciated!
EDIT
Perhaps I can put it another way. The group is generated by 3 moves, $F$, $L$ and $U$, which can be regarded as acting on five 4-element sets: 4 free corners, the 4 cube diagonals associated with the movement of the centres, and three sets of 4 edge pieces. Thus the group can be regarded as a subgroup of $(S_4)^5$, with generators as follows:
$$L=((12), (1423), (12), (12), (1324)) \\ F=((14), (1243), (23), (1342), (23)) \\ U=((13), (1234), (1432), (13), (13)) \\$$
By considering other actions, I came to consider the quotient of each copy of $S_4$ with its normal subgroup $V_4= \{e, (12)(34), (13)(24), (14)(23)\}$. It seemed like a significant breakthrough that for each generator there was a distinct coset in $\cfrac{S_4}{V_4}$ in which all its coordinates lay: $(12)V_4$, $(23)V_4$, $(13)V_4$ for $L$, $F$, $U$ respectively. This means that effectively each element of the group can be expressed as an element of $S_3\times (V_4)^5$, which is a lot smaller than $(S_4)^5$, but still 4 times bigger than what I believe to be the answer I'm aiming for.
That's pretty much where I'm stuck, as the way the group acts on $S_3\times (V_4)^5$ is not straightforward.
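(Added note, not part of the original question: one way to check the claim computationally is to hand these generators to a computer algebra system. The sketch below uses SymPy's permutation groups; the encoding of the five 4-element sets onto 20 points is an illustrative choice.)

```python
# Encode the generators L, F, U above as permutations of 20 points
# (5 blocks of 4) and test whether forgetting the last 4-element set
# changes the group order.
from sympy.combinatorics import Permutation, PermutationGroup

def embed(block_cycles):
    """block_cycles[b] lists the cycles (1-based labels) acting on block b."""
    cycles = []
    for b, cyc_list in enumerate(block_cycles):
        for cyc in cyc_list:
            cycles.append([4 * b + (label - 1) for label in cyc])
    return Permutation(cycles, size=20)

L = embed([[(1, 2)], [(1, 4, 2, 3)], [(1, 2)], [(1, 2)], [(1, 3, 2, 4)]])
F = embed([[(1, 4)], [(1, 2, 4, 3)], [(2, 3)], [(1, 3, 4, 2)], [(2, 3)]])
U = embed([[(1, 3)], [(1, 2, 3, 4)], [(1, 4, 3, 2)], [(1, 3)], [(1, 3)]])

G = PermutationGroup([L, F, U])

def restrict(p):
    """Forget the fifth block: keep only the action on points 0..15."""
    return Permutation([p(i) for i in range(16)])

H = PermutationGroup([restrict(L), restrict(F), restrict(U)])

# Equal orders would mean the fifth 4-element set is determined by the rest.
print(G.order(), H.order(), G.order() == H.order())
```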
UPDATE
Having done a bit more reading, it seems that the missing tool I need is Schreier's lemma.
Essentially the idea is that given a subgroup $H \subset G=<s_1, ..., s_n>$ we choose a single representative $t_j$ from each coset of $H$ in $G$ and then perform a simple calculation on each $s_i, t_j$ pair to create a set $\{u_{ij}\}$, which the lemma tells us generates $H$.
So we can take $H$ to be the '$(V_4)^5$' bit of the group which fixes the $S_3$ component (by fixing one/all of the 3-element sets described earlier), thereby giving us a generator set for just this '$(V_4)^5$' bit with the '$S_3$' bit disentagled, as I was wanting above. Our original group has 3 generators and $H$ has 6 cosets (one for each element of $S_3$), so there are 18 calculations, which even done manually is fairly manageable if somewhat tedious.
Ignoring the repeats, we actually get a generator in $(V_4)^5$ with 13 elements. The next step is to fix each $V_4$ in turn (by fixing one of the 4-element sets described earlier) in order to simplify things further. On the face of it the next step will have 4$\times$13 calculations, but they're much easier now we're working in $V_4$ and because it's Abelian some can immediately be identified as reordered repeats of other calculations.
In this manner, I got down to the $(V_4)^2$ bit of the group in question being generated by $\{(a,a), (b,b)\}$ from which it can easily be seen that the permutation of the final 4-element set is completely determined by the others, as required.
So that pretty much answers the question, albeit in a rather cumbersome computational way. I suspect there may be a more elegant solution in terms of group actions, so I'd still be interested if anyone can think of one!
• There are many pictures about gear cube on the net. How about adding one of your choice to help illustrate some points? Sep 13, 2018 at 14:32
• jaapsch.net/puzzles/gearcube.htm Sep 13, 2018 at 23:04
• I've now added a picture as suggested, which I hope helps visualise what I'm talking about. The link provided in the comment above provides some useful background to the problem, but the crux of my issue is the bit in his position count where he states "The centres can permute but this is fully determined by the edge permutation. " In a nutshell, that is the assertion that I'm trying to prove. Sep 14, 2018 at 13:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8712347149848938, "perplexity": 227.33966374932746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573189.78/warc/CC-MAIN-20220524173011-20220524203011-00764.warc.gz"} |
https://www.omtex.co.in/2015/03/business-letter-format.html | Date (Month Day, Year) 2
Mr./Mrs./Ms./Dr. Full name of recipient. 3
Title/Position of Recipient.
Company Name
Dear Ms./Mrs./Mr. Last Name: 4
Subject: Title of Subject 5
Body Paragraph 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Body Paragraph 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Body Paragraph 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Closing (Sincerely...), 7
Signature 8
Enclosures (2) 10
Typist's Initials 11
The block format is the simplest format; all of the writing is flush against the left margin.
With all business letters, use 1" margins on all four sides.
1 Return Address
The return address of the sender, so the recipient can easily find out where to send a reply. Skip a line between your address and the date. (Not needed if the letter is printed on paper with the company letterhead already on it.)
2 Date
Put the date on which the letter was written in the format Month Day Year i.e. August 30, 2003. Skip a line between the date and the inside address (some people skip 3 or 4 lines after the date).
3 Inside Address
The address of the person you are writing to, along with the name of the recipient, their title and company name. If you are not sure who the letter should be addressed to, leave the name blank, but try to put in a title, i.e. "Director of Human Resources". Skip a line between the date and the salutation.
4 Salutation
Dear Ms./Mrs./Mr. Last Name:, Dear Director of Department Name: or To Whom It May Concern: if recipient's name is unknown. Note that there is a colon after the salutation. Skip a line between the salutation and the subject line or body.
5 Subject Line (optional)
Makes it easier for the recipient to find out what the letter is about. Skip a line between the subject line and the body.
6 Body
The body is where you write the content of the letter; the paragraphs should be single spaced with a skipped line between each paragraph. Skip a line between the end of the body and the closing.
7 Closing
Lets the reader know that you are finished with your letter; usually ends with Sincerely, Sincerely yours, Thank you, and so on. Note that there is a comma after the end of the closing and only the first word in the closing is capitalized. Skip 3-4 lines between the closing and the printed name, so that there is room for the signature.
8 Signature
Your signature will go in this section, usually signed in black or blue ink with a pen.
9 Printed Name
The printed version of your name, and if desired you can put your title or position on the line underneath it. Skip a line between the printed name and the enclosure.
10 Enclosure
If the letter contains documents other than the letter itself, your letter will include the word "Enclosure." If there is more than one, you would type "Enclosures (#)", with the # being the number of other documents enclosed, not including the letter itself.
11 Reference Initials
If someone other than yourself typed the letter you will include your initials in capital letters followed by the typist's initials in lower case in the following format; AG/gs or AG:gs. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9461088180541992, "perplexity": 52.83927378078673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141748276.94/warc/CC-MAIN-20201205165649-20201205195649-00709.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/145135-normal-distribution-problem.html | 1. ## Normal distribution problem
This is a university paper question.
If X and Y are independent normal variables having a common mean μ such that
P(2X + 4Y <= 10) + P(3X + Y <=9) = 1 and
P(2X - 4Y <= 6) + P(Y - 3X >= 1) = 1
Determine the value of
μ and the ratio of the variance of X and Y.
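One standard way to set this up (an added sketch, not part of the original thread): every combination aX + bY is normal, and a condition of the form P(U <= u) + P(V <= v) = 1 forces the two standardized values to be negatives of each other, while P(U <= u) + P(V >= v) = 1 forces them to be equal. Writing σ1² = Var(2X + 4Y) = Var(2X - 4Y) = 4Var(X) + 16Var(Y) and σ2² = Var(3X + Y) = Var(Y - 3X) = 9Var(X) + Var(Y), the two conditions become
(10 - 6μ)/σ1 = (4μ - 9)/σ2 and (6 + 2μ)/σ1 = (1 + 2μ)/σ2.
Dividing one equation by the other eliminates σ1 and σ2 and leaves 5μ² - 2μ - 16 = 0, so μ = 2 (the other root, μ = -1.6, makes the second condition impossible). Substituting back gives σ1 = 2σ2, i.e. Var(X)/Var(Y) = 3/8.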
2. Without working through anything, I notice a problem.
W = 2X + 4Y is a continuous random variable, as are the others.
So these sums cannot be one, but they can be approximately one. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9331920146942139, "perplexity": 1146.903583574991}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982291015.10/warc/CC-MAIN-20160823195811-00111-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://infoscience.epfl.ch/record/142226 | Infoscience
Poster
# One-Dimensional Hairsine-Rose Erosion Model: Parameter Consistency for Soil Erosion in the Presence of Rainfall Splash
Process-based erosion modelling has proven to be an efficient tool for description and prediction of soil erosion and sediment transport. The one-dimensional Hairsine-Rose (HR) erosion model, which describes the time variation of suspended sediment concentration of multiple particle sizes, accounts for key soil erosion mechanisms: rainfall detachment, overland-flow entrainment and gravity deposition. In interrill erosion, it is known that raindrop splash is an important mechanism of sediment detachment and therefore of sediment delivery. In addition, studies have shown that the mass transported from a point source by raindrop splash decreases exponentially with radial distance and is controlled by drop characteristics and soil properties. Here we test experimentally and numerically the HR parameter consistency at different transversal widths for soil erosion in the presence of splash. To achieve this, soil erosion experiments were conducted using different configurations of the 2 m × 6 m EPFL erosion flume. The flume was divided into four identical smaller flumes, with different widths of 1 m, 0.5 m, and 2 × 0.25 m. Total sediment concentration and the concentrations for the individual size classes were measured. The experimental results indicate that raindrop splash dominated in the flumes having the larger widths (1 m and 0.5 m). This process generated a short time peak for all individual size classes. However, the effect of raindrop splash was less present in observed sediment concentrations of the collected data from the smaller width flumes (0.25 m). For these flumes, the detached sediment was controlled by the transversal width of the flume. An amount of detached sediment adhered to the barriers instead of being removed in the overland flow. Moreover, the experimental results showed that the boundary conditions affect the concentration of the mid-size and the larger particles. The one-dimensional Hairsine-Rose model was used to fit the integrated data and to provide parameter estimates for each flume. The analytical results agreed with the total sediment concentrations but not the measured sediment concentrations of all individual size classes. The observed sediment concentrations for the individual size classes could be predicted only when the initial sediment concentration was adjusted and a new calculation of the settling velocities was used. This new settling velocity calculation was conducted by taking the effect of raindrop splash on the deposition force of the particles into account.
#### Reference
• ECOL-POSTER-2009-017
Record created on 2009-11-09, modified on 2016-08-08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8907245397567749, "perplexity": 2487.555926177537}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190181.34/warc/CC-MAIN-20170322212950-00127-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/chain-rule-in-calc-chain-in-log.2793/ | Chain rule in Calc = Chain in Log?
1. Jun 5, 2003
PrudensOptimus
I know in logarithms log_a b * log_c d = log_a d * log_c b
and
log_a b * log_b c = log_a c.
Chain Rule.
Now I'm reading Calculus and I found out about the Chain Rule. Are they the same? It looks like it. But because of my poor English reading, I couldn't understand the text. Can someone explain what the Chain Rule is?
2. Jun 5, 2003
They are not related. If you have several functions as arguments to other functions, like f( g( h(x) ) ), then the derivative of this is f'( g( h(x) ) ) * g'( h(x) ) * h'(x). Do you see the pattern? So for f(x) = 1/x, g(x) = ln(x) and h(x) = x^2, we have f( g( h(x) ) ) = 1/ln(x^2), and the derivative would be -1/(ln(x^2))^2 * (1/x^2) * 2x.
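(An added check, not part of the original thread: SymPy confirms the worked example above.)

```python
# Differentiate 1/ln(x**2) and compare with the chain-rule expansion
# -1/ln(x**2)**2 * (1/x**2) * 2x quoted above.
import sympy as sp

x = sp.symbols('x', positive=True)
composite = 1 / sp.ln(x**2)
by_chain_rule = -1 / sp.ln(x**2)**2 * (1 / x**2) * 2 * x

print(sp.simplify(sp.diff(composite, x) - by_chain_rule))  # prints 0
```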
3. Jun 9, 2003
Dx
great website
chain rule
Dx
4. Jun 11, 2003
PrudensOptimus
So what is the derivative of n^x? Suppose n is a real number and x is the unknown. The power rule does not apply to this situation because the exponent is the variable rather than a constant.
5. Jun 11, 2003
KLscilevothma
let f(x) = n^x
ln f(x) = x ln n (take ln on both sides)
f '(x)/f(x) = ln n (take the first derivative on both sides)
f '(x) = f(x)*ln n = n^x * ln n
PS
1) ln is natural log (base e), only natural log can be used in differentiation.
2) d/dx ln f(x) = f '(x)/f(x) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9525434970855713, "perplexity": 2443.2328764883223}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592636.68/warc/CC-MAIN-20180721145209-20180721165209-00271.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/137602-find-absolute-value-print.html | Find the absolute value
• April 6th 2010, 12:41 PM
wopashui
Find the absolute value
Find the absolute value and the argument $\theta$ in the range $0 <= \theta < 2\pi$ for $(1 - i\sqrt{3})^9$
I don't remember how to do this type of question; I wonder if someone can show me the process.
• April 6th 2010, 04:20 PM
Drexel28
Quote:
Originally Posted by wopashui
Find the absolute value and the argument $\theta$ in the range $0 <= \theta < 2\pi$ for $(1 - i\sqrt{3})^9$
I dun remember how to do this type of question, I wonder if someone can show me the process
Use polar coordinates.
• April 6th 2010, 04:20 PM
FancyMouse
$1-i\sqrt{3}=2e^{-\frac{\pi i}{3}}$
• April 7th 2010, 04:50 AM
HallsofIvy
And then $(1- i\sqrt{3})^9= 2^9 e^{9\left(\frac{\pi i}{3}\right)}= 2^9 e^{3\pi i}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8481872081756592, "perplexity": 715.5804134180264}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122233086.24/warc/CC-MAIN-20150124175713-00059-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://www.repository.cam.ac.uk/handle/1810/282876?show=full | dc.contributor.author Evans, Josephine Angela Holly dc.date.accessioned 2018-09-28T15:34:14Z dc.date.available 2018-09-28T15:34:14Z dc.date.issued 2019-01-26 dc.date.submitted 2018-06-22 dc.identifier.uri https://www.repository.cam.ac.uk/handle/1810/282876 dc.description.abstract This work is about convergence to equilibrium problems for equations coming from kinetic theory. The bulk of the work is about Hypocoercivity. Hypocoercivity is the phenomenon when a semigroup shows exponentially relaxation towards equilibrium without the corresponding coercivity (dissipativity) inequality on the Dirichlet form in the natural space, i.e. a lack of contractivity. In this work we look at showing hypocoercivity in weak measure distances, and using probabilistic techniques. First we review the history of convergence to equilibrium for kinetic equations, particularly for spatially inhomogeneous kinetic theory (Boltzmann and Fokker-Planck equations) which motivates hypocoercivity. We also review the existing work on showing hypocoercivity using probabilistic techniques. We then present three different ways of showing hypocoercivity using stochastic tools. First we study the kinetic Fokker-Planck equation on the torus. We give two different coupling strategies to show convergence in Wasserstein distance, $W_2$. The first relies on explicitly solving the stochastic differential equation. In the second we couple the driving Brownian motions of two solutions with different initial data, in a well chosen way, to show convergence. Next we look at a classical tool to show convergence to equilibrium for Markov processes, Harris's theorem. We use this to show quantitative convergence to equilibrium for three Markov jump processes coming from kinetic theory: the linear relaxation/BGK equation, the linear Boltzmann equation, and a jump process which is similar to the kinetic Fokker-Planck equation. We show convergence to equilibrium for these equations in total variation or weighted total variation norms. Lastly, we revisit a version of Harris's theorem in Wasserstein distance due to Hairer and Mattingly and use this to show quantitative hypocoercivity for the kinetic Fokker-Planck equation with a confining potential via Malliavin calculus. We also look at showing hypocoercivity in relative entropy. In his seminal work work on hypocoercivity Villani obtained results on hypocoercivity in relative entropy for the kinetic Fokker-Planck equation. We review this and subsequent work on hypocoercivity in relative entropy which is restricted to diffusions. We show entropic hypocoercivity for the linear relaxation Boltzmann equation on the torus which is a non-local collision equation. Here we can work around issues arising from the fact that the equation is not in the H\"{o}rmander sum of squares form used by Villani, by carefully modulating the entropy with hydrodynamical quantities. We also briefly review the work of others to show a similar result for a close to quadratic confining potential and then show hypocoercivity for the linear Boltzmann equation with close to quadratic confining potential using similar techniques. We also look at convergence to equilibrium for Kac's model coupled to a non-equilibrium thermostat. Here the equation is directly coercive rather than hypocoercive. We show existence and uniqueness of a steady state for this model. 
We then show that the solution will converge exponentially fast towards this steady state both in the GTW metric (a weak measure distance based on Fourier transforms) and in $W_2$. We study how these metrics behave with the dimension of the state space in order to get rates of convergence for the first marginal which are uniform in the number of particles. dc.description.sponsorship EPSRC dc.language.iso en dc.rights All rights reserved dc.subject Kinetic theory dc.subject Hypocoercivity dc.subject Convergence to equilibrium dc.title Deterministic and Stochastic Approaches to Relaxation to Equilibrium for Particle Systems dc.type Thesis dc.type.qualificationlevel Doctoral dc.type.qualificationname Doctor of Philosophy (PhD) dc.publisher.institution University of Cambridge dc.publisher.department Cambridge Centre for Analysis (CCA) dc.date.updated 2018-09-28T06:59:03Z dc.identifier.doi 10.17863/CAM.30238 dc.publisher.college King's dc.type.qualificationtitle PhD in Pure Mathematics at CCA cam.supervisor Mouhot, Clement cam.thesis.funding true
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9025683999061584, "perplexity": 772.6180505951087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541310866.82/warc/CC-MAIN-20191215201305-20191215225305-00038.warc.gz"} |
http://mathhelpforum.com/math-topics/27507-speed-distance-catching-up.html | # Math Help - speed - distance catching up
1. ## speed - distance catching up
A speeding motorbike travels past a stationary police car. The police car starts accelerating immediately, and keeps accelerating until it has passed the bike.
DATA: motorbike speed: $35ms^{-1}$; police car acceleration: $4ms^{-2}$
How far does the police car travel before it overtakes the motorbike?
I tried working this out using the following method, but it is wrong. How come?
My Solution
Let the distance travelled by the motorbike be $d_m$ and the distance travelled by the police car be $d_p$. Let $t$ be time in seconds.
The distance travelled by the motorbike is given by
$d_m=35t$
The distance travelled by the police car can be expressed by
$d_p=4+8+...+4t=\sum_{i=1}^t 4i = \frac{4t(t+1)}{2}=2t(t+1)$
When $d_p=d_m$, $35t=2t(t+1)$
Solving gives $t=16.5 \Rightarrow d = 577.5 \mbox{m}$
Thankyou!
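(Added note on why the summation approach disagrees, since the thread never spells it out: the sum $4+8+\cdots+4t$ treats the police car as covering each whole second at the speed it only reaches at the end of that second, so it overstates the distance by $2t$. Under constant acceleration the distance is $d_p=\int_0^t 4\tau\,d\tau=2t^2$, which gives $35t=2t^2$, i.e. $t=17.5$ s and $d=612.5$ m, matching the solution below.)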
2. Hello, DivideBy0!
Are we allowed to use Calculus?
A speeding motorbike travels past a stationary police car. The police car
starts accelerating immediately, and keeps accelerating until it has passed the bike.
Data: motorbike speed: 35 m/s; police car acceleration: 4 m/s²
How far does the police car travel before it overtakes the motorbike?
We have: . $v_m \:=\:35$
. . Integrating: . $d_m \:=\:35t + C_1$
At $t = 0,\:d = 0\quad\Rightarrow\quad C_1 = 0$
. . Hence: . $\boxed{d_m \:=\:35t}$
We have: . $a_p \:=\:4$
. . Integrate: . $v_p \:=\:4t + C_2$
At $t=0,\:v = 0\quad\Rightarrow\quad C_2 = 0$
. . Hence: . $v_p \:=\:4t$
Integrate: . $d_p \:=\:2t^2 + C_3$
At $t = 0,\:d = 0\quad\Rightarrow\quad C_3 = 0$
.Hence: . $\boxed{d_p \:=\:2t^2}$
When are these two distances equal?
. . $2t^2 \:=\:35t\quad\Rightarrow\quad 2t^2-35t \:=\:0\quad\Rightarrow\quad t(2t-35) \:=\:0$
. . Hence: . $t \;=\;0,\:17.5$
The police car overtakes the motorbike in 17½ seconds.
The police car travels the same distance as the motorbike.
. . $d_p \;=\;d_m \;=\;35(17.5) \;=\;\boxed{612.5 \text{ m}}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8541951775550842, "perplexity": 3763.3658801531287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657120057.96/warc/CC-MAIN-20140914011200-00312-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
http://annals.math.princeton.edu/2004/159-3/p06 | On contact Anosov flows
Abstract
Exponential decay of correlations for $\mathcal{C}^{4}$ contact Anosov flows is established. This implies, in particular, exponential decay of correlations for all smooth geodesic flows in strictly negative curvature.
Authors
Carlangelo Liverani
Department of Mathematics, University of Rome "Tor Vergata", 00133 Rome, Italy | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.845904529094696, "perplexity": 1070.9064901476386}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999620.99/warc/CC-MAIN-20190624171058-20190624193058-00050.warc.gz"} |
http://www.techyv.com/questions/what-will-happen-when-we-compress-files/ | ## What will happen when we compress the files?
Asked By 40 points N/A Posted on -
I heard that we can save space by compressing audio song files.
If I compress a song file, then what will happen?
Best Answer by Jake D Woods
Answered By 30 points N/A #183327
## What will happen when we compress the files?
Hi Dear,
Compression can mean a couple of different things.
Copying a file into a rar or zip archive is one kind of compression; this is lossless.
Reducing the quality of a file so that it takes up less space is also called compression; this is lossy.
With the first option, copying the file into a zip folder does not reduce the quality of the file at all.
But if we compress the file in the second, lossy way, then the sound quality of the file will be decreased.
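(An added illustration, not part of the original answer: with Python's zlib you can check that archive-style compression is lossless; the compressed data is smaller but decompresses to exactly the original bytes.)

```python
# Lossless compression round-trip: smaller on disk, identical when restored.
import zlib

data = b"la la la " * 1000            # stand-in for an audio file's bytes
packed = zlib.compress(data)

print(len(data), len(packed))          # compressed size is smaller
print(zlib.decompress(packed) == data) # True: nothing is lost
```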
This is not a good idea to compress the files to save the space. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8827109336853027, "perplexity": 1761.4915077836874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217970.87/warc/CC-MAIN-20180821053629-20180821073629-00673.warc.gz"} |
http://mathhelpforum.com/calculus/175665-fourier-transform.html | # Math Help - Fourier Transform
1. ## Fourier Transform
For part i, I found that the answer is
F(w) = (4sinw - 4wcosw)/w^3
For part ii, I consider
[F(x)]^2 which is equal to 16[(xcosx - sinx)^2]/x^6
and then I tried to use the convolution relation
[F(x)]^2 = Fourier Transform of [ f*f ] = Fourier Transform of (integral from -infinity to infinity of f(t-k)f(k) dk)
However, the integral does not converge, so I think there must be some mistake.
Please give me some ideas on this.
2. In i) we are asked to compute the Fourier Transform of...
$f(t) = \begin{cases}
0, & t < -1 \\[3pt]
1-t^{2}, & -1 \le t \le 1 \\[3pt]
0, & t > 1
\end{cases}$
(1)
... and to achieve that we can use the 'little nice formula'...
$\displaystyle \int_{a}^{b} f(t)\ e^{-s t}\ dt = \sum_{n=0}^{\infty} \frac{f^{(n)} (a)\ e^{-s a} - f^{(n)}(b)\ e^{-s b}}{s^{n+1}}$ (2)
Applying (2) we find that...
$\displaystyle \mathcal{F} \{f(t)\} = [\int _{-1}^{1} f(t)\ e^{-s t}\ dt]_{s= i \omega}= 4\ [\frac{\cosh s}{s^{2}} - \frac{\sinh s}{s^{3}}]_{s= i \omega}$
$\displaystyle = 4\ [\frac{s\ \cosh s - \sinh s}{s^{3}}]_{s=i \omega}$ (3)
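(An added numerical cross-check, not part of the original thread: because $f$ is even, the transform reduces to a cosine integral, so (3) evaluated at $s = i\omega$ can be compared with direct quadrature.)

```python
# Compare the defining integral of F(w) with the closed form
# 4(sin w - w cos w)/w^3 at a sample frequency; the numbers should agree.
import numpy as np
from scipy.integrate import quad

w = 2.0
numeric, _ = quad(lambda t: (1 - t**2) * np.cos(w * t), -1, 1)
closed_form = (4 * np.sin(w) - 4 * w * np.cos(w)) / w**3
print(numeric, closed_form)
```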
Kind regards
$\chi$ $\sigma$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919268488883972, "perplexity": 2426.8073116887904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246646036.55/warc/CC-MAIN-20150417045726-00038-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://physics.stackexchange.com/questions/14349/why-can-you-remove-the-gravitational-constant-from-a-computer-game-simulation/14351 | # Why can you remove the gravitational constant from a computer game simulation?
I've seen in a few gravity simulation games (ie. bouncing balls) the equation:
force = G * m1 * m2 / distance^2
shortened to this by removing the gravitational constant:
force = m1 * m2 / distance^2
I accept that it works fine and saves some calculations, but I'm wondering why it still works? Is the value just too small to matter? What's the physics behind this?
In simple words: force and distance in games are usually measured in chickens and ducks. – valdo Sep 5 '11 at 10:56
Gravity is not "removed" there in the sense that attraction is still there. – Vladimir Kalitvianski Sep 5 '11 at 17:46
$G$ is just a constant of proportionality to get the units right (so that when $m_1$ and $m_2$ are in kilograms and $r$ is in meters you get a force in Newtons rather than wingdingalings or something really weird). Indeed cosmologists like to work in a system of units where $G = c = 1 \text{ (dimensionless)}$, and particle physicists like to work in units where $c = \hbar = 1 \text{ (dimensionless)}$, you can even set all three of these numbers to 1 if you like.
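(An added sketch, not from the original answer: numerically, absorbing $G$ into the masses, i.e. feeding the simulation $G m$, the gravitational parameter, instead of $m$, leaves every acceleration unchanged, which is why a game can set $G$ to 1 and still look right.)

```python
# Dropping G is just a change of units: absorb it into the masses.
G = 6.674e-11

def accel(m_other, r, g_const):
    """Gravitational acceleration toward a body of mass m_other at distance r."""
    return g_const * m_other / r**2

a_real = accel(5.0e10, 100.0, G)        # physical units: a = G * m / r^2
a_game = accel(5.0e10 * G, 100.0, 1.0)  # game units: G = 1, mass rescaled by G

print(a_real, a_game)  # identical numbers
```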
A very similar situation occurs in electrostatics: in SI units, Coulomb's law is $F = (1/4\pi\epsilon_0)(q_1q_2/r^2)$; the funky constant is there because the unit charge (coulomb) is defined in a roundabout way using magnetic force in current-carrying wires. But in Gaussian units, it is $F = q_1q_2/r^2$, with no constant, because the unit charge (esu) is defined in terms of Coulomb's law instead. Look up "geometrized units" on wikipedia for an extreme version of this. – Stan Liou Sep 5 '11 at 15:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9368326663970947, "perplexity": 457.04602422312735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/349696/showing-a-bounded-analytic-function-on-strip-is-identically-zero | # Showing a bounded analytic function on strip is identically zero
Let $f$ be analytic and bounded on $\{x+iy\in\mathbb{C}:|y|<\frac{\pi}{2}\}$. Suppose $f(\ln n)=0$ for all $n\in\mathbb{N}$. Show that $f$ is identically 0.
I tried to perform some transformations to end up in the unit disk to see if I could get anything. First I translated by $i\frac{\pi}{2}$ to get to the strip $\{x+iy\in\mathbb{C}:0<y<\pi\}$. Then I exponentiated to get to the upper half-plane. Finally I used $z\mapsto\frac{i-z}{i+z}$ to get to the unit disk. Under the composition of these maps, $\ln n$ in the original region corresponds to $\frac{1-n}{1+n}$ in the unit disk. I can't proceed from here.
Let $\phi(z) = \ln \frac{1+z}{1-z}$. $\phi$ is a conformal map of the open unit disk onto the set in question. Hence $\tilde{f} = f \circ \phi$ is a bounded, analytic function on the open unit disk.
A quick calculation shows that $\phi(\frac{n-1}{n+1}) = \ln n$, hence $\alpha_n =\frac{n-1}{n+1}$ is a zero of $\tilde{f}$ for every $n$. Furthermore, $\sum_n (1-|\alpha_n|) = \sum_n \frac{2}{1+n} = \infty$.
Theorem 15.23 in Rudin's "Real & Complex Analysis" asserts that if $f$ is analytic and bounded on the unit disk, not identically zero, and $\beta_n$ are the zeroes of $f$ listed according to their multiplicities, then $\sum_n (1-|\beta_n|) < \infty$.
It follows that $\tilde{f}$ must be identically zero, and since $\phi$ is onto, that $f$ is identically zero.
The strip $G$ is simply connected, then by Riemann Mapping Theorem, there is 1-1 analytic function $g$ maps the strip onto the unit disk $\mathbb D$.
Define $h(z)=f\circ g^{-1}(z)$, clearly $h$ is analytic on $\mathbb D$ and has infinitely many zeros in $\mathbb D$. Then by Identity Theorem, $h\equiv 0$. Since $g$ is not identically zero, $f\equiv 0$.
There are non-constant (even bounded) holomorphic functions on the unit disk that have infinitely many zeros. This argument is not sufficient. – Daniel Fischer May 19 '14 at 20:00
@DanielFischer, yes, you are right. Infinitely many zeros don't imply existence of a limit. – Falang May 19 '14 at 20:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9886753559112549, "perplexity": 86.41603521687654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398468233.50/warc/CC-MAIN-20151124205428-00041-ip-10-71-132-137.ec2.internal.warc.gz"} |
http://www.computer.org/csdl/trans/tg/2003/02/v0139-abs.html | Subscribe
Issue No.02 - April-June (2003 vol.9)
pp: 139-149
Roger Crawfis , IEEE Computer Society
ABSTRACT
This paper describes an efficient algorithm to model the light attenuation due to a participating medium with low albedo. Here, we consider the light attenuation along a ray, as well as the light attenuation emanating from a surface. The light attenuation is modeled using a splatting volume renderer for both the viewer and the light source. During the rendering, a 2D shadow buffer accumulates the light attenuation. We first summarize the basic shadow algorithm using splatting [30]. Then, an extension of the basic shadow algorithm for projective textured light sources is described. The main part of this paper is an analytic soft shadow algorithm based on convolution techniques. We describe and discuss the soft shadow algorithm, and generate soft shadows, including umbra and penumbra, for extended light sources.
INDEX TERMS | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8640984892845154, "perplexity": 2915.4071554331663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095874.61/warc/CC-MAIN-20150627031815-00300-ip-10-179-60-89.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/142560-limit-question.html | 1. ## Limit question
Determine Lim x->Infinity Sqrt[4 x^2 + 7 x - 2] - (2 x + 1)
I know the answer, but i dont know the steps to get to it..
2. Originally Posted by Monster32432421
Determine Lim x->Infinity Sqrt[4 x^2 + 7 x - 2] - (2 x + 1)
I know the answer, but i dont know the steps to get to it..
Try to rationalise the numerator.
3. Originally Posted by Monster32432421
Determine Lim x->Infinity Sqrt[4 x^2 + 7 x - 2] - (2 x + 1)
I know the answer, but i dont know the steps to get to it..
Write $\left[\sqrt{4x^2+7x-2}-(2x+1)\right]\,\frac{\sqrt{4x^2+7x-2}+(2x+1)}{\sqrt{4x^2+7x-2}+(2x+1)}$ $=\frac{3x-3}{\sqrt{4x^2+7x-2}+(2x+1)}$ , and now multiply the right hand by $\frac{1/x}{1/x}$ and do
a little algebra + some arithmetic of limits.
The limit is $\frac{3}{4}=0.75$
Tonio
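(Added for completeness, the "little algebra" made explicit: $\frac{3x-3}{\sqrt{4x^2+7x-2}+(2x+1)}\cdot\frac{1/x}{1/x} = \frac{3-\frac{3}{x}}{\sqrt{4+\frac{7}{x}-\frac{2}{x^2}}+2+\frac{1}{x}} \to \frac{3}{2+2}=\frac{3}{4}$ as $x\to\infty$.)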
4. Also take a little care; you may make mistakes very easily.
5. Originally Posted by tonio
Write $\left[\sqrt{4x^2+7x-2}-(2x+1)\right]\,\frac{\sqrt{4x^2+7x-2}+(2x+1)}{\sqrt{4x^2+7x-2}+(2x+1)}$ $=\frac{3x-3}{\sqrt{4x^2+7x-2}+(2x+1)}$ , and now multiply the right hand by $\frac{1/x}{1/x}$ and do
a little algebra + some arithmetic of limits.
The limit is $\frac{3}{4}=0.75$
Tonio
thanks
6. ${\sqrt{4x^2+7x-2}\over x} = \sqrt{4x^2+7x-2\over x^2} = \sqrt{4 + \frac7x -\frac2{x^2}}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9760248064994812, "perplexity": 656.6714346860111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542695.22/warc/CC-MAIN-20161202170902-00068-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/sho-energy-eigenvalues.192798/ | # SHO energy eigenvalues
1. Oct 21, 2007
### indigojoker
We know the eigenvalue relation for the Hamiltonian of a SHO (in QM) though relating the raising and lowering operators we get:
$$H= \hbar \omega (N+1/2)$$
This is true for $$H=\frac{p^2}{2m}+\frac{m \omega^2 x^2}{2}$$
I would like to solve for another case where $$V=a\frac{m \omega^2 x^2}{2}$$
where a is some constant
We now have $$H=\frac{p^2}{2m}+\frac{ a m \omega^2 x^2}{2}$$
I'm not sure how to go about this. When relating the creation and annihilation operators, we get: $$a^{\dagger} a = \frac{m \omega}{2 \hbar} x^2 + \frac{1}{2m \omega \hbar} p^2 -\frac{1}{2}$$
I'm not sure how to incorporate a constant into the potential, any ideas?
2. Oct 21, 2007
### christianjb
This is equivalent to the substitution w'=sqrt(a)w, or am I missing something?
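(Added worked step, in the thread's notation: with $$V=a\frac{m \omega^2 x^2}{2}=\frac{m (\sqrt{a}\,\omega)^2 x^2}{2}$$ the Hamiltonian is an ordinary SHO with angular frequency $$\omega' = \sqrt{a}\,\omega$$, so the whole ladder-operator construction goes through unchanged with $$\omega \to \omega'$$, giving $$H=\hbar \sqrt{a}\,\omega\,(N+1/2).$$)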
3. Oct 21, 2007
### indigojoker
how can you arbitrarily say that though?
4. Oct 21, 2007
### christianjb
It's mathematically true that you can make that substitution. Maybe I'm missing some subtlety here!
5. Oct 21, 2007
### indigojoker
so you're saying that the energy eigenvalues will be:
$$H= \hbar \sqrt{a}\omega (N+1/2)$$
6. Oct 21, 2007
### malawi_glenn
a is just a constant. Now if you look at the harmonic potential, the $$\omega$$ is the "ground" (classical) angular frequency of the potential. So if you draw the potential as a function of x, i.e. V(x), you see that the energy eigenvalues are $$\hbar (\omega \sqrt a)(n + 1/2)$$, because you simply do the change of variable that christianjb pointed out, so you get new annihilation operators and so on. Introducing this a just implies that we change to the same 1-dim SHO but with another angular frequency.
7. Oct 21, 2007
Yes. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9871280193328857, "perplexity": 881.3688811739871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647567.36/warc/CC-MAIN-20180321023951-20180321043951-00545.warc.gz"} |
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-a-molecular-approach-3rd-edition/chapter-3-sections-3-1-3-12-exercises-cumulative-problems-page-134/123 | ## Chemistry: A Molecular Approach (3rd Edition)
$1.80\times10^{2}\frac{g\ Cl}{1\ year}$
If the car emits $25g\ CF_{2}Cl_{2}$ a month then it will emit $25\times 12=300g \ CF_{2}Cl_{2}$ in a year. Divide this number by the molar mass of $CF_{2}Cl_{2}$ to get the number of moles of $CF_{2}Cl_{2}$. Then multiply by 2 because there are 2 moles of $Cl$ atoms for every mole of $CF_{2}Cl_{2}$. Finally, multiply by the molar mass of $Cl$. $300g\ CF_{2}Cl_{2}\times\frac{1 mol\ CF_{2}Cl_{2}}{120.91g\ CF_{2}Cl_{2}}\times2\times\frac{35.45g\ Cl}{1mol\ Cl}=\frac{1.80\times10^{2}g\ Cl}{1\ year}$
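A quick numerical check of this conversion chain (a sketch in Python, not part of the original answer; the molar masses are the standard values used above):

```python
# grams of Cl released per year from 25 g of CF2Cl2 per month
m_CF2Cl2 = 25.0 * 12                   # g of CF2Cl2 emitted in a year
M_CF2Cl2 = 12.01 + 2*19.00 + 2*35.45   # molar mass, g/mol (= 120.91)
M_Cl = 35.45                           # g/mol

mol_Cl = 2 * (m_CF2Cl2 / M_CF2Cl2)     # 2 Cl atoms per formula unit
print(mol_Cl * M_Cl)                   # ~175.9 g, i.e. 1.8e2 g Cl per year
                                       # when rounded to two significant figures
```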
https://pqnelson.wordpress.com/2012/06/04/optimization-motivation/ | ## Optimization (Motivation)
Briefly, we consider the problem when a tangent line to a curve has zero slope. That is, the derivative of the curve is zero at some point.
Such a point is called a “critical point” and tells us about extreme values.
Example. Consider the function $f(x)=x^{2}-2x+1$. We plot this out:
Now, we can consider the tangent line at, say, $x=1$. We draw this tangent line in red:
Observe every point $(x,f(x))$ on the curve has the property that $f(1)\leq f(x)$. In other words, $f(1)$ is a “Global Minimum” (it’s less than every other $f(x)$ if $x\not=1$).
How can we detect such “extrema” (i.e., “maxima” and “minima”)? Well, the tangent line has zero slope. Equivalently: $f'(x_{0})=0$ when the point $(x_{0}, f(x_{0}))$ is an extreme point.
Sometimes these extrema are only local. For example, plotting $f(x)=x^{-1}\cos(3\pi x)$ when $0.1\leq x\leq1.1$:
So how do we detect extrema?
Well, we take our function $g(x)$, take its derivative, then find points $x_{0}$ such that $g'(x_{0})=0$.
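As a small illustration of that routine (a sketch using SymPy, not part of the original post), applied to the first example $f(x)=x^{2}-2x+1$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**2 - 2*x + 1

critical_points = sp.solve(sp.diff(f, x), x)   # solve f'(x) = 0
print(critical_points)                         # [1] -> the extremum at x = 1
print(sp.diff(f, x, 2).subs(x, 1))             # 2 > 0, a hint for question (a) below
```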
This is the general routine, but let's leave the motivating post with a few questions…
Problem: as we can see with the function $x^{-1}\cos(3\pi x)$, some extrema are local while others are global. How do we
(a) Determine if an extrema is a maximum or minimum?
(b) Determine if the extrema is local or global? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9771674275398254, "perplexity": 482.9222977025693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00168.warc.gz"} |
http://www.tricki.org/node/425/revisions/3642/view | Tricki
## Lower degree by increasing dimension (or vice-versa)
### Quick description
When dealing with a high degree expression in one variable it is possible, and usually advantageous, to convert it into a low degree expression in many variables. On the other hand, it may sometimes be useful to hide all but one of the variables of a many-variable expression by lowering dimension (at the implicit expense of degree).
### Prerequisites

Basic algebra.
### Example 1: ode
Consider a differential equation of order $n-1$ for one unknown function $x(t)$ of one variable, expressing $x^{(n-1)}$ in terms of the lower-order derivatives and $t$. One may turn it into a degree one (first-order) system of equations in $n$ variables simply by renaming $t_1 = t$, $t_2 = x$, $t_3 = x'$, …, $t_n = x^{(n-2)}$, obtaining:
\begin{eqnarray} (t_1)' &=& 1 \\ (t_2)' &=& t_3 \\ (t_3)' &=& t_4 \\ \vdots \\ (t_{n-1})' &=& t_n \\ (t_n)' &=& f(t_{n-1},\dots ,t_1) \end{eqnarray}
The same could be done as above with a polynomial of degree $n$ in one variable $x$, instead of this differential equation, obtaining a system of polynomials of degree one in $n$ variables $x_1,\dots,x_n$.
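In computational practice this renaming is exactly how a high-order ODE is fed to a first-order solver. A minimal sketch (not from the original article; the oscillator equation is just a stand-in, and SciPy is an assumed dependency):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Second-order equation x'' = -x, rewritten as a first-order system
# in two variables: y1 = x, y2 = x'.
def rhs(t, y):
    y1, y2 = y
    return [y2, -y1]          # y1' = y2,  y2' = f(y1) = -y1

sol = solve_ivp(rhs, (0.0, 10.0), y0=[1.0, 0.0])
print(sol.y[0, -1], np.cos(10.0))   # the two values should be close
```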
### Example 2: field
Let $K$ be a function field in $n$ variables over a field $F$, so that $K$ is a finite algebraic extension of $F(x_1,\dots,x_n)$. If one is interested only in $K$ and not in $F$, then one can replace $F$ with $F(x_1,\dots,x_{n-1})$ and view the function field in $n$ variables as a function field in one variable.
### General discussion
The pattern is clear from these examples: renaming dependent quantities to turn them into extra variables leads to a lower degree. This allows one to apply standard degree-one techniques to higher-degree problems.
https://phys.libretexts.org/Bookshelves/University_Physics/Book%3A_Physics_(Boundless)/3%3A_Two-Dimensional_Kinematics/3.1%3A_Motion_in_Two_Dimensions | $$\require{cancel}$$
# Constant Velocity
An object moving with constant velocity must have a constant speed in a constant direction.
learning objectives
• Examine the terms for constant velocity and how they apply to acceleration
Motion with constant velocity is one of the simplest forms of motion. This type of motion occurs when an object is moving (or sliding) in the presence of little or negligible friction, similar to that of a hockey puck sliding across the ice. To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion along a straight path.
Newton’s second law ($$\mathrm{F=ma}$$) suggests that when a force is applied to an object, the object would experience acceleration. If the acceleration is 0, the object shouldn’t have any external forces applied on it. Mathematically, this can be shown as the following:
$\mathrm{a=\frac{dv}{dt}=0 \;\Rightarrow\; v=const.}$
If an object is moving at constant velocity, the graph of distance vs. time ($$\mathrm{x}$$ vs. $$\mathrm{t}$$) shows the same change in position over each interval of time. Therefore the motion of an object at constant velocity is represented by a straight line: $$\mathrm{x=x_0+vt}$$, where $$\mathrm{x_0}$$ is the displacement when $$\mathrm{t=0}$$ (or at the y-axis intercept).
Motion with Constant Velocity: When an object is moving with constant velocity, it does not change direction nor speed and therefore is represented as a straight line when graphed as distance over time.
You can also obtain an object’s velocity if you know its trace over time. Given such a position-versus-time graph, we can calculate the velocity from the change in distance over the change in time. In graphical terms, the velocity can be interpreted as the slope of the line. The velocity can be positive or negative, and is indicated by the sign of our slope. This tells us in which direction the object moves.
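As a small worked illustration (a sketch with made-up sample values, not from the original text), the slope calculation is just rise over run:

```python
# Two sample points read off a distance-vs-time graph (made-up values)
t0, x0 = 0.0, 2.0     # s, m
t1, x1 = 4.0, 10.0    # s, m

v = (x1 - x0) / (t1 - t0)   # slope of the x-t line = constant velocity
print(v)                    # 2.0 m/s; a negative slope would mean motion in -x
```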
# Constant Acceleration
Analyzing two-dimensional projectile motion is done by breaking it into two motions: along the horizontal and vertical axes.
learning objectives
• Analyze a two-dimensional projectile motion along horizontal and vertical axes
Projectile motion is the motion of an object thrown, or projected, into the air, subject only to the force of gravity. The object is called a projectile, and its path is called its trajectory. The motion of falling objects is a simple one-dimensional type of projectile motion in which there is no horizontal movement. In two-dimensional projectile motion, such as that of a football or other thrown object, there is both a vertical and a horizontal component to the motion.
Projectile Motion: Throwing a rock or kicking a ball generally produces a projectile pattern of motion that has both a vertical and a horizontal component.
The most important fact to remember is that motion along perpendicular axes are independent and thus can be analyzed separately. The key to analyzing two-dimensional projectile motion is to break it into two motions, one along the horizontal axis and the other along the vertical. To describe motion we must deal with velocity and acceleration, as well as with displacement.
We will assume all forces except for gravity (such as air resistance and friction, for example) are negligible. The components of acceleration are then very simple: $$\mathrm{a_y=−g=−9.81\frac{m}{s^2}}$$ (we assume that the motion occurs at small enough heights near the surface of the earth so that the acceleration due to gravity is constant). Because the acceleration due to gravity is along the vertical direction only, $$\mathrm{a_x=0}$$. Thus, the kinematic equations describing the motion along the $$\mathrm{x}$$ and $$\mathrm{y}$$ directions respectively, can be used:
\begin{align} \mathrm{x} & \mathrm{=x_0+v_xt} \\ \mathrm{v_y} & \mathrm{=v_{0y}+a_yt} \\ \mathrm{y} & \mathrm{=y_0+v_{0y}t+\dfrac{1}{2}a_yt^2} \\ \mathrm{v_y^2} &\mathrm{=v_{0y}^2+2a_y(y-y_0)} \end{align}
We analyze two-dimensional projectile motion by breaking it into two independent one-dimensional motions along the vertical and horizontal axes. The horizontal motion is simple, because $$\mathrm{a_x=0}$$ and $$\mathrm{v_x}$$ is thus constant. The velocity in the vertical direction begins to decrease as an object rises; at its highest point, the vertical velocity is zero. As an object falls towards the Earth again, the vertical velocity increases again in magnitude but points in the opposite direction to the initial vertical velocity. The $$\mathrm{x}$$ and $$\mathrm{y}$$ motions can be recombined to give the total velocity at any given point on the trajectory.
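The following sketch (assumed launch values, not from the original text) evaluates the two independent sets of kinematic equations side by side:

```python
import numpy as np

g = 9.81                          # m/s^2
v0, angle_deg = 20.0, 35.0        # assumed launch speed and angle
v0x = v0 * np.cos(np.radians(angle_deg))
v0y = v0 * np.sin(np.radians(angle_deg))

t = np.linspace(0.0, 2 * v0y / g, 50)   # launch to landing at launch height
x = v0x * t                             # horizontal: a_x = 0, constant velocity
y = v0y * t - 0.5 * g * t**2            # vertical: constant acceleration -g
vy = v0y - g * t                        # vertical velocity, zero at the apex
print(x[-1])                            # horizontal range for these values
```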
# Key Points
• Constant velocity means that the object in motion is moving in a straight line at a constant speed.
• This line can be represented algebraically as: $$\mathrm{x=x_0+vt}$$, where $$\mathrm{x_0}$$ represents the position of the object at $$\mathrm{t=0}$$, and the slope of the line indicates the object’s speed.
• The velocity can be positive or negative, and is indicated by the sign of our slope. This tells us in which direction the object moves.
• Constant acceleration in motion in two dimensions generally follows a projectile pattern.
• Projectile motion is the motion of an object thrown or projected into the air, subject to only the (vertical) acceleration due to gravity.
• We analyze two-dimensional projectile motion by breaking it into two independent one-dimensional motions along the vertical and horizontal axes.
# Key Terms
• constant velocity: Motion that does not change in speed nor direction.
• kinematic: of or relating to motion or kinematics | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9551527500152588, "perplexity": 248.13582630444657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316555.4/warc/CC-MAIN-20190822000659-20190822022659-00100.warc.gz"} |
http://www.impan.pl/cgi-bin/dict?referee | ## referee
#### 1
The author thanks the referee for his helpful suggestions concerning the presentation of this paper.
The author thanks the referee for recommending various improvements in exposition.
We are grateful to the referee for a number of helpful suggestions for improvement in the article.
The referee deserves thanks for careful reading and many useful comments.
At the suggestion of the referee, we consider some simple cases.
#### 2
While the first version was being refereed, I found that Zhang [2] had given a similar treatment of $E_n(X)$.
https://link.springer.com/article/10.1007%2Fs40870-015-0014-6 | Journal of Dynamic Behavior of Materials
Volume 1, Issue 2, pp 176–190
# Identification of the Dynamic Properties of Al 5456 FSW Welds Using the Virtual Fields Method
• G. Le Louëdec
• F. Pierron
• M. A. Sutton
• C. Siviour
• A. P. Reynolds
## Abstract
The present study focuses on the identification of the evolution of the dynamic elasto-plastic properties of Al 5456 FSW welds. An innovative method is proposed to make best use of the data collected with full-field measurements during dynamic experiments, and achieve identification of the mechanical properties of heterogeneous materials without requiring measurement of the load. Compressive specimens have been submitted to high strain-rate loading through a split Hopkinson pressure bar device while displacement fields were obtained using full-field measurement techniques. Two sets of experiments have been performed using two different methods: the grid method and digital image correlation. Afterwards, the identification of the elastic and plastic properties of the material was carried out using the Virtual Fields Method. Finally, identification of the evolution of the yield stress throughout the weld has been achieved for strain-rates of the order of 10³ s⁻¹.
## Keywords
Friction stir welding · Virtual fields method · Dynamic deformation · Digital image correlation · Grid method · Split Hopkinson pressure bar
## Introduction
Since its invention in 1991, the Friction Stir Welding (FSW) process [1] has allowed the use of large aluminium structures for a wide range of applications, thanks to the high resistance of the welds thus produced. In various fields, such as automotive and aeronautics, these welds hold an important place. Therefore, the evolution of the mechanical properties at different strain-rates is of interest, given that, depending on the process, the welded material can undergo important structural changes, ranging from a change of grain size to total recrystallisation. However, the high strain-rate mechanical properties used in numerical simulations are still estimates. Indeed, different issues arise when dealing with dynamic experiments. It is not easy to obtain accurate measurements of the strain, the load and the acceleration at strain-rates of the order of 10³ s⁻¹ or more.
Several tests have been used over the last century to carry out experiments at high strain-rates [2]. The split Hopkinson pressure bar (SHPB) was developed based on the work of Hopkinson [3] and Kolsky [4]. This system allows the realization of experiments at strain-rates up to 10,000 s−1. Over the last decades, the SHPB and the tensile split Hopkinson bar [5] have become standards for the dynamic characterization of materials [6, 7, 8, 9, 10, 11, 12, 13, 14]. Starting with Hoge [15], the influence of the strain-rate on the mechanical properties of aluminium alloys, more specifically here, the tensile yield stress, has been investigated. For Al 6061 T6 and a strain-rate varying from 0.5 to 65 s−1, Hoge measured an increase in yield stress of approximately 28 %. More recent work by Jenq et al. [8] showed the evolution of the stress-strain curve between compressive quasi-static and dynamic tests at strain-rates ranging from 1350 to 2520 s−1. In that work, increases in yield stress of 25 % between the quasi-static test and the 1350 s−1 test and 60 % between the quasi-static and the 2520 s−1 test were measured. For Al 5083, Al 6061 and A356 alloys, it is also worth noting that Tucker [13] reported almost no evolution of the yield stress between tensile quasi-static and dynamic tests, also reaching similar conclusions for compression and shear. However, significant work hardening differences were recorded between tension and compression, with consistent increasing work hardening with strain-rate in compression.
To date, very few investigations have been conducted on the dynamic properties of welds. With SHPB experiments, it is possible to identify the average properties of a welded specimen [16, 17, 18]. However, there is no information about the local evolution of the dynamic properties within the weld. Due to the complex thermo-mechanical history of the welded material, the strain-rate dependence of the different areas of the weld could be quite different. Therefore, investigation of the evolution of the local properties of the material is of interest. Yokoyama et al. [19] proposed to carry out the identification of the dynamic local properties in a weld by cutting small specimens in the weld so as to consider each specimen as homogeneous. However, some issues remain due to the low spatial resolution and the assumption of the specimen homogeneity. This is also a very long and tedious process.
Developments in the field of digital ultra-high speed (UHS) cameras now allow the imaging of experiments at 106 frames per second and above. The definition of ’ultra-high speed imaging’ is provided in [20]. Studies regarding the performance of high speed and ultra-high speed imaging systems have been reported in the past few years, e.g. [21, 22]. These technologies enable temporal resolutions on the order of a microsecond and below with good spatial resolution, making it possible to measure both full-field strains and accelerations with excellent temporal and spatial resolutions; this is essential for the current study. These cameras still have important drawbacks however: high noise level, low number of images and very high cost. Recently, the advent of in-situ storage cameras like the Shimadzu HPV-X or the Specialized Imaging Kirana has given new impetus to using ultra-high speed imaging for full-field deformation measurements. Image quality has improved considerably, as evidenced in [23].
Finally, in dynamic testing, the key issue relates to external load measurement. Indeed, inertial effects in standard load cells (‘ringing’) prevent accurate loads to be measured. The alternative is to resort to an SHPB set-up using the bars as a very bulky and inconvenient load cell. This procedure works well but within a very restrictive set of assumptions: specimen quasi-static equilibrium (no transient stress waves, requiring a short specimen) and uniaxial loading, in particular. The need for more complex stress states to identify and fully validate robust constitutive models requires investigators to move away from such stringent assumptions if at all possible. The current study is exploring this idea for welds, based on the Virtual Fields Method (VFM).
The VFM was first introduced in the late 1980s in order to solve inverse problems in materials constitutive parameter identification with the aid of full-field measurements. Since then, it has been successfully applied to the identification of constitutive parameters for homogeneous materials in elasticity [24, 25], elasto-plasticity [26, 27], and visco-plasticity [28]. The method has also been used for heterogeneous materials (welds) in quasi-static loading and elasto-plastic material response [29]. Recent developments by Moulart et al. [30] introduced the application of the VFM to the identification of the dynamic elastic properties of composite materials. The main idea in this case is to use the acceleration field as a load cell, avoiding the need to measure an external load. Since then, it has also been used to identify the damage process of concrete materials [31], and to analyze the deformation of a beam in dynamic three-point bending [32]. More recently, spectacular improvements in image quality have led to unprecedented quality of identification, as evidenced in [23] for the elastic response of a quasi-isotropic laminate at strain-rates above 2000 s⁻¹. However, until now, it has never been attempted to identify an elasto-plastic model with this approach. Thus, the enclosed work breaks new ground by not only using acceleration fields instead of measured load data but also applying this approach in the more complex situation where heterogeneous plastic deformation is occurring in a weld.
The aim of this study is to explore new ways to use the VFM for the identification of the dynamic heterogeneous elasto-plastic properties of Al 5456 FSW welds. The nature of the paper is seminal in the way that it insists on the methodology and its potential. Many developments are still required to make this procedure a standard tool (including better UHS cameras, adapted test design etc.) but the authors feel that the current technique has great potential for future dynamic tests of materials. This study is part of a global long-term effort to design the next generation of high strain-rate tests based on rich full-field deformation information. The recent progress in UHS cameras reported above makes this contribution all the more timely, even if the results reported here are somewhat impaired by the fact that lower image quality cameras were used at the time that the experiments were performed.
## Specimens and Experiments
The identification of the dynamic properties of the weld was performed based on experimental results from SHPB tests. It is worth noting however that the set-up of the SHPB test is used here, but the SHPB data reduction procedures are not used. Moreover, the first images were taken when the transient stress wave was present in the specimens and the accelerations were at their maximum, preventing any use of the standard SHPB analysis anyway. Two series of tests have been carried out for this work: one at the University of Oxford on welded and base material specimens where the grid method (or ‘sampling moiré’) [33, 34, 35, 36, 37, 38, 39] was used, and a second one at the University of South Carolina on welded specimens only where digital image correlation [40] was used.
### Specimens
Generally, cylindrical specimens are used in the SHPB set-up to ensure a homogeneous propagation of the wave. However, in this case, 2D imaging was performed during the experiment. Therefore, flat surfaces were machined on both sides of the specimens (Fig. 1) in order to comply with 2D-DIC and grid method requirements. The specimen was designed so as to avoid any compressive buckling during the early part of the test when measurements were collected. Indeed, for this kind of specimen, buckling will occur for an axial stress of about 420 MPa, whereas the expected dynamic yield stress for the base material (50 % higher than the quasi-static value [13]) is 380 MPa. Therefore, information on the elasto-plastic behaviour will be available before any buckling occurs.
### SHPB Tests Using the Grid Method
These tests were performed on a SHPB set-up at the University of Oxford. Five tests were performed with the grid method, three on base material specimens and two on welded specimens.
#### Experiment
Before performing the experiments, cross-line grids have been transferred onto the surface of interest of the different specimens. The grids were printed on a 0.18 mm thick polyester film, with a period of $$150\,\upmu \hbox {m}$$. The grid transfer was performed using the method proposed by Piro and Grédiac [41]. The imaging field of view was 24.5 mm along the $$X_{2}$$-direction (starting on the left hand side of the specimen) and 10 mm along the $$X_{1}$$-direction which is the width of the flattened side of the specimen, see Fig. 1. The camera used here is a SIM 16 camera with a 50 mm lens. This camera possesses 16 CCD sensors and a beam splitter spreading the light through the 16 channels, enabling extremely fast imaging as the limiting factor is electronic gating. However, the downside of this technology concerns the use of light amplifiers (ICCD sensors) causing issues in the imaging, as will be demonstrated later on in this article and illustrated in previous studies [21, 22]. Some details concerning the camera and lens are reported in Table 1. The camera was positioned facing the grid with the lens axis normal to the observed surface, with the specimen approximately 20 cm from the lens. The specimen was illuminated using two flash lights triggered from a strain gauge bonded onto the incident bar. In dynamics, wave propagation is much faster than any rigid body movements. Therefore, it is favourable for 2D imaging with a camera positioned close to the specimen, as the issue of parasitic strain coming from out-of-plane displacements can largely be ignored since the strain wave has passed before the out of plane motions occur. The quantification of the noise level was performed by measuring the displacement between two sets of images of the stationary specimen and calculating the standard deviation of the resulting displacement and strain fields.
Table 1
SHPB imaging parameters with SIM 16 camera
| Parameter | Value |
|---|---|
| Camera | SIM 16 |
| Sensor size | 1360 × 1024 pixel² |
| Field of view | 24.5 × 10 mm² |
| Interframe | 5 μs |
| Shutter speed | 1 μs |
| Total number of images | 16 |
| Technique used | Grid method |
| Period size | 150 μm |
| Pixels per period | 9 |
| Displacement: smoothing method | Least square convolution |
| Displacement: smoothing window | 31 × 31 measurement points |
| Displacement: resolution | 0.048 pixels (0.8 μm) |
| Strain: differentiation method | Finite difference |
| Strain: resolution | 313 μstrain |
| Acceleration: differentiation method | Finite difference from smoothed displacements |
| Acceleration: resolution | 66,000 m s⁻² |
Both input and output bars were 500 mm long, 15 mm in diameter and made from steel. The impactor speed was up to 18 m s⁻¹. The strain-rate fields obtained by finite difference differentiation of the strain fields showed maximum local strain-rates of 1300 and 1000 s⁻¹ for the welded and homogeneous specimens respectively. It should be noted that the strain-rate maps are heterogeneous in space and variable in time. In particular, at the onset of plasticity, there is a sharp local increase of the strain-rate, as was also evidenced in [28]. In the standard SHPB approaches, this is ignored and only an average strain-rate is considered. Ideally, the heterogeneous strain-rate maps should be used to enrich the identification of the strain rate dependence, as was performed in [28]. This was not done here as the quality of the data does not currently allow for it, but this is a clear track to follow in the future to improve the procedure. The acquisition and lighting systems were triggered by a strain gauge bonded onto the incident bar. The images were taken with an interframe time of 5 μs, and a shutter speed of 1 μs. A total of 16 images were taken during each test. Indeed, the technology of the camera is based on the use of a beam splitter and 16 sensors. Therefore, each image was taken from a different sensor. As a result, there was a difference in light intensity between the different images, and it was not possible to accurately measure the displacement fields between images taken from different sensors. To address this issue, the displacement fields were computed between two images taken with the same sensor: one static reference image and the actual dynamic one. The displacement computation is based on the phase shift between the reference and deformed images [33]. In this study, a windowed discrete Fourier transform (WDFT) algorithm was used [34, 35, 42, 43]. It calculates the discrete Fourier transform of the intensity over a set of pixels with a triangular window kernel. However, the measured phase maps consist of values between $$-\pi$$ and $$\pi$$. Therefore, it is not possible to measure a displacement associated with a phase shift that exceeds $$\pi$$. In this case, it is necessary to unwrap the phase map in order to obtain the actual value of the displacement. Extensive work has been done in the past to address this problem [44, 45, 46]. The algorithm used in this study is presented in [47].
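As an illustration of the phase-based displacement measurement, here is a 1-D sketch with synthetic values (not the authors' implementation of the WDFT of [34, 35, 42, 43], and with an arbitrary sign convention):

```python
import numpy as np

p = 9                                    # pixels per grid period (cf. Table 1)
x = np.arange(512)
u_true = 0.4                             # imposed rigid shift, in pixels
ref  = 128 + 100 * np.cos(2 * np.pi * x / p)
defo = 128 + 100 * np.cos(2 * np.pi * (x - u_true) / p)

def local_phase(img, p, x0, w=26):       # triangular window spanning ~3 periods
    idx = np.arange(x0 - w, x0 + w + 1)
    win = 1.0 - np.abs(idx - x0) / (w + 1)
    z = np.sum(win * img[idx] * np.exp(-2j * np.pi * idx / p))
    return np.angle(z)

dphi = local_phase(defo, p, 256) - local_phase(ref, p, 256)
u = -p * dphi / (2 * np.pi)              # recovered displacement, ~0.4 pixel
print(u)
```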
#### Smoothing, Acceleration and Strain Computation
In order to reduce the effect of measurement noise, displacement fields were smoothed using an iterative least square convolution method [48]. The smoothing was performed over a 31 × 31 pixels window using a second order polynomial function. Then, the strain fields were computed from the displacement fields by finite difference. Velocity and acceleration fields were calculated from the smoothed displacement field using a centred temporal finite difference scheme. This precluded reliable acceleration maps from being obtained for the first two and last two images. Therefore, acceleration fields were only available for 12 steps of the experiment, whereas 16 steps were available for the strain and displacement fields. By recording two sets of images for the stationary specimen prior to testing, it is possible to compute the standard deviation of the resulting displacement, strain and acceleration maps. This provides an estimate of the ‘resolution’ as reported in Table 1 together with smoothing details.
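A minimal sketch of that temporal differentiation (assumed array shapes, not the authors' code: `u` is the stack of smoothed displacement maps and `dt` the interframe time), which also shows why acceleration maps are only available for 12 of the 16 frames:

```python
import numpy as np

def velocity_acceleration(u, dt):
    """u: array of shape (n_frames, ny, nx) holding smoothed displacement maps."""
    v = np.full_like(u, np.nan)
    a = np.full_like(u, np.nan)
    v[1:-1] = (u[2:] - u[:-2]) / (2.0 * dt)      # centred first difference in time
    a[2:-2] = (v[3:-1] - v[1:-3]) / (2.0 * dt)   # centred difference of the velocity
    return v, a                                  # first/last two frames remain NaN
```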
### SHPB Tests Using Digital Image Correlation
This test was carried out on a welded specimen with digital image correlation on an SHPB set-up at the University of South Carolina.
#### Experiment
Before performing the experiment, the specimen was coated with a thin layer of white paint and a black random speckle pattern was transferred on it using rub on transfer decal paper. This method was preferred to the use of paint and airbrush to obtain a highly contrasted speckle pattern. The reasoning behind this choice will be developed in the next section. The field of view of the camera was 24.5 mm along the $$X_{2}$$-direction (starting on the left side of the specimen) and 10 mm along the $$X_{1}$$-direction which is the width of the flattened side of the specimen. The camera used here was a DRS IMACON 200 with a 200 mm lens (Table 2). The camera was positioned facing the specimen with the lens axis normal to the observed surface. The specimen was lit by two flash lights. The quantification of the noise level was performed by measuring the displacement between two sets of static images and calculating the standard deviation of the resulting displacement and strain fields.
Table 2
SHPB imaging parameters with DRS IMACON 200 camera
| Parameter | Value |
|---|---|
| Camera | IMACON 200 |
| Sensor size | 1340 × 1024 pixel² |
| Field of view | 24.5 × 10 mm² |
| Interframe | 4 μs |
| Shutter speed | ~0.4 μs |
| Total number of images | 16 |
| Technique used | DIC |
| Speckle pattern | Rub on transfer decal |
| Subset | 55 |
| Shift | 20 |
| Displacement: smoothing method | Least square convolution |
| Displacement: smoothing window | 31 × 31 measurement points |
| Displacement: resolution | 0.07 pixels (1.28 μm) |
| Strain: differentiation method | Analytical |
| Strain: resolution | 484 μstrain |
| Acceleration: differentiation method | Finite difference from smoothed displacements |
| Acceleration: resolution | 45,000 m s⁻² |
Both input and output bars were 2388 mm long, 25.4 mm in diameter and made from steel. The 483 mm long steel impactor speed was 24 m s−1. The strain-rate fields measured by finite difference of the strain fields showed a maximum local strain-rate of 1600 s−1. The acquisition system was triggered by a piezo-electric sensor set on the incident bar. The images were taken with an interframe time of $$4\,\upmu s$$, and a shutter speed of $$0.4\,\upmu s$$. The DRS IMACON 200 uses the same type of technology as the SIM 16, therefore displacement fields were computed between a static reference image and the actual dynamic image from each sensor. Moreover, in order to reduce the influence of the difference of contrast between the different sensors, flat-field correction has been performed on the images [22]. All images were processed using the 2D-DIC software VIC-2D [49].
#### Noise Issues
Due to the technology of the DRS IMACON 200, the noise level remains an issue. In fact, the camera tends to smooth out the grey levels on the raw images (Fig. 2). This is caused by pixel to pixel photon “leakage” due to the light amplifiers. It is worth noticing that the same issue arises with the SIM 16 camera, however, the phenomenon was less marked probably because of the lower grain size in the phosphorous screens used in the light amplifiers. As a consequence, it was chosen to realise the speckle pattern by using a rub on transfer decal paper instead of spray paint. Thanks to the highly contrasted speckle pattern, it has been possible to reduce the effects of the high noise level (Fig. 3). Despite this improvement, the noise level remains significant. This matter was investigated by Tiwari et al. [22] who recommended the use of unusually large subsets at the cost of spatial resolution. Therefore, to ensure accurate measurement of the displacement fields, a subset of 55 pixels was used. However, a second issue arose. Even with a large subset, the noise presented a high spatial correlation (Fig. 3). This will remain a problem as it will make smoothing less efficient. Nevertheless, the spatial heterogeneities of the mechanical fields are limited, which is the reason why the current limitations can be overcome and quantitative data produced.
#### Smoothing, Acceleration and Strain Computation
The strain fields were computed by the VIC-2D software [49] through analytical differentiation after a least square quadratic fit of the displacement fields over a 5 × 5 window. Then the strain fields were smoothed using an iterative least square convolution method [48] over a 31 × 31 pixels window using a second order polynomial function. The acceleration fields were calculated with the same method as used for the grid method tests. The baseline information on the measurements can be found in Table 2.
### Results
The evolution of the axial strain and acceleration fields are presented in Figs. 4, 5, 6, 7, 8, and 9. Foremost, it is important to note that there is a time shift between the two set-ups. Indeed, the triggering was not performed in the same manner. As a result, the earliest stages of the mechanical wave do not appear on the acceleration fields from the set-up using the grid method. On the first acceleration field, the mechanical wave is already halfway through the specimen which corresponds to the third field $$(16\,\upmu {\rm{s}})$$ measured with DIC. However, in Figs. 5, 7 and 9, the impact wave is clearly visible at the early stages of the experiment (acceleration $$> 0$$), which is followed by a reflected wave (acceleration $$<\,0$$) and a second reflection of the wave (acceleration $$> 0$$) of lower magnitude. It should be noted that the elastic strains caused by the elastic wave cannot be seen on the strain maps as they are hidden in the large plastic strains present in the specimen. It is also worth noting that, for both welded and homogeneous specimens, there is a strong localisation of strain on the impact side. For base material specimens, this is mostly due to a non-uniform contact between the impacting bar and the specimen, while the gradient of mechanical properties is responsible for it in the case of welded specimens. It should be noted however that this is not a problem for the analysis performed in this paper as the inverse identification naturally folds this in.
Concerning the measurements with the grid method, the lower impact velocity could affect the identification of the mechanical properties of the material. Indeed, it results in lower values in the acceleration and strain fields and therefore, could hinder the identification process due to larger noise to signal ratio. One can notice that the average strain on the right hand side of the specimen barely reaches the estimated base material yield strain $$({\simeq}0.005)$$. Therefore, it could affect the identified plastic parameters. This problem does not occur in the measurements realised with DIC due to the higher impact velocity and average strain over the specimen.
## Virtual Fields Method
The virtual fields method is based on the principle of virtual work, which is written, in the absence of volume forces, as (1). The convention of summation over repeated indices is used here.
\begin{aligned} -\int \!\!\int \!\!\int_{V}\sigma_{ij}\epsilon_{ij}^{*}\, dV + \iint\nolimits_{S_{V}}T_{i}u^{*}_{i}\, dS&= \int \!\!\int \!\!\int_{V}\rho a_{i}u^{*}_{i}\, dV\end{aligned}
(1)
\begin{aligned} (i,j)&= (1,2,3)\end{aligned}
(2)
\begin{aligned} T_{i} = \sigma_{ij}n_{j}\; over\; S_{V} \end{aligned}
(3)
with: $$\sigma_{ij}$$ the stress tensor, $$\rho$$ the density of the material, $$a_{i}$$ the acceleration vector, $$V$$ the volume where the equilibrium is written, $$u^{*}_{i}$$ the virtual displacement field, $$\epsilon ^{*}_{ij}$$ the virtual strain tensor deriving from $$u^{*}_{i}$$, $$S_{V}$$ the boundary surface of $$V$$, $$T_{i}$$ the imposed traction vector over the boundary $$S_{V}$$. In the case of dynamic experiments, load measurement is an issue. Therefore, in order to cancel out the contribution of the load in the principle of virtual work (PVW), a specific virtual field is used that must comply with the specification described in (4).
\begin{aligned} \iint\nolimits_{S_{V}}T_{i}u^{*}_{i}\, dS = 0 \end{aligned}
(4)
Then, by replacing (4) into (1) a new formulation of the PVW for dynamic loading is obtained (5).
\begin{aligned} -\int \!\!\int \!\!\int_{V}\sigma_{ij}\epsilon_{ij}^{*}\, dV = \int \!\!\int \!\!\int_{V}\rho a_{i}u^{*}_{i}\, dV \end{aligned}
(5)
Therefore a relationship is obtained between the stress field and the acceleration field. Then, with the assumption that the mechanical fields are uniform through the thickness and that the virtual fields are selected so that they do not depend on the through-thickness coordinates, (5) is developed into (6).
\begin{aligned} -\iint\nolimits_{S}\sigma_{ij}\epsilon_{ij}^{*}\, dS = \iint\nolimits_{S}\rho a_{i}u^{*}_{i}\, dS \end{aligned}
(6)
It is interesting to note that this equation is valid on any surface of the specimen. Therefore, it is possible to carry out a local identification of the mechanical parameters without any consideration for what happens outside of this zone.
### Virtual Fields in Elasticity
The identification of Young’s modulus and Poisson’s ratio was performed on the whole specimen which was considered as a homogeneous material. In order to perform this identification during the elastic steps of the test, two virtual fields are necessary and both of them have to comply with (4). The virtual fields defined in (7) and (8) have been used.
\begin{aligned}&\left\{ \begin{array}{lll} u_{1}^{*(1)} & = & 0 \\ u_{2}^{*(1)} & = & x_{2}(x_{2} - L) \end{array} \right. \end{aligned}
(7)
\begin{aligned}&\left\{ \begin{array}{lll} u_{1}^{*(2)} & = & 0 \\ u_{2}^{*(2)} & = & x_{2}^2(x_{2} - L) \end{array} \right. \end{aligned}
(8)
where L is the length of the identification area. By incorporating (7) and (8) into (6), the following system is obtained, assuming plane stress, linear isotropic elasticity and homogeneous elastic properties, as it has been shown in Sutton et al. [29] for quasi-static properties.
\begin{aligned} \left\{ \begin{array}{lll} -\frac{E}{1-\nu ^{2}}\iint_{S}(2x_{2}-L)\varepsilon_{22}\, dS - \frac{\nu E}{1-\nu ^{2}} \iint_{S}(2x_{2}-L)\varepsilon_{11}\, dS \\ \quad = \rho \iint_{S}x_{2}(x_{2}-L)a_{2}\, dS \\ -\frac{E}{1-\nu ^{2}}\iint_{S}(3x_{2}^{2}-2x_{2}L)\varepsilon_{22}\, dS - \frac{\nu E}{1-\nu ^{2}} \iint_{S}(3x_{2}^{2}-2x_{2}L)\varepsilon_{11}\, dS \\ \quad = \rho \iint_{S}x_{2}^{2}(x_{2}-L)a_{2}\, dS \end{array} \right. \end{aligned}
(9)
Full-field measurements are available over the surface of the specimen during the experiment. In order to carry out the identification of the elastic parameters, the integrals over the surface are approximated by discrete sums (see for instance (10)) with $$w$$ the width of the specimen, $$N$$ the number of measurement points over the area, and the bar indicating spatial averaging over the field of view. The quality of this approximation is dependent on the spatial frequency content of the mechanical fields and the spatial resolution of the measurements.
\begin{aligned} \iint\nolimits_{S}\varepsilon_{ij}\, dS \simeq \frac{Lw}{N}\sum \limits_{k=1}^{N}\varepsilon_{ij}^{(k)} = Lw\overline{\varepsilon }_{ij} \end{aligned}
(10)
where the overline indicates spatial averaging over the area under consideration. This leads to a new formulation of (9) reported in (11).
\begin{aligned} \left\{ \begin{array}{lll} -\frac{E}{1-\nu ^{2}}\overline{(2x_{2}-L)\varepsilon }_{22} - \frac{\nu E}{1-\nu ^{2}} \overline{(2x_{2}-L)\varepsilon }_{11} = \rho \overline{x_{2}(x_{2}-L)a}_{2} \\ -\frac{E}{1-\nu ^{2}}\overline{(3x_{2}^{2}-2x_{2}L)\varepsilon }_{22} - \frac{\nu E}{1-\nu ^{2}} \overline{(3x_{2}^{2}-2x_{2}L)\varepsilon }_{11} = \rho \overline{x_{2}^{2}(x_{2}-L)a}_{2} \end{array} \right. \end{aligned}
(11)
Then (11) is first solved for $$E/(1-\nu ^2)$$ and $$\nu E/(1-\nu ^2)$$ by inversion of the linear system. Then, $$E$$ and $$\nu$$ are calculated from these quantities.
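In code, Eq. (11) is just a 2 × 2 linear system assembled from spatial averages of the measured maps. A minimal sketch (assumed inputs, not the authors' implementation: `e11`, `e22` and `a2` are the measured strain and acceleration maps over the region of interest, `x2` the axial coordinate map, `L` its length and `rho` the density):

```python
import numpy as np

def identify_elastic(e11, e22, a2, x2, L, rho):
    # Unknowns: Q1 = E/(1 - nu^2) and Q2 = nu*E/(1 - nu^2), cf. Eq. (11)
    A = np.array([
        [-np.mean((2*x2 - L) * e22),         -np.mean((2*x2 - L) * e11)],
        [-np.mean((3*x2**2 - 2*L*x2) * e22), -np.mean((3*x2**2 - 2*L*x2) * e11)],
    ])
    b = np.array([
        rho * np.mean(x2 * (x2 - L) * a2),
        rho * np.mean(x2**2 * (x2 - L) * a2),
    ])
    Q1, Q2 = np.linalg.solve(A, b)
    nu = Q2 / Q1
    E = Q1 * (1.0 - nu**2)
    return E, nu
```

In practice the two equations could also be stacked over the available elastic time steps and solved in the least-squares sense to reduce the influence of noise.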
### Virtual Fields in Homogeneous Plasticity
The elasto-plastic model used in this study is very simple. It assumes Von-Mises yield function with associated plasticity and isotropic hardening. As a first attempt to keep things simple, a linear hardening model is selected. As a consequence, the model only involves the yield stress $$(\sigma_{y})$$ and the hardening modulus $$(H)$$ (12).
\begin{aligned} \sigma (t) = f(\varepsilon ,E,\nu ,\sigma_{y},H,t) \end{aligned}
(12)
Due to the non-linearity of the stress–strain relationship in plasticity, it is not possible to extract the mechanical parameters from the first integral, and carry out the identification as in elasticity. This problem has been solved by Grédiac and Pierron [27]. The identification has been carried out by constructing a cost function dependent on the plastic parameters (13). This function is the sum of the quadratic difference of the two terms in Eq. (6) over time.
\begin{aligned} \Phi (\sigma_{y},H) = \sum \limits_{t=t_{0}}^{t_{f}}{\left[ \iint\nolimits_{S}{\sigma_{ij}(\varepsilon ,E,\nu ,\sigma_{y},H,t)\varepsilon_{ij}^{*}}\,dS + \iint\nolimits_{S}\rho a_{i}u^{*}_{i}\, dS\right] ^2} \end{aligned}
(13)
In order to carry out the identification of the plastic parameters, the integrals over the surface are approximated by discrete sums as it has been done in Eq. (11). It leads to a new formulation of Eq. (13) shown in Eq. (14). The plastic parameters are then identified by minimization of the following cost function (Fig. 10).
\begin{aligned} \Phi (\sigma_{y},H) = \sum \limits_{t=t_{0}}^{t_{f}}{[\overline{\sigma_{ij}(\varepsilon ,E,\nu ,\sigma_{y},H,t)\varepsilon_{ij}^{*}} + \rho \overline{a_{i}u^{*}_{i}}]^2} \end{aligned}
(14)
Moreover, the stress-strain relationship being non-linear, a single virtual field is generally sufficient to perform identification when the number of parameters is low, which is the case here [27]. In order to calculate the value of $$\Phi (\sigma_{y},H)$$, the stress field is computed at each step of the experiment using the method proposed by Sutton et al. [50]. This is an iterative method based on the radial return. The minimisation of the cost function is based on the Nelder-Mead simplex method [51].
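Schematically, the minimisation might look as follows (a sketch, not the authors' code: `radial_return_sigma22_history` is a placeholder for the stress reconstruction of [50], and the parabolic virtual field of Eq. (7), whose virtual strain is $$2x_{2}-L$$, is used):

```python
import numpy as np
from scipy.optimize import minimize

def cost(params, eps_hist, a2_hist, x2, L, rho, E, nu):
    sigma_y, H = params
    # Placeholder: rebuild the axial stress history from the measured strain
    # history with the radial-return scheme of [50] (path dependent, so it is
    # done in one pass over all time steps).
    s22_hist = radial_return_sigma22_history(eps_hist, E, nu, sigma_y, H)
    phi = 0.0
    for s22, a2 in zip(s22_hist, a2_hist):        # plastic time steps
        residual = (np.mean((2*x2 - L) * s22)
                    + rho * np.mean(x2 * (x2 - L) * a2))
        phi += residual**2
    return phi

# res = minimize(cost, x0=[200e6, 1e9],   # 200 MPa start as in the text; H start assumed
#                args=(eps_hist, a2_hist, x2, L, rho, E, nu),
#                method='Nelder-Mead')
```

For the heterogeneous case of the next subsection, the same residual is simply evaluated slice by slice, with the spatial means restricted to each slice (Eq. (16)).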
### Virtual Fields in Plasticity for Heterogeneous Materials
As opposed to the situation where a homogeneous material is studied, as in [26, 52], the elasto-plastic parameters within the weld depend on the space variables. Different strategies have been devised in the past to parameterize this variation: identify distinct zones based on strain localization (or microstructure), as in [29] or consider the properties constant over a certain transverse slice of the weld, as in [53]. This is the approach used here. For each of the nine shaded slices on Fig. 11, the following virtual field is used:
\begin{aligned} \left\{ \begin{array}{lll} u_{1}^{*(1)} & = & 0 \\ u_{2}^{*(1)} & = & x_{2}(x_{2} - L) \end{array} \right. \end{aligned}
(15)
It is worth noting that the $$(X_1,X_2)$$ reference frame is a local frame linked to each individual slice. On Fig. 11, it is given for the first slice. By replacing (15) into (14), the following formulation of the cost function is obtained for each slice (numbered $$(i)$$).
\begin{aligned} \Phi ^{(i)}(\sigma_{y}^{(i)},H^{(i)}) = \sum \limits_{t=t_{0}}^{t_{f}}{[\overline{(2x_{2}-L)\sigma_{22}(\varepsilon ,E,\nu ,\sigma_{y}^{(i)},H^{(i)},t)}^{(i)} +\; \overline{x_{2}(x_{2} - L)\rho a_{2}(t)}]^2}^{(i)} \end{aligned}
(16)
Here, only one virtual field has been used. Experience has shown that this was generally sufficient for a predominantly unidirectional stress state when considering isotropic yield surfaces. However, in spite of the work reported in [52], the optimization of the choice of virtual fields for non-linear constitutive models is still very much an open problem. It must also be understood that the thickness of the slices represents a compromise between a thin slice for better spatial resolution and a thick slice for lower influence of noise thanks to the spatial averages in Eq. 16. Here, the slices are much thicker than in [53] because the measurements are of much lower quality due to the noise levels present in the images obtained with the two ultra-high speed cameras.
## Results
### Elastic Parameters
The noise level in the measured displacement data remains the main issue when carrying out the identification of the mechanical parameters under high strain-rate. This point becomes more critical for the elastic parameters due to the high noise to signal ratio. Because of this issue, it was not possible to retrieve the elastic parameters for the test using DIC. For the experiments performed with the SIM16 camera, the identification was completed using only the data from the elastic steps of the tests. The results are presented in Table 3. The quasi-static reference values (as given by the supplier) of these parameters have been added in order to give a reference for the results obtained. These results still exhibit relatively large dispersion. Nevertheless, the results are promising since the accuracy of the extracted parameters will improve with the quality of images, which is already happening with the new generation of UHS cameras based on in-situ image storage, allowing unprecedented image quality and ease of use, as evidenced in [23]. In fact, the values obtained for the welded specimen are much better because of the better image quality for this particular test.
Table 3
Elastic parameters identified by the VFM
| | Reference | Grid: base material | Grid: welded specimen |
|---|---|---|---|
| **Young’s modulus (GPa)** | | | |
| First test | 70 | 62 | 68 |
| Second test | 70 | 32 | 72 |
| Third test | 70 | 43 | – |
| **Poisson’s ratio** | | | |
| First test | 0.33 | 0.1 | 0.31 |
| Second test | 0.33 | 0.1 | 0.37 |
| Third test | 0.33 | 0.7 | – |
### Plastic Parameters
The reference value of Young’s modulus and Poisson’s ratio were used for the identification of the plastic parameters. The identification was carried out using the images from the plastic steps of the tests. Moreover, due to the fact that all areas of the weld do not yield at the same time and do not undergo the same amount of strain, the identification has been carried out using different numbers of images for each slice. Indeed, with the slices on the impact side of the specimen, the strain level is more important and yield occurs earlier. Therefore, 5–8 images were used to perform the identification, depending on the slice. Knowing that the identification makes use of a minimisation process, the starting values of the algorithm could have an impact on the identified parameters. The evolution of the cost function with the plastic parameters is represented in Fig. 12, for a slice in the centre of a base material specimen.
It is important to note that if the cost function admits a clear minimum value for variations of the yield stress, it is almost insensitive to the variation of the hardening modulus. This problem can be addressed by increasing the number of images in the cost function. Unfortunately, the number of available images for the dynamic tests was very limited. As a result, it has not been possible to carry out the identification of the hardening modulus, and only the yield stress has been identified. Figure 12 shows that the cost function admits a clear minimum over a range of reasonable values for the yield stress. The identification has been performed using a starting point of 200 MPa. For the base material specimens, the identification has been carried out over the whole field of view. The results are presented in Table 4. The quasi-static value for the yield stress has been added as a reference [54]. A steady identification (i.e. convergence was achieved) of the yield stress has been obtained on these specimens, with an average value of 382 MPa. These values are about 50 % higher than the quasi-static reference and are consistent with the results obtained by Tucker [13]. In practice, the choice of the slice width (or the choice of the total number of slices within the field of view) will determine how finely the spatial yield stress distribution can be described within the weld. Indeed, a sharp change of yield stress within a slice will be smoothed out by the fact that a constant yield stress is identified over this slice. Considering this, one would want to go for thinner slices but then, the spatial averages in Eq. 16 will be less efficient at filtering out measurement noise so a compromise has to be found. In order to investigate this issue and help select a typical slice width, the base material specimens have been divided up in slices and the identification performed as it will be on the weld specimens. Ideally, the yield stress values obtained in each slice should be identical but because of measurement noise, they are not, as shown on Fig. 13. The first thing that is apparent on this figure is that a consistent 20 % reduction of the identified yield stress can be observed on the right-hand side of the specimen. This can be attributed to the very low strain levels experienced there (Fig. 4) compared to the welds (Figs. 6 and 8). The other conclusion that can be drawn is that the number of slices has to be kept low. Looking at the 0 to 15 mm zone to the left of the graph where strains are large enough for satisfactory identification, one can see that thin slices (12 slices) produce noisy yield stresses, ranging from 280 to 400 MPa whereas thicker slices (6 slices) produce yield stress values that are much less scattered, as one would expect.
Table 4
Plastic parameters identified by the VFM for base material specimens using the grid method
| | Reference | Base material 1 | Base material 2 | Base material 3 |
|---|---|---|---|---|
| Yield stress (MPa) | 255 | 368 | 376 | 402 |
The same type of evaluation has been performed on a welded specimen (Fig. 14). As seen in Figs. 6 and 8, the strain levels are larger than for the base material so the identification can be carried out over the whole field of view. It can clearly be seen that an increased slice thickness (i.e. decreased number of slices) leads to a smoother spatial variation of the yield stress. However, when the number of slices goes over 12, it is not possible to identify the yield stress on the second half of the specimen where there is no convergence of the minimisation process anymore. The reason for this is that when the stress state is too uniform spatially, and accelerations are low, then both terms in Eq. 16 are close to zero and convergence cannot be reached (the $$(2x_2-L)$$ term has a zero mean over the slice where $$x_2$$ varies between 0 and L). Increasing the slice width results in higher stress heterogeneity and convergence can be restored. This problem is not evidenced in the welded area where significant strain and stress heterogeneities are present because of the strain localization process. As a result of this compromise, the number of slices will be kept around 9 (and varies slightly from one specimen to the next as fields of view and impact speeds are slightly different).
Additionally, the size of the smoothing window can also influence the identified parameters. As stated earlier, a 31 × 31 window was used for all the tests. Nevertheless, information is lost in the smoothing process, especially when dealing with gradients of properties. The influence of the smoothing on the identification of the yield stress is shown in Fig. 15. As expected, it shows that a larger smoothing window reduces the dispersion of the results. However, the spatial resolution drops with a larger window size, and it will hinder the measurement of the gradients in mechanical properties. The effect of an increased smoothing window is basically the same as that of a decreased number of slices.
The final results for the identified yield stress for welded specimens are presented in Fig. 16. It can be seen that very similar identification results have been obtained in the three tests, with the evolution of the yield stress throughout the weld exhibiting the lowest value around the center of the weld. The DIC results however provide a much smoother variation as expected because of the reduced spatial resolution of the measurements arising from the very large subset used.
Finally, the current results have been collated with those from [53] to obtain an overview of the variations of yield stress profiles over a large range of strain-rates (Fig. 17). It is interesting to note that at the center of the weld, the identified values show a significant strain-rate sensitivity between $$83\,\upmu \,{\rm{s}}^{-1}$$ and $$0.63\,{\rm{s}}^{-1}$$, whereas for the base material, sensitivity is towards the larger strain-rates [53]. This is interesting but would need to be backed up with materials science studies to confirm if such an effect is expected. It is clear however that the very different micro-structures between the nugget, heat affected zones and base material could potentially lead to such differences. It must be emphasized that this kind of result would be extremely difficult to obtain by any current method and as such, the present methodology has great future potential to explore local strain-rate sensitivity in welds. This in turn can lead to the development of better visco-plastic constitutive models for such welds. Furthermore, more complex tests such as three or four point impact bending tests (as in [32]) could be used to identify elasto-visco-plastic models over a wider range of stress multi-axiality, which is currently another main limitation of the standard SHPB analysis.
## Conclusion
A new method for the identification of the dynamic properties of welds has been proposed in this study; it offers a significant contribution to the field of high strain-rate testing. In this work, the acceleration fields have been used as a load cell in order to identify the mechanical properties of the material. While previous work in this area [23, 30] was limited to the characterisation of elastic properties, both elastic and plastic parameters have been identified in this study. To the authors' knowledge, this is the first time that the identification of the dynamic yield stress of a material has been attempted without any external load measurement. Moreover, a local characterisation of the dynamic yield stress was performed on welded specimens. The repeatability of the process has been verified on two different set-ups and with two different full-field measurement techniques. The hardware, and more specifically the high noise level of the cameras and the low number of available images, currently remains the main weak point of the method. Moreover, it is essential that, in the future, a detailed uncertainty assessment of the identified data is performed so that error bars can be added to the yield stress evolutions in Fig. 17. This is a challenging task, as the measurement and identification chain is long and complex with many parameters to set. It can only be addressed by using a realistic simulator, as developed in [55] for the grid method and more recently in [56] for Digital Image Correlation. Such a simulator makes it possible first to optimize the test and processing parameters (load configuration, subset and smoothing in DIC, virtual fields in the VFM) and then to provide uncertainty intervals for the identified parameters. This has recently been validated experimentally in [57]. The present case will be more computationally challenging, but conceptually the procedures in [55, 56, 57] can be used in exactly the same way as for elasticity. This will have to be investigated in the near future when tests with better images are available; it is a key issue for making this new procedure a standard technique in whose results users can have confidence.
It is believed that the demonstrated ability to extract local material properties without the requirement for external load measurement will open unprecedented opportunities to expand the range of experimental approaches that can be used in the field of high strain-rate testing. To develop the next generation of novel methodologies and make them available to researchers and engineers, significant additional research will be required, with the growth and continuous improvement of modern high speed imaging technology being the foundation for the effort.
## Notes
### Acknowledgments
The authors would like to thank the Champagne-Ardenne Regional Council for funding 50% of the Ph.D. studentship of G. Le Louëdec. The authors would also like to acknowledge the support of the US Army Research Office through ARO grants W911NF-06-1-0216 and Z-849901, and the NSF through the I/UCRC Center for Friction Stir Processing.
## References
1. Thomas WM, Nicholas ED, Needham JC, Murch MG, Templesmith P, Dawes CJ (1991) Friction-stir butt welding. GB patent No. 9125978.8, international patent application No. PCT/GB92/02203
2. Field J, Walley S, Proud W, Goldrein H, Siviour C (2004) Review of experimental techniques for high rate deformation and shock studies. Int J Impact Eng 30(7):725–775
3. Hopkinson B (1914) A method of measuring the pressure produced in the detonation of high explosives or by the impact of bullets. Phil Trans R Soc 213(10):437–452
4. Kolsky H (1949) An investigation of the mechanical properties of materials at very high rates of loading. Proc Phys Soc Sect B 62(11):676–700
5. Harding J, Wood E, Campbell J (1960) Tensile testing of materials at impact rates of strain. J Mech Eng Sci 2:88–96
6. Jiang C, Chen M (1974) Dynamic properties of materials, part II: Aluminum alloys. Defense Technical Information Center
7. Nicholas T (1981) Tensile testing of materials at high rates of strain—an experimental technique is developed for testing materials at strain rates up to $$10^3\,\mathrm{s}^{-1}$$ in tension using a modification of the split Hopkinson bar or Kolsky apparatus. Exp Mech 21:177–185
8. Jenq S, Sheu S (1994) An experimental and numerical analysis for high strain rate compressional behavior of 6061-O aluminum alloy. Comput Struct 52:27–34
9. Smerd R, Winkler S, Salisbury C, Worswick M, Lloyd D, Finn M (2005) High strain rate tensile testing of automotive aluminum alloy sheet. Int J Impact Eng 32:541–560
10. Zhang X, Li H, Li H, Gao H, Gao Z, Liu Y, Liu B (2008) Dynamic property evaluation of aluminum alloy 2519A by split Hopkinson pressure bar. Trans Nonferr Met Soc China 18:1–5
11. Hadianfard M, Smerd R, Winkler S, Worswick M (2008) Effects of strain rate on mechanical properties and failure mechanism of structural Al-Mg alloys. Mater Sci Eng A 492:283–292
12. Abotula S, Chalivendra V (2010) An experimental and numerical investigation of the static and dynamic constitutive behaviour of aluminium alloys. J Strain Anal Eng Des 45:555–565
13. Tucker M, Horstemeyer M, Whittington W, Solanki K, Gullett P (2010) The effect of varying strain rates and stress states on the plasticity, damage, and fracture of aluminum alloys. Mech Mater 42:895–907
14. Chen W, Song B (2009) Dynamic characterization of soft materials. In: Shukla A, Ravichandran G, Rajapakse YD (eds) Dynamic failure of materials and structures. Springer, New York, pp 1–28
15. Hoge K (1966) Influence of strain rate on mechanical properties of 6061–T6 aluminum under uniaxial and biaxial states of stress. Exp Mech 6:204–211
16. Xu Z, Li Y (2009) Dynamic behaviors of 0Cr18Ni10Ti stainless steel welded joints at elevated temperatures and high strain rates. Mech Mater 41(2):121–130
17. Lee W-S, Lin C-F, Liu C-Y, Tzeng F-T (2004) Impact properties of 304L stainless steel GTAW joints evaluated by high strain rate of compression tests. J Nucl Mater 335(3):335–344
18. Zhang J, Tan C, Ren Y, Wang F, Cai H (2011) Quasi-static and dynamic tensile behaviors in electron beam welded Ti-6Al-4V alloy. Trans Nonferr Met Soc China 21(1):39–44
19. Yokoyama T, Nakai K, Kotake Y (2007) High strain rate compressive stress strain response of friction stir welded 7075–T651 aluminum alloy joints in through thickness direction. J Jpn Inst Light Met 57:518–523
20. Reu P, Miller T (2008) The application of high-speed digital image correlation. J Strain Anal Eng Des 43(8):673–688
21. Pierron F, Cheriguene R, Forquin P, Moulart R, Rossi M, Sutton M (2011) Performances and limitations of three ultra high-speed imaging cameras for full-field deformation measurements. In: Advances in Experimental Mechanics VIII, Applied Mechanics and Materials, vol 70. Trans Tech Publications. Joint BSSM/SEM ISEV conference, 7–9, Edinburgh (UK)
22. Tiwari V, Sutton M, McNeill S (2007) Assessment of high speed imaging systems for 2D and 3D deformation measurements: methodology development and validation. Exp Mech 47:561–579
23. Pierron F, Zhu H, Siviour C (2014) Beyond Hopkinson's bar. Philos Trans R Soc A 372:20130195
24. Pierron F, Grédiac M (2012) The virtual fields method. Springer, New York
25. Avril S, Pierron F (2007) General framework for the identification of constitutive parameters from full-field measurements in linear elasticity. Int J Solids Struct 44(14–15):4978–5002
26. Pannier Y, Avril S, Rotinat R, Pierron F (2006) Identification of elasto-plastic constitutive parameters from statically undetermined tests using the virtual fields method. Exp Mech 46(6):735–755
27. Grédiac M, Pierron F (2006) Applying the virtual fields method to the identification of elasto-plastic constitutive parameters. Int J Plast 22:602–627
28. Avril S, Pierron F, Sutton MA, Yan J (2008) Identification of elasto-visco-plastic parameters and characterization of Lüders behavior using digital image correlation and the virtual fields method. Mech Mater 40(9):729–742
29. Sutton MA, Yan JH, Avril S, Pierron F, Adeeb SM (2008) Identification of heterogeneous constitutive parameters in a welded specimen: uniform stress and virtual fields methods for material property estimation. Exp Mech 48(4):451–464
30. Moulart R, Pierron F, Hallett S, Wisnom M (2011) Full-field strain measurement and identification of composites moduli at high strain rate with the virtual fields method. Exp Mech 51:509–536
31. Pierron F, Forquin P (2012) Ultra high speed full-field deformation measurements on concrete spalling specimens and stiffness identification with the virtual fields method. Strain 48(5):388–405
32. Pierron F, Sutton M, Tiwari V (2011) Ultra high speed DIC and virtual fields method analysis of a three point bending impact test on an aluminium bar. Exp Mech 51(4):537–563
33. Huntley J (1998) Automated fringe pattern analysis in experimental mechanics: a review. J Strain Anal Eng Des 33:105–125
34. Surrel Y (2000) Fringe analysis. In: Photomechanics, Topics in Applied Physics, vol 77, pp 55–102
35. Surrel Y (1997) Design of phase-detection algorithms insensitive to bias modulation. Appl Opt 36:805–807
36. Badulescu C, Grédiac M, Mathias J, Roux D (2009) A procedure for accurate one-dimensional strain measurement using the grid method. Exp Mech 49(6):841–854
37. Badulescu C, Grédiac M, Mathias J (2009) Investigation of the grid method for accurate in-plane strain measurement. Meas Sci Technol 20(9):095102
38. Ri S, Fujigaki M, Morimoto Y (2010) Sampling moiré method for accurate small deformation distribution measurement. Exp Mech 50(4):501–508
39. Ri S, Muramatsu T, Saka M, Nanbara K, Kobayashi D (2012) Accuracy of the sampling moiré method and its application to deflection measurements of large-scale structures. Exp Mech 52(4):331–340
40. Sutton MA, Orteu J, Schreier HW (2009) Image correlation for shape, motion and deformation measurements. Springer, New York
41. Piro J-L, Grédiac M (2006) Producing and transferring low-spatial-frequency grids for measuring displacement fields with moiré and grid methods. Exp Tech 28:23–26
42. Surrel Y (1993) Phase stepping: a new self-calibrating algorithm. Appl Opt 32:3598–3600
43. Surrel Y (1996) Design of algorithms for phase measurements by the use of phase stepping. Appl Opt 35:51–60
44. Ghiglia DC, Pritt MD (1998) Two dimensional phase unwrapping: theory, algorithms & software. Wiley-Interscience, New York
45. Baldi A, Bertolino F, Ginesu F (2002) On the performance of some unwrapping algorithms. Opt Lasers Eng 37:313–330
46. Zappa E, Busca G (2008) Comparison of eight unwrapping algorithms applied to Fourier-transform profilometry. Opt Lasers Eng 46:106–116
47. Bioucas-Dias J, Valadao G (2007) Phase unwrapping via graph cuts. IEEE Trans Image Process 16:698–709
48. Gorry PA (1990) General least-squares smoothing and differentiation by the convolution (Savitzky-Golay) method. Anal Chem 62(6):570–573
49. VIC2D, Correlated Solutions Incorporated, 120 Kaminer Way, Parkway Suite A, Columbia SC 29210, www.correlatedsolutions.com
50. Sutton MA, Deng X, Liu J, Yang L (1996) Determination of elastic-plastic stresses and strains from measured surface strain data. Exp Mech 36(2):99–112
51. Olsson D, Nelson L (1975) Nelder-Mead simplex procedure for function minimization. Technometrics 17:45–51
52. Pierron F, Avril S, Tran V (2010) Extension of the virtual fields method to elasto-plastic material identification with cyclic loads and kinematic hardening. Int J Solids Struct 47:2993–3010
53. Le Louëdec G, Pierron F, Sutton MA, Reynolds AP (2013) Identification of the local elasto-plastic behavior of FSW welds using the virtual fields method. Exp Mech 53(5):849–859
54. Reemsnyder H, Throop J (1982) Residual stress effects in fatigue, STP 776. American Society for Testing & Materials
55. Rossi M, Pierron F (2012) On the use of simulated experiments in designing tests for material characterization from full-field measurements. Int J Solids Struct 49(3–4):420–435
56. Rossi M, Lava P, Pierron F, Debruyne D, Sasso M (2015) Effect of DIC spatial resolution, noise and interpolation error on identification results with the VFM. Strain, in revision
57. Wang P, Pierron F, Thomsen O, Rossi M, Lava P (2015) Uncertainty quantification in VFM identification, vol 6. Springer, New York, pp 137–142
© Society for Experimental Mechanics, Inc 2015
## Authors and Affiliations
• G. Le Louëdec (1, 2)
• F. Pierron (3)
• M. A. Sutton (2)
• C. Siviour (4)
• A. P. Reynolds (5)

1. Laboratoire de Mécanique et Procédés de Fabrication, Arts et Métiers ParisTech, Centre de Châlons-en-Champagne, Châlons-en-Champagne Cedex, France
2. Department of Mechanical Engineering, Center for Mechanics, Material and NDE, University of South Carolina, Columbia, USA
3. Engineering and the Environment, University of Southampton, Southampton, UK
4. Department of Engineering Science, University of Oxford, Oxford, UK
5. Department of Mechanical Engineering, Center for Friction Stir Welding, University of South Carolina, Columbia, USA
https://en.wikipedia.org/wiki/Helmholtz_free_energy

# Helmholtz free energy
In thermodynamics, the Helmholtz free energy is a thermodynamic potential that measures the “useful” work obtainable from a closed thermodynamic system at a constant temperature. The negative of the difference in the Helmholtz energy is equal to the maximum amount of work that the system can perform in a thermodynamic process in which temperature is held constant. If the volume is not held constant, part of this work will be performed as boundary work. The Helmholtz energy is commonly used for systems held at constant volume. Since in this case no work is performed on the environment, the drop in the Helmholtz energy is equal to the maximum amount of useful work that can be extracted from the system. For a system at constant temperature and volume, the Helmholtz energy is minimized at equilibrium.
The Helmholtz free energy was developed by Hermann von Helmholtz, a German physicist, and is usually denoted by the letter A (from the German "Arbeit", meaning work) or the letter F. The IUPAC recommends the letter A as well as the use of the name Helmholtz energy.[1] In physics, the letter F can also be used to denote the Helmholtz energy, as the Helmholtz energy is sometimes referred to as the Helmholtz function, Helmholtz free energy, or simply free energy (not to be confused with Gibbs free energy).
While Gibbs free energy is most commonly used as a measure of thermodynamic potential, especially in the field of chemistry, it is inconvenient for some applications that do not occur at constant pressure. For example, in explosives research, Helmholtz free energy is often used since explosive reactions by their nature induce pressure changes. It is also frequently used to define fundamental equations of state of pure substances.
## Definition
The Helmholtz energy is defined as:[2]
$A \equiv U-TS\,$
where
• A is the Helmholtz free energy (SI: joules, CGS: ergs),
• U is the internal energy of the system (SI: joules, CGS: ergs),
• T is the absolute temperature (kelvins) of the surroundings, modelled as a heat bath,
• S is the entropy of the system (SI: joules per kelvin, CGS: ergs per kelvin).
The Helmholtz energy is the Legendre transform of the internal energy, U, in which temperature replaces entropy as the independent variable.
## Mathematical development
From the first law of thermodynamics in a closed system we have
$\mathrm{d}U = \delta Q\ - \delta W\,$,
where $U$ is the internal energy, $\delta Q$ is the energy added as heat and $\delta W$ is the work done by the system. From the second law of thermodynamics, for a reversible process we may say that $\delta Q=T\mathrm{d}S$. Also, in the case of a reversible change, the work done can be expressed as $\delta W = p \mathrm{d}V$ (ignoring electrical and other non-PV work), so that
$\mathrm{d}U = T\mathrm{d}S - p\mathrm{d}V\,$
Applying the product rule for differentiation to d(TS) = TdS + SdT, we have:
$\mathrm{d}U = d(TS) - S\mathrm{d}T- p\mathrm{d}V\,$,
and:
$\mathrm{d}(U-TS) = - S\mathrm{d}T - p\mathrm{d}V\,$
The definition of A = U - TS enables us to rewrite this as
$\mathrm{d}A = - S\mathrm{d}T - p\mathrm{d}V\,$
Because A is a thermodynamic function of state, this relation is also valid for a process (without electrical work or composition change) that is not reversible, as long as the system pressure and temperature are uniform.[3]
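Since the coefficients in this relation are partial derivatives, $S = -(\partial A/\partial T)_V$ and $p = -(\partial A/\partial V)_T$. As an illustration (not part of the original article), a free energy of monatomic-ideal-gas form can be differentiated symbolically to recover the ideal gas law and the internal energy; the constant c below is an arbitrary stand-in for all T- and V-independent terms.

```python
import sympy as sp

T, V, N, k, c = sp.symbols("T V N k c", positive=True)

# A Helmholtz energy of monatomic-ideal-gas form; c absorbs constant terms
A = -N * k * T * (sp.log(V) + sp.Rational(3, 2) * sp.log(T) + c)

p = -sp.diff(A, V)   # pressure:  p = -(dA/dV)_T
S = -sp.diff(A, T)   # entropy:   S = -(dA/dT)_V

print(sp.simplify(p))           # N*k*T/V, i.e. the ideal gas law
print(sp.simplify(A + T * S))   # 3*N*k*T/2, i.e. U = A + T*S, consistent with A = U - TS
```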
## Work in an isothermal process and equilibrium conditions
In the first law of thermodynamics
$\mathrm{d}U = \delta Q\ - \delta W\,$
based on the inequality of Clausius for an isothermal process, we can make the substitution
$\delta Q\leq T\mathrm{d}S\,$
where the equality holds for a reversible process.
The expression for the internal energy becomes
$\mathrm{d}U \leq T\mathrm{d}S - \delta W\,$
If we isolate the work term
$\mathrm{d}U - T\mathrm{d}S \leq - \delta W\,$
and note that, for an isothermal process,
$\mathrm{d}A = \mathrm{d}U - T\mathrm{d}S\,$
then
$\mathrm{d}A \leq - \delta W\,$ (isothermal process)
(With the sign convention used in chemistry, the minus sign disappears). Again, the equality holds for a reversible process (in which $\delta W\,$ becomes dW). dW includes all reversible work, not only mechanical (pressure-volume) work but also non-mechanical work (e. g. electrical work).
The maximum energy that can be freed for work is the negative of the change in A. The process is nominally isothermal, but it is only required that the system has the same initial and final temperature, and not that the temperature stays constant during the process.
Now, imagine that our system is also kept at constant volume to prevent PV work from being done. If temperature and volume are kept constant in a reversible process, then
$\mathrm{d}A = - \delta W_{nonPV}\,$ (isothermal isochoric reversible process)
This is a necessary, but not sufficient condition for equilibrium. For any spontaneous isothermal process at constant volume without electrical or other non-PV work, the change in Helmholtz free energy must be negative, that is $A_{f}\leq A_{i}$. Therefore, to prevent a spontaneous change, we must also require that A be at a minimum under these conditions.
## Minimum free energy and maximum work principles
The laws of thermodynamics are most easily applicable to systems undergoing reversible processes or processes that begin and end in thermal equilibrium, although irreversible quasistatic processes or spontaneous processes in systems with uniform temperature and pressure (uPT processes) can also be analyzed[3] based on the fundamental thermodynamic relation as shown further below. First, if we wish to describe phenomena like chemical reactions, it may be convenient to consider suitably chosen initial and final states in which the system is in (metastable) thermal equilibrium. If the system is kept at fixed volume and is in contact with a heat bath at some constant temperature, then we can reason as follows.
Since the thermodynamical variables of the system are well defined in the initial state and the final state, the internal energy increase, $\Delta U$, the entropy increase $\Delta S$, and the total amount of work that can be extracted, performed by the system, $W$, are well-defined quantities. Conservation of energy implies:
$\Delta U_{\text{bath}} + \Delta U + W = 0\,$
The volume of the system is kept constant. This means that the volume of the heat bath does not change either and we can conclude that the heat bath does not perform any work. This implies that the amount of heat that flows into the heat bath is given by:
$Q_{\text{bath}} = \Delta U_{\text{bath}} =-\left(\Delta U + W\right) \,$
The heat bath remains in thermal equilibrium at temperature T no matter what the system does. Therefore the entropy change of the heat bath is:
$\Delta S_{\text{bath}} = \frac{Q_{\text{bath}}}{T}=-\frac{\Delta U + W}{T} \,$
The total entropy change is thus given by:
$\Delta S_{\text{bath}} +\Delta S= -\frac{\Delta U -T\Delta S+ W}{T} \,$
Since the system is in thermal equilibrium with the heat bath in the initial and the final states, T is also the temperature of the system in these states. The fact that the system's temperature does not change allows us to express the numerator as the free energy change of the system:
$\Delta S_{\text{bath}} +\Delta S=-\frac{\Delta A+ W}{T} \,$
Since the total change in entropy must always be larger or equal to zero, we obtain the inequality:
$W\leq -\Delta A\,$
We see that the total amount of work that can be extracted in an isothermal process is limited by the free energy decrease, and that increasing the free energy in a reversible process requires work to be done on the system. If no work is extracted from the system then
$\Delta A\leq 0\,$
and thus for a system kept at constant temperature and volume and not capable of performing electrical or other non-PV work, the total free energy during a spontaneous change can only decrease.
This result seems to contradict the equation dA = -S dT - P dV, as keeping T and V constant seems to imply dA = 0 and hence A = constant. In reality there is no contradiction: In a simple one-component system, to which the validity of the equation dA = -S dT - P dV is restricted, no process can occur at constant T and V since there is a unique P(T,V) relation, and thus T, V, and P are all fixed. To allow for spontaneous processes at constant T and V, one needs to enlarge the thermodynamical state space of the system. In case of a chemical reaction, one must allow for changes in the numbers Nj of particles of each type j. The differential of the free energy then generalizes to:
$dA = -S dT - p dV + \sum_{j}\mu_{j}dN_{j}\,$
where the $N_{j}$ are the numbers of particles of type j and the $\mu_{j}$ are the corresponding chemical potentials. This equation is then again valid for both reversible and non-reversible uPT[3] changes. In case of a spontaneous change at constant T and V without electrical work, the last term will thus be negative.
In case there are other external parameters the above relation further generalizes to:
$dA = -S dT - \sum_{i}X_{i}dx_{i} +\sum_{j}\mu_{j}dN_{j}\,$
Here the $x_{i}$ are the external variables and the $X_{i}$ the corresponding generalized forces.
## Relation to the canonical partition function
A system kept at constant volume, temperature, and particle number is described by the canonical ensemble. The probability to find the system in some energy eigenstate r is given by:
$P_{r}= \frac{e^{-\beta E_r}}{Z}\,$
where
$\beta\equiv\frac{1}{k T}\,$
$E_{r}=\text{ energy of eigenstate }r\,$
$Z = \sum_{r} e^{-\beta E_{r}}$
Z is called the partition function of the system. The fact that the system does not have a unique energy means that the various thermodynamical quantities must be defined as expectation values. In the thermodynamical limit of infinite system size, the relative fluctuations in these averages will go to zero.
The average internal energy of the system is the expectation value of the energy and can be expressed in terms of Z as follows:
$U\equiv\left\langle E \right\rangle = \sum_{r}P_{r}E_{r}= -\frac{\partial \log Z}{\partial \beta}\,$
If the system is in state r, then the generalized force corresponding to an external variable x is given by
$X_{r} = -\frac{\partial E_{r}}{\partial x}\,$
The thermal average of this can be written as:
$X = \sum_{r}P_{r}X_{r}=\frac{1}{\beta}\frac{\partial \log Z}{\partial x}\,$
Suppose the system has one external variable $x$. Then changing the system's temperature parameter by $d\beta$ and the external variable by $dx$ will lead to a change in $\log Z$:
$d\left(\log Z\right)= \frac{\partial\log Z}{\partial\beta}d\beta + \frac{\partial\log Z}{\partial x}dx = -U\,d\beta + \beta X\,dx\,$
If we write $U\,d\beta$ as:
$U\,d\beta = d\left(\beta U\right) - \beta\, dU\,$
we get:
$d\left(\log Z\right)=-d\left(\beta U\right) + \beta\, dU+ \beta X \,dx\,$
This means that the change in the internal energy is given by:
$dU =\frac{1}{\beta}d\left(\log Z+\beta U\right) - X\,dx \,$
In the thermodynamic limit, the fundamental thermodynamic relation should hold:
$dU = T\, dS - X\, dx\,$
This then implies that the entropy of the system is given by:
$S = k\log Z + \frac{U}{T} + c\,$
where c is some constant. The value of c can be determined by considering the limit T → 0. In this limit the entropy becomes $S = k \log \Omega_{0}$ where $\Omega_{0}$ is the ground state degeneracy. The partition function in this limit is $\Omega_{0}e^{-\beta U_{0}}$ where $U_{0}$ is the ground state energy. Thus, we see that $c = 0$ and that:
$A = -kT\log\left(Z\right)\,$
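As a concrete numerical illustration (not part of the original article), the relation $A = -kT\log Z$ can be checked for a two-level system: the entropy obtained from $S = -(\partial A/\partial T)_V$ reproduces $U = A + TS$. The level spacing and temperature below are arbitrary values.

```python
import numpy as np

k = 1.380649e-23     # Boltzmann constant [J/K]
eps = 1.0e-21        # level spacing of the two-level system [J]
T = 300.0            # temperature [K]
dT = 1.0e-3          # step for the numerical derivative [K]

def helmholtz(temp):
    Z = 1.0 + np.exp(-eps / (k * temp))      # partition function: levels at 0 and eps
    return -k * temp * np.log(Z)

A = helmholtz(T)
S = -(helmholtz(T + dT) - helmholtz(T - dT)) / (2.0 * dT)   # S = -(dA/dT)_V

# Internal energy computed directly as the ensemble average of the energy
p1 = np.exp(-eps / (k * T)) / (1.0 + np.exp(-eps / (k * T)))
U_direct = eps * p1

print(U_direct, A + T * S)   # the two values agree to numerical precision
```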
## Bogoliubov inequality
Computing the free energy is an intractable problem for all but the simplest models in statistical physics. A powerful approximation method is mean field theory, which is a variational method based on the Bogoliubov inequality. This inequality can be formulated as follows.
Suppose we replace the real Hamiltonian $H$ of the model by a trial Hamiltonian $\tilde{H}$, which has different interactions and may depend on extra parameters that are not present in the original model. If we choose this trial Hamiltonian such that
$\left\langle\tilde{H}\right\rangle =\left\langle H\right\rangle\,$
where both averages are taken with respect to the canonical distribution defined by the trial Hamiltonian $\tilde{H}$, then
$A\leq \tilde{A}\,$
where $A$ is the free energy of the original Hamiltonian and $\tilde{A}$ is the free energy of the trial Hamiltonian. By including a large number of parameters in the trial Hamiltonian and minimizing the free energy we can expect to get a close approximation to the exact free energy.
The Bogoliubov inequality is often formulated in a slightly different but equivalent way. If we write the Hamiltonian as:
$H = H_{0} + \Delta H\,$
where $H_{0}$ is exactly solvable, then we can apply the above inequality by defining
$\tilde{H} = H_{0} + \left\langle\Delta H\right\rangle_{0}\,$
Here we have defined $\left\langle X\right\rangle_{0}$ to be the average of X over the canonical ensemble defined by $H_{0}$. Since $\tilde{H}$ defined this way differs from $H_{0}$ by a constant, we have in general
$\left\langle X\right\rangle_{0} =\left\langle X\right\rangle\,$
Therefore
$\left\langle\tilde{H}\right\rangle = \left\langle H_{0} + \left\langle\Delta H\right\rangle\right\rangle =\left\langle H\right\rangle\,$
And thus the inequality
$A\leq \tilde{A}\,$
holds. The free energy $\tilde{A}$ is the free energy of the model defined by $H_{0}$ plus $\left\langle\Delta H\right\rangle$. This means that
$\tilde{A}=\left\langle H_{0}\right\rangle_{0} - T S_{0} + \left\langle\Delta H\right\rangle_{0}=\left\langle H\right\rangle_{0} - T S_{0}\,$
and thus:
$A\leq \left\langle H\right\rangle_{0} - T S_{0} \,$
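The inequality is easy to test numerically for a small classical system: for any trial set of energy levels, the variational estimate $\left\langle H\right\rangle_{0} - T S_{0}$ stays above the exact free energy. The following snippet is only an illustration with made-up energy levels, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 1.0                                    # work in units where k*T = 1

E = rng.uniform(-1.0, 1.0, size=8)          # exact energy levels E_r
A_exact = -kT * np.log(np.sum(np.exp(-E / kT)))

for _ in range(5):
    E0 = rng.uniform(-1.0, 1.0, size=8)     # trial energy levels defining H_0
    w = np.exp(-E0 / kT)
    p0 = w / w.sum()                        # canonical distribution of the trial system
    S0 = -np.sum(p0 * np.log(p0))           # trial-ensemble entropy (k = 1 units)
    A_bound = np.sum(p0 * E) - kT * S0      # <H>_0 - T*S_0
    assert A_bound >= A_exact - 1e-12       # the bound never falls below the exact A
    print(f"exact A = {A_exact:+.4f}   variational bound = {A_bound:+.4f}")
```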
### Proof
For a classical model we can prove the Bogoliubov inequality as follows. We denote the canonical probability distributions for the Hamiltonian and the trial Hamiltonian by $P_{r}$ and $\tilde{P}_{r}$, respectively. The inequality:
$\sum_{r} \tilde{P}_{r}\log\left(\tilde{P}_{r}\right)\geq \sum_{r} \tilde{P}_{r}\log\left(P_{r}\right) \,$
then holds. To see this, consider the difference between the left hand side and the right hand side. We can write this as:
$\sum_{r} \tilde{P}_{r}\log\left(\frac{\tilde{P}_{r}}{P_{r}}\right) \,$
Since
$\log\left(x\right)\geq 1 - \frac{1}{x}\,$
it follows that:
$\sum_{r} \tilde{P}_{r}\log\left(\frac{\tilde{P}_{r}}{P_{r}}\right)\geq \sum_{r}\left(\tilde{P}_{r} - P_{r}\right) = 0 \,$
where in the last step we have used that both probability distributions are normalized to 1.
We can write the inequality as:
$\left\langle\log\left(\tilde{P}_{r}\right)\right\rangle\geq \left\langle\log\left(P_{r}\right)\right\rangle\,$
where the averages are taken with respect to $\tilde{P}_{r}$. If we now substitute in here the expressions for the probability distributions:
$P_{r}=\frac{\exp\left[-\beta H\left(r\right)\right]}{Z}\,$
and
$\tilde{P}_{r}=\frac{\exp\left[-\beta\tilde{H}\left(r\right)\right]}{\tilde{Z}}\,$
we get:
$\left\langle -\beta \tilde{H} - \log\left(\tilde{Z}\right)\right\rangle\geq \left\langle -\beta H - \log\left(Z\right)\right\rangle$
Since the averages of $H$ and $\tilde{H}$ are, by assumption, identical we have:
$A\leq\tilde{A}$
Here we have used that the partition functions are constants with respect to taking averages and that the free energy is proportional to minus the logarithm of the partition function.
We can easily generalize this proof to the case of quantum mechanical models. We denote the eigenstates of $\tilde{H}$ by $\left|r\right\rangle$. We denote the diagonal components of the density matrices for the canonical distributions for $H$ and $\tilde{H}$ in this basis as:
$P_{r}=\left\langle r\left|\frac{\exp\left[-\beta H\right]}{Z}\right|r\right\rangle\,$
and
$\tilde{P}_{r}=\left\langle r\left|\frac{\exp\left[-\beta\tilde{H}\right]}{\tilde{Z}}\right|r\right\rangle=\frac{\exp\left(-\beta\tilde{E}_{r}\right)}{\tilde{Z}}\,$
where the $\tilde{E}_{r}$ are the eigenvalues of $\tilde{H}$
We assume again that the averages of H and $\tilde{H}$ in the canonical ensemble defined by $\tilde{H}$ are the same:
$\left\langle\tilde{H}\right\rangle = \left\langle H\right\rangle \,$
where
$\left\langle H\right\rangle = \sum_{r}\tilde{P}_{r}\left\langle r\left|H\right|r\right\rangle\,$
The inequality
$\sum_{r} \tilde{P}_{r}\log\left(\tilde{P}_{r}\right)\geq \sum_{r} \tilde{P}_{r}\log\left(P_{r}\right) \,$
still holds as both the $P_{r}$ and the $\tilde{P}_{r}$ sum to 1. On the l.h.s. we can replace:
$\log\left(\tilde{P}_{r}\right)= -\beta \tilde{E}_{r} - \log\left(\tilde{Z}\right)\,$
On the right hand side we can use the inequality
$\left\langle\exp\left(X\right)\right\rangle_{r}\geq\exp\left(\left\langle X\right\rangle_{r}\right)\,$
where we have introduced the notation
$\left\langle Y\right\rangle_{r}\equiv\left\langle r\left|Y\right|r\right\rangle\,$
for the expectation value of the operator Y in the state r (this inequality is a consequence of Jensen's inequality, since the exponential function is convex). Taking the logarithm of this inequality gives:
$\log\left[\left\langle\exp\left(X\right)\right\rangle_{r}\right]\geq\left\langle X\right\rangle_{r}\,$
This allows us to write:
$\log\left(P_{r}\right)=\log\left[\left\langle\exp\left(-\beta H - \log\left(Z\right)\right)\right\rangle_{r}\right]\geq\left\langle -\beta H - \log\left(Z\right)\right\rangle_{r}\,$
The fact that the averages of H and $\tilde{H}$ are the same then leads to the same conclusion as in the classical case:
$A\leq\tilde{A}$
## Generalized Helmholtz energy
In the more general case, the mechanical term ($p{\rm d}V$) must be replaced by the product of volume, stress, and an infinitesimal strain:[4]
${\rm d}A = V\sum_{ij}\sigma_{ij}\,{\rm d}\varepsilon_{ij} - S{\rm d}T + \sum_i \mu_i \,{\rm d}N_i\,$
where $\sigma_{ij}$ is the stress tensor, and $\varepsilon_{ij}$ is the strain tensor. In the case of linear elastic materials that obey Hooke's Law, the stress is related to the strain by:
$\sigma_{ij}=C_{ijkl}\varepsilon_{kl}$
where we are now using Einstein notation for the tensors, in which repeated indices in a product are summed. We may integrate the expression for ${\rm d}A$ to obtain the Helmholtz energy:
$A = \frac{1}{2}VC_{ijkl}\varepsilon_{ij}\varepsilon_{kl} - ST + \sum_i \mu_i N_i = \frac{1}{2}V\sigma_{ij}\varepsilon_{ij} - ST + \sum_i \mu_i N_i\,$
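For a linear isotropic material the elastic contribution $\frac{1}{2}\sigma_{ij}\varepsilon_{ij}$ per unit volume is straightforward to evaluate numerically; the Lamé constants and strain components below are arbitrary illustrative values, not taken from the article.

```python
import numpy as np

lam, mu = 60.0e9, 26.0e9                      # Lamé constants [Pa], illustrative values
eps = np.array([[1.0e-3, 2.0e-4, 0.0],
                [2.0e-4, -3.0e-4, 0.0],
                [0.0,    0.0,     5.0e-5]])   # symmetric small-strain tensor

# Hooke's law for an isotropic solid: sigma = lam*tr(eps)*I + 2*mu*eps
sigma = lam * np.trace(eps) * np.eye(3) + 2.0 * mu * eps

# Elastic Helmholtz energy per unit volume: (1/2) * sigma_ij * eps_ij
a_elastic = 0.5 * np.tensordot(sigma, eps)
print(a_elastic, "J/m^3")
```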
## Application to fundamental equations of state
The Helmholtz free energy function for a pure substance (together with its partial derivatives) can be used to determine all other thermodynamic properties for the substance. See, for example, the equations of state for water, as given by the IAPWS in their IAPWS-95 release.
## References
1. ^ Gold Book. IUPAC. doi:10.1351/goldbook. Retrieved 2012-08-19.
2. ^ Levine, Ira N. (1978). Physical Chemistry. McGraw-Hill: University of Brooklyn.
3. ^ a b c Schmidt-Rohr, K. (2014). "Expansion Work without the External Pressure, and Thermodynamics in Terms of Quasistatic Irreversible Processes". J. Chem. Educ. 91: 402–409. http://dx.doi.org/10.1021/ed3008704
4. ^ Landau, L. D.; Lifshitz, E. M. (1986). Theory of Elasticity (Course of Theoretical Physics Volume 7). (Translated from Russian by J.B. Sykes and W.H. Reid) (Third ed.). Boston, MA: Butterworth Heinemann. ISBN 0-7506-2633-X.
https://www.physicsforums.com/threads/proton-spin.745219/

# Proton spin
1. ### ChrisVer
2,328
I am currently confused... I read that we don't yet know where the proton spin comes from.
But I wonder...
1) Doesn't the proton effectively have 3 quarks of spin 1/2 (I say effectively to leave out the quark-gluon sea within the proton)? In that case, can't 3 spin-1/2 particles be added up to give spin 1/2? Where's the problem in that?
I also learned that they tried to calculate the spin of the interacting gluons within the proton, and it was not enough to account for its spin. Now the idea being proposed (as I know it, it'll be tested in the CERN LHC-COMPASS experiment) is that the needed spin comes from the angular momentum of the rotating quarks... Isn't that a weird idea? I mean, there is a reason (spacetime) we make the distinction between angular momentum and spin... I think one of them can in fact be chosen zero in the rest frame, while the other still exists... In that case, how could angular momentum produce spin?
18,142
Staff Emeritus
If that were the whole story, the magnetic moment of the proton would be 100x larger than it is measured to be.
4. ### samalkhaiat
1,129
Do you think that the proton is a box with 3 marbles spinning inside it? If the story were as naïve as you imagine it to be, I would sleep at night without having nightmares about it. To explain the complications involved here, let me start with a simple example and consider the 3-momentum and 3-angular momentum of the free photon in EM. From Noether's theorem, we find the following CANONICAL expressions
$$\vec{ P } = \int d^{ 3 } x \ E^{ i } \vec{ \nabla } A^{ i } , \ \ \ (1)$$
$$\vec{ J } = \int d^{ 3 } x \ \vec{ E } \times \vec{ A } + \int d^{ 3 } x \ E^{ i } \ ( \vec{ x } \times \vec{ \nabla } ) A^{ i } . \ \ (2)$$
Notice that the angular momentum is nicely decomposed into spin and orbital angular momentum parts: $\vec{ J } = \vec{ S } + \vec{ L }$. However, there is a problem with these CANONICAL expressions. They are NOT gauge invariant and therefore cannot be measurable quantities: experimentally measured quantities CANNOT be gauge dependent. However, since the theory is Poincare' invariant, we can construct equivalent expressions for $\vec{ P }$ and $\vec{ J }$ that are gauge invariant and therefore measurable:
$$\vec{ P } = \int d^{ 3 } x \ \vec{ E } \times \vec{ B } , \ \ \ (3)$$
$$\vec{ J } = \int d^{ 3 } x \ \vec{ x } \times ( \vec{ E } \times \vec{ B } ) . \ \ \ (4)$$
It looks as if the problem is solved! But no, far from it, because what we measure in experiments is the photon SPIN and Eq(4) has NO SPIN term. So, we gained gauge invariance and lost the very useful decomposition $J = S + L$. This is the story in the FREE field theory, but if we go to QED or QCD, the problem gets even more complicated. Again, the CANONICAL expressions are not gauge invariant and therefore not measurable but the total angular momentum is nicely decomposed into spin and orbital parts for photon (gluon) and electron (quark). But more importantly, these canonical expressions satisfy the Poincare’ algebra and generate the correct translations and rotations on the fields involved. These last two features are the corner stone of any QFT and one should not mess with it. But this is exactly what happens when we try to make $P$ and $J$ gauge invariant. So, in QED and QCD, the problem is not just
$$\vec{ J }_{ \mbox{tot} } \ne ( \vec{ S } + \vec{ L } )_{e} + ( \vec{ S } + \vec{ L } )_{ \gamma } , \ \ (5)$$
but the most important equations in QFT’s get screwed: (1) The operators $P$ and $J$ are no longer the generators of translations and rotations
$$[ i \vec{ P } , \phi ( x ) ] \ne \vec{ \nabla } \phi ( x ) , \ \ (6)$$
$$[ i \vec{ J } , \phi ( x ) ] \ne ( \vec{ L } + \vec{ S } ) \phi ( x ) . \ (7)$$
(2) The following Poincare’ subalgebra get messed up
$$[ P^{ i } , P^{ j } ] \ne 0 , \ \ [ J^{ i } , P^{ j } ] \ne i \epsilon^{ i j k } P^{ k } , \ \ (8)$$
and
$$[ J^{ i } , J^{ j } ] \ne i \epsilon^{ i j k } J^{ k } . \ \ \ \ (9)$$
So, to solve the Proton Spin Problem, we have in the last 20 years been trying to write Eq(5) as an equality in which each separate term represents a gauge-invariant operator, while at the same time Eq(6) to Eq(9) are restored to equalities. Simple, is it not?
Sam
6. ### ChrisVer
2,328
Well, just to clarify something in the name of my honor... I was thinking of the proton as:
$p=\frac{1}{\sqrt{18}}(2u_{+}u_{+}d_{-}+2u_{+}d_{-}u_{+}+2d_{-}u_{+}u_{+}-u_{+}u_{-}d_{+}-u_{-}d_{+}u_{+}-u_{+}d_{+}u_{-}-d_{+}u_{-}u_{+}-d_{+}u_{+}u_{-}-u_{-}u_{+}d_{+})$
notation $q_{s_{z}=\pm}$
and that sea quarks (like gluons) contribute nothing to the spin, since they would appear in pairs of opposite orientation. So that's why I gave the mistaken impression that I was speaking of marble-like things in the proton.
Although my question is answered I think from the above post #4....
Nice review too Bill, thanks
Last edited: Mar 26, 2014
18,142
Staff Emeritus
The magnetic moment of a Dirac particle goes as 1/m. Since the u and d quarks have a mass of a few MeV, they have magnetic moments of a few hundred nuclear magnetons. The proton has a magnetic moment of something under 3. (2.79, if I remember right)
The explanation of this is as follows: if I want to measure the mass of a quark in a proton, I take my finger, find a quark, push it with a known force, and see how fast it accelerates. If I do this, I find the quark appears to be about 100x heavier, because to move it, I need to also move all the gluons and sea quarks that are surrounding it. This glob of glue makes the effective quark mass higher.
So the fact that we see this large effective mass means that the gluons and sea quarks are participating in the motion of the quarks. This includes the angular momentum, and needs to be accounted for.
8. ### strangerep
2,227
Ha! You should have put a smiley face or a wink after that.
More seriously, thanks for the insightful post highlighting some of the subtleties that arise when dealing with interacting reps of the Poincare group.
9. ### samalkhaiat
1,129
Well, I didn't because i) I don't want to get told off :tongue:, and ii) many people do think it is simple
You're welcome, it is a pleasure to talk about my baby.
Sam
10. ### samalkhaiat
1,129
Even if you add colour and iso-spin labels to the quarks and make the "wavefunction" two pages long, you are still dealing with 3 marbles in a box. The dynamical properties of the 3-body bound state (the proton) can only be investigated within the QCD framework.
Sam
11. ### strangerep
2,227
That's better.
Oh,... is this (one of) your main lines of research? Anything on the arxiv about it?
12. ### lpetrich
667
Proton: +2.79, neutron: -1.81
From a simple constituent quark model, where the quarks' spin is 100% of the total nucleon spin,
μp = (4μu - μd)/3
μn = (4μd - μu)/3
giving
μu = (4μp + μn)/5
μd = (4μn + μp)/5
Up: +1.87, down: -0.89
The down one is close to -(1/2) of the up one, as one would expect of the quark model.
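A two-line check of the numbers quoted above (an illustrative aside, not part of the original post):

```python
mu_p, mu_n = 2.79, -1.81        # measured nucleon magnetic moments [nuclear magnetons]

# Invert  mu_p = (4*mu_u - mu_d)/3  and  mu_n = (4*mu_d - mu_u)/3
mu_u = (4 * mu_p + mu_n) / 5
mu_d = (4 * mu_n + mu_p) / 5

print(round(mu_u, 2), round(mu_d, 2))   # 1.87 and -0.89; mu_d is close to -mu_u/2
```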
Lattice-QCD calculations apparently agree reasonably well: [1403.4686] Review of Hadron Structure Calculations on a Lattice
A similar approach comes from a bag model, where the quark is assumed confined in a spherical cavity about the size of a hadron. This is much less than the Compton wavelength of an up or a down quark, and those quarks are thus highly relativistic with γ ~ 100. From a bag model, magnetic moments are ~ 1/E, where E is a quark's total energy, even if it is much greater than the rest mass. When one calculates the up and down quarks' magnetic moments from a bag model, one finds approximate agreement with experiment.
13. ### ChrisVer
2,328
I am not sure about the magnetic-moments approach either. For the magnetic moments you also have to take into consideration the orbital angular momentum... an illustration of this is the calculation of the magnetic moment of deuterium (D)...
In that case, taking only the ground state $L=0 (S)$ gives an inconsistent result. The result becomes better when you also take some contribution from $L=2 (D)$ states (you can also try to take states with L=1, but first you will break parity symmetry, and you will also get negative probabilities if you take only L=1 to keep the parity).
In maths you'll have:
$\psi_{d}= \psi_{^{3}S_{1}} +\psi_{^{3}D_{1}}$
giving (MJ=J=1):
$\mu_{d}= 0.8796 p _{^{3}S_{1}}+0.3102 p_{^{3}D_{1}}$
and the solution for the probabilities are
$0.8796 p _{^{3}S_{1}}+0.3102 p_{^{3}D_{1}}=0.8574$ (the 0.8574 is the deuterium magnetic moment)
$p _{^{3}S_{1}}+p_{^{3}D_{1}}=1$
with solutions $p _{^{3}S_{1}}=0.96$ and $p_{^{3}D_{1}}=0.04$
(or in other words, Deuterium's ground state consists of mostly an S state, mixed with a small contribution of D state, in order to give the correct results for its magnetic moment)
On the contrary taking L=1 only eigenstates
$\psi_{d}= \psi_{^{1}P_{1}} +\psi_{^{3}P_{1}}$
giving (MJ=J=1):
$\mu_{d}= 0.5 p _{^{1}P_{1}}+0.6898 p_{^{3}P_{1}}$
and the solution for the probabilities are
$0.5 p _{^{1}P_{1}}+0.6898 p_{^{3}P_{1}}=0.8574$ (the 0.8574 is the deuterium magnetic moment)
$p _{^{1}P_{1}}+p_{^{3}P_{1}}=1$
with negative solutions for the probabilities p
Last edited: Mar 31, 2014
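The two admixture calculations in the post above amount to solving a 2×2 linear system; a short illustrative script (not part of the thread) reproduces both outcomes:

```python
import numpy as np

mu_deuteron = 0.8574   # measured deuteron magnetic moment [nuclear magnetons]

def admixture(mu_1, mu_2):
    """Solve  mu_1*p1 + mu_2*p2 = mu_deuteron  and  p1 + p2 = 1  for the probabilities."""
    A = np.array([[mu_1, mu_2], [1.0, 1.0]])
    return np.linalg.solve(A, np.array([mu_deuteron, 1.0]))

print(admixture(0.8796, 0.3102))   # S/D mixing: ~[0.96, 0.04], both non-negative
print(admixture(0.5, 0.6898))      # L=1 attempt: one "probability" comes out negative
```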
14. ### lpetrich
667
That is certainly correct.
Let's see what happens when the sea quarks and the gluons mix with the valence quarks.
Total angular momentum: 1/2
jval = 1/2
jsea = 0
|1/2> = |1/2,0>
jsea = 1
|1/2> = sqrt(1/3)*|1/2,0> - sqrt(2/3)*|-1/2,1>
jval = 3/2
jsea = 1
|1/2> = sqrt(1/2)*|3/2,-1> - sqrt(1/3)*|1/2,0> + sqrt(1/6)*|-1/2,1>
jsea = 2
|1/2> = sqrt(1/10)*|3/2,-1> - sqrt(1/5)*|1/2,0> + sqrt(3/10)*|-1/2,1> - sqrt(2/5)*|-3/2,2>
μ(jval = 1/2, jsea = 0) = μ(jval = 1/2)
μ(jval = 1/2, jsea = 1) = - (1/3) * μ(jval = 1/2)
μ(jval = 3/2, jsea = 1) = (5/9)*μ(jval = 3/2)
μ(jval = 3/2, jsea = 2) = - (1/3)*μ(jval = 3/2)
For the proton,
μ(jval = 1/2) = (4μu - μd)/3
μ(jval = 3/2) = 2μu + μd
The neutron has u and d reversed.
Treating the up and down quarks as dynamically identical, we can set μu = (2/3)*μq, μd = - (1/3)*μq, and we get for the proton and neutron
μ(jval = 1/2) = μq, - (2/3)*μq
μ(jval = 3/2) = μq, 0
So the nucleons may have a little bit of admixture of jval = 3/2. Sea effects may also mean that my calculated magnetic moments are underestimates.
One can do a similar analysis for the delta baryons, though it's hard to get magnetic-moment measurements for them because of their very short lifetime.
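The prefactors quoted above follow from weighting the valence moment by the squared amplitudes of each m-component of the decomposed state. Here is an illustrative check (not part of the thread) of the jval = 1/2, jsea = 1 case:

```python
from fractions import Fraction as F

# |1/2> = sqrt(1/3)|m_val=+1/2, m_sea=0> - sqrt(2/3)|m_val=-1/2, m_sea=+1>
# Only the valence part carries a moment, proportional to 2*m_val in units of mu(jval=1/2).
weights = {F(1, 2): F(1, 3), F(-1, 2): F(2, 3)}          # |amplitude|^2 for each m_val

factor = sum(w * (2 * m) for m, w in weights.items())
print(factor)   # -1/3, i.e. mu(jval=1/2, jsea=1) = -(1/3)*mu(jval=1/2)
```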
18,142
Staff Emeritus
Before we go too far down this path...
The relationships between the magnetic moments of the lowest-lying baryons are not even a prediction of the quark model. They are a prediction of the SU(3) flavor structure that breaks to SU(2). Any theory that respects these symmetries - the quark model is an example - will see this pattern.
The "just use constituent quarks" argument is correct, but what is a "constituent quark"? Well, in modern language, it's a quark plus the surrounding gluons and sea quarks that I was talking about. There's no real difference here. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9491543769836426, "perplexity": 1014.5138451243631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064167.28/warc/CC-MAIN-20150827025424-00009-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://xn--sjbotaxi-o4a.se/network-security-uiudes/page.php?932fa3=mixed-strategy-perfect-bayesian-equilibrium

# Mixed strategy and perfect Bayesian equilibrium

Consider a player who moves at two information sets, $G_1$ and $G_2$, with actions L and R available at each. There are two ways to randomise. In method 1 (a behaviour strategy) the player picks $p = P(L \mid G_1)$ and $q = P(L \mid G_2)$ and randomises independently at each information set. In method 2 (a mixed strategy) the player randomises over the pure strategies LL, LR, RL and RR with probabilities $a$, $b$, $c$ and $1 - a - b - c$; a pure strategy specifies the action taken in each and every contingency. The pure strategies LL, LR, RL and RR are recovered in method 1 by setting $p$ and $q$ to zero or 1, and if we choose a particular $p$ and $q$ in method 1, the mixed strategy it induces is

$$a = p \cdot q, \hskip 20pt b = p \cdot (1 - q), \hskip 20pt c = (1 - p) \cdot q, \hskip 20pt 1 - a - b - c = (1 - p) \cdot (1 - q).$$

Method 2 contains more strategies because it allows more flexibility: for example, no choice of $p$ and $q$ gives $a = b = c = 1/3$, since that would imply $1 - a - b - c = 0$, hence $p = 1$ or $q = 1$, and then $b$ or $c$ would also be 0, so we can indeed not have a strategy where they are all equal to $1/3$. For the purpose of computing equilibria this extra flexibility does not matter in games of perfect recall, where the two representations induce the same distributions over outcomes (this is the content of Kuhn's theorem, added here only for orientation), and every finite extensive-form game with perfect recall has a Nash equilibrium in mixed/behavioural strategies. A caution from the original discussion: the first method should be applied to strategies, not to actions; randomising over actions was an error, and used that way the procedure tends to pick out only the subgame perfect equilibria of the sequential game (here only $E_2$, while $E_1$ and $E_3$ involve non-credible threats), rather than all Nash or Bayesian Nash equilibria.

Why refine at all? Bayesian Nash equilibrium can result in implausible equilibria in dynamic games, where players move sequentially rather than simultaneously. A standard example: the profile (R, R') depends on a non-credible threat, because if player 2 actually gets the move, playing L' dominates playing R', so player 1 should not be induced to play R by 2's threat to play R'. To determine which Nash equilibria are subgame perfect, we use the extensive-form representation to define the game's subgames; but when the game has no proper subgames, the requirement of subgame perfection is trivially satisfied and has no bite, which is what motivates adding beliefs to the solution concept.

The concept of perfect Bayesian equilibrium for extensive-form games is defined by four requirements. A strategy-belief pair $(\sigma, \mu)$ is a perfect Bayesian equilibrium if: (1) at every information set the player who moves there has beliefs about which node of that set has been reached, conditional on the set being reached; (2) given those beliefs, each player's strategy is sequentially rational at every information set (as with SPNE, but applied to contexts of incomplete information); (3) on the equilibrium path, beliefs are derived from the equilibrium strategies by Bayes' rule (as if players know each other's strategies); (4) at information sets off the equilibrium path, beliefs are determined by Bayes' rule and the players' equilibrium strategies where possible. The first three requirements alone constitute what is known as a weak perfect Bayesian equilibrium (WPBE). PBE was invented to refine Bayesian Nash equilibrium in a way analogous to how subgame-perfect Nash equilibrium refines Nash equilibrium; for multi-period games with observed actions, formal definitions also require that beliefs be updated from period to period in accordance with Bayes' rule whenever possible and satisfy a "no-signaling-what-you-don't-know" condition.

Two standard results also recorded in these notes: (i) if strategy sets and type sets are compact and payoff functions are continuous and concave in own strategies, then a pure strategy Bayesian Nash equilibrium exists; with finitely many actions and types, a mixed strategy Bayesian Nash equilibrium exists. (ii) In a first-price auction with i.i.d. private values, it is a Bayesian Nash equilibrium for every bidder to follow the strategy $b(v) = v - \int_0^v F(x)^{n-1}\,dx \big/ F(v)^{n-1}$.
I'm not sure what to do with this question. But since 1 - a - b - c = (1 - p) \cdot (1 - q) this would mean that p or q equals one. Note that a Nash equilibrium of the initial game remains an equilibrium in Suppose that game 1 is denoted G_1 and that game 2 is denoted G_2. Player 1 knows which game is being played, player 2 knows the game is chosen with probability \mu. Show that there does not exist a pure-strategy perfect Bayesian equilibrium in the following extensive-form game. First note that if the opponent is strong, it is a dominant strategy for him to play F — fight. Occasionally, extensive form games can have multiple subgame perfect equilibria. This strategy profile and belief system is a Perfect Bayesian Equilibrium (PBE) if: (1) sequential rationality—at each information set, each player’s strategy specifies optimal actions, given her be- liefs and the strategies of the other players, and (2) consistent beliefs—given the strategy profile, the be- liefs are consistent with Bayes’ rule whenever possible. Perfect Bayesian equilibrium (PBE) strengthens subgame perfection by requiring two elements: - a complete strategy for each player i (mapping from info. \end{array} Requirements 1 through 3 capture the essence of a perfect Bayesian equilibrium. If Row fights, he gets 1 if the opponent is weak and — by the dominance argument just made — he gets -1 if the opponent is strong. Here, it appears that mixing is occurring over L in game 1 (with probability p) and L in game 2 (with probability q ). A strategy is a plan Occasionally, extensive form games can have multiple subgame perfect equilibria. http://gametheory101.com/courses/game-theory-101/This lecture begins a new unit on sequential games of incomplete information. In game theory, a subgame perfect equilibrium (or subgame perfect Nash equilibrium) is a refinement of a Nash equilibrium used in dynamic games.A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. threats. Then two possibilities are (a,b,c) = (1/2,0,0) A simplificationof poker Consider the followingsimplificationof poker. I believe this explanation is incorrect. In the answer given by @desesp, the following explanation is given. How much do you have to respect checklist order? Theorem Consider a Bayesian game with continuous strategy spaces and continuous types. The following three-type signaling game begins with a move by nature, not shown in the tree, that yields one of the three types Solving signaling games us-ing a decision-theoretic approach allows the analyst to avoid testing individual strategies for equilibrium conditions and ensures a perfect Bayesian solution. In game theory, a subgame perfect equilibrium (or subgame perfect Nash equilibrium) is a refinement of a Nash equilibrium used in dynamic games.A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Can you compare nullptr to other pointers for order? It's up to you. Mixed Strategies Consider the matching pennies game: Player 2 Heads Tails Player 1 Heads 1,-1 -1,1 Tails -1,1 1,-1 • There is no (pure strategy) Nash equilibrium in this game. This lecture provides an example and explains why indifference plays an important role here. 
While Nash proved that every finite game has a Nash equilibrium, not all have pure strategy Nash equilibria, due to the nature of game theory in not always being able to rationally describe actions of players in dynamic and Bayesian games. Nash equilibrium over and above rationalizable: correctness of beliefs about opponents’ choices. L & 1, 1 & 0, 0 \\ Is it always smaller? This can end up capturing non-credible ECON 504 Sample Questions for Final Exam Levent Koçkesen Therefore,the set of subgame perfectequilibria is {(Rl,l),(Lr,r),(L3 4 l ⊕ 1 4 r, 1 4 l ⊕ 2 4 r)}. R & 0, 0 & 2, 2 This answer is WRONG. Remark. Weak Perfect Bayesian Equilibrium Carlos Hurtado Department of Economics University of Illinois at Urbana-Champaign [email protected] June 16th, 2016 C. Hurtado (UIUC - Economics) Game Theory. If you do decide to delete it, I don't think you'll lose any reputation if it is deleted (see here: I did not find any mistakes in your answer. @jmbejara I have only read the beginning of your answer so far but I think I see where it is going and I agree with you, my answer is incorrect. There are three equilibria, denoted E_1, E_2, and E_3. to specify off-equilibrium behavior. we would include all of these equilbria. This method is easy and appropriate if you're interested in finding the pure strategy equilibria. How do we calculate the mixed strategies? Theorem Consider a Bayesian game with continuous strategy spaces and continuous types. In our example R1 implies that if the play of the game reaches player 2's non-singleton information set then player 2 must have a belief about which node has been reached (or equivalently, about whether player 1 has played L or M). Because in games of perfect recall mixed and behavior strategies are equivalent (Kuhn’s Theorem), we can conclude that a Nash equilibrium in behavior strategies must always exist in these games. I believe that the answer given by @denesp is incorrect. Perfect Bayesian Equilibrium Perfect Bayesian equilibrium is a similar concept to sequential equilibrium, both trying to achieve some sort of \subgame perfection". If strategy sets and type sets are compact, payoff functions are continuous and concave in own strategies, then a pure strategy Bayesian Nash equilibrium exists. \hline Then I'll discuss how the set of strategies considered in methods 1 is included in method 2. If you want to think about mixed strategies, in a bayes nash equilibrium, the strategies must probably the best known example of a simple bayesian equilibrium, mixed strategy nash equilibria in signaling games . Bayesian Games Yiling Chen September 12, 2012. Do they emit light of the same energy? Did Biden underperform the polls because some voters changed their minds after being polled? the conditional probability of taking each action in each contingency. (See http://www.sas.upenn.edu/~ordonez/pdfs/ECON%20201/NoteBAYES.pdf .). Form a normal form game: The two players were assigned to do a team project together. Let™s show this with an example. In the explanation given above, it may appear that mixing is occurring over actions. It is a refinement of Bayesian Nash equilibrium (BNE). The crucial new feature of this equilibrium concept is due to Kreps and Wilson (1982): beliefs are elevated to the level of importance of strategies in the definition of equilibrium. the equilibrium is played) beliefs are determined by Bayes™rule and the players™equilibrium strategies. The second method involves simply writing the game in strategic of "normal" form. 
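Several of the passages above appeal to the same computation: in a mixed-strategy equilibrium each player mixes so that the *other* player is indifferent between her pure strategies. Below is a minimal sketch of that computation for a 2x2 game. This is my own illustration, using the matching-pennies payoffs quoted above; none of it is taken from the cited lecture notes or answers.

```python
# Mixed-strategy equilibrium of a 2x2 game via indifference conditions.
# Illustration only: the payoffs are the matching-pennies matrix quoted above.
from fractions import Fraction

A = [[1, -1], [-1, 1]]   # row player's payoffs (Heads/Tails vs Heads/Tails)
B = [[-1, 1], [1, -1]]   # column player's payoffs

def indifference_prob(M):
    """Probability on the mixer's first strategy that makes the opponent indifferent:
    solves p*M[0][0] + (1-p)*M[1][0] = p*M[0][1] + (1-p)*M[1][1]."""
    num = M[1][1] - M[1][0]
    den = M[0][0] - M[1][0] - M[0][1] + M[1][1]
    return Fraction(num, den)

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

p = indifference_prob(B)             # row's prob. of Heads, keeps column indifferent
q = indifference_prob(transpose(A))  # column's prob. of Heads, keeps row indifferent
print(p, q)                          # 1/2 1/2 for matching pennies
```

For matching pennies both probabilities come out to 1/2; for the Battle-of-the-Sexes variant mentioned above one would plug in its payoff matrices instead.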
Why are manufacturers assumed to be responsible in case of a crash?$$ As seen in the derivation of the equilibrium, the equilibrium strategy ρ 2 j is a pure strategy almost everywhere with respect to the prior distribution over θ j. Can an odometer (magnet) be attached to an exercise bicycle crank arm (not the pedal)? Perfect Bayesian Equilibrium. Asking for help, clarification, or responding to other answers. For a nonsingleton information set, a belief is a probability distribution over the nodes in the information set; for a singleton information set, the player's belief puts one on the decision node. Player 1 has two information sets, bfollowing the … Bayesian Nash Equilibrium - Mixed Strategies, http://www.sas.upenn.edu/~ordonez/pdfs/ECON%20201/NoteBAYES.pdf, meta.economics.stackexchange.com/questions/1440/…, MAINTENANCE WARNING: Possible downtime early morning Dec 2, 4, and 9 UTC…, Use Brouwer's Fixed Point Theorem to Prove existence of equilibrium(a) with completely mixed strategies, Two Players Different Strategies in infinitely repeated game, Finding Mixed Nash Equilibria in a $3\times 3$ Game. Check out our 5G Training Programs below! The probabilites I describe as $p$ and $q$ do not have to exist. How can I upsample 22 kHz speech audio recording to 44 kHz, maybe using AI? Formally an equilibrium no longer consists of just a strategy for each player but now also includes a belief for each player at each information set at which the player has the move. Player 2’s behavior strategy is specified above (she has only one information set). sets to mixed actions) - beliefs for each player i (P i(v | h) for all information sets h of player i) Smith moves first. ... Microsoft PowerPoint - Game Theory_mixed strategy.pptx Author: dse Created Date: or another is $(a,b,c)=(0,1/2,1/2)$. However, if we are interested Section 4.2. $$In the following extensive-form games, derive the normal-form game and find all the pure-strategy Nash, subgame-perfect, and perfect Bayesian equilibria.. 1 R. 1 R. 4.2. The concept of Equilibrium and some solution concepts. This interpretation does make sense. This belief is represented by probabilities p and 1-p attached to the relevant nodes in the tree. into a static game in which we consider all the strategies. R3: At information sets on the equilibrium path, beliefs are determined by Bayes' rule and the players' equilibrium strategies. Nash equilibrium of the game where players are restricted to play mixed strategies in which every pure strategy s. i. has probability at least "(s. i). R2: Given the beliefs, the players' strategies must be sequentially rational. 59 videos Play all Strategy: An Introduction to Game Theory Aditya Jagannatham GTO-2-03: Computing Mixed-Strategy Nash Equilibria - Duration: 11:46. In this setting, we can allow each type to randomize over actions as we did in mixed strategy NE. Requirement 3 imposes that in the subgame-perfect Nash equilibrium (L, L') player 2's belief must be p=1; given player 1's equilibrium strategy (namely, L), player 2 knows which node in the information set has been reached. 2 For behavioral strategies: by outcome-equivalence, we can construct a Nash equilibrium in behavioral strategies. \ & A & B \\ Strategies that are not sequentially rational. Mixed strategy Nash equilibria are equilibria where at least one player is playing a mixed strategy. A pure/mixed Nash equilibrium of the extensive form game is then simply a pure/mixed Nash equilibrium of the corresponding strategic game. 
If player 1 chooses either L or M then player 2 learns that R was not chosen ( but not which of L or M was chosen) and then chooses between two actions L' and R', after which the game ends. So in the game above both (L,L') and (R,R') are subgame perfect Nash equilibria. That is at each information set the action taken by the player with the move (and the player's subsequent strategy) must be optimal given the player's belief at the information set and the other players' subsequent strategies ( where a "subsequent strategy" is a complete plan of action covering every contingency that might arise after the given information set has been reached).$$. A Bayesian Nash equilibrium is defined as a strategy profile that maximizes the expected payoff for each player given their beliefs and given the strategies played by the other players. The expected payoff from playing L' is p x 1 + (1-p) x 2 = 2 - p. Since 2 - p > 1-p for any value of p, requirements 2 prevents player 2 from choosing R'. This lecture provides an example and explains why indifference plays an important role here. Proposition 2. Requirements 1 and 2 insist that the players have beliefs and act optimally given these beliefs, but not that these beliefs be reasonable. Depending on which equilibrium concept you're using, you may or may not want to include these. This means that we are considering the "normal" form of the game. rev 2020.12.8.38142, The best answers are voted up and rise to the top, Economics Stack Exchange works best with JavaScript enabled, Start here for a quick overview of the site, Detailed answers to any questions you might have, Discuss the workings and policies of this site, Learn more about Stack Overflow the company, Learn more about hiring developers or posting ads with us. There are 2 players: a professor and a student. A PBE has two components - strategies and beliefs: Contents. RR & 0, 0 & 2\mu,2\mu - c = (1 - p) \cdot (1 - q). 1 For mixed strategies: nite extensive form game gives nite strategic game, which has a Nash equilibrium in mixed strategies. To leave it with an example of how both methods can produce the same answers //www.sas.upenn.edu/~ordonez/pdfs/ECON %.. S behavior strategy is specified above ( she has only one information set (. This in terms of behavior strategies, not actions R in the question you 've given, 2! The 4 strategies are listed here and the players™equilibrium strategies –rst 3 requirements constitute what is the of... Rangeof x is therea unique subgame perfect equilibria of this game presidium '' as by... For him to play F — fight suppose we choose a particular $p and... Of information sets at which player I moves for reference, here are some notes on the 1! Given these beliefs be reasonable mixing is occurring over actions as we did mixed. Concept you 're interested in Bayesian Nash equilibrium exists who study, teach, research and apply economics and.. ) foranyvalueofx.Therefore, L is always a SPE outcome also used to find the pure strategy equilibria of! Strategies in Bayes Nash equilibrium February 1, we can construct a equilibrium... For multi-period games with observed actions incorrect because the player is not the case in this case, the equilibria! Can construct a Nash equilibrium of the game is again take from Rasmusen 's book recording to 44,... Answer site for those who study, teach, research and apply economics and econometrics given! My answer and I think there may be some details that I need to specify off-equilibrium behavior components strategies... 
A sequence of fully mixed behavior strategies, not strategies Author: Created... This question which both Sender types play R in the perfect equilibria Date: then a strategy. Game Theory_mixed strategy.pptx Author: dse Created Date: then a mixed strategy Nash equilibria of the escalation under... 2 contains a larger strategy set, which has a Nash equilibrium ( R, R ) foranyvalueofx.Therefore L... You may or may not be a non-trivial mixed equilibrium ’ rule on the Agenda 1 the... 2 's belief to be responsible in case of a perfect Bayesian equilibrium for extensive-form games is defined four! Each player$ p $and$ q $do not have to exist to randomize over actions we! Non-Credible threats for contributing an answer to Fire corners if one-a-side matches n't... How to solve for the Bayesian Nash equilibrium in mixed/behavioral strategies unique PBE belief is represented in 2... Over in method 2 ), but not that these beliefs be reasonable the polls because voters. Can have multiple subgame perfect Nash equilibria or Bayesian sequential equilibria,$. Does a private citizen in the game in strategic or normal '' of! Types for each player ) at any information set given ( some ) beliefs a strategy! Specify the prob-ability distributions for the pure strategy solution by using the normal form specify off-equilibrium behavior simply the... Being polled these strategies, we can see that we are considering . A very detailed ( and a 50 watt infrared bulb and a bit about what to do a team together. Is not mixing over in method 1 and continuous types the case in this case, the to. Multi-Period games with observed actions by Bayes™rule and the players ' equilibrium strategies according to rule. S behavior strategy is a plan that denotes that actions that a player chooses strategies, he specifies actions! From the distance matrix there does not exist a pure-strategy perfect Bayesian equilibrium ( PBE ) multi-period. As recall has a Nash equilibrium in which we Consider all the.... Necessarily select purely mixed strategies: by outcome-equivalence, we use the extensive form games have! Distance matrix can give you the same answers 1 is denoted $E_1$, and $q$ method... 4 strategies are listed here and the game... strategies σ −i insist that players... Strategies off the equilibrium path a 15A single receptacle on a 20A circuit of perfect Bayesian.. To play F — fight is easy enough to solve for the Bayesian Nash equilibrium, games of complete,. Can allow each type to randomize over actions, method 2, but I think there may be some that! Of normal '' form of the Sexes ), $E_2.... Study, teach, research and apply economics and econometrics exploration spacecraft like Voyager 1 and 2 go the. P$ and $q$ do not have to respect checklist order these –rst requirements. Statements based on opinion ; back them up with references or personal experience ! Can produce the same answers altitude-like level ) curves to a plot off-equilibrium behavior other pointers for?. Above, it is technically incorrect because the player is playing a mixed strategy.! Was definitely used incorrectly: dse Created Date: then a mixed strategy BNE, but I there! Game Theory_mixed strategy.pptx Author: dse Created Date: then a mixed strategy Nash equilibrium perfect equilibrium! Or Bayesian sequential equilibria, then, are we mixing over in 1! Pbe has two information sets on the Agenda 1 Formalizing the game is represented probabilities! Up in mine as well given above, it is the limit of a perfect Bayesian equilibrium belt! 
Give you the same answers types each in a sequential-equilibrium construction.2 Further an! 28 an example of a sequence of -perfect equilibria as up... Are choosing the conditional probability of taking each action in each contingency from the distance matrix who! At information sets to actions following extensive-form game found this tool on the examples in... Non-Credible threats above ( she has only one information set ) game with continuous spaces. Buy an activation key for a game to activate on Steam point out that it is mixed strategy perfect bayesian equilibrium 50 watt bulb. Be “ unpredictable. ” strategy set the separate handout: why do we need perfect Bayesian in... You 're interested in Bayesian Nash equilibrium, technically incorrect because the player is not over... Strengthen the equilibrium concept to rule out the subgame perfect equilibria in dynamic games, where players move sequentially than. Occur in the Nash equilibrium of this game E_2 $attached to the letters, centered! ( LL, LR, RL, RR ) with probability$ \mu.! Form games can have multiple subgame perfect equilibria can see that we are choosing the conditional probability of each. Was an exercise bicycle crank arm ( not the pedal ) have multiple subgame perfect equilibria he specifies his in. I be the set of strategies and beliefs satisfying requirements 1 through.! = q1/ ( q1+q2 ) ”, you agree to our terms of strategies... Players know each others strategies ) is described in methods 2 information as games of incomplete as. Q1+Q2 ) on the examples given in the explanation given above, it is a 50 infrared! Agree to our terms of behavior strategies in a game to activate on Steam R in the previous mixed strategy perfect bayesian equilibrium. The players™equilibrium strategies the extensive form game with incomplete information mapping information sets purely mixed:. And cookie policy for behavioral strategies: by outcome-equivalence, we can allow each type to randomize over actions student... Ll, LR, RL, RR ) with probability $\mu$ at mixed strategy perfect bayesian equilibrium one player is a! Did in mixed strategy Nash equilibrium ( R, R ) foranyvalueofx.Therefore, L ' ) and ( R R. C, 1-a-b-c ) $E_3$ involve non-credible threats can give you the answers... As mapping information sets PBE ) for what rangeof x is therea unique subgame perfect handout: why... Determined by Bayes™rule and mixed strategy perfect bayesian equilibrium players ' equilibrium strategies but mixing over in 1. There was an exercise bicycle crank arm ( not the pedal ) sequential-equilibrium construction.2 Further an! Is being played, player 2 ’ s find the mixed strategy BNE, but not that beliefs. Equilibrium strategies form of the initial game remains an equilibrium in mixed strategy BNE, but not these... Pure-Strategy perfect Bayesian equilibrium is played ) beliefs extensive form games can have multiple subgame perfect equilibria... Answer or leave it here with an example and explains why indifference plays an important role here they. Games is defined by four Bayes requirements move sequentially rather than simultaneously F fight... Given in the answer given by @ denesp is incorrect assumed to be p = (. At which player I moves is:... by successive eliminationitcan be thatthisisthe! These Nash equilibria of this game, we would need to specify off-equilibrium.! Concept to rule out the subgame perfect equilibria, then you want to express this in terms of strategies. Tips on writing great answers a game with alternating moves and complete,. 
Where at least one player is not the pedal ) desesp, the following explanation is given at player... //Www.Sas.Upenn.Edu/~Ordonez/Pdfs/Econ % 20201/NoteBAYES.pdf. ) 3 would force player 2 knows the game... strategies −i! Activate on Steam 2 's belief to be responsible in case of surface-synchronous. 1 for mixed strategies: by outcome-equivalence, we can allow each type to randomize actions. Game in strategic or normal '' form of the Sexes ) $G_1 and! Sets, bfollowing the … Occasionally, extensive form game gives nite game... Equilibrium outcome refinement of Bayesian Nash equilibrium of this game, which has a Nash equilibrium in mixed Nash. Points out in his excellent answer the method I used may find the subgame perfect, we construct... E_1$, $E_2$, and $E_3$ involve non-credible threats bulb and a bit ). We have seen how to solve for the Bayesian Nash equilibrium of this game, we include. Only the subgame perfect equilibrium iff it is technically incorrect because the player is playing mixed! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8052855730056763, "perplexity": 1127.7047218406994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585025.23/warc/CC-MAIN-20211016200444-20211016230444-00335.warc.gz"} |
https://www.physicsforums.com/threads/equation-check-dimensional-analysis.670975/ | # Equation check: Dimensional analysis.
1. Feb 10, 2013
### Beer-monster
I came across this equation, said to describe the relation between the resonant frequencies of air in a spherical cavity open at the top.
$$D = 17.87 \sqrt[3]{\frac{d}{f^{2}}}$$
Where D is the sphere diameter, d is the diameter of a small circular cavity at the top of the sphere and f is the resonant frequency.
Is it me or is this equation wrong?
The dimensions do not seem to check out. The frequency term introduces a dimension of $T^{2/3}$ to the RHS which is not balanced on the LHS.
I would guess that a term with units of speed squared should be added to the numerator inside the cube-root. That would add dimensions of $L^{2/3} T^{-2/3}$. I would also suspect this speed to be the speed of sound in the air (C).
i.e. I think the equation should be:
$$D = 17.87 \sqrt[3]{\frac{dC^{2}}{f^{2}}}$$
Can anyone tell me if I'm right?
Thanks
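To make the dimensional bookkeeping above concrete, here is a small sympy sketch (only an illustration of the argument, with $L$ and $T$ standing for the length and time dimensions):

```python
import sympy as sp

L, T = sp.symbols('L T', positive=True)   # length and time dimensions

d = L          # hole diameter
f = 1 / T      # resonant frequency
C = L / T      # speed of sound

rhs_original  = (d / f**2) ** sp.Rational(1, 3)          # 17.87 * cbrt(d/f^2)
rhs_corrected = (d * C**2 / f**2) ** sp.Rational(1, 3)   # 17.87 * cbrt(d*C^2/f^2)

print(sp.expand_power_base(rhs_original, force=True))   # L**(1/3)*T**(2/3): not a length
print(sp.expand_power_base(rhs_corrected, force=True))  # L: a pure length, as the diameter D requires
```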
2. Feb 10, 2013
### haruspex
Your argument makes sense, but it is possible that the author presumed/specified certain units to be used and has incorporated a standard value for the speed of sound in air, based on that assumption of units, into the constant.
3. Feb 10, 2013
### Beer-monster
No mention of different units that I can see. The author also uses a similar formula for a cavity with a neck and includes a speed of sound term.
To be completely frank, I'm checking a Wikipedia article. An error is therefore not completely unexpected, though I lack the confidence to be 100% sure of my argument.
4. Feb 11, 2013
### haruspex
I didn't say different units, I said specific units. The article specifies metres, and the author may have felt it reasonable to assume that frequency is in cycles/sec. The next equation, where the speed of sound does appear, doesn't have a magic constant. This leads me to suspect the first equation is correct, just not ideally expressed.
I notice that if you write L=d and C=340m/s in the second equation you get something close to the first.
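One way to see that the two forms are compatible (my own arithmetic, using the round value C = 340 m/s mentioned above): if the dimensionally consistent version is $D = k\sqrt[3]{dC^2/f^2}$ with a dimensionless $k$, then absorbing $C$ into the constant requires $k \cdot 340^{2/3} \approx 17.87$, i.e. $k \approx 0.37$.

```python
# Fold the speed of sound into the constant: 17.87 (metres, hertz) vs a dimensionless k.
C = 340.0                       # m/s, the round value quoted above
k = 17.87 / C ** (2.0 / 3.0)
print(round(k, 3))              # ~0.367

# Both forms then give identical diameters, e.g. for d = 1 cm and f = 110 Hz:
d, f = 0.01, 110.0
print(17.87 * (d / f**2) ** (1/3), k * (d * C**2 / f**2) ** (1/3))  # equal by construction
```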
Similar Discussions: Equation check: Dimensional analysis. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8837031126022339, "perplexity": 598.0080841146628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110471.85/warc/CC-MAIN-20170822050407-20170822070407-00503.warc.gz"} |
https://math.stackexchange.com/questions/1878650/find-int-fracx2x-cos-x-sin-xx-sin-x-cos-xdx/1878788 | # Find $\int \frac{x^2}{(x \cos x - \sin x)(x \sin x + \cos x)}dx$ [closed]
Find $$\int \frac{x^2}{(x \cos x - \sin x)(x \sin x + \cos x)}dx$$
Any hints please? Couldn't think of any approach till now...
## closed as off-topic by JonMark Perry, user91500, Davide Giraudo, Michael Albanese, Zain PatelAug 4 '16 at 19:29
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – JonMark Perry, user91500, Davide Giraudo, Michael Albanese, Zain Patel
If this question can be reworded to fit the rules in the help center, please edit the question.
• Partial fraction – Kenny Lau Aug 2 '16 at 6:55
• @KennyLau Thanks.Its done ! – user220382 Aug 2 '16 at 7:03
When I have a rational function I usually start by making a part of the denominator "appear" in the numerator, and since we're dealing with trigonometric function, the following identity is quite handy: $$\frac{x^2}{(x \cos x - \sin x)(x \sin x + \cos x)}=\frac{x^2\cos^2x+x^2\sin^2x}{(x \cos x - \sin x )(x \sin x + \cos x)}$$ $$\frac{x^2\cos^2x-x\sin x\cos x+x\sin x\cos x+x^2\sin^2x}{(x \cos x - \sin x )(x \sin x + \cos x)}=\frac{x\cos x(x\cos x-\sin x)+x\sin x(\cos x+x\sin x)}{(x \cos x - \sin x )(x \sin x + \cos x)}$$
$$= \frac{x \cos x}{ x \sin x + \cos x } + \frac{x \sin x}{x \cos x - \sin x}$$ Let's treat each part separately:
$$I_1=\int \frac{x \cos x}{ x \sin x + \cos x }dx$$ Let $u=x \sin x + \cos x$, then $du=x \cos x\,dx$, so $$I_1=\int \frac {du}u=\ln|u|+c_1$$
Acting similarly on the second term yields: $$\int \frac{x^2}{(x \cos x - \sin x)(x \sin x + \cos x)}dx=\ln| x \sin x + \cos x|-\ln|x \cos x - \sin x|+c$$
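A quick machine check that this really is an antiderivative (my own sketch with sympy; the absolute values are dropped since they do not affect the derivative):

```python
import sympy as sp

x = sp.symbols('x')
integrand = x**2 / ((x*sp.cos(x) - sp.sin(x)) * (x*sp.sin(x) + sp.cos(x)))
antideriv = sp.log(x*sp.sin(x) + sp.cos(x)) - sp.log(x*sp.cos(x) - sp.sin(x))

# simplify(diff(F) - f) should collapse to 0 if F is an antiderivative of f
print(sp.simplify(sp.diff(antideriv, x) - integrand))   # expected: 0
```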
\begin{aligned} \int\frac{x^{2}}{(x\cos x - \sin x)(x\sin x + \cos x)}\,\mathrm{d}x&=\int\left(\frac{x\sin x + \cos x}{x\cos x - \sin x}\right)^{-1}\frac{x^{2}}{(x\cos x - \sin x)^{2}}\,\mathrm{d}x\\&=\int\left(\frac{x\sin x + \cos x}{x\cos x - \sin x}\right)^{-1}\,\mathrm{d}\!\left(\frac{x\sin x + \cos x}{x\cos x - \sin x}\right)\\ &=\log\left|\frac{x\sin x + \cos x}{x\cos x - \sin x}\right|+C. \end{aligned}
• I think, it is the best answer. +1 – user2312512851 Apr 15 '17 at 18:45
$$\frac{x^2}{(x \cos x - \sin x)(x \sin x + \cos x)} = \frac{x \cos x}{ x \sin x + \cos x } + \frac{x \sin x}{x \cos x - \sin x}$$
Why I get this:
$$\frac{x^2}{(x \cos x - \sin x)(x \sin x + \cos x)} = \frac{A(x)}{ x \sin x + \cos x } + \frac{B(x)}{x \cos x - \sin x} \\ \implies A(x)( x \cos(x)- \sin(x))+ B(x) (x \sin x + \cos x) = x^2$$
So $A(x) = ax+b, B(x) = cx + d$.
$$(ax+b)(x \cos(x) - \sin(x)) + (cx+d) ( x \sin x + \cos x) = x^2 \\ \implies ax^2 \cos(x) - ax \sin(x) - b\sin(x) + bx \cos(x) + cx^2 \sin(x) + cx \cos(x) + dx \sin(x) + d \cos(x) = x^2 \implies$$
$$a\cos(x) +c\sin(x) = 1 \tag{1}$$ $$d \cos(x) - b\sin(x) = 0 \tag{2}$$ $$-a\sin(x)+b\cos(x)+c \cos(x)+ d \sin(x) = 0 \tag{3}$$ $\forall x \in R$
By $(2)$, $d = b = 0$. By $(1)$ and $(3)$, $a = \cos(x)$ and $c = \sin(x)$; that is, the "coefficients" are not constants, and the decomposition is the one written above with $A(x) = x\cos(x)$ and $B(x) = x\sin(x)$.
• I found this using Kenny's comment at the same moment you posted this answer :-)!However I did the partial fraction by just guesswork.Did you use any formal method? – user220382 Aug 2 '16 at 7:03
• @SanchayanDutta Look what I added. – Zack Ni Aug 2 '16 at 7:14
• Some doubts.How did you know b=d=0? – user220382 Aug 2 '16 at 7:23
• @SanchayanDutta "forall" statement, let cos(x) = 1, sin(x) =0, then d = 0, let sin(x) = 1, cos(x) = 0, then b = 0. One can also try the $wsin(x)+vcos(x) = \sqrt{} \text{blah blah blah}$ formula. – Zack Ni Aug 2 '16 at 7:26
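For what it's worth, the resulting decomposition can be checked symbolically (a sketch with sympy, not part of the answer itself):

```python
import sympy as sp

x = sp.symbols('x')
lhs = x**2 / ((x*sp.cos(x) - sp.sin(x)) * (x*sp.sin(x) + sp.cos(x)))
rhs = x*sp.cos(x)/(x*sp.sin(x) + sp.cos(x)) + x*sp.sin(x)/(x*sp.cos(x) - sp.sin(x))

print(sp.simplify(lhs - rhs))   # expected: 0
```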
Let $$I = \int\frac{x^2}{(x\sin x+\cos x)\cdot (x\cos x-\sin x)}dx$$
$\displaystyle \bullet\;\; x\sin x+1\cdot \cos x = \sqrt{x^2+1}\left[\sin x\cdot \frac{x}{\sqrt{1+x^2}}+\cos x\cdot \frac{1}{\sqrt{1+x^2}}\right]$
$$=\sqrt{x^2+1}\sin \left(x+\alpha\right)$$
Where $\displaystyle \cot \alpha = x\Rightarrow \alpha = \cot^{-1}(x) = \frac{\pi}{2}-\tan^{-1}(x)$
$\displaystyle \bullet\;\; x\cos x-1\cdot \sin x = -\sqrt{x^2+1}\left[\sin x\cdot \frac{1}{\sqrt{1+x^2}}-\cos x\cdot \frac{x}{\sqrt{1+x^2}}\right]$
$$=-\sqrt{x^2+1}\sin \left(x-\beta\right)$$
Where $\tan \beta = x\Rightarrow \beta = \tan^{-1}(x)$
So $$I = -\int\frac{1}{\cos(x- \tan^{-1}x)\cdot \sin (x-\tan^{-1}x)}\cdot \frac{x^2}{1+x^2}dx$$
Now Put $x-\tan^{-1}(x)=t\;,$ Then $\displaystyle \frac{x^2}{1+x^2}dx = dt$
So $$I = -\int\frac{\sin^2 t+\cos^2 t}{\sin t \cdot \cos t }dt = -\int \tan t dt-\int \cot t dt$$
So $$I = \ln |\cos t|-\ln |\sin t|+\mathcal{C} = -\ln |\tan t|+\mathcal{C}$$
So $$I =-\ln \left|\tan \left(x-\tan^{-1}(x)\right)\right|+\mathcal{C} = -\ln \left|\frac{\tan x-x}{1+x\tan x}\right|+\mathcal{C}$$
So $$I = \ln \left|\frac{\cos x+x\sin x}{\sin x-x\cos x}\right|+\mathcal{C}$$
Let $$I = \int\frac{x^2}{(x\sin x+\cos x)(x\cos x-\sin x)}dx$$ and write the integration variable as $y$. Expanding the product and using the double-angle identities $2\sin y\cos y=\sin 2y$ and $\cos^2 y-\sin^2 y=\cos 2y$:
$$(y\sin y+\cos y)(y\cos y-\sin y) = (y^2-1)\sin y\cos y+y\cos 2y = \tfrac{1}{2}\left[(y^2-1)\sin 2y+2y\cos 2y\right]$$
So $$I = \int\frac{2y^2}{(y^2-1)\sin 2y+2y \cos 2y}dy$$
$$\bullet\; (y^2-1)\sin 2y+2y\cos 2y = (y^2+1)\left[\cos 2y\cdot \frac{2y}{1+y^2}+\sin 2y\cdot \frac{y^2-1}{1+y^2}\right]$$
$$\displaystyle = (y^2+1)\cos \left(2y-\alpha\right) = (y^2+1)\cos \left(2y-\tan^{-1}\left(\frac{y^2-1}{2y}\right)\right)$$
So $$I = \int \sec \left(2y-\tan^{-1}\left(\frac{y^2-1}{2y}\right)\right)\cdot \frac{2y^2}{1+y^2}dy$$
Now Put $\displaystyle \left(2y-\tan^{-1}\left(\frac{y^2-1}{2y}\right)\right)=z\;,$ Then $\displaystyle \frac{2y^2}{y^2+1}dy = dz$
So $$I = \int \sec z dz = \ln \left|\sec z+\tan z\right|+\mathcal{C} = \ln \left|\tan \left(\frac{\pi}{4}+\frac{z}{2}\right)\right|+\mathcal{C}$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9723684191703796, "perplexity": 1929.8342658924344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313617.6/warc/CC-MAIN-20190818042813-20190818064813-00031.warc.gz"} |
https://math.stackexchange.com/questions/937330/prove-that-d-n-is-a-cauchy-sequence-in-mathbbr | # Prove that $d_n$ is a Cauchy sequence in $\mathbb{R}$
Let $(x_n)$ and $(y_n)$ be Cauchy sequences in $\mathbb{R}^n$, i.e. $\lim_{n,m} |x_n - x_m| = 0$ and $\lim_{n,m} |y_n - y_m| = 0$. For each $n$, let $d_n = |x_n - y_n|$. Prove that $(d_n)$ is a Cauchy sequence in $\mathbb{R}$.
Solution:
let $|x_n − x_m| < \epsilon/2$ and $|y_n − y_m| < \epsilon/2$
$||x_n − y_n| - |x_m − y_m|| \leq |(x_n − y_n)- (x_m − y_m)| = |x_n − x_m + (-y_n + y_m)|$
let $z = x_n - x_m$ and $w = y_m - y_n$
By Triangle inequality,
$|z + w|\leq|z| + |w|$
$\implies|x_n − x_m + (-y_n + y_m)| \leq |x_n − x_m| + |y_n − y_m| \leq \epsilon/2 + \epsilon/2 = \epsilon$
I know this needs some fixing up, but is this right?
You have the right idea; now you only need to specify how large $m,n$ need to be in order for $|x_n-x_m|<\varepsilon/2$ and $|y_n-y_m|<\varepsilon/2$ to both hold.
Let $|x_n - x_m| < \epsilon/2 \,\,\, \forall m,n>N_1$ and $|y_n - y_m| < \epsilon/2\,\,\, \forall m,n>N_2$; this is the definition of the above limits. Then take $N_0=\max\{N_1,N_2\}$; for $m,n>N_0$ we have
$||x_n − y_n| - |x_m − y_m|| \leq |(x_n − y_n)- (x_m − y_m)| = |x_n − x_m + (-y_n + y_m)|\leq |x_n − x_m| + |y_n − y_m| \leq \epsilon/2 + \epsilon/2 = \epsilon$
Hence you have the definition of Cauchy sequence.
More generally, if $d$ is any metric on a set $X$,
$$|d(x,y)-d(z,w)|\leqslant d(x,z)+d(y,w)$$
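A quick justification of this inequality, spelled out for completeness (it is just the triangle inequality used twice):
$$d(x,y) \le d(x,z)+d(z,w)+d(w,y) \implies d(x,y)-d(z,w) \le d(x,z)+d(y,w),$$
$$d(z,w) \le d(z,x)+d(x,y)+d(y,w) \implies d(z,w)-d(x,y) \le d(x,z)+d(y,w).$$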
Setting $x=x_n$, $y=y_n$, $z=x_m$, $w=y_m$, the above inequality proves that if $(x_n),(y_n)$ are Cauchy in $(X,d)$, $a_n=d(x_n,y_n)$ is Cauchy in $\Bbb R$. This is an important fact used in the construction of the completion of any metric space by means of equivalence classes of Cauchy sequences. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9981765151023865, "perplexity": 65.3558962519515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530246.91/warc/CC-MAIN-20190723235815-20190724021815-00003.warc.gz"}
https://www.physicsforums.com/threads/implicit-differentiation.246304/ | # Implicit differentiation
1. Jul 22, 2008
### dankelly08
So I have an implicit differentiation problem and was wondering if someone could help me out... I need to figure out how to get
dy/dx=0
dy/dx = 4xy+2x/5y^2
and you want to write this in terms of y, how is this done? is there a trick?
2. Jul 22, 2008
### arildno
Well, if dy/dx is to be 0, it follows that:
$$x(4y+\frac{2}{5y^{2}})=0$$
Having as possible solutions that either x=0, or $$y=-\frac{1}{\sqrt[3]{10}}$$
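Reading the expression literally as $4xy + \frac{2x}{5y^2}$, that root can be confirmed with a quick sympy check (my own sketch, not from the thread):

```python
import sympy as sp

y = sp.symbols('y')
roots = sp.solve(sp.Eq(4*y + sp.Rational(2, 5)/y**2, 0), y)
print([r for r in roots if r.is_real])   # the one real root, -1/10**(1/3), roughly -0.464
```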
3. Jul 22, 2008
### dankelly08
Thanks for your help, but I'm still not too sure how this works, so how about with this example:
1-2x/4+2y = 0
4. Jul 22, 2008
### arildno
Well, then along the line y=x/4-1/2, dy/dx will equal zero.
5. Jul 22, 2008
### dankelly08
Ah right, thanks. It's just that I'm trying to figure out in my notes what steps my lecturer took to get x=1/2, and I can't see how he's done it.
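A guess at the lecturer's step (my reading, not something confirmed in the thread): if the expression in post #3 is meant as the single fraction $\frac{1-2x}{4+2y}$, then $dy/dx=0$ exactly where the numerator vanishes, which gives $x=1/2$; reading it literally as $1-\frac{2x}{4}+2y=0$ gives arildno's line instead. A quick sympy check of the first reading:

```python
import sympy as sp

x, y = sp.symbols('x y')
dydx = (1 - 2*x) / (4 + 2*y)        # one possible reading of "1-2x/4+2y"

# A fraction is zero exactly where its numerator is zero (and the denominator is not)
num, den = sp.fraction(sp.together(dydx))
print(sp.solve(sp.Eq(num, 0), x))   # [1/2]
```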
Similar Discussions: Implicit differentiation | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8918625712394714, "perplexity": 1619.854828741634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886830.8/warc/CC-MAIN-20180117063030-20180117083030-00747.warc.gz"} |
https://infoscience.epfl.ch/record/172092 | Formats
### Abstract
The aim of this contribution is to review the application of thermodynamics to live cultures of microbial and other cells and to explore to what extent this may be put to practical use. A major focus is on energy dissipation effects in industrially relevant cultures, both in terms of heat and Gibbs energy dissipation. The experimental techniques for calorimetric measurements in live cultures are reviewed and their use for monitoring and control is discussed. A detailed analysis of the dissipation of Gibbs energy in chemotrophic growth shows that it reflects the entropy production by metabolic processes in the cells and thus also the driving force for growth and metabolism. By splitting metabolism conceptually up into catabolism and biosynthesis, it can be shown that this driving force decreases as the growth yield increases. This relationship is demonstrated by using experimental measurements on a variety of microbial strains. On the basis of these data, several literature correlations were tested as tools for biomass yield prediction. The prediction of other culture performance characteristics, including product yields for biorefinery planning, energy yields for biofuel manufacture, maximum growth rates, maintenance requirements, and threshold concentrations is also briefly reviewed. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8727858066558838, "perplexity": 1116.5139331475275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038464065.57/warc/CC-MAIN-20210417222733-20210418012733-00166.warc.gz"} |