http://www.researchgate.net/publication/221596132_How_to_identify_and_estimate_the_largest_traffic_matrix_elements_in_a_dynamic_environment | Conference Paper
# How to identify and estimate the largest traffic matrix elements in a dynamic environment.
DOI: 10.1145/1005686.1005698 Conference: Proceedings of the International Conference on Measurements and Modeling of Computer Systems, SIGMETRICS 2004, June 10-14, 2004, New York, NY, USA
Source: DBLP
ABSTRACT In this paper we investigate a new idea for traffic matrix estimation that makes the basic problem less under-constrained by deliberately changing the routing to obtain additional measurements. Because all these measurements are collected over disparate time intervals, we need to establish models for each Origin-Destination (OD) pair to capture the complex behaviours of Internet traffic. We model each OD pair with two components: the diurnal pattern and the fluctuation process. We provide models that incorporate these two components to estimate both the first- and second-order moments of traffic matrices. We do this for both stationary and cyclo-stationary traffic scenarios. We formalize the problem of estimating the second-order moment in a way that is completely independent of the first-order moment. Moreover, we can estimate the second-order moment without needing any routing changes (i.e., without explicit changes to IGP link weights). We prove for the first time that such a result holds for any realistic topology under the assumption of […]. We highlight how the second-order moment helps identify the largest OD flows, which carry the most significant fraction of network traffic. We then propose a refined methodology: use our variance estimator (without routing changes) to identify the largest flows, and estimate only these flows. The benefit of this method is that it dramatically reduces the number of routing changes needed. We validate the effectiveness of our methodology, and the intuitions behind it, using real aggregated sampled NetFlow data collected from a commercial Tier-1 backbone.
##### Conference Paper: A toolchain for simplifying network simulation setup
ABSTRACT: Arguably, one of the most cumbersome tasks required to run a network simulation is the setup of a complete simulation scenario and its implementation in the target simulator. This process includes selecting a topology, provisioning it with all required parameters and, finally, configuring traffic sources or generating traffic matrices. Many tools exist to address some of these tasks. However, most of them do not provide methods for configuring network and traffic parameters, while others only support a specific simulator. As a consequence, a user often needs to implement the desired features personally, which is both time-consuming and error-prone. To address these issues, we present the Fast Network Simulation Setup (FNSS) toolchain. It provides capabilities for parsing topologies from datasets or generating them synthetically, assigning desired configuration parameters, and generating traffic matrices or event schedules. It also provides APIs for a number of programming languages and network simulators to easily deploy the simulation scenario in the target simulator.
Proceedings of the 6th International ICST Conference on Simulation Tools and Techniques; 03/2013
http://mprnotes.wordpress.com/2009/08/14/changing-background-image-of-latex-beamer/ | ## Changing background image of LaTeX Beamer
I’ve learned a very nice trick to change the background image of your LaTeX Beamer presentations. First, I will give an example of how to change the background image for all your frames. All you have to do is put the following code into the preamble of your .tex document:
\usebackgroundtemplate{ \includegraphics[width=\paperwidth, height=\paperheight]{my_bg_image} }
Now if you want to change the background for only one specific frame, enclose the frame in a group (a pair of braces), set an image (in this example my_bg_image) as the background inside that group, and then enter the code of your frame, as in the following example:
{
\usebackgroundtemplate{\includegraphics[width=\paperwidth]{my_bg_image}}
\begin{frame}
\frametitle{Frame with nice background}
\begin{itemize}
\item 1
\item 2
\item 3
\end{itemize}
\end{frame}
}
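For reference, here is a minimal complete document showing the preamble version in context (just a sketch; it assumes an image file my_bg_image.png or .jpg sits next to the .tex file):
\documentclass{beamer}
% global background: applied to every frame
\usebackgroundtemplate{\includegraphics[width=\paperwidth, height=\paperheight]{my_bg_image}}
\begin{document}
\begin{frame}
\frametitle{Frame with a global background}
Some content.
\end{frame}
\end{document}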
That’s all. Now we are able to create some beautiful slides.
https://rank1neet.com/5-1-intermolecular-forces/ | # 5.1 Intermolecular Forces
Intermolecular forces are the forces of attraction and repulsion between interacting particles (atoms and molecules). This term does not include the electrostatic forces that exist between two oppositely charged ions, nor the forces that hold the atoms of a molecule together, i.e., covalent bonds.
Attractive intermolecular forces are known as van der Waals forces, in honour of Dutch scientist Johannes van der Waals (1837-1923), who explained the deviation of real gases from the ideal behaviour through these forces.
We will learn about this later in this unit. van der Waals forces vary considerably in magnitude and include dispersion forces or London forces, dipole-dipole forces, and dipole-induced dipole forces.
A particularly strong type of dipole-dipole interaction is hydrogen bonding. Only a few elements can participate in hydrogen bond formation, therefore it is treated as a separate category. We have already learnt about this interaction in Unit 4.
At this point, it is important to note that attractive forces between an ion and a dipole are known as ion-dipole forces and these are not van der Waals forces. We will now learn about different types of van der Waals forces.
http://www.r-bloggers.com/bivariate-densities-with-n01-margins/ | # Bivariate Densities with N(0,1) Margins
February 18, 2014
(This article was first published on Freakonometrics » R-english, and kindly contributed to R-bloggers)
This Monday, in the ACT8595 course, we came back to elliptical distributions and conditional independence (here is an old post on de Finetti’s theorem, and the extension to Hewitt-Savage’s). I have shown simulations to illustrate those two concepts of dependent variables, but I wanted to spend some time visualizing densities. More specifically, what could the joint density be if we assume that margins are $\mathcal{N}(0,1)$ distributions?
• The Bivariate Gaussian distribution
Here, we consider a Gaussian random vector, with margins $\mathcal{N}(0,1)$, and with correlation $r\in[-1,+1]$. This is the standard graph, with elliptical isodensity curves
```
# correlation and covariance matrix of the Gaussian vector
r=.5
library(mnormt)
S=matrix(c(1,r,r,1),2,2)
f=function(x,y) dmnorm(cbind(x,y),varcov=S)
vx=seq(-3,3,length=201)
vy=seq(-3,3,length=201)
z=outer(vx,vy,f)
# simulate a sample to overlay on the density plot
set.seed(1)
X=rmnorm(1500,varcov=S)
xhist <- hist(X[,1], plot=FALSE)
yhist <- hist(X[,2], plot=FALSE)
top <- max(c(xhist$density, yhist$density, dnorm(0)))
# layout: joint density in the middle, marginal histograms on the sides
nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE)
par(mar=c(3,3,1,1))
image(vx,vy,z,col=rev(heat.colors(101)))
points(X,cex=.2)
par(mar=c(0,3,1,1))
barplot(xhist$density, axes=FALSE, ylim=c(0, top), space=0, col="light green")
lines((density(X[,1])$x-xhist$breaks[1])/diff(xhist$breaks)[1],
      dnorm(density(X[,1])$x), col="red")
par(mar=c(3,0,1,1))
barplot(yhist$density, axes=FALSE, xlim=c(0, top), space=0,
        horiz=TRUE, col="light green")
lines(dnorm(density(X[,2])$x), (density(X[,2])$x-yhist$breaks[1])/
      diff(yhist$breaks)[1], col="red")
```
That was the simple part.
• The Bivariate Student-t distribution
Consider now another elliptical distribution. But we want here to normalize the margins. Thus, instead of a pair $(X,Y)$, we would like to consider the pair $(\Phi^{-1}(T_\nu(X)),\Phi^{-1}(T_\nu(Y)))$, so that the marginal distributions are $\mathcal{N}(0,1)$. The new density is obtained simply since the transformation is a one-to-one increasing transformation. Here, we have
```
k=3
r=.5
# G maps the t scale to the normal scale; Ginv goes back
G=function(x) qnorm(pt(x,df=k))
# dg is the derivative G'(z), for z on the t scale
dg=function(x) dt(x,df=k)/dnorm(qnorm(pt(x,df=k)))
Ginv=function(x) qt(pnorm(x),df=k)
S=matrix(c(1,r,r,1),2,2)
# change of variables: the Jacobian G' must be evaluated on the t scale,
# i.e. at Ginv(x) and Ginv(y)
f=function(x,y) dmt(cbind(Ginv(x),Ginv(y)),S=S,df=k)/(dg(Ginv(x))*dg(Ginv(y)))
vx=seq(-3,3,length=201)
vy=seq(-3,3,length=201)
z=outer(vx,vy,f)
set.seed(1)
Z=rmt(1500,S=S,df=k)
X=G(Z)
```
Because we considered a nonlinear transformation of the margins, the level curves are no longer elliptical. But there is still some kind of symmetry.
• The Exchangeable Case with Conditionally Independent Random Variables
We did consider the case where $X$ and $Y$ are independent random variables given $\Theta$, and both variables are exponentially distributed with parameter $\Theta$. As we’ve seen in class, it might be difficult to visualize that sample, unless we have log scales on both axes. But instead of a log transformation, why not consider a transformation so that margins will be $\mathcal{N}(0,1)$. The only technical problem is that we do not have the (unconditional) distributions of the margins in closed form. Well, we have them, but they are integral based. From a computational point of view, that’s not a big deal… Computations might take a while, but we can visualize the density using the following code (here, we assume that $\Theta$ is Gamma distributed)
```
a=.6
b=1
h=.0001
# unconditional cdf of a margin, mapped to the normal scale
G=function(x) qnorm(ifelse(x<0,0,integrate(function(z) pexp(x,z)*
   dgamma(z,a,b),lower=0,upper=Inf)$value))
Ginv=function(x) uniroot(function(z) G(z)-x,lower=-40,upper=1e5)$root
# numerical derivative of Ginv (the Jacobian of the transformation)
dg=function(x) (Ginv(x+h)-Ginv(x-h))/2/h
# unconditional joint density of the exponential pair
H=function(xy) integrate(function(z) dexp(xy[2],z)*dexp(xy[1],z)*
   dgamma(z,a,b),lower=0,upper=Inf)$value
f=function(x,y) H(c(Ginv(x),Ginv(y)))*(dg(x)*dg(y))
vx=seq(-3,3,length=151)
vy=seq(-3,3,length=151)
z=matrix(NA,length(vx),length(vy))
for(i in 1:length(vx)){
  for(j in 1:length(vy)){
    z[i,j]=f(vx[i],vy[j])}}
set.seed(1)
Theta=rgamma(1500,a,b)
Z=cbind(rexp(1500,Theta),rexp(1500,Theta))
X=cbind(Vectorize(G)(Z[,1]),Vectorize(G)(Z[,2]))
```
There is a small technical problem, but no big deal.
Here, the joint distribution is quite different. Margins are – one more time – standard Gaussian, but the shape of the joint distribution is quite different, with an asymmetry from the lower (left) tail to the upper (right) tail. More details when we introduce copulas. The only difference will be that the margins will be uniform on the unit interval, and not standard Gaussian.
http://mathcs.chapman.edu/~jipsen/structures/doku.php/monoids | ## Monoids
Abbreviation: Mon
### Definition
A monoid is a structure $\mathbf{M}=\langle M,\cdot ,e\rangle$, where $\cdot$ is an infix binary operation, called the monoid product, and $e$ is a constant (nullary operation), called the identity element, such that
$\cdot$ is associative: $(x\cdot y)\cdot z=x\cdot (y\cdot z)$
$e$ is an identity for $\cdot$: $e\cdot x=x$, $x\cdot e=x$.
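A first consequence of these axioms: the identity element is unique, since if $e$ and $e'$ are both identities then $e=e\cdot e'=e'$.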
##### Morphisms
Let $\mathbf{M}$ and $\mathbf{N}$ be monoids. A morphism from $\mathbf{M}$ to $\mathbf{N}$ is a function $h:M\rightarrow N$ that is a homomorphism:
$h(x\cdot y)=h(x)\cdot h(y)$, $h(e)=e$
### Examples
Example 1: $\langle X^{X},\circ ,id_{X}\rangle$, the collection of functions on a set $X$, with composition and the identity map.
Example 2: $\langle M(V)_{n},\cdot ,I_{n}\rangle$, the collection of $n\times n$ matrices over a vector space $V$, with matrix multiplication and the identity matrix.
Example 3: $\langle \Sigma ^{\ast },\cdot ,\lambda \rangle$, the collection of strings over a set $\Sigma$, with concatenation and the empty string. This is the free monoid generated by $\Sigma$.
### Properties
Classtype: variety. Equational theory: decidable in polynomial time. Quasiequational theory: undecidable. First-order theory: undecidable. Locally finite: no. Residual size: unbounded. The remaining properties in the table (congruence distributive, congruence modular, congruence $n$-permutable, congruence regular, congruence uniform, congruence extension property, definable principal congruences, amalgamation property, epimorphisms surjective): no.
### Finite members
$\begin{array}{lr} f(1)= &1\\ f(2)= &2\\ f(3)= &7\\ f(4)= &35\\ f(5)= &228\\ f(6)= &2237\\ f(7)= &31559\\ \end{array}$
http://math.stackexchange.com/questions/186379/axiom-of-union?answertab=oldest | Axiom of Union?
I'm reading Comprehensive Mathematics for Computer Scientists 1. On the second chapter: Axiomatic Set Theory.
He first states the axiom of the empty set, the axiom of equality and then he proceeds to the axiom of union:
Axiom 3 (Axiom of Union) If $a$ is a set, then there is a set:
$\{x \mid \text{there exists an element } b\in a \text{ such that } x\in b\}$.
This set is denoted by $\bigcup a$ and is called the union of $a$.
Notation 2 If $a = \{b,c\}$ or $a = \{b,c,d\}$, respectively, one also writes $b \cup c$, or $b \cup c \cup d$, respectively, instead of $\bigcup a$.
I've learned the definition of Union while I was in school, but it wasn't with axioms, they just gave an intuitive example:
$a=\{1,2,3\}$
$b=\{4,5\}$
$a\cup b=\{1,2,3,4,5\}$
I can't see how the notion of this intuitive example happens on the axiom of union. In my example, it's easy to understand because there's a mention to another set, where's the mention in this axiom?
In ZFC, any element of a set is itself a set, so the interpretation of $\cup a$ is the union of all the sets in $a$. – Kris Aug 24 '12 at 20:54
The exposition in that book is rather terse, especially for a book addressed to non-mathematicians (something noted in its Amazon reviews.) There are better examples of the axiom of union at work in Hrbacek and Jech, p. 10, although unlike everything in the answers shown below, the examples of Hrbacek and Jech use "pure" set theory i.e. without urelements like $a$ or $b$. (In layman's terms, this means only the empty set and braces are used to build sets.) It's somewhat insightful to see how the axiom works out in that context as well. – Respawned Fluff Apr 12 at 3:16
Also if you do have urelements (aka atoms) other than the empty set, then the axiom of union loses the "top level" ones. E.g. $\bigcup \{a,\{b\}\}$ is just $\{b\}$. This is one of the troubles with urelements and why the axiom looks strange with urelements. Hat tip to Tourlakis' book for mentioning this. – Respawned Fluff Apr 12 at 4:06
The connection between your example and the more general definition is that $\bigcup\{a,b\}=a\cup b$. Written out in all its gory details, this is
$$\bigcup\Big\{\{1,2,3\},\{4,5\}\Big\}=\{1,2,3\}\cup\{4,5\}=\{1,2,3,4,5\}\;.$$
Let’s check that against the definition:
\begin{align*} &\bigcup\Big\{\{1,2,3\},\{4,5\}\Big\}\\ &\qquad=\left\{x:\text{there exists an element }y\in\Big\{\{1,2,3\},\{4,5\}\Big\}\text{ such that }x\in y\right\}\\ &\qquad=\Big\{x:x\in\{1,2,3\}\text{ or }x\in\{4,5\}\Big\}\\ &\qquad=\{1,2,3\}\cup\{4,5\}\\ &\qquad=\{1,2,3,4,5\}\;. \end{align*}
Take a slightly bigger example. Let $a,b$, and $c$ be any sets; then
\begin{align*} \bigcup\{a,b,c\}&=\Big\{x:\text{there exists an element }y\in\{a,b,c\}\text{ such that }x\in y\Big\}\\ &=\{x:x\in a\text{ or }x\in b\text{ or }x\in c\}\\ &=a\cup b\cup c\;. \end{align*}
One more, even bigger: for $n\in\Bbb N$ let $A_n$ be a set, and let $\mathscr{A}=\{A_n:n\in\Bbb N\}$. Then
\begin{align*} \bigcup\mathscr{A}&=\Big\{x:\text{there exists an }n\in\Bbb N\text{ such that }x\in A_n\Big\}\\ &=\{x:x\in A_0\text{ or }x\in A_1\text{ or }x\in A_2\text{ or }\dots\}\\ &=A_0\cup A_1\cup A_2\cup\dots\\ &=\bigcup_{n\in\Bbb N}A_n\;. \end{align*}
If there exists an element $y\in\Big\{\{1,2,3\},\{4,5\}\Big\}$, then shouldn't we have a $\Big\{\{1,2,3\},\{4,5\},y\Big\}$? – Jesus Christ Aug 24 '12 at 18:15
@Gustavo: No: $y$ is a dummy name used here to stand for any member of the set $\Big\{\{1,2,3\},\{4,5\}\Big\}$. Here the possible values of $y$ are $\{1,2,3\}$ and $\{4,5\}$. – Brian M. Scott Aug 24 '12 at 18:17
@Gustavo: Yes, $x$ in expressions like $\{x:\text{something}\}$ is also a dummy variable; it can stand for anything that satisfies the condition $\text{something}$. – Brian M. Scott Aug 24 '12 at 18:34
Then I have to choose one of the sets on: $\{\{1,2,3 \},\{4,5 \}\}$ and then choose an element inside the chosen one? – Jesus Christ Aug 24 '12 at 18:37
Yep. Your last comment seems to tell me that I should consider all possible options, isn't it? – Jesus Christ Aug 24 '12 at 18:38
Let $A=\{a,b\}$ (the set whose only elements are $a$ and $b$). Then the union of $a$ and $b$ that you described is what the Axiom of Union produces from $A$.
Remark: Informally, let $A$ be a set whose elements are a bunch of plastic bags with stuff in them (so $A$ is a set of sets). Then the set produced by the Axiom of Union from $A$ dumps the stuff contained in the bags into a single bag. (Duplicates are thrown away.)
Oh, then $\bigcup a$ acts like a variable. When he says: "There is a set..." he refers to a set that have no name, it's only $\{$x | such that... $\}$ and not $z=\{$x | such that... $\}$, Then I thought that this nameless set were implicit here $\bigcup a$, if it had a name $z$, it would be written like: $z\bigcup a$. Is this right? – Jesus Christ Aug 24 '12 at 18:02
@GustavoBandeira: I am having trouble understanding what you mean. Here $\cup$ acts as a unary operator (function). If you apply it to the set $\{a,b,c,d\}$ of sets it produces $a\cup b\cup c\cup d$. It operates similarly on an infinite collection $\{a_1,a_2,a_3,\dots\}$ of sets. – André Nicolas Aug 24 '12 at 18:11
I wasn't aware that I could write it like LISP. – Jesus Christ Aug 24 '12 at 18:59
@GustavoBandeira: It is a common notation. In principle, all we use is $\in$ and logical symbols, but in doing set theory it is then useful (indeed almost necessary!) to introduce abbreviations for important constructions. – André Nicolas Aug 24 '12 at 19:56
When we write $a\cup b$ we actually mean $\bigcup\{a,b\}$. This is a shorthand instead of writing long formulas every time we want to talk about the union of two sets.
Yep. Same as I commented here. – Jesus Christ Aug 24 '12 at 18:04
I've read your comment with attention now. This reminds me of LISP where you can write (+ 2 3). – Jesus Christ Aug 24 '12 at 19:41
@Gustavo: Think of $\bigcup$ as a LISP function "union": $$(\textrm{union }a\ b\ \ldots)$$ It takes a list of sets and returns their union. The $a\cup b$ notation is a bit like C syntax. – Asaf Karagila Aug 24 '12 at 20:35
Think of $a$ as a set (or collection, if you like) of other sets. Then $\bigcup a$ is the union of all these sets. So, for instance, in your example:
$$\bigcup \lbrace\lbrace 1,2,3\rbrace,\lbrace 4,5\rbrace\rbrace = \lbrace 1,2,3,4,5\rbrace$$
You may think of $A\cup B$ as shorthand for $\bigcup \lbrace A,B\rbrace$.
Yep, it's the same as I pointed here. – Jesus Christ Aug 24 '12 at 18:03
This axiom talks about a set of sets.
This is because the axiom states $b\in a$ and $x\in b$: $x$ in $b$ tells you that $b$ is a set (and is an element of $a$).
For example: $a=\{\{1\},\{2,3\}\}$ then the axiom states that $\{1\}\cup\{2,3\}=\{1,2,3\}$ exists.
https://arxiv.org/abs/1101.5834 | math.AG
# Thom-Sebastiani & Duality for Matrix Factorizations
Abstract: The derived category of a hypersurface has an action by "cohomology operations" k[t], deg t=-2, underlying the 2-periodic structure on its category of singularities (as matrix factorizations). We prove a Thom-Sebastiani type Theorem, identifying the k[t]-linear tensor products of these dg categories with coherent complexes on the zero locus of the sum potential on the product (with a support condition), and identify the dg category of colimit-preserving k[t]-linear functors between Ind-completions with Ind-coherent complexes on the zero locus of the difference potential (with a support condition). These results imply the analogous statements for the 2-periodic dg categories of matrix factorizations. Some applications include: we refine and establish the expected computation of 2-periodic Hochschild invariants of matrix factorizations; we show that the category of matrix factorizations is smooth, and is proper when the critical locus is proper; we show how Calabi-Yau structures on matrix factorizations arise from volume forms on the total space; we establish a version of Knörrer Periodicity for eliminating metabolic quadratic bundles over a base.
Comments: 78 pages. Draft
Subjects: Algebraic Geometry (math.AG); Category Theory (math.CT)
Cite as: arXiv:1101.5834 [math.AG] (or arXiv:1101.5834v1 [math.AG] for this version)
## Submission history
From: Anatoly Preygel
[v1] Sun, 30 Jan 2011 23:34:40 UTC (95 KB)
https://www.physicsforums.com/threads/black-holes-at-lhc.229888/ | # Black holes at LHC
1. Apr 19, 2008
### hammertime
This may seem like a stupid question that's been brought up several times but it is regarding the possible creation of mini-black holes at the LHC. It's said that these MBH's pose no threat to the planet because of their small size and the fact that they will evaporate by Hawking radiation. However, while there is much mathematical and theoretical evidence pointing towards HR, it has never been physically observed. So how can we be sure that the MBH's will simply evaporate?
Basically, the saying is that, if HR is correct, MBH's evaporate. Isn't that a pretty big if?
2. Apr 19, 2008
### ZapperZ
Staff Emeritus
Last edited: Apr 19, 2008
http://math.stackexchange.com/questions/49949/what-is-the-basis-for-the-universal-enveloping-algebra-of-su2 | What is the basis for the Universal Enveloping Algebra of su(2)?
Given the standard basis for the Lie algebra $\mathfrak{su}(2)$ of SU(2), $\{i\sigma_1,i\sigma_2,i\sigma_3\}$ where
$\sigma_1=\Biggl(\begin{array}{cc} 0&1\\ 1&0\end{array}\Biggr),\quad\sigma_2=\Biggl(\begin{array}{cc} 0&-i\\ i&0\end{array}\Biggr),\quad\sigma_3=\Biggl(\begin{array}{cc} 1&0\\ 0&-1\end{array}\Biggr),$
I want to find a basis for the universal enveloping algebra, $\mathcal{U}(\mathfrak{su}(2))$. By the Poincaré-Birkhoff-Witt theorem I believe we have
$\{i\sigma_1,i\sigma_2,i\sigma_3,-i\sigma_1\sigma_2,-i\sigma_1\sigma_3,-i\sigma_2\sigma_3,-i\sigma_1\sigma_2\sigma_3\}$,
in other words all lexicographically ordered monomials. However, since products of the Pauli matrices are again Pauli matrices (i.e., $\sigma_1\sigma_2=i\sigma_3$), it would seem that the two algebras have the same basis, just with the Lie bracket $[,]$ replaced with matrix multiplication. Can someone tell me if this is correct?
-
In a canonical monomial the sequences are allowed to be non-decreasing, not just increasing. There are infinitely many such monomials, so your basis should be infinite. The universal enveloping algebra is not the algebra generated by the matrices $\sigma_i$: although $\sigma_1 \sigma_2 = i \sigma_3$ as matrices, this does not imply that $\rho(\sigma_1) \rho(\sigma_2) = i \rho(\sigma_3)$ in any representation $\rho$ of $\mathfrak{su}(2)$. – Qiaochu Yuan Jul 6 '11 at 20:43
(1) The Poincaré-Birkhoff-Witt basis is the infinite set $$(i \sigma_1)^a (i \sigma_2)^b (i \sigma_3)^c \ \mbox{for} \ a,\ b,\ c \geq 0.$$ You have only listed the cases where $a$, $b$ and $c$ are $0$ or $1$.
(2) The relation $\sigma_1 \sigma_2 = i \sigma_3$ does not hold in $U(\mathfrak{su}_2)$. That relation holds in the standard two dimensional representation of $\mathfrak{su}_2$, but it doesn't hold in (for example) the $3$ dimensional representation. The relations in $U(\mathfrak{su}_2)$ are those which hold in all representations of $\mathfrak{su}_2$. (Are you clear on what a representation of a Lie algebra means?)
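To make point (2) concrete, here is a quick check in the $3$-dimensional (spin-1) representation, using the standard angular momentum matrices (one common convention):
$$J_z=\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix},\qquad J_x=\frac{1}{\sqrt2}\begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix},\qquad J_y=\frac{1}{\sqrt2}\begin{pmatrix}0&-i&0\\i&0&-i\\0&i&0\end{pmatrix}.$$
Direct multiplication gives
$$J_xJ_y=\frac{1}{2}\begin{pmatrix}i&0&-i\\0&0&0\\i&0&-i\end{pmatrix}\neq iJ_z,$$
even though the commutator $[J_x,J_y]=iJ_z$ still holds.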
Ok yes I see my confusion. But I wasn't saying that $\sigma_1\sigma_2=i\sigma_3$ should hold for all dimensions (or all representations) - and yes I am pretty clear on what a representation of a Lie algebra is. However, I would still like to have some concrete ways of writing (and using) the representations of $\mathcal{U}(\mathfrak{su}(2))$. Can one say anything further than "here are the basis elements and any relations which hold for all representations of the Lie algebra also hold for these basis elements"? – levitopher Jul 9 '11 at 17:17
https://hal-insu.archives-ouvertes.fr/insu-03635034 | # Dust modeling of the combined ALMA and SPHERE datasets of HD 163296. Is HD 163296 really a Meeus group II disk?
Abstract: Context. Multiwavelength observations are indispensable in studying disk geometry and dust evolution processes in protoplanetary disks.
Aims: We aim to construct a three-dimensional model of HD 163296 that is capable of reproducing simultaneously new observations of the disk surface in scattered light with the SPHERE instrument and thermal emission continuum observations of the disk midplane with ALMA. We want to determine why the spectral energy distribution of HD 163296 is intermediary between the otherwise well-separated group I and group II Herbig stars.
Methods: The disk was modeled using the Monte Carlo radiative transfer code MCMax3D. The radial dust surface density profile was modeled after the ALMA observations, while the polarized scattered light observations were used to constrain the inclination of the inner disk component and turbulence and grain growth in the outer disk.
Results: While three rings are observed in the disk midplane in millimeter thermal emission at 80, 124, and 200 AU, only the innermost of these is observed in polarized scattered light, indicating a lack of small dust grains on the surface of the outer disk. We provide two models that are capable of explaining this difference. The first model uses increased settling in the outer disk as a mechanism to bring the small dust grains on the surface of the disk closer to the midplane and into the shadow cast by the first ring. The second model uses depletion of the smallest dust grains in the outer disk as a mechanism for decreasing the optical depth at optical and near-infrared wavelengths. In the region outside the fragmentation-dominated regime, such depletion is expected from state-of-the-art dust evolution models. We studied the effect of creating an artificial inner cavity in our models, and conclude that HD 163296 might be a precursor to typical group I sources.
Document type: Journal articles
Submitted on: Friday, April 8, 2022. Last modification on: Thursday, May 12, 2022.
### Citation
G. A. Muro-Arena, C. Dominik, L. B. F. M. Waters, M. Min, L. Klarmann, et al.. Dust modeling of the combined ALMA and SPHERE datasets of HD 163296. Is HD 163296 really a Meeus group II disk?. Astronomy & Astrophysics, 2018, 614, ⟨10.1051/0004-6361/201732299⟩. ⟨insu-03635034⟩
http://mathoverflow.net/questions/43019/geometrical-structure-of-critical-points-of-harmonic-functions?answertab=active | # Geometrical structure of critical points of harmonic functions
For a harmonic function $\Phi$ on a simply connected subset $\Gamma$ of $\mathbb{R}^3$, define a guide curve $\gamma: I \to \Gamma$ of $\Phi$ as a simple regular $C^1$ curve such that
• all points in $\gamma(I)$ are critical points of $\Phi$, and
• for all points $p$ in $\gamma(I)$ there exists a neighborhood $V$ of $p$ so that all critical points of $\Phi$ within $V$ are also in $\gamma(I)$.
My question is whether there are any such guide curves which do not have an analytic parametrization?
For a concrete example, consider $\Phi(x,y,z)=x\ y\ z$ for which any part of a coordinate axis not including the origin is a guide curve.
Background: The question is relevant to networks of rf ion traps, where the trapping potential is the ponderomotive potential associated with an oscillating electric field. The local amplitude of electric potential oscillations is described by a harmonic function, and for practical reasons it is preferable to trap ions on critical points of this function so that transport would have to take place along guide curves as introduced above. The aim of my question is to establish what intersection topologies are possible for guide curves. – Janus Wesenberg Oct 21 '10 at 7:28
You need to be a lot more careful with your quantifiers. Let $\Phi$ be a harmonic function without a critical point, say $\Phi(x,y,z) = x$. then trivially any curve $\gamma$ has the property that for every point $p\in \gamma$, for any neighborhood $V$ of $p$, all critical points of $\Phi$ in $V$ (which comprise the empty set) is in $\gamma$. Now, I am not sure what you mean by "analytical parametrization", but given that ALL curves $\gamma$ are allowed in the above example, if there exists curves that does not have analytical parametrization, then you can draw the obvious conclusion. – Willie Wong Oct 21 '10 at 9:39
In analogy with the imaginary part of $(x + i y)^n$ being zero along $n$ lines through the origin in the plane, my guess is that one can arrange guide curves as lines through the origin in $\mathbb{R}^3$ and the vertices of a regular polyhedron, not just the octahedron as you point out. Furthermore I think harmonic polynomials suffice to do this. What seems less clear is larger numbers of points on the unit sphere. Note that for fixed constants $A,B,C$ the Laplacian commutes with $$A \frac{\partial}{ \partial x} + B \frac{\partial}{ \partial y} + C \frac{\partial}{ \partial z}$$ – Will Jagy Oct 21 '10 at 18:40
@Willie Wong: Thank you very much for pointing this one out. I have corrected my definition of guide curves so it now hopefully describes what I am looking for. – Janus Wesenberg Oct 22 '10 at 7:00
Since the Laplacian is elliptic with real-analytic coefficients, a harmonic function $f$ is real-analytic in its domain of definition. Hence the set $C$ of critical points of $f$ is a real-analytic subset of $\mathbb{R}^3$, and as such it admits a locally finite partition into real-analytic locally closed smooth submanifolds. Thus if $\dim C \leq 1$, it is locally a finite union of analytic open arcs and singular points (but the curves might not extend smoothly across those points).
A reference on real analytic functions (reedited in 2002) might be
S. Krantz, H. Parks, A primer of real analytic functions. Birkhäuser Verlag, 1992.
But maybe the "curve selection lemma" in Milnor's "Singular points on complex hypersurfaces" would be enough . edit: it concerns real algebraic subsets.
As an example of a curve of critical points not extending throuh a singular point, take the harmonic polynomial $f(x,y,z)=y^3-3x^2y+y^3z-yz^3$, which has critical locus $y=0,z^3=-3x^2$. But of course you have the singular parametrization $x=3t^3, z=-3t^2$. I don't know if they exist in general.
Addendum: in fact the critical locus of a harmonic polynomial can have an arbitrary (real) plane algebraic curve as a union of irreducible components. Let $P(x,y)$ be a real two-variable polynomial, and define $$f(x,y,z)=\sum_k \frac{z^{2k+1}}{(2k+1)!}(-\Delta_{x,y})^k P(x,y) \; .$$ It is easy to check that $f$ is harmonic, and that $df$ vanishes on $z=0$, $P(x,y)=0$.
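To spell out the check (a two-line computation from the formula above): differentiating twice in $z$ shifts the sum by one term, so
$$\partial_z^2 f=\sum_{k\geq 1} \frac{z^{2k-1}}{(2k-1)!}(-\Delta_{x,y})^k P=-\Delta_{x,y}f,$$
hence $\Delta f=\Delta_{x,y}f+\partial_z^2 f=0$. Moreover, on $z=0$ the only surviving term of $\partial_z f$ is $P(x,y)$, while $\partial_x f$ and $\partial_y f$ each carry a factor of $z$, so $df=0$ on $z=0$, $P(x,y)=0$.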
Thank you very much, BS! Perhaps if I had not been so convinced that all curves of critical points would extend through any singular points, I would have had better luck finding a counterexample myself :) Thus I learn again that (my) intuition and analysis should not be mixed. This is really amazing -- I have been pushing this problem to mathematician friends for a couple of years now with no progress, and then a few days after posting it to mathoverflow it is solved. The future is now (but then, it's 2010 -- so it'd better be) :) – Janus Wesenberg Oct 25 '10 at 2:48
https://cstheory.stackexchange.com/questions/11273/why-valuations-when-defining-fol/11274 | # Why valuations when defining FOL?
Why does one need valuations in order to define the semantics of first-order logic? Why not just define it for sentences and also define formula substitutions (in he expected way). That should be enough:
$$M \models \forall x. \phi \iff \text{for all }d\in \mathrm{dom}(M),\ M \models \phi[x\mapsto d]$$
$$M,v \models \forall x. \phi \iff \text{for all }d\in \mathrm{dom}(M),\ M, v[x\mapsto d] \models \phi$$
It is perfectly possible to define satisfaction using just sentences as you suggest, and in fact, it used to be the standard approach for quite some time.
The drawback of this method is that it requires to mix semantic objects into syntax: in order to make an inductive definition of satisfaction of sentences in a model $M$, it is not sufficient to define it for sentences of the original language of $M$. You need to first expand the language with individual constants for all elements of the domain of $M$, and then you can define satisfaction for sentences in the expanded language. This is, I believe, the main reason why this approach went into disuse; if you use valuations, you can maintain a clear conceptual distinction between syntactic formulas of the original language and semantic entities that are used to model them.
• I think it depends somewhat on whether the author is approaching things from a proof theory side or a model theory side. In the case of proof theory, the original language is of interest for studying provability of sentences, but in the case of model theory the expanded language is more useful for studying definability. So for example Marker's model theory book defines satisfaction via the extended language, but Enderton's intro logic book uses valuations. – Carl Mummert May 3 '12 at 21:50
The meaning of a closed formula is a truth value $\bot$ or $\top$. The meaning of a formula containing a free variable $x$ ranging over a set $A$ is a function from $A$ to truth values. Functions $A \to \lbrace \bot, \top \rbrace$ form a complete Boolean algebra, so we can interpret first-order logic in it.
Similarly, a closed term $t$ denotes an element of some domain $D$, while a term with a free variable denotes a function $D \to D$ because the element depends on the value of the variable.
It is therefore natural to interpret a formula $\phi(x_1, \ldots, x_n)$ with free variables $x_1, \ldots, x_n$ in the complete Boolean algebra $D^n \to \lbrace \bot, \top \rbrace$ where $D$ is the domain over which the variables range. Whether you phrase the interpretation in this complete Boolean algebra in terms of valuations or otherwise is a technical matter.
Mathematicians seem to be generally confused about free variables. They think they are implicitly universally quantified or some such. The cause of this is a meta-theorem stating that $\phi(x)$ is provable if and only if its universal closure $\forall x . \phi(x)$ is provable. But there is more to formulas than their provability. For example, $\phi(x)$ is not generally equivalent to $\forall x . \phi(x)$, so we certainly cannot pretend that these two formulas are interchangable.
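A concrete instance: over the natural numbers, the formula $x = 0$ is true under the valuation sending $x$ to $0$ and false under every other valuation, whereas the sentence $\forall x \, (x = 0)$ is simply false; so a formula and its universal closure have genuinely different meanings.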
To summarize:
• formulas with free variables are unavoidable, at least in the usual first-order logic,
• the meaning of a formula with a free variable is a truth function,
• therefore in semantics we are forced to consider complete Boolean algebras $D^n \to \lbrace\bot, \top\rbrace$, which is where valuations come from,
• the universal closure of a formula is not equivalent to the original formula,
• it is a mistake to equate the meaning of a formula with the meaning of its universal closure, just as it is a mistake to equate a function with its codomain.
• Cool. Clear and simple answser! I wonder what the logicians have to say about this? – Uday Reddy May 6 '12 at 12:29
• I am one of "the logicians", it's written on my certificate of PhD. – Andrej Bauer May 6 '12 at 16:39
Simply because it's more natural to say "$x > 2$ is true when $x$ is $\pi$" (that is, on a valuation which sends $x$ to $\pi$) than "$x > 2$ is true when we substitute $\pi$ (the number itself, not the Greek letter) for $x$". Technically the approaches are equivalent.
I want to strengthen Alexey's answer, and claim that the reason is that the first definition suffers from technical difficulties, and not just that the second (standard) way is more natural.
Alexey's point is that the first approach, i.e.:
$M \models \forall x . \phi \iff$ for all $d \in M$: $M \models \phi[x\mapsto d]$
mixes syntax and semantics.
For example, let's take Alexey's example:
${(0,\infty)} \models x > 2$
Then in order to show that, one of the things we have to show is: $(0,\infty) \models \pi > 2$
The entity $\pi > 2$ is not a formula, unless our language includes the symbol $\pi$, which is interpreted in the model $M$ as the mathematical constant $\pi \approx 3.141\ldots$.
A more extreme case would be to show that $M\models\sqrt[15]{15,000,000} > 2$, and again, the right hand side is a valid formula only if our language contains a binary radical symbol $\sqrt{}$, that is interpreted as the radical, and number constants $15$ and $15,000,000$.
To ram the point home, consider what happens when the model we present has a more complicated structure. For example, instead of taking real numbers, take Dedekind cuts (a particular implementation of the real numbers).
Then the elements of your model are not just "numbers". They are pairs of sets of rational numbers $(A,B)$ that form a Dedekind cut.
Now, look at the object $(\{q \in \mathbb Q \mid q < 0 \vee q^2 < 5\}, \{q \in \mathbb Q \mid 0 \leq q \wedge q^2 > 5\}) > 2$, which is what we get when we "substitute" the Dedekind cut describing $\sqrt{5}$ into the formula $x > 2$. What is this object? It's not a formula --- it has sets, and pairs, and who knows what in it. It's potentially infinite.
So in order for this approach to work well, you need to extend your notion of "formula" to include such mixed entities of semantic and syntactic objects. Then you need to define operations such as substitutions on them. But now substitutions would no longer be syntactic functions: $[ x \mapsto t]: Terms \to Terms$. They would be operations on very very large collections of these generalised, semantically mixed terms.
It's possible you will be able to overcome these technicalities, but I guess you will have to work very hard.
The standard approach keeps the distinction between syntax and semantics. What we change is the valuation, a semantic entity, and keep formulae syntactic.
• The key point to the first approach is that given a model $M$ in a language $L$ you first expand to a language $L(M)$ in which there is a new constant symbol for every element in $M$. Then you can just substitute these constant symbols into formulas in the usual way. There are no actual technical difficulties. – Carl Mummert May 3 '12 at 21:45
https://www.physicsforums.com/threads/calculate-electric-field-at-origin-with-3-charges.314664/ | # Calculate electric field at origin, with 3 charges
1. May 17, 2009
1. The problem statement, all variables and given/known data
Three charges, $+2.5\ \mu$C, $-4.8\ \mu$C and $-6.3\ \mu$C
are located at (-0.20m, 0.15m), (0.50m, -0.35m) and (-0.42m, -0.32m) respectively. What is the electric field at the origin?
$q_1 = +2.5\ \mu$C
$q_2 = -4.8\ \mu$C
$q_3 = -6.3\ \mu$C
2. Relevant equations
$a^2 + b^2 = c^2$
$v_x = \text{magnitude} \times \cos(\theta)$
$v_y = \text{magnitude} \times \sin(\theta)$
$E = \frac{kq}{r^2}$
law of cosines
$k = 9\times10^9$
3. The attempt at a solution
first i found the hypotenuse for the three charges.
q1 = .25m
q2 = .6103m
q3 = .5280m
then i used the formula for magnitude of an electric field
where k is the constant, q was my three charges, and the radius were my three hypotenuses
my results were:
q1 = 360000
q2 = 115983
q3 = 203383
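(As a quick sanity check of the first of these, using the formula $E = kq/r^2$ from above: $E_1 = \frac{(9\times10^9)(2.5\times10^{-6})}{(0.25)^2} = 3.6\times10^5$ N/C, which matches.)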
i used the law of cosines to get $\theta$
my three angles were:
1 = 36.8
2 = 35
3 = 37.3
to find Ex, i multiplied the product of my magnitudes by the cosine of its respective angle:
my results:
1 = 288263
2 = 95007
3 = 161785
i added these up and got 545055, the book says 2.2x10^5!
i didnt bother doing y, since im completely lost!
2. May 17, 2009
### LowlyPion
Doesn't the direction of q3 carry a negative sign ... i.e. pointing toward the right from the origin?
Hence |E1| + |E2| - |E3| along x?
288 + 95 - 161 = 222
3. May 17, 2009
yeah q3 has a negative sign.
So is the way i did the problem correct?
4. May 17, 2009
I got another problem im trying to solve for y, but when i add everything up i get + 443958, not -4.1x10^5 like the book says.
P.S. how do i know which ones to add, and which ones to subtract?
5. May 17, 2009
### LowlyPion
Remember the E-Field is a vector field.
So you not only need to account for the sign of the charge, but you also must take into account where the point that you are taking the E-Field at is relative to the charge.
A positive charge has radial outward field. A negative charge is radial inward. So depending on which side you are and whether it is a + or - is what determines the sign of the |E|, not simply which quadrant it may lay in.
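Following that rule, here is a worked check of the $y$-component using the magnitudes and geometry already computed in this thread: the positive charge $q_1$ sits above the origin, so its field at the origin points downward (away from $q_1$), and the negative charges $q_2$ and $q_3$ sit below the origin, so their fields at the origin also point downward (toward the charges). Hence
$$E_y \approx -360000\left(\tfrac{0.15}{0.25}\right) - 115983\left(\tfrac{0.35}{0.6103}\right) - 203383\left(\tfrac{0.32}{0.5280}\right) \approx -4.1\times10^5,$$
which matches the book's answer.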
https://www.studypug.com/ca/phys/work-and-energy | # Work and Energy
### Work and Energy
#### Lessons
In this lesson, we will learn:
• Work done on an object is the change in an object's mechanical energy
• Work done by a force is the product of the force and the displacement of the object
Notes:
• Work is the transfer of energy from one place to another. Because work is a form of energy, it is a scalar and measured in joules (J).
• Work is done when a force moves an object over a displacement. It is equal to force times displacement, or $W = F_\parallel d$. It can be either positive or negative. Doing positive work on an object increases the object's mechanical energy. Positive work can increase an object's kinetic energy (by accelerating it), potential energy (by moving it to a greater height), or both. This can be expressed with the equation $W = F_\parallel d = \Delta E_{mech}$. Negative work on an object reduces its mechanical energy.
• The parallel sign ("$\parallel$") in the formula $W = F_\parallel d$ indicates that only a force that is parallel to the displacement of an object can do work. Remember, in order for a force to do positive work on an object (add energy), it has to add kinetic and/or potential energy to the object. Consider the following examples:
• A force pointed in the same direction as an object's displacement does positive work.
• A force that does not cause a displacement does not do work.
• A force pointed perpendicular to an object's displacement does no work.
• If a force is applied to an object and the resulting displacement is at a non-90° angle from the force (i.e. moving an object by pushing/pulling at an angle), only the component of the force that is pointed in the same direction as the displacement does work. The other component is perpendicular to the displacement and does no work.
• A force that points in the opposite direction of an object's displacement does negative work.
• A common force that does negative work is friction, since it is always pointed opposite the direction of motion.
Work
$W = F_\parallel d = \Delta E_{mech} = (E_{kf} + E_{pf}) - (E_{ki} + E_{pi})$
$W:$ work, in joules (J)
$d:$ displacement, in meters (m)
$F_\parallel:$ component of force parallel to $d$ in newtons (N)
$\Delta E_{mech}:$ change in mechanical energy
$(E_{kf} + E_{pf}):$ total final mechanical energy, in joules (J)
$(E_{ki} + E_{pi}):$ total initial mechanical energy, in joules (J)
Kinetic Energy
$E_k = \frac{1}{2}mv^2$
$E_k:$ kinetic energy, in joules (J)
$m:$ mass, in kilograms (kg)
$v:$ velocity, in meters per second (m/s)
Gravitational Potential Energy
$E_p = mgh$
$E_p:$ gravitational potential energy, in joules (J)
$g:$ acceleration due to gravity, in meters per second squared (m/s²)
$h:$ height, in meters (m)
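To make the relation $W = \Delta E_{mech}$ concrete, here is a small Python sketch. The mass, speeds, and height are made-up illustrative values, not taken from the exercises below:

```python
g = 9.81  # m/s^2

def mech_energy(m, v, h):
    """Total mechanical energy E_k + E_p = (1/2)mv^2 + mgh, in joules."""
    return 0.5 * m * v**2 + m * g * h

# Hypothetical example: a 2.0 kg ball goes from (v = 3.0 m/s, h = 0 m)
# to (v = 1.0 m/s, h = 0.35 m); the work done on it is the change in E_mech.
W = mech_energy(2.0, 1.0, 0.35) - mech_energy(2.0, 3.0, 0.0)
print(f"W = {W:+.2f} J")  # negative: the net force removed mechanical energy
```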
• 1.
$\bold{W = \Delta E_{mech} = F_\parallel d:}$ Calculating work
a)
A 1170 kg car travels at 11.0 m/s.
1. How much work needs to be done on the car to accelerate it to 24.0 m/s?
2. What is the net force acting on the car that accelerates it, if the acceleration is uniform and happens over 95.0 m?
b)
A 5.50 kg box slides across a floor at 12.0 m/s. Friction slows the box to 2.00 m/s after it has travelled 13.0 m. Find the work done on the box and the force of friction acting on the box.
c)
A 2.50 kg box is initially at rest at the top of a 30.0° slope and reaches a speed of 11.5 m/s when it slides 12.0 m down the slope. Find the force of friction acting on the box.
• 2.
$\bold{W = \Delta E_{mech} = F_\parallel d:}$ Calculating work with force applied at an angle
A force of 885 N pulls on a box at an angle of 28.0° above the horizontal. 115 N of friction acts on the box as it slides 19.0 m. How much work does the 885 N force do on the box?
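As a sketch of how the parallel-component rule above applies here (using the numbers from this problem statement), only $F\cos\theta$ does work over the displacement:

```python
import math

F, theta_deg, d = 885.0, 28.0, 19.0  # values from the problem statement
W = F * math.cos(math.radians(theta_deg)) * d  # only F_parallel does work
print(f"W = {W:.3g} J")  # ~1.48e4 J; friction separately does -115 N * 19.0 m
```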
https://www.physicsforums.com/threads/functional-analysis-question.305092/ | # Functional analysis question.
1. Apr 5, 2009
### math8
See the attachment.
2. Apr 5, 2009
### Dick
I really don't think there is much to show. How do you define an infinite sum like the sum of x_k*e_k? I would say it's the sequence whose i-th term is the sum of the i-th terms of all of the x_k*e_k. So for a given i there's only one sequence with a nonzero term. You definitely don't want to start trying to prove the partial sums converge in the l_infinity norm. They don't unless x converges to zero (in the real infinite sequence sense).
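In symbols (assuming the attachment asks whether every $x \in \ell_\infty$ can be written as $\sum_k x_k e_k$ with norm convergence), the point is that

$$\Big\| x - \sum_{k=1}^{n} x_k e_k \Big\|_\infty = \sup_{k>n}|x_k| \to 0 \quad\Longleftrightarrow\quad x_k \to 0,\ \text{i.e. } x \in c_0,$$

so the partial sums converge in the $l_\infty$ norm only for sequences vanishing at infinity, exactly as stated above.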
3. Apr 6, 2009
### maze
You can use LaTeX codes on the forum by using the [ tex ]LaTeX Code Goes Here[ /tex ] tags (without the spaces)
https://worldwidescience.org/topicpages/c/carlo+method+implemented.html | #### Sample records for carlo method implemented
1. A computationally efficient moment-preserving Monte Carlo electron transport method with implementation in Geant4
Energy Technology Data Exchange (ETDEWEB)
Dixon, D.A., E-mail: [email protected] [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: [email protected] [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: [email protected] [Sandia National Laboratories, Albuquerque, NM 87123 (United States)
2015-09-15
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
2. An Alternative Implementation of the Differential Operator (Taylor Series) Perturbation Method for Monte Carlo Criticality Problems
International Nuclear Information System (INIS)
The standard implementation of the differential operator (Taylor series) perturbation method for Monte Carlo criticality problems has previously been shown to have a wide range of applicability. In this method, the unperturbed fission distribution is used as a fixed source to estimate the change in the keff eigenvalue of a system due to a perturbation. A new method, based on the deterministic perturbation theory assumption that the flux distribution (rather than the fission source distribution) is unchanged after a perturbation, is proposed in this paper. Dubbed the F-A method, the new method is implemented within the framework of the standard differential operator method by making tallies only in perturbed fissionable regions and combining the standard differential operator estimate of their perturbations according to the deterministic first-order perturbation formula. The F-A method, developed to extend the range of applicability of the differential operator method rather than as a replacement, was more accurate than the standard implementation for positive and negative density perturbations in a thin shell at the exterior of a computational Godiva model. The F-A method was also more accurate than the standard implementation at estimating reactivity worth profiles of samples with a very small positive reactivity worth (compared to actual measurements) in the Zeus critical assembly, but it was less accurate for a sample with a small negative reactivity worth
3. Implementation of the probability table method in a continuous-energy Monte Carlo code system
Energy Technology Data Exchange (ETDEWEB)
Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States)
1998-10-01
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
4. Implementation of a Monte Carlo method to model photon conversion for solar cells
International Nuclear Information System (INIS)
A physical model describing different photon conversion mechanisms is presented in the context of photovoltaic applications. To solve the resulting system of equations, a Monte Carlo ray-tracing model is implemented, which takes into account the coupling of the photon transport phenomena to the non-linear rate equations describing luminescence. It also separates the generation of rays from the two very different sources of photons involved (the sun and the luminescence centers). The Monte Carlo simulator presented in this paper is proposed as a tool to help in the evaluation of candidate materials for up- and down-conversion. Some application examples are presented, exploring the range of values that the most relevant parameters describing the converter should have in order to give significant gain in photocurrent
5. Implementation of hybrid variance reduction methods in a multi group Monte Carlo code for deep shielding problems
Energy Technology Data Exchange (ETDEWEB)
Somasundaram, E.; Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97332-5902 (United States)
2013-07-01
In this paper, the work that has been done to implement variance reduction techniques in a three dimensional, multi group Monte Carlo code - Tortilla, that works within the framework of the commercial deterministic code - Attila, is presented. This project aims to develop an integrated Hybrid code that seamlessly takes advantage of the deterministic and Monte Carlo methods for deep shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross section library and source definitions. Tortilla can also read importance functions (like adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques that are implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods make use of the results from an adjoint deterministic calculation to bias the particle transport using techniques like source biasing, survival biasing, transport biasing and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)
6. Implementation of hybrid variance reduction methods in a multi group Monte Carlo code for deep shielding problems
International Nuclear Information System (INIS)
In this paper, the work that has been done to implement variance reduction techniques in a three dimensional, multi group Monte Carlo code - Tortilla, that works within the framework of the commercial deterministic code - Attila, is presented. This project aims to develop an integrated Hybrid code that seamlessly takes advantage of the deterministic and Monte Carlo methods for deep shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross section library and source definitions. Tortilla can also read importance functions (like adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques that are implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods make use of the results from an adjoint deterministic calculation to bias the particle transport using techniques like source biasing, survival biasing, transport biasing and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)
7. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
Science.gov (United States)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or output instantaneous particle data obtained by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
8. Exploring Monte Carlo methods
CERN Document Server
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" ...
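As a taste of the book's generic treatment, here is a hedged sketch of the Buffon's needle experiment it alludes to, the classic Monte Carlo estimate of π (the needle length, line spacing, and sample count are illustrative choices):

```python
import random, math

def buffon_pi(n_throws, needle=1.0, spacing=1.0):
    """Estimate pi by Buffon's needle (needle length <= line spacing):
    P(needle crosses a line) = 2*L / (pi*t)."""
    hits = 0
    for _ in range(n_throws):
        x = random.uniform(0.0, spacing / 2.0)      # center to nearest line
        theta = random.uniform(0.0, math.pi / 2.0)  # needle orientation
        if x <= (needle / 2.0) * math.sin(theta):
            hits += 1
    return 2.0 * needle * n_throws / (spacing * hits)

print(buffon_pi(1_000_000))  # ~3.14, with roughly 0.1% statistical scatter
```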
9. MontePython: Implementing Quantum Monte Carlo using Python
OpenAIRE
J.K. Nilsen
2006-01-01
We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which QMC applies, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and we describe how to implement these methods in pure C++ and C++/Python. Furthermore we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.
10. Clinical implementation of a GPU-based simplified Monte Carlo method for a treatment planning system of proton beam therapy
International Nuclear Information System (INIS)
We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the computer-unified device architecture platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to the computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30–16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged 9–67 s. The results demonstrate the successful application of the GPU-based SMC to a clinical proton treatment planning. (note)
11. Shell model Monte Carlo methods
International Nuclear Information System (INIS)
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of γ-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
12. Qualitative Simulation of Photon Transport in Free Space Based on Monte Carlo Method and Its Parallel Implementation
Directory of Open Access Journals (Sweden)
Jimin Liang
2010-01-01
During the past decade, the Monte Carlo method has obtained wide application in optical imaging to simulate the photon transport process inside tissues. However, this method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of lens systems is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. Also, the focusing effect of the camera lens is considered to establish the relationship of corresponding points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.
13. Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
The Monte Carlo integration method is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show the randomness behavior of the various generation methods. To account for the weight function involved in Monte Carlo calculations, the Metropolis method is used. The results of the experiment show no regular pattern in the numbers generated, indicating that the program generators are reasonably good, while the experimental results show a statistical distribution obeying the expected statistical law. Further, some applications of the Monte Carlo method in physics are given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the Monte Carlo calculations can be compared. The comparisons show good agreement for the models considered.
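A minimal sketch of the Metropolis step mentioned in this record, sampling the weight function exp(-x²/2) with a symmetric random-walk proposal (the step size and sample count are illustrative choices):

```python
import random, math

def metropolis(logp, x0, step, n):
    """Random-walk Metropolis sampling of an unnormalized log-density."""
    x, samples = x0, []
    for _ in range(n):
        y = x + random.uniform(-step, step)  # symmetric proposal
        if random.random() < math.exp(min(0.0, logp(y) - logp(x))):
            x = y                            # accept the move
        samples.append(x)
    return samples

# Sample the weight function exp(-x^2/2) and check that <x^2> is ~1.
xs = metropolis(lambda x: -0.5 * x * x, 0.0, 2.0, 200_000)
print(sum(x * x for x in xs) / len(xs))
```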
14. Criticality calculations on pebble-bed HTR-PROTEUS configuration as a validation for the pseudo-scattering tracking method implemented in the MORET 5 Monte Carlo code
International Nuclear Information System (INIS)
The MORET code is a three dimensional Monte Carlo criticality code. It is designed to calculate the effective multiplication factor (keff) of any geometrical configuration as well as the reaction rates in the various volumes and the neutron leakage out of the system. A recent development for the MORET code consists of the implementation of an alternate neutron tracking method, known as the pseudo-scattering tracking method. This method has been successfully implemented in the MORET code and its performances have been tested by means of an extensive parametric study on very simple geometrical configurations. In this context, the goal of the present work is to validate the pseudo-scattering method against realistic configurations. In this perspective, pebble-bed cores are particularly well-adapted cases to model, as they exhibit a large number of volumes stochastically arranged on two different levels (the pebbles in the core and the TRISO particles inside each pebble). This paper will introduce the techniques and methods used to model pebble-bed cores in a realistic way. The results of the criticality calculations, as well as the pseudo-scattering tracking method performance in terms of computation time, will also be presented. (authors)
15. Criticality calculations on realistic modelling of pebble-bed HTR-PROTEUS as a validation for the woodcock tracking method implemented in the MORET 5 Monte Carlo code
International Nuclear Information System (INIS)
The MORET code is a three dimensional Monte Carlo criticality code. It is designed to calculate the effective multiplication factor (keff) of any geometrical configuration as well as the reaction rates in the various volumes and the neutron leakage out of the system. A recent development for the MORET code consists of the implementation of an alternate neutron tracking method known as the pseudo-scattering tracking method. This method has been successfully implemented in the MORET code and its performances have been tested by means of an extensive parametric study on very simple geometrical configurations. In this context, the goal of the present work is to validate the pseudo-scattering method against realistic configurations. In this perspective, pebble-bed cores are particularly well-adapted cases to model as they exhibit a large number of volumes stochastically arranged on two different levels (the pebbles in the core and the TRISO particles inside each pebble). This paper will introduce the techniques and methods used to model pebble-bed cores in a realistic way. The results of the criticality calculations, as well as the pseudo-scattering tracking method performance in terms of computation time will be presented. (authors)
16. Efficient implementation of the Monte Carlo method for lattice gauge theory calculations on the floating point systems FPS-164
International Nuclear Information System (INIS)
The computer program calculates the average action per plaquette for SU(6)/Z6 lattice gauge theory. By considering quantum field theory on a space-time lattice, the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure. Making use of the FPS Mathematics Library (MATHLIB), we are able to generate an efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600. (orig.)
17. Extending canonical Monte Carlo methods
Science.gov (United States)
Velazquez, L.; Curilef, S.
2010-02-01
In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions, which successfully reduce the exponential divergence of the decorrelation time τ with increase of the system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
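The canonical fluctuation relation extended by this work, C = β²⟨δE²⟩, is easy to check by direct sampling. A toy sketch on a two-level system, where the exact (Schottky) heat capacity is known in closed form; all parameters are illustrative:

```python
import random, math

beta, eps, n = 1.0, 1.0, 500_000  # inverse temperature, level gap, samples
p1 = math.exp(-beta * eps) / (1.0 + math.exp(-beta * eps))  # P(E = eps)

# Sample energies from the canonical distribution of a two-level system.
E = [eps if random.random() < p1 else 0.0 for _ in range(n)]
mean = sum(E) / n
var = sum((e - mean) ** 2 for e in E) / n

print("C from fluctuations:", beta**2 * var)  # C = beta^2 <dE^2>
print("C exact (Schottky): ", (beta * eps)**2 * p1 * (1 - p1))
```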
18. Monte Carlo methods for applied scientists
CERN Document Server
Dimov, Ivan T
2007-01-01
The Monte Carlo method is inherently parallel and the extensive and rapid development in parallel computers, computational clusters and grids has resulted in renewed and increasing interest in this method. At the same time there has been an expansion in the application areas and the method is now widely used in many important areas of science including nuclear and semiconductor physics, statistical mechanics and heat and mass transfer. This book attempts to bridge the gap between theory and practice concentrating on modern algorithmic implementation on parallel architecture machines. Although
19. IMPLEMENTATION METHOD
Directory of Open Access Journals (Sweden)
Cătălin LUPU
2009-06-01
This article presents applications of the "divide et impera" method using object-oriented programming in C#. The main advantage of using "divide et impera" is that it reduces the complexity of the problem by decomposing it into simpler sub-problems and by splitting the data into smaller groups (e.g., the QuickSort sub-algorithm). Object-oriented programming means programs with new types that integrate both data and the methods associated with the creation, processing and destruction of such data. Advantages are gained through programming by abstraction: the program is no longer a succession of processing steps, but a set of objects that come to life, have different properties, are capable of specific actions and interact in the program. New techniques of instantiation, derivation and polymorphism of object types are also discussed.
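As a language-neutral sketch of the divide et impera pattern behind the QuickSort example cited above (Python here rather than the article's C#):

```python
# A minimal divide-et-impera sketch: QuickSort splits the data into smaller
# groups around a pivot and recurses on the simpler sub-problems.
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```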
20. TH-A-19A-04: Latent Uncertainties and Performance of a GPU-Implemented Pre-Calculated Track Monte Carlo Method
International Nuclear Information System (INIS)
Purpose: Assessing the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPU). While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implemented on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes in the same conditions. A latent uncertainty analysis was performed by comparing GPU-PMC dose values to a “ground truth” benchmark while varying the track bank size and primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg Peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE) and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed a 937× and 508× gain over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve an optimal efficiency can be tuned based on the desired uncertainty. Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy
1. Simulations with the Hybrid Monte Carlo algorithm: implementation and data analysis
CERN Document Server
Schaefer, Stefan
2011-01-01
This tutorial gives a practical introduction to the Hybrid Monte Carlo algorithm and the analysis of Monte Carlo data. The method is exemplified with the φ⁴ theory, for which all steps from the derivation of the relevant formulae to the actual implementation in a computer program are discussed in detail. It concludes with the analysis of Monte Carlo data, in particular their auto-correlations.
2. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
3. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
4. Monte Carlo methods for particle transport
CERN Document Server
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text:
* Introduces the particle importance equation and its use for variance reduction
* Describes general and particle-transport-specific variance reduction techniques
* Presents particle transport eigenvalue issues and methodologies to address these issues
* Explores advanced formulations based on the author's research activities
* Discusses parallel processing concepts and factors affecting parallel performance
Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
5. Use of Monte Carlo Methods in brachytherapy
International Nuclear Information System (INIS)
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, which makes small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, calculating shielding barriers, or obtaining dose distributions around applicators. (Author)
6. Experience with the Monte Carlo Method
International Nuclear Information System (INIS)
Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed
7. A Multivariate Time Series Method for Monte Carlo Reactor Analysis
International Nuclear Information System (INIS)
A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed Coarse Mesh Projection Method (CMPM) and can be implemented using the coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three-dimensional modeling of the initial core of a pressurized water reactor
8. Extending canonical Monte Carlo methods: II
Science.gov (United States)
Velazquez, L.; Curilef, S.
2010-04-01
We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. Now, we improve this methodology by including the finite size effects that reduce the precision of a direct determination of the microcanonical caloric curve β(E) = ∂S(E)/∂E, as well as by carrying out a better implementation of the MC schemes. We show that, despite the modifications considered, the extended canonical MC methods lead to an impressive overcoming of the so-called supercritical slowing down observed close to the region of the temperature-driven first-order phase transition. In this case, the size dependence of the decorrelation time τ is reduced from an exponential growth to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model where the exponent α = 0.14-0.18.
9. Clinical implementation of full Monte Carlo dose calculation in proton beam therapy
Energy Technology Data Exchange (ETDEWEB)
Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States)
2008-09-07
The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, both due to dose degradation and overall differences in range prediction due to bony anatomy in the beam path. Further, the Monte Carlo reports dose-to-tissue as compared to dose-to-water by the planning system. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical
10. Guideline for radiation transport simulation with the Monte Carlo method
International Nuclear Information System (INIS)
Today, photon and neutron transport calculations with the Monte Carlo method have progressed with advanced Monte Carlo codes and high-speed computers. "Monte Carlo simulation" is a more suitable expression than "calculation". As Monte Carlo codes become friendlier and computer performance progresses, most shielding problems will be solved using Monte Carlo codes and high-speed computers. Since those codes prepare standard input data for some problems, the essential techniques for solving problems with the Monte Carlo method and the variance reduction techniques of Monte Carlo calculation might lose the interest of general Monte Carlo users. In this paper, essential techniques of the Monte Carlo method and variance reduction techniques, such as the importance sampling method, selection of estimator, and biasing techniques, are described to afford a better understanding of the Monte Carlo method and Monte Carlo codes. (author)
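Of the variance reduction techniques listed, importance sampling is the easiest to demonstrate in miniature. A sketch estimating the rare tail probability P(Z > 4) of a standard normal by sampling a shifted exponential and reweighting (the proposal and sample size are illustrative choices):

```python
import random, math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Estimate P(Z > 4) ~ 3.17e-5. Naive sampling sees ~3 hits per 100k draws;
# instead draw from the shifted exponential q(x) = exp(-(x - 4)) on [4, inf)
# and reweight each sample by p(x)/q(x).
n, total = 100_000, 0.0
for _ in range(n):
    x = 4.0 + random.expovariate(1.0)       # sample from q
    total += phi(x) / math.exp(-(x - 4.0))  # importance weight p(x)/q(x)
print(total / n)  # ~3.17e-5, with far smaller variance than naive sampling
```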
11. On the Convergence of Adaptive Sequential Monte Carlo Methods
OpenAIRE
Beskos, Alexandros; Jasra, Ajay; Kantas, Nikolas; Thiery, Alexandre
2013-01-01
In several implementations of Sequential Monte Carlo (SMC) methods it is natural, and important in terms of algorithmic efficiency, to exploit the information of the history of the samples to optimally tune their subsequent propagations. In this article we provide a carefully formulated asymptotic theory for a class of such adaptive SMC methods. The theoretical framework developed here will cover, under assumptions, several commonly used SMC algorithms. There are only limited results a...
12. Monte Carlo methods beyond detailed balance
NARCIS (Netherlands)
Schram, Raoul D.; Barkema, Gerard T.
2015-01-01
Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying detailed balance.
13. Extending canonical Monte Carlo methods: II
International Nuclear Information System (INIS)
We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. The extended canonical MC methods overcome the so-called supercritical slowing down, reducing the size dependence of the decorrelation time τ from an exponential growth to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model where the exponent α = 0.14–0.18
14. Introduction to the Monte Carlo methods
International Nuclear Information System (INIS)
Codes illustrating the use of Monte Carlo methods in high energy physics such as the inverse transformation method, the rejection method, the particle propagation through the nucleus, the particle interaction with the nucleus, etc. are presented. A set of useful algorithms of random number generators is given (the binomial distribution, the Poisson distribution, β-distribution, γ-distribution and normal distribution). 5 figs., 1 tab
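A short sketch of the first two techniques named, the inverse transformation method and the rejection method, with illustrative target densities:

```python
import random, math

# Inverse transformation: if U ~ Uniform(0,1) then -ln(1-U)/lam ~ Exp(lam),
# because the exponential CDF inverts in closed form.
def exp_inverse(lam):
    return -math.log(1.0 - random.random()) / lam

# Rejection method: sample f(x) = 2x on [0,1] under the flat envelope
# M*g(x) = 2, accepting a uniform draw x with probability f(x)/(M*g(x)) = x.
def linear_rejection():
    while True:
        x = random.random()
        if random.random() < x:
            return x

n = 100_000
print(sum(exp_inverse(2.0) for _ in range(n)) / n)    # ~0.5   (= 1/lam)
print(sum(linear_rejection() for _ in range(n)) / n)  # ~0.667 (= E[X] = 2/3)
```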
15. The Monte Carlo method the method of statistical trials
CERN Document Server
Shreider, YuA
1966-01-01
The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio
16. A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX
International Nuclear Information System (INIS)
In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. Aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized and a description of what a user should provide in order to use it. (orig.)
17. A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX
Energy Technology Data Exchange (ETDEWEB)
Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Nason, Paolo [INFN, Milano-Bicocca (Italy); Oleari, Carlo [INFN, Milano-Bicocca (Italy); Milano-Bicocca Univ. (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology
2010-02-15
In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. Aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized and a description of what a user should provide in order to use it. (orig.)
18. The Moment Guided Monte Carlo Method
OpenAIRE
Degond, Pierre; Dimarco, Giacomo; Pareschi, Lorenzo
2009-01-01
In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which permits to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non equilibrium term. The basic idea, on which the method relies, consists in guiding the p...
19. New Dynamic Monte Carlo Renormalization Group Method
OpenAIRE
Lacasse, Martin-D.; Vinals, Jorge; Grant, Martin
1992-01-01
The dynamical critical exponent of the two-dimensional spin-flip Ising model is evaluated by a Monte Carlo renormalization group method involving a transformation in time. The results agree very well with a finite-size scaling analysis performed on the same data. The value of $z = 2.13 \\pm 0.01$ is obtained, which is consistent with most recent estimates.
20. Monte Carlo methods for preference learning
DEFF Research Database (Denmark)
Viappiani, P.
2012-01-01
Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query the users about their preferences and give recommendations based on the system’s belief about the utility function. Critical to these applications is the acquisition of a prior distribution about the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.
1. Fast sequential Monte Carlo methods for counting and optimization
CERN Document Server
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
2. by means of FLUKA Monte Carlo method
Directory of Open Access Journals (Sweden)
Ermis Elif Ebru
2015-01-01
Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close agreement with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of the detector materials.
3. The Moment Guided Monte Carlo Method
CERN Document Server
Degond, Pierre; Pareschi, Lorenzo
2009-01-01
In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which permits to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non equilibrium term. The basic idea, on which the method relies, consists in guiding the particle positions and velocities through moment equations so that the concurrent solution of the moment and kinetic models furnishes the same macroscopic quantities.
4. Reactor perturbation calculations by Monte Carlo methods
International Nuclear Information System (INIS)
Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF.9 digital computer. (author)
5. Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems
Science.gov (United States)
Slattery, Stuart R.
This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It
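The Monte Carlo primitive underlying MCSA estimates the solution of a fixed-point system x = Hx + b with random walks that sample the Neumann series Σ_k H^k b. A deliberately simplified sketch: the matrix, source, kill probability, and uniform transition kernel are all made up here, and real MCSA additionally preconditions the system and uses such estimates to accelerate an outer fixed-point iteration:

```python
import random

# Solve x = H x + b by random walks sampling the Neumann series sum_k H^k b.
# Small hypothetical system; needs the spectral radius of H well below 1.
H = [[0.1, 0.2, 0.0],
     [0.0, 0.1, 0.3],
     [0.2, 0.0, 0.1]]
b = [1.0, 2.0, 3.0]

def walk(i, p_kill=0.3):
    """One random-walk estimate of x_i (unbiased for this H and b)."""
    est, w, state = b[i], 1.0, i
    while random.random() > p_kill:                 # survive with prob 0.7
        j = random.randrange(len(b))                # uniform transition kernel
        w *= H[state][j] / ((1 - p_kill) / len(b))  # importance weight
        state = j
        est += w * b[state]                         # adds the (H^k b)_i term
    return est

n = 200_000
print(sum(walk(0) for _ in range(n)) / n)  # compare with solving (I - H)x = b
```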
6. Monte Carlo method in radiation transport problems
International Nuclear Information System (INIS)
In neutral radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the density of particles. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented. The necessity of biasing the play is proved, and a biased simulation is carried out. Finally, current developments (the rewriting of programs, for instance) are presented, motivated by several factors, two of which are the advent of vector computing and photon and neutron transport in media containing voids.
7. Introduction to Monte-Carlo method
International Nuclear Information System (INIS)
We first recall some well-known facts about random variables and sampling. Then we define the Monte-Carlo method in the case where one wants to compute a given integral. Afterwards, we move to discrete Markov chains, for which we define random walks and apply them to finite-difference approximations of diffusion equations. Finally we consider Markov chains with continuous state (but discrete time), their transition probabilities and random walks, which form the main part of this work. The applications are: diffusion and advection equations, and the linear transport equation with scattering
8. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code
Science.gov (United States)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
9. A new method for commissioning Monte Carlo treatment planning systems
Science.gov (United States)
Aljarrah, Khaled Mohammed
2005-11-01
The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation for radiation treatment of cancer. However, the modeling of an individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x-rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to the patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial-and-error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions for a grid of assumed beam energies and radii in a water phantom with measured data. Different cost functions were studied to choose the appropriate function for the data comparison. The beam parameters were determined using this method. Under the assumption that linacs of the same type share exactly the same geometry and differ only in their initial phase space parameters, the results of this method were used as source data to commission other machines of the same type.
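The grid-search commissioning procedure described here reduces, in sketch form, to evaluating a cost function over candidate (energy, radius) pairs; everything below (the dose data layout, the least-squares cost, the names) is an illustrative stand-in, not the study's actual implementation.

```python
def commission_beam(energies, radii, computed_dose, measured, cost=None):
    """Pick the (energy, radius) pair whose precomputed 3D dose grid best
    matches the measured data under a least-squares cost function."""
    if cost is None:
        cost = lambda calc, meas: sum((c - m) ** 2 for c, m in zip(calc, meas))
    best, best_cost = None, float("inf")
    for e in energies:
        for r in radii:
            # computed_dose maps (energy, radius) -> dose grid flattened to a list
            c = cost(computed_dose[(e, r)], measured)
            if c < best_cost:
                best, best_cost = (e, r), c
    return best, best_cost
```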
10. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm
KAUST Repository
Hoel, Hakon
2014-01-01
We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDE). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single level Euler-Maruyama Monte Carlo method from O(TOL^-3) to O(TOL^-2 log(TOL^-1)^2) for a mean square error of O(TOL^2). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and it also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single level method is O(TOL^-4). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL^-3) for the adaptive single level algorithm to essentially O(TOL^-2 log(TOL^-1)^2) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.
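For orientation, the telescoping MLMC estimator with uniform (non-adaptive) Euler-Maruyama time stepping, i.e. the baseline of [11] rather than the adaptive refinement studied in this record, can be sketched as follows. Geometric Brownian motion and the payoff P(X_T) = X_T are illustrative assumptions; the key point is that fine and coarse paths on each level share the same Brownian increments.

```python
import math, random

def coupled_payoff(level, rng, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Return P_fine - P_coarse for one coupled pair of Euler-Maruyama paths of
    geometric Brownian motion; at level 0 only the fine path exists."""
    m = 2 ** level
    dt = T / m
    xf = xc = x0
    dw_sum = 0.0
    for k in range(m):
        dw = rng.gauss(0.0, math.sqrt(dt))
        xf += mu * xf * dt + sigma * xf * dw
        if level > 0:
            dw_sum += dw
            if k % 2 == 1:               # every two fine steps = one coarse step
                xc += mu * xc * (2 * dt) + sigma * xc * dw_sum
                dw_sum = 0.0
    return xf - (xc if level > 0 else 0.0)

def mlmc_estimate(max_level, samples, seed=0):
    """Telescoping MLMC sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(max_level + 1):
        n = samples[l]
        est += sum(coupled_payoff(l, rng) for _ in range(n)) / n
    return est

# Many samples on cheap coarse levels, few on the expensive fine levels.
value = mlmc_estimate(4, [4000, 2000, 1000, 500, 250])
```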
11. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
CERN Document Server
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
12. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code
Science.gov (United States)
He, Tongming Tony
In IMRT inverse planning, inaccurate dose calculations and limitations in optimization algorithms introduce both systematic and convergence errors to treatment plans. The goal of this work is to practically implement a Monte Carlo based inverse planning model for clinical IMRT. The intention is to minimize both types of error in inverse planning and obtain treatment plans with better clinical accuracy than non-Monte Carlo based systems. The strategy is to calculate the dose matrices of small beamlets by using a Monte Carlo based method. Optimization of beamlet intensities then follows, based on the calculated dose data, using an optimization algorithm capable of escaping local minima and preventing premature convergence. The MCNP 4B Monte Carlo code is improved to perform fast particle transport and dose tallying in lattice cells by adopting a selective transport and tallying algorithm. Efficient dose matrix calculation for small beamlets is made possible by adopting a scheme that allows concurrent calculation of multiple beamlets of a single port. A finite-sized point source (FSPS) beam model is introduced for easy and accurate beam modeling. A DVH based objective function and a parallel platform based algorithm are developed for the optimization of intensities. The calculation accuracy of the improved MCNP code and the FSPS beam model is validated by dose measurements in phantoms. Agreements better than 1.5% or 0.2 cm have been achieved. Applications of the implemented model to clinical cases of brain, head/neck, lung, spine, pancreas and prostate have demonstrated the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT. Dose distributions of selected treatment plans from a commercial non-Monte Carlo based system are evaluated in comparison with Monte Carlo based calculations. Systematic errors of up to 12% in tumor doses and up to 17% in critical structure doses have been observed.
13. Method of tallying adjoint fluence and calculating kinetics parameters in Monte Carlo codes
International Nuclear Information System (INIS)
A method of using the iterated fission probability to estimate the adjoint fluence during particle simulation, and of using it as the weighting function to calculate the kinetics parameters βeff and Λ in Monte Carlo codes, is introduced in this paper. Implementations of this method in the continuous energy Monte Carlo code MCNP and the multi-group Monte Carlo code MCMG are both elaborated. Verification results show that, with negligible additional computing cost, the adjoint fluence tallied by MCMG using this method matches well with the result computed by ANISN, and the kinetics parameters calculated by MCNP agree very well with benchmarks. This method is proved to be reliable, and the function of calculating kinetics parameters in Monte Carlo codes is carried out effectively, which could be the foundation for the use of Monte Carlo codes in the analysis of the transient behavior of nuclear reactors. (authors)
14. Accelerated Monte Carlo Methods for Coulomb Collisions
Science.gov (United States)
Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce
2014-03-01
We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution to collisional plasma problems from O(ε^-3) - for the standard state-of-the-art Langevin and binary collision algorithms - to a theoretically optimal O(ε^-2) scaling, when used in conjunction with an underlying Milstein discretization of the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
15. Monte Carlo method with complex-valued weights for frequency domain analyses of neutron noise
International Nuclear Information System (INIS)
Highlights: • The transport equation of the neutron noise is solved with the Monte Carlo method. • A new Monte Carlo algorithm where complex-valued weights are treated is developed. • The Monte Carlo algorithm is verified by comparing with analytical solutions. • The results with the Monte Carlo method are compared with the diffusion theory. - Abstract: A Monte Carlo algorithm to solve the transport equation of the neutron noise in the frequency domain has been developed to extend the conventional diffusion theory of the neutron noise to the transport theory. In this paper, the neutron noise is defined as the stationary fluctuation of the neutron flux around its mean value, and is induced by perturbations of the macroscopic cross sections. Since the transport equation of the neutron noise is a complex equation, a Monte Carlo technique for treating complex-valued weights that was recently proposed for neutron leakage-corrected calculations has been introduced to solve the complex equation. To cancel the positive and negative values of complex-valued weights, an algorithm that is similar to the power iteration method has been implemented. The newly-developed Monte Carlo algorithm is benchmarked to analytical solutions in an infinite homogeneous medium. The neutron noise spatial distributions have been obtained both with the newly-developed Monte Carlo method and the conventional diffusion method for an infinitely-long homogeneous cylinder. The results with the Monte Carlo method agree well with those of the diffusion method. However, near the noise source induced by a high frequency perturbation, significant differences are found between the diffusion method and the Monte Carlo method. The newly-developed Monte Carlo algorithm is expected to contribute to the improvement of the calculation accuracy of the neutron noise
16. Improved criticality convergence via a modified Monte Carlo iteration method
Energy Technology Data Exchange (ETDEWEB)
Booth, Thomas E [Los Alamos National Laboratory; Gubernatis, James E [Los Alamos National Laboratory
2009-01-01
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k_3|/k_1 instead of |k_2|/k_1. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc manner.
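For reference, the unmodified power iteration that this record builds on looks as follows. This is a generic deterministic sketch, not code from the paper; it converges at the |k_2|/k_1 rate that the modified method improves upon.

```python
import math

def power_iteration(apply_A, v0, tol=1e-10, max_iter=1000):
    """Plain power iteration: repeatedly apply the operator and renormalize.
    Converges to the dominant eigenpair at a rate governed by |k_2|/k_1."""
    v = list(v0)
    k_old = 0.0
    for _ in range(max_iter):
        w = apply_A(v)
        k = math.sqrt(sum(x * x for x in w))   # eigenvalue magnitude estimate
        v = [x / k for x in w]
        if abs(k - k_old) < tol * abs(k):
            break
        k_old = k
    return k, v

# Example: [[2, 1], [1, 2]] has dominant eigenvalue 3 with eigenvector (1, 1).
A = [[2.0, 1.0], [1.0, 2.0]]
matvec = lambda v: [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
k, v = power_iteration(matvec, [1.0, 0.0])
```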
17. Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia
Energy Technology Data Exchange (ETDEWEB)
Granero Cabanero, D.
2015-07-01
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, where small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, in shielding calculations, or in obtaining dose distributions around applicators. (Author)
18. Advanced computational methods for nodal diffusion, Monte Carlo, and S(sub N) problems
Science.gov (United States)
Martin, W. R.
1993-01-01
This document describes progress on five efforts for improving effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN Problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.
19. Rare event simulation using Monte Carlo methods
CERN Document Server
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
20. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code
International Nuclear Information System (INIS)
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results
1. Combinatorial nuclear level density by a Monte Carlo method
OpenAIRE
Cerf, N.
1993-01-01
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning t...
2. Neutron transport calculations using Quasi-Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Moskowitz, B.S.
1997-07-01
This paper examines the use of quasirandom sequences of points in place of pseudorandom points in Monte Carlo neutron transport calculations. For two simple demonstration problems, the root mean square error, computed over a set of repeated runs, is found to be significantly less when quasirandom sequences are used ("Quasi-Monte Carlo Method") than when a standard Monte Carlo calculation is performed using only pseudorandom points.
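The difference between pseudorandom and quasirandom points is easy to demonstrate on a one-dimensional integral. The sketch below is illustrative, not from the paper; it uses the base-2 van der Corput sequence as the low-discrepancy point set.

```python
import random

def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput low-discrepancy sequence."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def estimate(points, f):
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x            # exact integral over [0, 1] is 1/3
N = 4096
quasi = [van_der_corput(i + 1) for i in range(N)]
pseudo = [random.random() for _ in range(N)]
err_quasi = abs(estimate(quasi, f) - 1.0 / 3.0)
err_pseudo = abs(estimate(pseudo, f) - 1.0 / 3.0)
# The quasirandom error typically decays ~ log(N)/N versus ~ 1/sqrt(N)
# for pseudorandom points, hence the smaller RMS error reported above.
```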
3. Monte Carlo method for solving a parabolic problem
Directory of Open Access Journals (Sweden)
Tian Yi
2016-01-01
In this paper, we present a numerical method based on random sampling for a parabolic problem. This method combines the Crank-Nicolson method and the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method and obtain a large sparse system of linear algebraic equations, then use the Monte Carlo method to solve the linear algebraic equations. To illustrate the usefulness of this technique, we apply it to some test problems.
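The Monte Carlo solve of the linear system typically relies on random-walk sampling of the Neumann series. The following is a minimal sketch under the assumption that the system has been rearranged as x = Hx + b with the spectral radius of H below one; the function name, the uniform transition kernel, and the toy system are all illustrative.

```python
import random

def mc_solve_component(H, b, i, n_walks=20_000, p_stop=0.3, seed=0):
    """Estimate x_i for x = H x + b via the Neumann series x = sum_k H^k b,
    sampled by random walks (unbiased when the series converges)."""
    rng = random.Random(seed)
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, b[i]
        while rng.random() > p_stop:               # survive with prob 1 - p_stop
            nxt = rng.randrange(n)                 # uniform transition kernel
            weight *= H[state][nxt] * n / (1.0 - p_stop)
            state = nxt
            score += weight * b[state]             # tally the k-th series term
        total += score
    return total / n_walks

# Example: H = [[0.1, 0.2], [0.3, 0.1]], b = [1, 2];
# direct algebra gives x ~ [1.733, 2.8].
H = [[0.1, 0.2], [0.3, 0.1]]
b = [1.0, 2.0]
x0 = mc_solve_component(H, b, 0)
```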
4. On the feasibility of a homogenised multi-group Monte Carlo method in reactor analysis
International Nuclear Information System (INIS)
The use of homogenised multi-group cross sections to speed up Monte Carlo calculations has been studied to some extent, but the method is not widely implemented in modern calculation codes. This paper presents a calculation scheme in which homogenised material parameters are generated using the PSG continuous-energy Monte Carlo reactor physics code and used by MORA, a new full-core Monte Carlo code entirely based on homogenisation. The theory of homogenisation and its implementation in the Monte Carlo method are briefly introduced. The PSG-MORA calculation scheme is put into practice in two fundamentally different test cases: a small sodium-cooled fast reactor (JOYO) and a large PWR core. It is shown that the homogenisation results in a dramatic increase in efficiency. The results are in reasonably good agreement with reference PSG and MCNP5 calculations, although fission source convergence becomes a problem in the PWR test case. (authors)
5. Quantum Monte Carlo methods algorithms for lattice models
CERN Document Server
Gubernatis, James; Werner, Philipp
2016-01-01
Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...
6. Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules
CERN Document Server
Lester, William A; Reynolds, PJ
1994-01-01
This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential.Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n
7. Inference in Kingman's Coalescent with Particle Markov Chain Monte Carlo Method
OpenAIRE
Chen, Yifei; Xie, Xiaohui
2013-01-01
We propose a new algorithm to do posterior sampling of Kingman's coalescent, based upon the Particle Markov Chain Monte Carlo methodology. Specifically, the algorithm is an instantiation of the Particle Gibbs Sampling method, which alternately samples coalescent times conditioned on coalescent tree structures, and tree structures conditioned on coalescent times via the conditional Sequential Monte Carlo procedure. We implement our algorithm as a C++ package, and demonstrate its utility via a ...
8. On the Markov Chain Monte Carlo (MCMC) method
Rajeeva L Karandikar
2006-04-01
Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
9. A Particle Population Control Method for Dynamic Monte Carlo
Science.gov (United States)
Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony
2014-06-01
A general particle population control method has been derived from splitting and Russian Roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK) and examples of its use are shown for both super-critical and sub-critical systems.
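Splitting and Russian roulette, from which the general method in this record is derived, can be combined into a simple weight-control routine. This sketch is illustrative and is not the MCATK implementation; it drives all surviving particle weights toward a target weight while preserving total weight in expectation.

```python
import random

def population_control(weights, w_target, rng=None):
    """Split heavy particles and Russian-roulette light ones so surviving
    weights cluster near w_target; total weight is preserved in expectation."""
    rng = rng or random.Random(0)
    out = []
    for w in weights:
        if w > w_target:
            n, frac = int(w / w_target), w / w_target - int(w / w_target)
            out.extend([w_target] * n)         # deterministic splits
            if rng.random() < frac:            # probabilistic extra copy
                out.append(w_target)
        else:
            if rng.random() < w / w_target:    # survive roulette, boosted weight
                out.append(w_target)
    return out

# A light particle usually dies; a heavy one becomes several unit-weight copies.
survivors = population_control([0.05, 0.4, 3.7], w_target=1.0)
```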
10. Problems in radiation shielding calculations with Monte Carlo methods
International Nuclear Information System (INIS)
The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving a large shielding system including radiation streaming. The Monte Carlo coupling technique was developed to treat such a shielding problem accurately. However, the variance of the Monte Carlo results obtained with the coupling technique for detectors located outside the radiation streaming was still not small enough. To obtain more accurate results for detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable 'Prism Scattering technique' is proposed in this study. (author)
11. Monte Carlo methods and applications in nuclear physics
International Nuclear Information System (INIS)
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs
12. Implementing Newton's Method
OpenAIRE
Neuerburg, Kent M.
2007-01-01
Newton's Method, the recursive algorithm for computing the roots of an equation, is one of the most efficient and best known numerical techniques. The basics of the method are taught in any first-year calculus course. However, in most cases the two most important questions are often left unanswered. These questions are, "Where do I start?" and "When do I stop?" We give criteria for determining when a given value is a good starting value and how many iterations it will take to ...
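The two questions the article highlights map directly onto the starting value and the stopping tests of any practical implementation. The sketch below is generic (illustrative names), guarding against both a zero derivative and a stalled iterate.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method with explicit answers to 'where do I start?' (x0)
    and 'when do I stop?' (residual test, step test, iteration cap)."""
    x = x0
    for i in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:                        # residual small enough: stop
            return x, i
        dfx = fprime(x)
        if dfx == 0.0:                           # flat tangent: cannot proceed
            raise ZeroDivisionError("zero derivative at x = %g" % x)
        step = fx / dfx
        x -= step
        if abs(step) < tol * max(1.0, abs(x)):   # iterates stalled: stop
            return x, i + 1
    raise RuntimeError("no convergence after %d iterations" % max_iter)

# Example: the root of x^2 - 2 from x0 = 1.5 reaches sqrt(2) in a few steps.
root, iters = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```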
13. A new method for the calculation of diffusion coefficients with Monte Carlo
International Nuclear Information System (INIS)
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods. (author)
14. A New Method for the Calculation of Diffusion Coefficients with Monte Carlo
Science.gov (United States)
Dorval, Eric
2014-06-01
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.
15. Implementation of Rosenbrock methods
Energy Technology Data Exchange (ETDEWEB)
Shampine, L. F.
1980-11-01
Rosenbrock formulas have shown promise in research codes for the solution of initial-value problems for stiff systems of ordinary differential equations (ODEs). To help assess their practical value, the author wrote an item of mathematical software based on such a formula. This required a variety of algorithmic and software developments. Those of general interest are reported in this paper. Among them is a way to select automatically, at every step, an explicit Runge-Kutta formula or a Rosenbrock formula according to the stiffness of the problem. Solving linear systems is important to methods for stiff ODEs, and is rather special for Rosenbrock methods. A cheap, effective estimate of the condition of the linear systems is derived. Some numerical results are presented to illustrate the developments.
16. Stochastic simulation and Monte-Carlo methods; Simulation stochastique et methodes de Monte-Carlo
Energy Technology Data Exchange (ETDEWEB)
Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)
2011-07-01
This book presents some numerical probabilistic simulation methods together with their convergence speeds. It combines mathematical precision and numerical development, each proposed method belonging to a precise theoretical context developed in a rigorous and self-sufficient manner. After some recalls about the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. Then, they develop a chapter on non-asymptotic estimations of Monte-Carlo method errors. This chapter recalls the central limit theorem and quantifies its convergence speed. It introduces the Log-Sobolev and concentration inequalities, a subject that has developed greatly during recent years. This chapter ends with some variance reduction techniques. In order to demonstrate in a rigorous way the simulation results for stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essential basics of Ito calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of the jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed and the convergence speed results of the algorithms are rigorously demonstrated. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)
17. Application of biasing techniques to the contributon Monte Carlo method
International Nuclear Information System (INIS)
Recently, a new Monte Carlo method called the Contributon Monte Carlo Method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfying results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table
18. Simulation and the Monte Carlo Method, Student Solutions Manual
CERN Document Server
Rubinstein, Reuven Y
2012-01-01
This accessible new edition explores the major topics in Monte Carlo simulation. Simulation and the Monte Carlo Method, Second Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over twenty-five years ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas.
19. A residual Monte Carlo method for discrete thermal radiative diffusion
International Nuclear Information System (INIS)
Residual Monte Carlo methods reduce statistical error at a rate of exp(-bN), where b is a positive constant and N is the number of particle histories. Contrast this convergence rate with 1/√N, which is the rate of statistical error reduction for conventional Monte Carlo methods. Thus, residual Monte Carlo methods hold great promise for increased efficiency relative to conventional Monte Carlo methods. Previous research has shown that the application of residual Monte Carlo methods to the solution of continuum equations, such as the radiation transport equation, is problematic for all but the simplest of cases. However, the residual method readily applies to discrete systems as long as those systems are monotone, i.e., they produce positive solutions given positive sources. We develop a residual Monte Carlo method for solving a discrete 1D non-linear thermal radiative equilibrium diffusion equation, and we compare its performance with that of the discrete conventional Monte Carlo method upon which it is based. We find that the residual method provides efficiency gains of many orders of magnitude. Part of the residual gain is due to the fact that we begin each timestep with an initial guess equal to the solution from the previous timestep. Moreover, fully consistent non-linear solutions can be obtained in a reasonable amount of time because of the effective lack of statistical noise. We conclude that the residual approach has great potential and that further research into such methods should be pursued for more general discrete and continuum systems
20. Development of Continuous-Energy Eigenvalue Sensitivity Coefficient Calculation Methods in the Shift Monte Carlo Code
Energy Technology Data Exchange (ETDEWEB)
Perfetti, Christopher M [ORNL; Martin, William R [University of Michigan; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
1. A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
International Nuclear Information System (INIS)
Full core calculations are very useful and important in reactor physics analysis, especially for computing full-core power distributions, optimizing refueling strategies and analyzing fuel depletion. To reduce the computing time and accelerate convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. Then a new hybrid RMMC and MC (RMMC+MC) method is put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries rather than only repeated structures. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC method and the RMMC+MC method can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC method and the RMMC+MC method more powerful. (authors)
2. Comparison between Monte Carlo method and deterministic method
International Nuclear Information System (INIS)
A fast critical assembly consists of a lattice of plates of sodium, plutonium or uranium, resulting in a high inhomogeneity. The inhomogeneity in the lattice should be evaluated carefully to determine the bias factor accurately. Deterministic procedures are generally used for the lattice calculation. To reduce the required calculation time, various one-dimensional lattice models have been developed previously to replace multi-dimensional models. In the present study, calculations are made for a two-dimensional model and results are compared with those obtained with one-dimensional models in terms of the average microscopic cross section of a lattice and diffusion coefficient. Inhomogeneity in a lattice affects the effective cross section and distribution of neutrons in the lattice. The background cross section determined by the method proposed by Tone is used here to calculate the effective cross section, and the neutron distribution is determined by the collision probability method. Several other methods have been proposed to calculate the effective cross section. The present study also applies the continuous energy Monte Carlo method to the calculation. A code based on this method is employed to evaluate several one-dimensional models. (Nogami, K.)
3. Computing Functionals of Multidimensional Diffusions via Monte Carlo Methods
OpenAIRE
Jan Baldeaux; Eckhard Platen
2012-01-01
We discuss suitable classes of diffusion processes, for which functionals relevant to finance can be computed via Monte Carlo methods. In particular, we construct exact simulation schemes for processes from this class. However, should the finance problem under consideration require e.g. continuous monitoring of the processes, the simulation algorithm can easily be embedded in a multilevel Monte Carlo scheme. We choose to introduce the finance problems under the benchmark approach, and find th...
4. Computing Greeks with Multilevel Monte Carlo Methods using Importance Sampling
OpenAIRE
Euget, Thomas
2012-01-01
This paper presents a new efficient way to reduce the variance of an estimator of popular payoffs and Greeks encountered in financial mathematics. The idea is to apply Importance Sampling with the Multilevel Monte Carlo method recently introduced by M.B. Giles. So far, Importance Sampling has proved successful in combination with the standard Monte Carlo method. We will show the efficiency of our approach on the estimation of financial derivatives prices and then on the estimation of Greeks (i.e. sensitivitie...
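In its standard Monte Carlo form, the importance-sampling idea referred to here amounts to simulating under a shifted distribution and re-weighting with the likelihood ratio. The sketch below shows it for a Gaussian tail probability as a stand-in for a rare payoff; the financial and multilevel machinery of the paper is omitted, and all names are illustrative.

```python
import math, random

def mc_tail_prob(c, n=100_000, seed=0):
    """P(Z > c) for standard normal Z: plain MC versus importance sampling
    with the mean shifted to c (weights correct the change of measure)."""
    rng = random.Random(seed)
    plain = sum(1.0 for _ in range(n) if rng.gauss(0, 1) > c) / n
    shifted = 0.0
    for _ in range(n):
        z = rng.gauss(c, 1)                            # sample under shifted law
        if z > c:
            shifted += math.exp(-c * z + 0.5 * c * c)  # likelihood ratio phi(z)/phi(z-c)
    return plain, shifted / n

# For c = 4 the plain estimator rarely sees a hit; the IS one is far less noisy.
p_plain, p_is = mc_tail_prob(4.0)
```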
5. A New Method for Parallel Monte Carlo Tree Search
OpenAIRE
Mirsoleimani, S. Ali; Plaat, Aske; Herik, Jaap van den; Vermaseren, Jos
2016-01-01
In recent years there has been much interest in the Monte Carlo tree search algorithm, a new, adaptive, randomized optimization algorithm. In fields as diverse as Artificial Intelligence, Operations Research, and High Energy Physics, research has established that Monte Carlo tree search can find good solutions without domain dependent heuristics. However, practice shows that reaching high performance on large parallel machines has not been as successful as expected. This paper proposes a new method...
6. New simpler method of matching NLO corrections with parton shower Monte Carlo
OpenAIRE
Jadach, S.; Placzek, W.; Sapeta, S.(CERN PH-TH, CH-1211, Geneva 23, Switzerland); Siodmok, A.; Skrzypek, M.
2016-01-01
Next steps in the development of the KrkNLO method of implementing NLO QCD corrections to hard processes in parton shower Monte Carlo programs are presented. This new method is a simpler alternative to other well-known approaches, such as MC@NLO and POWHEG. The KrkNLO method owes its simplicity to the use of parton distribution functions (PDFs) in a new, so-called Monte Carlo (MC), factorization scheme which was recently fully defined for the first time. Preliminary numerical results for the Higg...
7. New simpler method of matching NLO corrections with parton shower Monte Carlo
CERN Document Server
Jadach, S; Sapeta, S; Siodmok, A; Skrzypek, M
2016-01-01
Next steps in the development of the KrkNLO method of implementing NLO QCD corrections to hard processes in parton shower Monte Carlo programs are presented. This new method is a simpler alternative to other well-known approaches, such as MC@NLO and POWHEG. The KrkNLO method owes its simplicity to the use of parton distribution functions (PDFs) in a new, so-called Monte Carlo (MC), factorization scheme which was recently fully defined for the first time. Preliminary numerical results for the Higgs-boson production process are also presented.
8. Monte Carlo methods and models in finance and insurance
CERN Document Server
Korn, Ralf
2010-01-01
Offering a unique balance between applications and calculations, this book incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The book enables readers to find the right algorithm for a desired application and illustrates complicated methods and algorithms with simple applicat
9. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
CERN Document Server
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
10. Markov chain Monte Carlo methods: an introductory example
Science.gov (United States)
Klauenberg, Katy; Elster, Clemens
2016-02-01
When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may be hindered currently by the difficulty to assess the convergence of MCMC output and thus to assure the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
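Consistent with the "few lines of software code" remark above, a random-walk Metropolis sampler fits in about a dozen lines. This sketch targets an unnormalized density through its log; it is illustrative rather than the article's metrology example.

```python
import math, random

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept with
    probability min(1, pi(x')/pi(x)); the proposal is symmetric, so no
    Hastings correction term is needed."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_target(x_new)
        if rng.random() < math.exp(min(0.0, lp_new - lp)):
            x, lp = x_new, lp_new          # accept the proposal
        chain.append(x)                    # a rejection repeats the current state
    return chain

# Example target: unnormalized standard normal, log pi(x) = -x^2 / 2.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 10_000)
```

Tuning `step` trades off acceptance rate against mixing speed, which is exactly the calibration-for-efficiency question the abstract raises.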
11. Implementation Method of Stable Model
Directory of Open Access Journals (Sweden)
Shasha Wu
2008-01-01
Software Stability Modeling (SSM) is a promising software development methodology based on object-oriented programming to achieve model-level stability and reusability. Among the three critical categories of objects proposed by SSM, the business objects play a critical role in connecting the stable problem essentials (enduring business themes) and the unstable object implementations (industry objects). The business objects are especially difficult to implement and often cause confusion in the implementation because of their unique characteristics: externally stable and internally unstable. The implementation and code-level stability is not the major concern. How to implement the objects of a stable model through object-oriented programming without losing its stability is a big challenge in real software development. In this paper, we propose new methods to realize the business objects in the implementation of a stable model. We also rephrase the definition of the business objects from the implementation perspective, in the hope that the new description can help software developers adopt and implement stable models more easily. Finally, we describe the implementation of a stable model for a balloon rental resource management scope to illustrate the advantages of the proposed method.
12. Monte Carlo methods for the self-avoiding walk
Energy Technology Data Exchange (ETDEWEB)
Janse van Rensburg, E J [Department of Mathematics and Statistics, York University, Toronto, ON M3J 1P3 (Canada)], E-mail: [email protected]
2009-08-14
The numerical simulation of self-avoiding walks remains a significant component in the study of random objects in lattices. In this review, I give a comprehensive overview of the current state of Monte Carlo simulations of models of self-avoiding walks. The self-avoiding walk model is revisited, and the motivations for Monte Carlo simulations of this model are discussed. Efficient sampling of self-avoiding walks remains an elusive objective, but significant progress has been made over the last three decades. The model still poses challenging numerical questions however, and I review specific Monte Carlo methods for improved sampling including general Monte Carlo techniques such as Metropolis sampling, umbrella sampling and multiple Markov Chain sampling. In addition, specific static and dynamic algorithms for walks are presented, and I give an overview of recent innovations in this field, including algorithms such as flatPERM, flatGARM and flatGAS. (topical review)
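As a baseline for why the specialized algorithms mentioned above (flatPERM, flatGARM, flatGAS) matter, the naive way to sample a self-avoiding walk is rejection from simple random walks, which fails exponentially often as the walk grows. A sketch with illustrative names:

```python
import random

def sample_saw(n_steps, max_tries=100_000, rng=None):
    """Naive rejection sampling of an n-step self-avoiding walk on the square
    lattice: grow a simple random walk and discard it on self-intersection.
    The acceptance rate decays exponentially in n_steps, which is the
    inefficiency that PERM-style algorithms are designed to overcome."""
    rng = rng or random.Random(0)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(max_tries):
        walk = [(0, 0)]
        visited = {(0, 0)}
        for _ in range(n_steps):
            dx, dy = rng.choice(moves)
            site = (walk[-1][0] + dx, walk[-1][1] + dy)
            if site in visited:
                break                      # self-intersection: reject this walk
            walk.append(site)
            visited.add(site)
        else:
            return walk                    # completed all steps without intersecting
    raise RuntimeError("no SAW found; increase max_tries or reduce n_steps")

saw = sample_saw(20)
```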
13. Monte Carlo methods for the self-avoiding walk
International Nuclear Information System (INIS)
The numerical simulation of self-avoiding walks remains a significant component in the study of random objects in lattices. In this review, I give a comprehensive overview of the current state of Monte Carlo simulations of models of self-avoiding walks. The self-avoiding walk model is revisited, and the motivations for Monte Carlo simulations of this model are discussed. Efficient sampling of self-avoiding walks remains an elusive objective, but significant progress has been made over the last three decades. The model still poses challenging numerical questions however, and I review specific Monte Carlo methods for improved sampling including general Monte Carlo techniques such as Metropolis sampling, umbrella sampling and multiple Markov Chain sampling. In addition, specific static and dynamic algorithms for walks are presented, and I give an overview of recent innovations in this field, including algorithms such as flatPERM, flatGARM and flatGAS. (topical review)
14. Monte Carlo Methods for Tempo Tracking and Rhythm Quantization
CERN Document Server
Cemgil, A T; 10.1613/jair.1121
2011-01-01
We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulation results suggest better results with sequential methods. The methods can be applied in both online and batch scenarios such as tempo tracking and transcr...
15. Monte Carlo method application to shielding calculations
International Nuclear Information System (INIS)
CANDU spent fuel discharged from the reactor core contains Pu, so two concerns must be addressed: tracking the fuel reactivity in order to prevent critical mass formation, and protecting personnel during spent fuel handling. The basic tasks accomplished by shielding calculations in a nuclear safety analysis consist in dose rate calculations in order to prevent any risks, both for personnel protection and for the impact on the environment, during spent fuel handling, transport and storage. To perform photon dose rate calculations the Monte Carlo MORSE-SGC code incorporated in the SAS4 sequence from the SCALE system was used. The objective of the paper was to obtain the photon dose rates at the spent fuel transport cask wall, both in radial and axial directions. As the radiation source, one spent CANDU fuel bundle was used. All the geometrical and material data related to the transport cask were considered according to the shipping cask type B model, whose prototype has been realized and tested in the Institute for Nuclear Research Pitesti. (authors)
16. Quantum Monte Carlo diagonalization method as a variational calculation
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1997-05-01
A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitation of the conventional shell model diagonalization and can extremely widen the feasibility of shell model calculations with realistic interactions for spectroscopic study of nuclear structure. (author)
17. Auxiliary-field quantum Monte Carlo methods in nuclei
CERN Document Server
Alhassid, Y
2016-01-01
Auxiliary-field quantum Monte Carlo methods enable the calculation of thermal and ground state properties of correlated quantum many-body systems in model spaces that are many orders of magnitude larger than those that can be treated by conventional diagonalization methods. We review recent developments and applications of these methods in nuclei using the framework of the configuration-interaction shell model.
18. Observations on variational and projector Monte Carlo methods
International Nuclear Information System (INIS)
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed
19. LISA data analysis using Markov chain Monte Carlo methods
International Nuclear Information System (INIS)
The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions
20. Monte Carlo methods for the reliability analysis of Markov systems
International Nuclear Information System (INIS)
This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
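A forward Monte Carlo treatment of the simplest Markov reliability model, a single repairable component, illustrates both the plain point-unavailability estimate and the absorbing-failed-state variant described above. The rates and names below are illustrative assumptions, not taken from the paper.

```python
import random

def unavailability(q_fail, q_repair, t_end, n_hist=50_000, absorbing=False, seed=0):
    """Forward Monte Carlo for a two-state Markov component: estimate the
    probability of being failed at t_end. With absorbing=True, repair is
    disabled (failed states absorbing), giving unreliability instead."""
    rng = random.Random(seed)
    failed_at_end = 0
    for _ in range(n_hist):
        t, up = 0.0, True
        while True:
            if not up and absorbing:
                break                              # failed state is absorbing
            rate = q_fail if up else q_repair
            t += rng.expovariate(rate)             # exponential holding time
            if t > t_end:
                break                              # no further transition by t_end
            up = not up
        if not up:
            failed_at_end += 1
    return failed_at_end / n_hist

# Steady-state check: with q_fail=0.1, q_repair=1.0 and large t_end, the
# estimate should approach q_fail / (q_fail + q_repair) ~ 0.0909.
u = unavailability(0.1, 1.0, t_end=100.0)
```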
1. Introduction to Monte Carlo methods: sampling techniques and random numbers
International Nuclear Information System (INIS)
The Monte Carlo method describes a very broad area of science, in which many processes, physical systems and phenomena that are statistical in nature and are difficult to solve analytically are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions. As the number of individual events (called histories) is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. Assuming that the behavior of the physical system can be described by probability density functions, the Monte Carlo simulation can proceed by sampling from these probability density functions, which necessitates a fast and effective way to generate random numbers uniformly distributed on the interval (0,1). Particles are generated within the source region and are transported by sampling from probability density functions through the scattering media until they are absorbed or escape the volume of interest. The outcomes of these random samplings, or trials, must be accumulated or tallied in an appropriate manner to produce the desired result; the essential characteristic of Monte Carlo is the use of random sampling techniques to arrive at a solution of the physical problem. The major components of Monte Carlo methods for random sampling for a given event are described in the paper
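The sampling step described above, turning uniform (0,1) numbers into draws from a target probability density function, is most directly done by inverse transform sampling. A sketch for the exponential density, chosen as an illustrative assumption because its inverse CDF has a closed form:

```python
import math, random

def sample_exponential(lam, rng):
    """Inverse-transform sampling: if U ~ Uniform(0, 1), then
    -ln(1 - U) / lam is distributed with PDF lam * exp(-lam * x)."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

rng = random.Random(0)
# The mean of exponential(lam) is 1/lam; the sample mean should approach 0.5.
samples = [sample_exponential(2.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
```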
2. Frequency domain optical tomography using a Monte Carlo perturbation method
Science.gov (United States)
Yamamoto, Toshihiro; Sakamoto, Hiroki
2016-04-01
A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for inverse problems that are ill-posed due to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.
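A stripped-down sketch of the complex-weight idea referred to above: each photon weight accrues a phase factor exp(-iωs/c) over its time of flight, and the tallied complex sum yields the AC amplitude and phase of the transmitted light. Everything below (slab geometry, optical properties, modulation frequency) is an assumed toy setup for illustration, not the paper's reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_s, mu_a = 10.0, 0.1            # assumed scattering/absorption coeffs (1/cm)
mu_t, albedo = mu_s + mu_a, mu_s / (mu_s + mu_a)
c = 3e10 / 1.4                    # light speed in tissue (cm/s), n = 1.4
omega = 2 * np.pi * 100e6         # 100 MHz modulation frequency
L = 2.0                           # slab thickness (cm)

transmitted, n_photons = 0j, 200_000
for _ in range(n_photons):
    z, mu_z, w = 0.0, 1.0, 1.0 + 0j            # normally incident pencil beam
    while True:
        s = rng.exponential(1.0 / mu_t)        # sampled free-flight length
        if mu_z > 0 and z + mu_z * s >= L:     # free flight out the far face
            w *= np.exp(-1j * omega * (L - z) / (mu_z * c))
            transmitted += w
            break
        if mu_z < 0 and z + mu_z * s <= 0.0:   # lost out the entry face
            break
        z += mu_z * s
        w *= np.exp(-1j * omega * s / c) * albedo   # phase + implicit capture
        if abs(w) < 1e-4:                      # Russian roulette on low weights
            if rng.random() < 0.1:
                w /= 0.1
            else:
                break
        mu_z = rng.uniform(-1.0, 1.0)          # isotropic scattering
amp = transmitted / n_photons
print(f"AC amplitude {abs(amp):.3e}, phase shift {np.angle(amp):.3f} rad")
```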
3. Library design in combinatorial chemistry by Monte Carlo methods
OpenAIRE
Falcioni, Marco; Deem, Michael W.
2000-01-01
Strategies for searching the space of variables in combinatorial chemistry experiments are presented, and a random energy model of combinatorial chemistry experiments is introduced. The search strategies, derived by analogy with the computer modeling technique of Monte Carlo, effectively search the variable space even in combinatorial chemistry experiments of modest size. Efficient implementations of the library design and redesign strategies are feasible with current experimental capabilities.
4. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Energy Technology Data Exchange (ETDEWEB)
Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States)
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10⁷ x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the
5. Monte Carlo Form-Finding Method for Tensegrity Structures
Science.gov (United States)
Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping
2010-05-01
In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.
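A greatly simplified sketch of the stochastic procedure described above: perturb the free nodes at random and keep only moves that lower the total elastic energy, shrinking the step size as the search settles. The toy structure and member properties below are assumptions for illustration; real tensegrity form-finding must distinguish tension-only cables from struts, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 1, size=(5, 2))           # 4 fixed corners + 1 free node
nodes[:4] = [[0, 0], [1, 0], [1, 1], [0, 1]]
members = [(0, 4), (1, 4), (2, 4), (3, 4)]       # members tie corners to node 4
rest, k = 0.4, 1.0                               # assumed rest length, stiffness

def energy(x):
    return sum(0.5 * k * (np.linalg.norm(x[i] - x[j]) - rest) ** 2
               for i, j in members)

step = 0.1
for it in range(20_000):
    trial = nodes.copy()
    trial[4] += rng.normal(scale=step, size=2)   # random move of the free node
    if energy(trial) < energy(nodes):            # keep energy-lowering moves
        nodes = trial
    if it % 5_000 == 4_999:
        step *= 0.5                              # anneal the step size
print("equilibrium position of free node:", nodes[4])   # ~ (0.5, 0.5)
```

By symmetry the free node should settle at the centre of the square, which the random search finds without any matrix operations, mirroring the selling point of the method.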
6. Latent uncertainties of the precalculated track Monte Carlo method
International Nuclear Information System (INIS)
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the
7. Diffusion/transport hybrid discrete method for Monte Carlo solution of the neutron transport equation
International Nuclear Information System (INIS)
The Monte Carlo method is widely used for solving the neutron transport equation. Basically, Monte Carlo treats continuous angle, space and energy. It gives a very accurate solution when enough particle histories are used, but it takes too long a computation time. To reduce the computation time, a discrete Monte Carlo method was proposed, called the Discrete Transport Monte Carlo (DTMC) method. It uses discrete space but continuous angle for mono-energy, one-dimensional problems, and uses the lumped linear-discontinuous (LLD) equation to construct the probabilities of leakage, scattering, and absorption. LLD may cause negative angular fluxes in highly scattering problems, so a two-scatter variance reduction method is applied to DTMC and shows very accurate solutions in various problems. In a transport Monte Carlo calculation, the particle history does not end at a scattering event, so it also takes much computation time in highly scattering problems. To further reduce the computation time, a Discrete Diffusion Monte Carlo (DDMC) method is implemented. DDMC uses the diffusion equation to construct the probabilities and has no scattering events, so DDMC takes a very short computation time compared with DTMC and shows results that agree very well with cell-centered diffusion results. It is known that diffusion results may not be good near boundaries, so in the hybrid method of DTMC and DDMC, boundary regions are calculated by DTMC and the other regions by DDMC. In this thesis, the DTMC, DDMC and hybrid methods and their results for several problems are presented. The results show that DDMC and DTMC agree well with deterministic diffusion and transport results, respectively. The hybrid method shows transport-like results in problems where diffusion results are poor. The computation time of the hybrid method is between those of DDMC and DTMC, as expected.
8. Extending the alias Monte Carlo sampling method to general distributions
International Nuclear Information System (INIS)
The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs.
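The discrete alias construction that the paper extends is compact enough to sketch in full. Below is the standard Walker/Vose table build and O(1) sampler (the example distribution is arbitrary); the paper's contribution, extending this to piecewise-linear continuous distributions, is not reproduced here.

```python
import numpy as np

def build_alias(p):
    """Walker/Vose alias tables for a discrete distribution p (sums to 1)."""
    n = len(p)
    prob, alias = np.zeros(n), np.zeros(n, dtype=int)
    scaled = np.asarray(p, dtype=float) * n
    small = [i for i, q in enumerate(scaled) if q < 1.0]
    large = [i for i, q in enumerate(scaled) if q >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l      # bin s donates its deficit to l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftovers are 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng):
    i = rng.integers(len(prob))               # uniform bin: O(1) per sample
    return i if rng.random() < prob[i] else alias[i]

rng = np.random.default_rng(0)
prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
draws = [alias_sample(prob, alias, rng) for _ in range(100_000)]
print(np.bincount(draws) / len(draws))        # ~ [0.1, 0.2, 0.3, 0.4]
```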
9. Analysis of the uranium price predicted to 24 months, implementing neural networks and the Monte Carlo method like predictive tools; Analisis del precio del uranio pronosticado a 24 meses, implementando redes neuronales y el metodo de Monte Carlo como herramientas predictivas
Energy Technology Data Exchange (ETDEWEB)
Esquivel E, J.; Ramirez S, J. R.; Palacios H, J. C., E-mail: [email protected] [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2011-11-15
The present work shows predicted uranium prices, using a neural network. Predicting the financial indexes of an energy resource allows budgetary measures to be established, as well as medium-term resource costs. Uranium is one of the main energy-generating fuels and, as such, its price figures prominently in financial analyses; predictive methods are therefore used to outline the financial behaviour it will exhibit over a given period. In this study, two methodologies are used for the prediction of the uranium price: the Monte Carlo method and neural networks. These methods predict monthly cost indexes for a two-year period, starting from the second bimester of 2011. The prediction uses uranium costs recorded since 2005. (Author)
10. Computing Functionals of Multidimensional Diffusions via Monte Carlo Methods
CERN Document Server
Baldeaux, Jan
2012-01-01
We discuss suitable classes of diffusion processes, for which functionals relevant to finance can be computed via Monte Carlo methods. In particular, we construct exact simulation schemes for processes from this class. However, should the finance problem under consideration require e.g. continuous monitoring of the processes, the simulation algorithm can easily be embedded in a multilevel Monte Carlo scheme. We choose to introduce the finance problems under the benchmark approach, and find that this approach allows us to exploit conveniently the analytical tractability of these diffusion processes.
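As a generic illustration of the multilevel Monte Carlo idea mentioned above (not the authors' exact-simulation construction), the sketch below estimates E[f(X_T)] for a geometric Brownian motion with an Euler scheme, coupling each fine path to a coarse path through shared Brownian increments; all parameter values and the payoff are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
r, sigma, T, X0 = 0.05, 0.2, 1.0, 1.0        # assumed GBM parameters
f = lambda x: np.maximum(x - 1.0, 0.0)       # illustrative payoff

def level_estimator(l, n):
    """Mean of f(fine) - f(coarse) on level l, with coupled Brownian paths."""
    nf = 2 ** l
    hf = T / nf
    dW = rng.standard_normal((n, nf)) * np.sqrt(hf)
    Xf = np.full(n, X0)
    for k in range(nf):                      # fine Euler path
        Xf = Xf + r * Xf * hf + sigma * Xf * dW[:, k]
    if l == 0:
        return f(Xf).mean()
    Xc, hc = np.full(n, X0), 2 * hf
    for k in range(nf // 2):                 # coarse path reuses increments
        Xc = Xc + r * Xc * hc + sigma * Xc * (dW[:, 2 * k] + dW[:, 2 * k + 1])
    return (f(Xf) - f(Xc)).mean()

estimate = sum(level_estimator(l, 50_000) for l in range(7))  # telescoping sum
print(f"MLMC estimate of E[f(X_T)]: {estimate:.4f}")
```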
11. Development of three-dimensional program based on Monte Carlo and discrete ordinates bidirectional coupling method
International Nuclear Information System (INIS)
The Monte Carlo (MC) and discrete ordinates (SN) methods are commonly used in the design of radiation shielding. The Monte Carlo method treats the geometry exactly, but is time-consuming for deep-penetration problems. The discrete ordinates method has great computational efficiency, but it is quite costly in computer memory and suffers from ray effects. Neither the discrete ordinates method nor the Monte Carlo method alone is adequate for shielding calculations of large, complex nuclear facilities. In order to solve this problem, a Monte Carlo and discrete ordinates bidirectional coupling method is developed. The bidirectional coupling method is implemented in an interface program that transfers the particle probability distribution of MC and the angular flux of discrete ordinates. The coupling method combines the advantages of MC and SN. Test problems in Cartesian and cylindrical coordinates have been calculated with the coupling method. The results are compared with MCNP and TORT, and satisfactory agreement is obtained, proving the correctness of the program. (authors)
12. MOSFET GATE CURRENT MODELLING USING MONTE-CARLO METHOD
OpenAIRE
Voves, J.; Vesely, J.
1988-01-01
The new technique for determining the probability of hot-electron travel through the gate oxide is presented. The technique is based on the Monte Carlo method and is used in MOSFET gate current modelling. The calculated values of gate current are compared with experimental results from direct measurements on MOSFET test chips.
13. Application of equivalence methods on Monte Carlo method based homogenization multi-group constants
International Nuclear Information System (INIS)
The multi-group constants generated via the continuous-energy Monte Carlo method do not satisfy the equivalence between the reference calculation and the diffusion calculation applied in reactor core analysis. To satisfy equivalence theory, the generalized equivalence theory (GET) and the superhomogenization (SPH) method were applied to the Monte Carlo based group constants, and a simplified reactor core and the C5G7 benchmark were examined with the Monte Carlo constants. The results show that the accuracy of the group constants is improved, and that GET and SPH are good candidates for the equivalence treatment of Monte Carlo homogenization. (authors)
14. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
Science.gov (United States)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
15. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
International Nuclear Information System (INIS)
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head and neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
16. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
Energy Technology Data Exchange (ETDEWEB)
Ma, C-M; Li, J S; Deng, J; Fan, J [Radiation Oncology Department, Fox Chase Cancer Center, Philadelphia, PA (United States)], E-mail: [email protected]
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head and neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
17. A separable shadow Hamiltonian hybrid Monte Carlo method
Science.gov (United States)
Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.
2009-11-01
Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
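For readers unfamiliar with the basic HMC move that S2HMC refines, the sketch below shows plain HMC with a leapfrog integrator for a one-dimensional Gaussian target; the step size, trajectory length and target are illustrative assumptions, and the shadow-Hamiltonian momentum generation that distinguishes S2HMC is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
U = lambda q: 0.5 * q ** 2          # potential for a standard normal target
grad_U = lambda q: q

def hmc_step(q, eps=0.2, n_leap=10):
    p = rng.standard_normal()                        # fresh momentum
    q_new, p_new = q, p - 0.5 * eps * grad_U(q)      # leapfrog half step
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)               # final half step
    dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
    return q_new if rng.random() < np.exp(-dH) else q   # Metropolis test

q, samples = 0.0, []
for _ in range(20_000):
    q = hmc_step(q)
    samples.append(q)
print(np.mean(samples), np.var(samples))             # ~ 0 and ~ 1
```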
18. Monte Carlo methods for pricing financial options
Bolia, N.; Juneja, S.
2005-04-01
Pricing financial options is amongst the most important and challenging problems in the modern financial industry. Except in the simplest cases, the prices of options do not have a simple closed form solution and efficient computational methods are needed to determine them. Monte Carlo methods have increasingly become a popular computational tool to price complex financial options, especially when the underlying space of assets has a large dimensionality, as the performance of other numerical methods typically suffers from the ‘curse of dimensionality’. However, even Monte-Carlo techniques can be quite slow as the problem-size increases, motivating research in variance reduction techniques to increase the efficiency of the simulations. In this paper, we review some of the popular variance reduction techniques and their application to pricing options. We particularly focus on the recent Monte-Carlo techniques proposed to tackle the difficult problem of pricing American options. These include: regression-based methods, random tree methods and stochastic mesh methods. Further, we show how importance sampling, a popular variance reduction technique, may be combined with these methods to enhance their effectiveness. We also briefly review the evolving options market in India.
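Among the variance reduction techniques that such reviews cover, antithetic variates is the simplest to demonstrate. The sketch below prices a European call under geometric Brownian motion, pairing each path with its mirror image; the market parameters are assumed, and the technique shown is a generic illustration rather than the paper's American-option machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0  # assumed market parameters
n = 100_000

def payoff(z):
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

z = rng.standard_normal(n)
plain = payoff(z)
anti = 0.5 * (payoff(z) + payoff(-z))   # each draw also drives a mirrored path
print(f"plain:      {plain.mean():.4f} +/- {plain.std(ddof=1)/np.sqrt(n):.4f}")
print(f"antithetic: {anti.mean():.4f} +/- {anti.std(ddof=1)/np.sqrt(n):.4f}")
```

The antithetic estimator reuses each normal draw for a mirrored path, so it tightens the standard error at essentially no extra sampling cost whenever the payoff is monotone in the driving noise.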
19. Stabilizing Canonical-Ensemble Calculations in the Auxiliary-Field Monte Carlo Method
CERN Document Server
Gilbreth, C N
2014-01-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
20. Bayesian Monte Carlo method for nuclear data evaluation
International Nuclear Information System (INIS)
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight. (orig.)
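The weighted Monte Carlo sampling step described above lends itself to a compact illustration. The sketch below stands a toy linear model and hypothetical 'experimental' data in for TALYS and EXFOR; only the likelihood-weighting logic mirrors the description.

```python
import numpy as np

rng = np.random.default_rng(0)
x_exp = np.array([1.0, 2.0, 3.0])
y_exp = np.array([2.1, 4.2, 5.8])            # hypothetical measurements
sig_exp = 0.3                                # assumed experimental uncertainty

n = 20_000
theta = rng.normal(2.0, 0.5, size=n)         # prior over the model parameter
y_model = theta[:, None] * x_exp[None, :]    # toy model: y = theta * x
chi2 = np.sum(((y_model - y_exp) / sig_exp) ** 2, axis=1)
w = np.exp(-0.5 * chi2)
w /= w.sum()                                 # experiment-based weights

mean = np.sum(w * theta)
var = np.sum(w * (theta - mean) ** 2)        # weighted moments, in spirit
print(f"posterior mean {mean:.3f}, std {np.sqrt(var):.3f}")
```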
1. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification
Science.gov (United States)
Wu, Keyi; Li, Jinglai
2016-09-01
In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithms, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitudes of speedup over the standard Monte Carlo methods.
2. Non-analogue Monte Carlo method, application to neutron simulation
International Nuclear Information System (INIS)
With most of the traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Nowadays, only the Monte Carlo method offers such possibilities. However, with significant attenuation, the natural simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions: these methods require the user to determine some parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield low figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; then we show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique where the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision-probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic scattering and for multigroup problems with anisotropic scattering. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without the splitting and Russian roulette techniques, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if splitting and Russian roulette are added.
3. Development of continuous-energy eigenvalue sensitivity coefficient calculation methods in the Shift Monte Carlo code
Energy Technology Data Exchange (ETDEWEB)
Perfetti, C.; Martin, W. [Univ. of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109-2104 (United States); Rearden, B.; Williams, M. [Oak Ridge National Laboratory, Reactor and Nuclear Systems Div., Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)
2012-07-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)
4. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign – Part II: Model application to the CENICA, Pedregal and Santa Ana sites
OpenAIRE
San Martini, F. M.; E. J. Dunlea; R. Volkamer; Onasch, T. B.; J. T. Jayne; Canagaratna, M. R.; Worsnop, D. R.; C. E. Kolb; J. H. Shorter; S. C. Herndon; M. S. Zahniser; D. Salcedo; Dzepina, K.; Jimenez, J. L; Ortega, J. M.
2006-01-01
A Markov Chain Monte Carlo model for integrating the observations of inorganic species with a thermodynamic equilibrium model was presented in Part I of this series. Using observations taken at three ground sites, i.e. a residential, industrial and rural site, during the MCMA-2003 campaign in Mexico City, the model is used to analyze the inorganic particle and ammonia data and to predict gas phase concentrations of nitric and hydrochloric acid. In general, the model is able to accurately pred...
5. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign – Part II: Model application to the CENICA, Pedregal and Santa Ana sites
OpenAIRE
San Martini, F. M.; Dunlea, E. J.; R. Volkamer; Onasch, T. B.; Jayne, J. T.; Canagaratna, M. R.; Worsnop, D. R.; Kolb, C. E.; Shorter, J. H.; Herndon, S. C.; Zahniser, M. S.; D. Salcedo; Dzepina, K.; Jimenez, J. L.; Ortega, J. M.
2006-01-01
A Markov Chain Monte Carlo model for integrating the observations of inorganic species with a thermodynamic equilibrium model was presented in Part I of this series. Using observations taken at three ground sites, i.e. a residential, industrial and rural site, during the MCMA-2003 campaign in Mexico City, the model is used to analyze the inorganic particle and ammonia data and to predict gas phase concentrations of nitric and hydrochloric acid. In general, the mode...
6. Efficient Monte Carlo methods for continuum radiative transfer
CERN Document Server
Juvela, M
2005-01-01
We discuss the efficiency of Monte Carlo methods in solving continuum radiative transfer problems. The sampling of the radiation field and convergence of dust temperature calculations in the case of optically thick clouds are both studied. For spherically symmetric clouds we find that the computational cost of Monte Carlo simulations can be reduced, in some cases by orders of magnitude, with simple importance weighting schemes. This is particularly true for models consisting of cells of different sizes for which the run times would otherwise be determined by the size of the smallest cell. We present a new idea of extending importance weighting to scattered photons. This is found to be useful in calculations of scattered flux and could be important for three-dimensional models when observed intensity is needed only for one general direction of observations. Convergence of dust temperature calculations is studied for models with optical depths 10-10000. We examine acceleration methods where radiative interactio...
7. Multi-way Monte Carlo Method for Linear Systems
OpenAIRE
Wu, Tao; Gleich, David F.
2016-01-01
We study the Monte Carlo method for solving a linear system of the form $x = H x + b$. A sufficient condition for the method to work is $\| H \| < 1$, which greatly limits the usability of this method. We improve this condition by proposing a new multi-way Markov random walk, which is a generalization of the standard Markov random walk. Under our new framework we prove that the necessary and sufficient condition for our method to work is the spectral radius $\rho(H^{+}) < 1$, which is a weake...
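The standard random-walk estimator that the multi-way walk generalizes can be sketched in a few lines: a walk scores weighted contributions of b along its path, so that the expected score is the Neumann series sum_k (H^k b)_i. The matrix, source vector and termination probability below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])          # assumed matrix with ||H|| < 1
b = np.array([1.0, 2.0, 3.0])
n_dim, p_stop = 3, 0.2                   # uniform transitions, early stopping

def walk(i):
    score, w = b[i], 1.0
    while rng.random() > p_stop:
        j = rng.integers(n_dim)                    # uniform next state
        w *= H[i, j] * n_dim / (1.0 - p_stop)      # likelihood-ratio weight
        i = j
        score += w * b[i]
    return score

i0 = 0
est = np.mean([walk(i0) for _ in range(200_000)])
print("MC:", est, " exact:", np.linalg.solve(np.eye(3) - H, b)[i0])
```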
8. Monte Carlo methods and applications for the nuclear shell model
OpenAIRE
Dean, D. J.; White, J A
1998-01-01
The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.
9. Efficient Monte Carlo methods for light transport in scattering media
OpenAIRE
Jarosz, Wojciech
2008-01-01
In this dissertation we focus on developing accurate and efficient Monte Carlo methods for synthesizing images containing general participating media. Participating media such as clouds, smoke, and fog are ubiquitous in the world and are responsible for many important visual phenomena which are of interest to computer graphics as well as related fields. When present, the medium participates in lighting interactions by scattering or absorbing photons as they travel through the scene. Though th...
10. Calculating atomic and molecular properties using variational Monte Carlo methods
International Nuclear Information System (INIS)
The authors compute a number of properties for the 1¹S, 2¹S, and 2³S states of helium as well as the ground states of H₂ and H₃⁺ using Variational Monte Carlo. These are in good agreement with previous calculations (where available). Electric-response constants for the ground states of helium, H₂ and H₃⁺ are computed as derivatives of the total energy. The method used to calculate these quantities is discussed in detail.
11. Monte Carlo Methods and Applications for the Nuclear Shell Model
International Nuclear Information System (INIS)
The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems
12. Calculations of pair production by Monte Carlo methods
International Nuclear Information System (INIS)
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs
13. Calculations of pair production by Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
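Portable parallel random number generation of the kind mentioned above is often arranged by giving each processor a disjoint, reproducible subsequence of a single generator. A minimal sketch of the classic leapfrog partitioning of a 64-bit linear congruential generator follows; the constants are standard LCG choices, and the scheme is a generic illustration rather than the algorithms of the paper.

```python
# Leapfrog partition of the LCG x_{n+1} = (a*x_n + c) mod 2^64 across P
# parallel streams: stream p takes every P-th element of the base sequence.
M = 1 << 64
a, c = 6364136223846793005, 1442695040888963407   # Knuth's MMIX constants

def leapfrog_stream(seed, p, P, n):
    """First n uniforms of stream p out of P interleaved streams."""
    A, C = 1, 0                       # build the P-step map x -> (A*x + C) % M
    for _ in range(P):
        A, C = (A * a) % M, (C * a + c) % M
    x = seed
    for _ in range(p):                # offset to this stream's first element
        x = (a * x + c) % M
    out = []
    for _ in range(n):
        out.append(x / M)             # uniform in [0, 1)
        x = (A * x + C) % M           # jump P base steps in one multiply-add
    return out

# Four processors draw from disjoint interleaved subsequences of one LCG:
print([leapfrog_stream(2024, p, 4, 2) for p in range(4)])
```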
14. Comparison of deterministic and Monte Carlo methods in shielding design
International Nuclear Information System (INIS)
In shielding calculations, deterministic methods have some advantages and some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, and can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison of the capabilities of both Monte Carlo and deterministic methods in day-by-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions. (authors)
15. A new lattice Monte Carlo method for simulating dielectric inhomogeneity
Science.gov (United States)
Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei
We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species, by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles whose dielectric constant differs from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by the particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.
16. A new hybrid method--combined heat flux method with Monte-Carlo method to analyze thermal radiation
Institute of Scientific and Technical Information of China (English)
2006-01-01
A new hybrid method, the Monte-Carlo-Heat-Flux (MCHF) method, is presented to analyze the radiative heat transfer of a participating medium in a three-dimensional rectangular enclosure, combining the Monte-Carlo method with the heat flux method. Its accuracy and reliability were proved by comparing the computational results with exact results from the classical zone method.
17. An object-oriented implementation of a parallel Monte Carlo code for radiation transport
Science.gov (United States)
Santos, Pedro Duarte; Lani, Andrea
2016-05-01
This paper describes the main features of a state-of-the-art Monte Carlo solver for radiation transport which has been implemented within COOLFluiD, a world-class open source object-oriented platform for scientific simulations. The Monte Carlo code makes use of efficient ray tracing algorithms (for 2D, axisymmetric and 3D arbitrary unstructured meshes) which are described in detail. The solver accuracy is first verified in testcases for which analytical solutions are available, then validated for a space re-entry flight experiment (i.e. FIRE II) for which comparisons against both experiments and reference numerical solutions are provided. Through the flexible design of the physical models, ray tracing and parallelization strategy (fully reusing the mesh decomposition inherited by the fluid simulator), the implementation was made efficient and reusable.
18. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 5. New Zero-Variance Methods for Monte Carlo Criticality and Source-Detector Problems
International Nuclear Information System (INIS)
A zero-variance (ZV) Monte Carlo transport method is a theoretical construct that, if it could be implemented on a practical computer, would produce the exact result after any number of histories. Unfortunately, ZV methods are impractical; to implement them, one must have complete knowledge of a certain adjoint flux, and acquiring this knowledge is an infinitely greater task than solving the original criticality or source-detector problem. (In fact, the adjoint flux itself yields the desired result, with no need of a Monte Carlo simulation.) Nevertheless, ZV methods are of practical interest because it is possible to approximate them in ways that yield efficient variance-reduction schemes. Such implementations must be done carefully; for example, one must not change the mean of the final answer. The goal of variance reduction is to estimate the true mean with greater efficiency. In this paper, we describe new ZV methods for Monte Carlo criticality and source-detector problems. These methods have the same requirements (and disadvantages) as described earlier. However, their implementation is very different. Thus, the concept of approximating them to obtain practical variance-reduction schemes opens new possibilities. In previous ZV methods, (a) a single characteristic parameter (the k-eigenvalue or a detector response) of a forward transport problem is sought; (b) the exact solution of an adjoint problem must be known for all points in phase-space; and (c) a non-analog process, defined in terms of the adjoint solution, transports forward Monte Carlo particles from the source to the detector (in criticality problems, from the fission region, where a generation n fission neutron is born, back to the fission region, where generation n+1 fission neutrons are born). In the non-analog transport process, Monte Carlo particles (a) are born in the source region with weight equal to the desired characteristic parameter, (b) move through the system by an altered transport
19. Finite population-size effects in projection Monte Carlo methods
International Nuclear Information System (INIS)
Projection (Green's function and diffusion) Monte Carlo techniques sample a wave function by a stochastic iterative procedure. It is shown that these methods converge to a stationary distribution which is unexpectedly biased, i.e., differs from the exact ground state wave function, and that this bias occurs because of the introduction of a replication procedure. It is demonstrated that these biased Monte Carlo algorithms lead to a modified effective mass which is equal to the desired mass only in the limit of an infinite population of walkers. In general, the bias scales as 1/N for a population of walkers of size N. Various strategies to reduce this bias are considered. (authors). 29 refs., 3 figs
20. A Hamiltonian Monte Carlo method for Bayesian inference of supermassive black hole binaries
International Nuclear Information System (INIS)
We investigate the use of a Hamiltonian Monte Carlo to map out the posterior density function for supermassive black hole binaries. While previous Markov Chain Monte Carlo (MCMC) methods, such as Metropolis–Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random walk algorithms. The Hamiltonian Monte Carlo treats the inverse likelihood surface as a ‘gravitational potential’ and, by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. This method is not as widely used as other MCMC algorithms due to the necessity of calculating gradients of the log-likelihood, which for most applications results in a bottleneck that makes the algorithm computationally prohibitive. We circumvent this problem by using accepted initial phase-space trajectory points to analytically fit for each of the individual gradients. Eliminating the waveform generation needed for the numerical derivatives reduces the total number of required templates for a 10⁶ iteration chain from ∼10⁹ to ∼10⁶. The result is an implementation of the Hamiltonian Monte Carlo that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than a Hessian MCMC. (paper)
1. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
International Nuclear Information System (INIS)
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10⁷ x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual
2. Monte Carlo methods in electron transport problems. Pt. 1
International Nuclear Information System (INIS)
The condensed-history Monte Carlo method for charged-particle transport is reviewed and discussed, starting from a general form of the Boltzmann equation (Part I). The physics of the electronic interactions, together with some pedagogic examples, will be introduced in Part II. The lecture is directed at potential users of the method, for whom it can be a useful introduction to the subject matter, and aims to establish the basis of the work on the computer code RECORD, which is at present in a developing stage.
3. Optimal Spatial Subdivision method for improving geometry navigation performance in Monte Carlo particle transport simulation
International Nuclear Information System (INIS)
Highlights: • The subdivision combines the advantages of both uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key factors dominating Monte Carlo particle transport simulation performance for large-scale whole reactor models. In such cases, spatial subdivision is an easily established, high-potential method to improve the run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally or partially occupied, or not occupied at all, by CSG objects. The most important point is that, at each stage of subdivision, a quality factor based on a cost estimation function is derived to evaluate the quality of the subdivision schemes. Only the scheme with the optimal quality factor will be chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factor will be efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by the FDS Team. Testing cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even as cases reached whole reactor core model sizes.
4. Dynamical Monte Carlo methods for plasma-surface reactions
Science.gov (United States)
Guerra, Vasco; Marinov, Daniil
2016-08-01
Different dynamical Monte Carlo algorithms to investigate molecule formation on surfaces are developed, evaluated and compared with the deterministic approach based on reaction-rate equations. These include a null event algorithm, the n-fold way/BKL algorithm and a ‘hybrid’ variant of the latter. NO2 formation by NO oxidation on Pyrex and O recombination on silica with the formation of O2 are taken as case studies. The influence of the grid size on the CPU calculation time and the accuracy of the results is analysed. The role of Langmuir–Hinshelwood recombination involving two physisorbed atoms and the effect of back diffusion and its inclusion in a deterministic formulation are investigated and discussed. It is shown that dynamical Monte Carlo schemes are flexible, simple to implement, describe easily elementary processes that are not straightforward to include in deterministic simulations, can run very efficiently if appropriately chosen and give highly reliable results. Moreover, the present approach provides a relatively simple procedure to describe fully coupled surface and gas phase chemistries.
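The n-fold way/BKL step referred to above is rejection-free: every iteration fires exactly one event, chosen in proportion to its rate, and time advances by an exponentially distributed increment. A minimal sketch for a toy adsorption/desorption balance on a surface (the rates and site count are assumed values; the studies above track much richer surface chemistry):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                    # number of surface sites (assumed)
k_ads, k_des = 1.0, 0.5       # per-site adsorption/desorption rates (assumed)

occupied, t = 0, 0.0
while t < 10.0:
    r_ads = k_ads * (N - occupied)        # total rate of each event class
    r_des = k_des * occupied
    R = r_ads + r_des
    t += -np.log(rng.random()) / R        # time to the next event: Exp(R)
    if rng.random() * R < r_ads:          # pick the class by its share of R
        occupied += 1
    else:
        occupied -= 1

print(f"coverage {occupied / N:.3f}  "
      f"(equilibrium {k_ads / (k_ads + k_des):.3f})")
```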
5. Monte Carlo implementation of a guiding-center Fokker-Planck kinetic equation
International Nuclear Information System (INIS)
A Monte Carlo method for the collisional guiding-center Fokker-Planck kinetic equation is derived in the five-dimensional guiding-center phase space, where the effects of magnetic drifts due to the background magnetic field nonuniformity are included. It is shown that, in the limit of a homogeneous magnetic field, our guiding-center Monte Carlo collision operator reduces to the guiding-center Monte Carlo Coulomb operator previously derived by Xu and Rosenbluth [Phys. Fluids B 3, 627 (1991)]. Applications of the present work will focus on the collisional transport of energetic ions in complex nonuniform magnetized plasmas in the large mean-free-path (collisionless) limit, where magnetic drifts must be retained
6. Condensed history Monte Carlo methods for photon transport problems
International Nuclear Information System (INIS)
We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods - called Condensed History (CH) methods - have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models - one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes - can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions
7. MCNP4, a parallel Monte Carlo implementation on a workstation network
International Nuclear Information System (INIS)
The Monte Carlo code MCNP4 has been implemented on a workstation network to allow parallel computing of Monte Carlo transport processes. This has been achieved by making use of the communication tool PVM (Parallel Virtual Machine) and introducing some changes in the MCNP4 code. The PVM daemons and user libraries have been installed on different workstations to allow working on the same platform. Essential features of PVM and the structure of the parallelized MCNP4 version are discussed in this paper. Experiences are described and problems are explained and solved with the extended version of MCNP. The efficiency of the parallelized MCNP4 is assessed for two realistic sample problems from the field of fusion neutronics. Compared with the fastest workstation in the network, a speed-up factor near five has been obtained by using a network of ten workstations, different in architecture and performance. (orig.)
8. Iridium 192 dosimetric study by Monte-Carlo method
International Nuclear Information System (INIS)
The Monte-Carlo method was applied to the dosimetry of iridium-192 in water and in air; an iridium-platinum alloy seed, enveloped in a platinum can, is used as the source. The radioactive decay of this nuclide and the transport of the emitted particles from the seed-source in the can and in the irradiated medium are simulated successively. The photon energy spectra outside the source, as well as dose distributions, are given. The Φ(d) function is calculated and our results are compared with various experimental values.
9. Research on Monte Carlo simulation method of industry CT system
International Nuclear Information System (INIS)
There is a series of radiation physics problems in the design and production of industrial CT systems (ICTS), including limiting quality index analysis and the effects of scattering, detector efficiency, and crosstalk on the system. Usually the Monte Carlo (MC) method is applied to resolve these problems. Most of them involve events of very low probability, so direct simulation is very difficult, and existing MC methods and programs cannot meet the needs. To resolve these difficulties, particle flux point auto-importance sampling (PFPAIS) is given on the basis of auto-importance sampling. Then, on the basis of PFPAIS, a particular ICTS simulation method, MCCT, is realized. Compared with existing MC methods, MCCT is proved to be able to simulate the ICTS more exactly and effectively. Furthermore, the effects of all kinds of disturbances of ICTS are simulated and analyzed by MCCT. To some extent, MCCT can guide the research of the radiation physics problems in ICTS. (author)
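Biased sampling of low-probability events, of which auto-importance schemes like PFPAIS are specialized forms, can be illustrated generically: sample from a distribution that makes the rare event common, then correct each sample with the likelihood ratio. The Gaussian tail example below illustrates the principle only; it is not the PFPAIS algorithm.

```python
import numpy as np

# Estimate p = P(Z > 5) for Z ~ N(0, 1). Exact value ~ 2.87e-7, far too rare
# for naive sampling at this sample size.
rng = np.random.default_rng(0)
a, n = 5.0, 100_000

z = rng.standard_normal(n)
naive = np.mean(z > a)                    # almost certainly 0 here

y = rng.standard_normal(n) + a            # sample from N(a, 1) instead
w = np.exp(-a * y + 0.5 * a * a)          # likelihood ratio phi(y) / phi(y - a)
biased = np.mean((y > a) * w)

print(f"naive: {naive:.2e}   importance-sampled: {biased:.2e}")
```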
10. The macro response Monte Carlo method for electron transport
CERN Document Server
Svatos, M M
1999-01-01
This thesis demonstrates the feasibility of basing dose calculations for electrons in radiotherapy on first-principles single scatter physics, in a calculation time that is comparable to or better than current electron Monte Carlo methods. The macro response Monte Carlo (MRMC) method achieves run times that have potential to be much faster than conventional electron transport methods such as condensed history. The problem is broken down into two separate transport calculations. The first stage is a local, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position, and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25-8 MeV) and sizes (0.025 to 0.1 cm in radius). The second transport stage is a global calculation, in which steps that conform to the size of the kugels in the...
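A minimal sketch of the Local-to-Global stepping loop follows, with a hypothetical two-entry kugel library standing in for the real PDF data sets; the tabulated values are invented for the example.

```python
import numpy as np

# Sketch of the MRMC idea: a library of precomputed exit distributions
# ("kugel" PDFs) is sampled instead of simulating every single scatter.
# The library contents below are placeholders, not data from the thesis.

rng = np.random.default_rng(1)

# hypothetical library: per incident energy, a PDF over retained-energy fractions
kugel_library = {
    1.0: {"frac": np.array([0.80, 0.90, 0.95]), "pdf": np.array([0.2, 0.5, 0.3])},
    2.0: {"frac": np.array([0.85, 0.92, 0.97]), "pdf": np.array([0.3, 0.5, 0.2])},
}

def global_step(energy):
    """Advance one kugel-sized step by sampling the exit-energy PDF."""
    key = min(kugel_library, key=lambda e: abs(e - energy))  # nearest library entry
    entry = kugel_library[key]
    return energy * rng.choice(entry["frac"], p=entry["pdf"])

e = 2.0
for step in range(3):
    e = global_step(e)
    print(f"after kugel {step + 1}: E = {e:.2f} MeV")
```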
11. 'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods
International Nuclear Information System (INIS)
The techniques for data processing, combined with the development of fast and more powerful computers, make the Monte Carlo methods one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. Two computational exposure models were used, RXD/EGS4 and CDO/EGS4. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device under study (CDO). For the irradiations, the X-ray spectra were generated with the IPEM Report 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 industrial X-ray unit. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The results show that the CDO will be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostic'. We conclude that EGS4 is a suitable Monte Carlo code to simulate thermoluminescent dosimeters and the experimental procedures employed in the routine of a quality control laboratory in diagnostic radiology. (author)
12. Application of Monte Carlo methods in tomotherapy and radiation biophysics
Science.gov (United States)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for Tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC calculated percentage depth dose and beam profiles computed using the source model agree within 2% with measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (within 1%) with published results.
13. A study of potential energy curves from the model space quantum Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: [email protected] [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)
2015-12-07
We report on the first application of the model space quantum Monte Carlo (MSQMC) to potential energy curves (PECs) for the excited states of C{sub 2}, N{sub 2}, and O{sub 2} to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs in a wide range obviating problems concerning quasi-degeneracy.
14. Time-step limits for a Monte Carlo Compton-scattering method
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffery D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Lowrie, Robert B [Los Alamos National Laboratory
2008-01-01
Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. This approach is similar to our recent investigation of time discretizations for the
15. Multilevel Monte Carlo methods for computing failure probability of porous media flow systems
Science.gov (United States)
Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A.
2016-08-01
We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario where we consider sweep efficiency of the injected phase as the quantity of interest and seek the probability that this quantity of interest is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve to the highest accuracy by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and multilevel Monte Carlo method. We also identify issues in the process of practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, both in combination with standard and multilevel Monte Carlo.
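As a rough illustration of the multilevel idea, here is a minimal Python sketch of an MLMC estimator for a failure probability. The toy model, in which the level-l approximation carries a numerical error that halves per level, is our own stand-in for the porous-media solver, not the paper's method.

```python
import numpy as np

# Minimal MLMC sketch for P = Prob(Q < q_crit). The level-l "solver" is a toy:
# the true quantity of interest plus an error of size O(2^-l). Fine and coarse
# samples at each level are coupled through the same underlying draw.

rng = np.random.default_rng(2)
q_crit = 0.0

def mlmc_estimate(max_level, n_per_level):
    est = 0.0
    for l in range(max_level + 1):
        q_true = rng.normal(0.5, 1.0, n_per_level)          # shared coupling draw
        fine = (q_true + rng.normal(0, 2.0**-l, n_per_level) < q_crit).astype(float)
        if l == 0:
            est += fine.mean()
        else:
            coarse = (q_true + rng.normal(0, 2.0**-(l - 1), n_per_level) < q_crit)
            est += (fine - coarse.astype(float)).mean()     # telescoping correction
    return est

print("MLMC failure probability:", mlmc_estimate(4, 50_000))
print("reference (direct MC):  ", (rng.normal(0.5, 1.0, 500_000) < q_crit).mean())
```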
16. Application of Macro Response Monte Carlo method for electron spectrum simulation
International Nuclear Information System (INIS)
During the past years several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the electron transport computation time for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as a phase space input for other simulation programs. This technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the primary electron final state, as well as the creation of secondary electrons and photons. We have compared the MRMC electron spectra simulated in a homogeneous phantom against Geant4 spectra. The results showed an agreement better than 6% in the spectral peak energies and that the MRMC code is up to 12 times faster than Geant4 simulations.
17. Monte Carlo implementation, validation, and characterization of a 120 leaf MLC
International Nuclear Information System (INIS)
Purpose: Recently, the new high definition multileaf collimator (HD120 MLC) was commercialized by Varian Medical Systems providing high resolution in the center section of the treatment field. The aim of this work is to investigate the characteristics of the HD120 MLC using Monte Carlo (MC) methods. Methods: Based on the information of the manufacturer, the HD120 MLC was implemented into the already existing Swiss MC Plan (SMCP). The implementation has been configured by adjusting the physical density and the air gap between adjacent leaves in order to match transmission profile measurements for 6 and 15 MV beams of a Novalis TX. These measurements have been performed in water using gafchromic films and an ionization chamber at an SSD of 95 cm and a depth of 5 cm. The implementation was validated by comparing diamond measured and calculated penumbra values (80%-20%) for different field sizes and water depths. Additionally, measured and calculated dose distributions for a head and neck IMRT case using the DELTA4 phantom have been compared. The validated HD120 MLC implementation has been used for its physical characterization. For this purpose, phase space (PS) files have been generated below the fully closed multileaf collimator (MLC) of a 40 x 22 cm2 field size for 6 and 15 MV. The PS files have been analyzed in terms of energy spectra, mean energy, fluence, and energy fluence in the direction perpendicular to the MLC leaves and have been compared with the corresponding data using the well established Varian 80 leaf (MLC80) and Millennium M120 (M120 MLC) MLCs. Additionally, the impact of the tongue and groove design of the MLCs on dose has been characterized. Results: Calculated transmission values for the HD120 MLC are 1.25% and 1.34% in the central part of the field for the 6 and 15 MV beam, respectively. The corresponding ionization chamber measurements result in a transmission of 1.20% and 1.35%. Good agreement has been found for the comparison between
18. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.
Science.gov (United States)
Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J
2004-09-01
We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1×10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, when the problem size is increased up to 8×10^8 histories. For a smaller number of histories (1×10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1×10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron Cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy. PMID:15487756
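The efficiency trend reported here is the generic compute/communication trade-off: efficiency drops when per-processor compute time becomes comparable to communication time. A back-of-the-envelope sketch in Python (the timing constants are invented for illustration, not measured DPM values):

```python
# Parallel efficiency model: efficiency = compute / (compute + communication).
# time_per_history and comm_time are assumed constants, not measurements.

def efficiency(n_proc, histories, time_per_history=1e-6, comm_time=2.0):
    compute = histories * time_per_history / n_proc   # ideal parallel compute time
    return compute / (compute + comm_time)

for histories in (1e8, 8e8):
    for n_proc in (8, 16, 24):
        print(f"{histories:.0e} histories, {n_proc:2d} procs: "
              f"efficiency = {efficiency(n_proc, histories):.1%}")
```

Larger problem sizes amortize the fixed communication cost, reproducing the qualitative behavior the abstract reports.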
19. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications
International Nuclear Information System (INIS)
We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1×10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, when the problem size is increased up to 8×10^8 histories. For a smaller number of histories (1×10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1×10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron Cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy
20. A new DNB design method using the system moment method combined with Monte Carlo simulation
International Nuclear Information System (INIS)
A new statistical method of core thermal design for pressurized water reactors is presented. It not only quantifies the DNBR parameter uncertainty by the system moment method, but also combines the DNBR parameter with the correlation uncertainty using a Monte Carlo technique. The randomizing function for the Monte Carlo simulation was expressed in the form of a reciprocal multiplication of the DNBR parameter and correlation uncertainty factors. The results of comparisons with the conventional methods show that the DNBR limit calculated by this method is in good agreement with that of the SCU method, with less computational effort, and it is considered applicable to current DNB design.
1. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization
Directory of Open Access Journals (Sweden)
S. J. Noh
2011-10-01
Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
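For context, the baseline SIR filter against which the lagged regularized filter is compared can be written compactly. The scalar reservoir model and noise levels below are our own toy choices, not the WEP configuration.

```python
import numpy as np

# Minimal sequential importance resampling (SIR) particle filter on a toy
# linear-reservoir state-space model. All model and noise parameters are
# illustrative assumptions.

rng = np.random.default_rng(3)
n_particles, n_steps, k = 1000, 50, 0.1

# synthetic truth and noisy observations
truth, obs = np.zeros(n_steps), np.zeros(n_steps)
x = 10.0
for t in range(n_steps):
    x = (1 - k) * x + rng.normal(0, 0.1)       # storage decay + process noise
    truth[t], obs[t] = x, x + rng.normal(0, 0.5)

particles = rng.normal(10.0, 2.0, n_particles)
for t in range(n_steps):
    particles = (1 - k) * particles + rng.normal(0, 0.1, n_particles)  # propagate
    w = np.exp(-0.5 * ((obs[t] - particles) / 0.5) ** 2)               # weight
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]   # resample

print("final truth:", truth[-1], " filter mean:", particles.mean())
```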
2. Radiation-hydrodynamical simulations of massive star formation using Monte Carlo radiative transfer: I. Algorithms and numerical methods
CERN Document Server
Harries, Tim J
2015-01-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically-thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelisation method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion onto, and the growth of, the protostars. We detail the resu...
3. The macro response Monte Carlo method for electron transport
Energy Technology Data Exchange (ETDEWEB)
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could
4. A CNS calculation line based on a Monte Carlo method
International Nuclear Information System (INIS)
Full text: The design of the moderator cell of a Cold Neutron Source (CNS) involves many different considerations regarding geometry, location, and materials. Decisions taken in this sense affect not only the neutron flux in the source neighborhood, which can be evaluated by a standard empirical method, but also the neutron flux values at experimental positions far away from the neutron source. At long distances from the neutron source, very time consuming 3D deterministic methods or Monte Carlo transport methods are necessary in order to get accurate figures. Standard and typical quantities such as average neutron flux, neutron current, angular flux, and luminosity are very difficult to evaluate at positions located several meters away from the neutron source. The Monte Carlo method is a unique and powerful tool for transporting neutrons. Its use in a bootstrap scheme appears to be an appropriate solution for this type of system. The proper use of MCNP as the main tool leads to a fast and reliable method to perform calculations in a relatively short time with low statistical errors. The design goal is to evaluate the performance of the neutron sources, their beam tubes and neutron guides at specific experimental locations in the reactor hall as well as in the neutron or experimental hall. In this work, the calculation methodology used to design Cold, Thermal and Hot Neutron Sources and their associated Neutron Beam Transport Systems, based on the use of the MCNP code, is presented. This work also presents some changes made to the cross section libraries in order to cope with cryogenic moderators such as liquid hydrogen and liquid deuterium. (author)
5. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign – Part II: Model application to the CENICA, Pedregal and Santa Ana sites
Directory of Open Access Journals (Sweden)
F. M. San Martini
2006-01-01
Full Text Available A Markov Chain Monte Carlo model for integrating the observations of inorganic species with a thermodynamic equilibrium model was presented in Part I of this series. Using observations taken at three ground sites, i.e. a residential, industrial and rural site, during the MCMA-2003 campaign in Mexico City, the model is used to analyze the inorganic particle and ammonia data and to predict gas phase concentrations of nitric and hydrochloric acid. In general, the model is able to accurately predict the observed inorganic particle concentrations at all three sites. The agreement between the predicted and observed gas phase ammonia concentration is excellent. The NOz concentration calculated from the NOy, NO and NO2 observations is of limited use in constraining the gas phase nitric acid concentration given the large uncertainties in this measure of nitric acid and additional reactive nitrogen species. Focusing on the acidic period of 9–11 April identified by Salcedo et al. (2006), the model accurately predicts the particle phase observations during this period with the exception of the nitrate predictions after 10:00 a.m. (Central Daylight Time, CDT) on 9 April, where the model underpredicts the observations by, on average, 20%. This period had a low planetary boundary layer, very high particle concentrations, and higher than expected nitrogen dioxide concentrations. For periods when the particle chloride observations are consistently above the detection limit, the model is able to both accurately predict the particle chloride mass concentrations and provide well-constrained HCl(g) concentrations. The availability of gas-phase ammonia observations helps constrain the predicted HCl(g) concentrations. When the particles are aqueous, the most likely concentrations of HCl(g) are in the sub-ppbv range. The most likely predicted concentration of HCl(g) was found to reach concentrations of order 10 ppbv if the particles are dry. Finally, the
6. Hybrid Deterministic-Monte Carlo Methods for Neutral Particle Transport
International Nuclear Information System (INIS)
In the history of transport analysis methodology for nuclear systems, there have been two fundamentally different methods, i.e., deterministic and Monte Carlo (MC) methods. Even though these two methods have coexisted for the past 60 years and are complementary to each other, they have never been combined in the same computer code. Recently, however, researchers have started to consider combining these two methods in a single code to make use of the strengths of the two algorithms and avoid their weaknesses. Although advanced modern deterministic techniques such as the method of characteristics (MOC) can solve a multigroup transport equation very accurately, there are still uncertainties in the MOC solutions due to the inaccuracy of the multigroup cross section data caused by approximations in the process of multigroup cross section generation, i.e., equivalence theory, interference effects, etc. Conversely, the MC method can handle the resonance shielding effect accurately when sufficiently many neutron histories are used, but it takes a long calculation time. There has also been research into combining a multigroup transport solver and a continuous energy transport solver in a single code system depending on the energy range. This paper proposes a hybrid deterministic-MC method in which a multigroup MOC method is used for the high and low energy ranges and a continuous energy MC method is used for the intermediate resonance energy range for efficient and accurate transport analysis
7. The derivation of Particle Monte Carlo methods for plasma modeling from transport equations
OpenAIRE
Longo, Savino
2008-01-01
We analyze here, in some detail, the derivation of the Particle and Monte Carlo methods of plasma simulation, such as Particle in Cell (PIC), Monte Carlo (MC) and Particle in Cell / Monte Carlo (PIC/MC), from formal manipulation of transport equations.
8. Methods for variance reduction in Monte Carlo simulations
Science.gov (United States)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
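Of the three techniques, interaction forcing is the easiest to show in a few lines. The Python sketch below forces every photon to interact inside an optically thin slab and carries the corresponding weight; the interaction coefficient and slab thickness are assumed values, not from the paper.

```python
import numpy as np

# Interaction forcing in a weakly interacting slab: each photon is forced to
# interact within [0, L] by sampling a truncated exponential, and its weight
# is multiplied by the interaction probability. Parameters are assumed.

rng = np.random.default_rng(4)
mu, L, n = 0.05, 1.0, 100_000           # optically thin: mu * L = 0.05

p_int = 1.0 - np.exp(-mu * L)           # probability of interacting in the slab
depth = -np.log(1.0 - rng.random(n) * p_int) / mu   # forced interaction depth
weight = p_int                          # weight carried by every forced history

# analog scoring wastes ~95% of histories; forcing makes all of them score
analog = (rng.exponential(1.0 / mu, n) < L).astype(float)
print("analog estimate:", analog.mean(), "+/-", analog.std() / np.sqrt(n))
print("forced estimate:", weight, "(depths sampled for all", n, "histories)")
```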
9. Radiative heat transfer by the Monte Carlo method
CERN Document Server
Hartnett †, James P; Cho, Young I; Greene, George A; Taniguchi, Hiroshi; Yang, Wen-Jei; Kudo, Kazuhiko
1995-01-01
This book presents the basic principles and applications of radiative heat transfer used in energy, space, and geo-environmental engineering, and can serve as a reference book for engineers and scientists in research and development. A PC disk containing software for numerical analyses by the Monte Carlo method is included to provide hands-on practice in analyzing actual radiative heat transfer problems. Advances in Heat Transfer is designed to fill the information gap between regularly scheduled journals and university level textbooks by providing in-depth review articles over a broader scope than journals or texts usually allow. Key features: offers solution methods for integro-differential formulation to help avoid difficulties; includes a computer disk for numerical analyses by PC; discusses energy absorption by gas and scattering effects by particles; treats non-gray radiative gases; provides example problems for direct applications in energy, space, and geo-environmental engineering.
10. Modelling a gamma irradiation process using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Soares, Gabriela A.; Pereira, Marcio T., E-mail: [email protected], E-mail: [email protected] [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2011-07-01
In gamma irradiation services, the evaluation of absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources are not available for performing dosimetry on each product irradiated, the application of mathematical models may be a solution. Through this, the prediction of the delivered dose in a specific product, irradiated in a specific position and during a certain period of time, becomes possible, if validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to perform simulations of product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results obtained showed the applicability of this method, with a linear relation between simulation and experimental results. (author)
11. The discrete angle technique combined with the subgroup Monte Carlo method
International Nuclear Information System (INIS)
We are investigating the use of the discrete angle technique for taking anisotropic scattering into account in a subgroup (or multiband) Monte Carlo algorithm implemented in the DRAGON lattice code. In order to use the same input library data already available for deterministic methods, only Legendre moments of the isotopic transfer cross sections are available, typically computed by the GROUPR module of NJOY. However, the direct use of these data is impractical in a Monte Carlo algorithm, due to the occurrence of negative parts in these distributions. To deal with this limitation, Legendre expansions are consistently converted by a moment method into sums of Dirac-delta distributions. These probability tables can then be directly used to sample the scattering cosine. In this proposed approach, the same moment approach is used to compute probability tables for the scattering angle and for the resonant cross sections. The applicability of the moment approach must however be thoroughly investigated, due to the presence of incoherent Legendre moments. When Dirac angles cannot be computed, the discrete angle technique is substituted by legacy semi-analytic methods. We provide numerical examples to illustrate the methodology by comparison with SN and legacy Monte Carlo codes on several benchmarks from the ICSBEP. (author)
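The conversion from moments to Dirac-delta probability tables is essentially a Gauss quadrature construction. Below is a minimal Python sketch for a two-point table built from raw moments of a toy phase function p(mu) = (1 + mu)/2; the example data are ours, not actual NJOY output.

```python
import numpy as np

# Build a two-point sum of Dirac deltas matching the raw moments m0..m3 of a
# scattering-cosine distribution (a 2-node Gauss quadrature of the angular
# distribution). The toy moments belong to p(mu) = (1 + mu)/2 on [-1, 1].

def two_point_table(m):
    """m = [m0, m1, m2, m3]: raw moments. Returns (cosines, probabilities)."""
    # monic degree-2 orthogonal polynomial x^2 + c1*x + c0 from the moments
    A = np.array([[m[0], m[1]], [m[1], m[2]]])
    c0, c1 = np.linalg.solve(A, [-m[2], -m[3]])
    mu = np.roots([1.0, c1, c0])                       # discrete scattering cosines
    w = np.linalg.solve(np.vander(mu, 2, increasing=True).T, m[:2])
    return mu, w

moments = [1.0, 1/3, 1/3, 1/5]        # raw moments of p(mu) = (1 + mu)/2
mu, w = two_point_table(moments)
print("discrete cosines:", mu, " probabilities:", w)   # both weights positive
```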
12. Monte Carlo Methods for Rough Free Energy Landscapes: Population Annealing and Parallel Tempering
OpenAIRE
Machta, Jon; Ellis, Richard S.
2011-01-01
Parallel tempering and population annealing are both effective methods for simulating equilibrium systems with rough free energy landscapes. Parallel tempering, also known as replica exchange Monte Carlo, is a Markov chain Monte Carlo method while population annealing is a sequential Monte Carlo method. Both methods overcome the exponential slowing associated with high free energy barriers. The convergence properties and efficiency of the two methods are compared. For large systems, populatio...
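A bare-bones parallel tempering loop illustrates the replica-exchange mechanism on a double-well landscape; the potential, temperatures and step size are illustrative choices of ours.

```python
import numpy as np

# Minimal parallel tempering (replica exchange) on a double well with a high
# barrier at x = 0. The coldest replica alone would essentially never cross;
# exchanges with hotter replicas let it visit both wells.

rng = np.random.default_rng(5)

def energy(x):
    return (x**2 - 1.0)**2 * 8.0          # wells at x = +/-1, barrier height 8

betas = np.array([4.0, 2.0, 1.0, 0.5])    # inverse temperatures, cold to hot
x = np.full(len(betas), -1.0)             # all replicas start in the left well

for sweep in range(20_000):
    # Metropolis move within each replica
    prop = x + rng.normal(0, 0.3, len(x))
    dE = energy(prop) - energy(x)
    accept = rng.random(len(x)) < np.exp(np.minimum(0.0, -betas * dE))
    x = np.where(accept, prop, x)
    # attempt one neighbour swap per sweep
    i = rng.integers(len(betas) - 1)
    d = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
    if rng.random() < np.exp(min(0.0, d)):
        x[i], x[i + 1] = x[i + 1], x[i]

print("coldest replica position:", x[0])  # can now be found in either well
```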
13. Reactor physics analysis method based on Monte Carlo homogenization
International Nuclear Information System (INIS)
Background: Many new concepts of nuclear energy systems with complicated geometric structures and diverse energy spectra have been put forward to meet the future demand of the nuclear energy market. The traditional deterministic neutronics analysis method has been challenged in two aspects: one is the ability of generic geometry processing; the other is the multi-spectrum applicability of the multi-group cross section libraries. The Monte Carlo (MC) method predominates in its suitability for arbitrary geometry and spectrum, but faces the problems of long computation time and slow convergence. Purpose: This work aims to find a novel scheme that takes the advantages of both the deterministic core analysis method and the MC method. Methods: A new two-step core analysis scheme is proposed to combine the geometry modeling capability and continuous energy cross section libraries of the MC method with the higher computational efficiency of the deterministic method. First of all, MC simulations are performed for each assembly, and the assembly-homogenized multi-group cross sections are tallied at the same time. Then, core diffusion calculations can be done with these multi-group cross sections. Results: The new scheme can achieve high efficiency while maintaining acceptable precision. Conclusion: The new scheme can be used as an effective tool for the design and analysis of innovative nuclear energy systems, which has been verified by numerical tests. (authors)
14. Comprehensive evaluation and clinical implementation of commercially available Monte Carlo dose calculation algorithm.
Science.gov (United States)
Zhang, Aizhen; Wen, Ning; Nurushev, Teamour; Burmeister, Jay; Chetty, Indrin J
2013-01-01
A commercial electron Monte Carlo (eMC) dose calculation algorithm has become available in the Eclipse treatment planning system. The purpose of this work was to evaluate the eMC algorithm and investigate the clinical implementation of this system. The beam modeling of the eMC algorithm was performed for beam energies of 6, 9, 12, 16, and 20 MeV for a Varian Trilogy and all available applicator sizes in the Eclipse treatment planning system. The accuracy of the eMC algorithm was evaluated in a homogeneous water phantom, solid water phantoms containing lung and bone materials, and an anthropomorphic phantom. In addition, dose calculation accuracy was compared between the pencil beam (PB) and eMC algorithms in the same treatment planning system for heterogeneous phantoms. The overall agreement between eMC calculations and measurements was within 3%/2 mm, while the PB algorithm had large errors (up to 25%) in predicting dose distributions in the presence of inhomogeneities such as bone and lung. The clinical implementation of the eMC algorithm was investigated by performing treatment planning for 15 patients with lesions in the head and neck, breast, chest wall, and sternum. The dose distributions were calculated using the PB and eMC algorithms with no smoothing and all three levels of 3D Gaussian smoothing for comparison. Based on a routine electron beam therapy prescription method, the number of eMC calculated monitor units (MUs) was found to increase with increased 3D Gaussian smoothing levels. 3D Gaussian smoothing greatly improved the visual usability of dose distributions and produced better target coverage. Differences of calculated MUs and dose distributions between the eMC and PB algorithms could be significant when oblique beam incidence, surface irregularities, and heterogeneous tissues were present in the treatment plans. In our patient cases, monitor unit differences of up to 7% were observed between the PB and eMC algorithms. Monitor unit calculations were also performed
15. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization
Directory of Open Access Journals (Sweden)
S. J. Noh
2011-04-01
Full Text Available Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is implemented for the sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in the multi-core computing environment via open message passing interface (MPI). We compare performance results of particle filters in terms of model efficiency, predictive QQ plots and particle diversity. The improvement of model efficiency and the preservation of particle diversity are found in the lagged regularized particle filter.
16. XBRL implementation methods in COREP reporting
OpenAIRE
Kettula, Teemu
2015-01-01
Objectives of the Study: The main objective of this study is to find out the XBRL adoption methods for European banks to submit COREP reports to local FSAs and to explore transitions in these methods. Thus, the goal is to find patterns from the transitions in XBRL implementation methods. The study is exploratory, as there is no earlier literature about XBRL implementation methods in COREP reporting or from XBRL implementation method transitions in any field. Additionally, this thesis h...
17. Implementation of mathematical phantom of hand and forearm in GEANT4 Monte Carlo code
International Nuclear Information System (INIS)
This work presents the implementation of a hand and forearm phantom in the Geant4 code for subsequent evaluation of the occupational exposure of extremities to the decay of radionuclides manipulated during procedures involving the use of injection syringes. The simulation model offered by Geant4 includes a full set of features, with the reconstruction of trajectories, geometries and physical models. For this work, the values calculated in the simulation are compared with the rates measured by thermoluminescent dosimeters (TLDs) in the physical phantom REMAB®. From the analysis of the data obtained through simulation and experimentation, of the 14 points studied, there was a discrepancy of only 8.2% in the kerma values found, and these figures are considered compatible. The geometric phantom implemented in the Geant4 Monte Carlo code was validated and can be used later for the evaluation of doses to the extremities
18. Comparison of the TEP method for neutral particle transport in the plasma edge with the Monte Carlo method
International Nuclear Information System (INIS)
The transmission/escape probability (TEP) method for neutral particle transport has recently been introduced and implemented for the calculation of 2-D neutral atom transport in the edge plasma and divertor regions of tokamaks. The results of an evaluation of the accuracy of the approximations made in the calculation of the basic TEP transport parameters are summarized. Comparisons of the TEP and Monte Carlo calculations for model problems using tokamak experimental geometries and for the analysis of measured neutral densities in DIII-D are presented. The TEP calculations are found to agree rather well with Monte Carlo results, for the most part, but the need for a few extensions of the basic TEP transport methodology and for inclusion of molecular effects and a better wall reflection model in the existing code is suggested by the study. (author)
19. Interacting multiagent systems kinetic equations and Monte Carlo methods
CERN Document Server
Pareschi, Lorenzo
2014-01-01
The description of emerging collective phenomena and self-organization in systems composed of large numbers of individuals has gained increasing interest from various research communities in biology, ecology, robotics and control theory, as well as sociology and economics. Applied mathematics is concerned with the construction, analysis and interpretation of mathematical models that can shed light on significant problems of the natural sciences as well as our daily lives. To this set of problems belongs the description of the collective behaviours of complex systems composed by a large enough number of individuals. Examples of such systems are interacting agents in a financial market, potential voters during political elections, or groups of animals with a tendency to flock or herd. Among other possible approaches, this book provides a step-by-step introduction to the mathematical modelling based on a mesoscopic description and the construction of efficient simulation algorithms by Monte Carlo methods. The ar...
20. Quasi Monte Carlo methods for optimization models of the energy industry with pricing and load processes
International Nuclear Information System (INIS)
We discuss progress in quasi Monte Carlo methods for the numerical calculation of integrals and expected values, and justify why these methods are more efficient than classic Monte Carlo methods. Quasi Monte Carlo methods are found to be particularly efficient if the integrands have a low effective dimension. We therefore also discuss the concept of effective dimension and show, using the example of a stochastic optimization model from the energy industry, that such models can possess a low effective dimension. Modern quasi Monte Carlo methods are therefore very promising for such models.
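The efficiency claim is easy to reproduce on a toy integrand of low effective dimension. The sketch below compares plain Monte Carlo with a scrambled Sobol sequence via scipy.stats.qmc; the integrand and its weights are our own construction, chosen so that only the first two coordinates matter much.

```python
import numpy as np
from scipy.stats import qmc

# Integrate f(u) = cos(w . u) over [0,1]^8 with plain MC and scrambled Sobol.
# The weights make the effective dimension ~2. Exact value by factoring the
# complex exponential: cos(sum(w)/2) * prod(sin(w/2)/(w/2)).

w = np.array([1.0, 0.8, 0.05, 0.04, 0.03, 0.02, 0.01, 0.01])

def f(u):
    return np.cos(u @ w)

exact = np.cos(w.sum() / 2) * np.prod(np.sin(w / 2) / (w / 2))

rng = np.random.default_rng(6)
n, d = 2**12, len(w)
mc = f(rng.random((n, d))).mean()
sobol = f(qmc.Sobol(d=d, scramble=True, seed=6).random(n)).mean()

print(f"exact {exact:.6f}  MC error {abs(mc - exact):.2e}  "
      f"QMC error {abs(sobol - exact):.2e}")   # QMC is typically much closer
```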
1. On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Energy Technology Data Exchange (ETDEWEB)
Walsh, Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
2. Evaluation of uncertainty in grating pitch measurement by optical diffraction using Monte Carlo methods
International Nuclear Information System (INIS)
Measurement of grating pitch by optical diffraction is one of the few methods currently available for establishing traceability to the definition of the meter on the nanoscale; therefore, understanding all aspects of the measurement is imperative for accurate dissemination of the SI meter. A method for evaluating the component of measurement uncertainty associated with coherent scattering in the diffractometer instrument is presented. The model equation for grating pitch calibration by optical diffraction is an example where Monte Carlo (MC) methods can vastly simplify evaluation of measurement uncertainty. This paper includes discussion of the practical aspects of implementing MC methods for evaluation of measurement uncertainty in grating pitch calibration by diffraction. Downloadable open-source software is demonstrated. (technical design note)
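A GUM Supplement 1 style MC evaluation of this kind fits in a few lines. The sketch below assumes a generic plane-grating diffraction equation d = m·lambda/(sin theta_i − sin theta_m) and invented input estimates and uncertainties, so it shows the mechanics of the propagation only, not the paper's instrument model.

```python
import numpy as np

# Monte Carlo propagation of input uncertainties through a generic grating
# equation. All estimates and standard uncertainties below are invented for
# illustration; they are not the paper's values.

rng = np.random.default_rng(7)
n = 1_000_000

m = 1                                           # diffraction order
lam = rng.normal(632.8e-9, 1e-13, n)            # laser wavelength (m), assumed
t_i = rng.normal(np.radians(10.0), 2e-6, n)     # incidence angle (rad), assumed
t_m = rng.normal(np.radians(-35.0), 2e-6, n)    # diffraction angle (rad), assumed

d = m * lam / (np.sin(t_i) - np.sin(t_m))       # propagate every draw

print(f"pitch = {d.mean()*1e9:.4f} nm, "
      f"standard uncertainty = {d.std()*1e9:.4f} nm")
```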
3. Earthquake Forecasting Based on Data Assimilation: Sequential Monte Carlo Methods for Renewal Processes
CERN Document Server
Werner, M J; Sornette, D
2009-01-01
In meteorology, engineering and computer sciences, data assimilation is routinely employed as the optimal way to combine noisy observations with prior model information for obtaining better estimates of a state, and thus better forecasts, than can be achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant to the seismic gap hypothesis, models of characteristic earthquakes and to recurrence statistics of large quakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating ar...
4. First Numerical Implementation of the Loop-Tree Duality Method
CERN Document Server
Buchta, Sebastian
2015-01-01
The Loop-Tree Duality (LTD) is a novel perturbative method in QFT that establishes a relation between loop-level and tree-level amplitudes, which gives rise to the idea of treating them simultaneously in a common Monte Carlo. Initially introduced for one-loop scalar integrals, the applicability of the LTD has been expanded to higher order loops and Feynman graphs beyond simple poles. For the first time, a numerical implementation relying on the LTD was realized in the form of a computer program that calculates one-loop scattering amplitudes. We present details on the employed contour deformation as well as results for scalar and tensor integrals.
5. Synchronous parallel Kinetic Monte Carlo: Implementation and results for object and lattice approaches
International Nuclear Information System (INIS)
An adaptation of the synchronous parallel Kinetic Monte Carlo (spKMC) algorithm developed by Martinez et al. (2008) to the existing KMC code MMonCa (Martin-Bragado et al. 2013) is presented in this work. Two cases, general enough to provide an idea of the current state-of-the-art in parallel KMC, are presented: Object KMC simulations of the evolution of damage in irradiated iron, and Lattice KMC simulations of epitaxial regrowth of amorphized silicon. The results allow us to state that (a) the parallel overhead is critical, and severely degrades the performance of the simulator when it is comparable to the CPU time consumed per event, (b) the balance between domains is important, but not critical, (c) the algorithm and its implementation are correct and (d) further improvements are needed for spKMC to become a general, all-working solution for KMC simulations
6. Synchronous parallel Kinetic Monte Carlo: Implementation and results for object and lattice approaches
Energy Technology Data Exchange (ETDEWEB)
Martin-Bragado, Ignacio, E-mail: [email protected] [IMDEA Materials Institute, C/ Eric Kandel 2, 28906 Getafe, Madrid (Spain); Abujas, J.; Galindo, P.L.; Pizarro, J. [Departamento de Ingeniería Informática, Universidad de Cádiz, Puerto Real, Cádiz (Spain)
2015-06-01
An adaptation of the synchronous parallel Kinetic Monte Carlo (spKMC) algorithm developed by Martinez et al. (2008) to the existing KMC code MMonCa (Martin-Bragado et al. 2013) is presented in this work. Two cases, general enough to provide an idea of the current state-of-the-art in parallel KMC, are presented: Object KMC simulations of the evolution of damage in irradiated iron, and Lattice KMC simulations of epitaxial regrowth of amorphized silicon. The results allow us to state that (a) the parallel overhead is critical, and severely degrades the performance of the simulator when it is comparable to the CPU time consumed per event, (b) the balance between domains is important, but not critical, (c) the algorithm and its implementation are correct and (d) further improvements are needed for spKMC to become a general, all-working solution for KMC simulations.
7. A Comparison of Advanced Monte Carlo Methods for Open Systems: CFCMC vs CBMC
NARCIS (Netherlands)
A. Torres-Knoop; S.P. Balaji; T.J.H. Vlugt; D. Dubbeldam
2014-01-01
Two state-of-the-art simulation methods for computing adsorption properties in porous materials like zeolites and metal-organic frameworks are compared: the configurational bias Monte Carlo (CBMC) method and the recently proposed continuous fractional component Monte Carlo (CFCMC) method. We show th
8. Formulation and Application of Quantum Monte Carlo Method to Fractional Quantum Hall Systems
OpenAIRE
Suzuki, Sei; Nakajima, Tatsuya
2003-01-01
Quantum Monte Carlo method is applied to fractional quantum Hall systems. The use of the linear programming method enables us to avoid the negative-sign problem in the Quantum Monte Carlo calculations. The formulation of this method and the technique for avoiding the sign problem are described. Some numerical results on static physical quantities are also reported.
9. Radiation transport in random disperse media implemented in the Monte Carlo code PRIZMA
International Nuclear Information System (INIS)
The paper describes PRIZMA capabilities for modeling radiation transport in random disperse media by the Monte Carlo method. It proposes a method for simulating radiation transport in binary media with variable volume fractions. The method models the medium consequently from one grain crossed by a particle trajectory to another. Like in the Limited Chord Length Sampling (LCLS) method, particles in grains are tracked in the actual grain geometry, but unlike LCLS, the medium is modeled using only Matrix Chord Length Sampling (MCLS) from the exponential distribution and it is not necessary to know the grain chord length distribution. This helped us extend the method to media with randomly oriented, arbitrarily shaped convex grains. Other extensions include multicomponent media - grains of several sorts, and polydisperse media - grains of different sizes
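The core of the chord-length approach is a pair of sampling rules: an exponential flight through the matrix to the next grain, and a geometric chord through the grain once entered. A minimal sketch with assumed parameters (this is the generic sampling idea, not the PRIZMA implementation):

```python
import numpy as np

# Matrix chord length sampling sketch: the flight distance to the next grain
# is exponential with an assumed mean matrix chord; on entering a spherical
# grain, an area-uniform impact parameter fixes the chord through it.

rng = np.random.default_rng(8)

mean_matrix_chord = 0.5     # mean distance between grains along a ray, cm (assumed)
grain_radius = 0.05         # grain radius, cm (assumed)

for _ in range(5):
    flight = rng.exponential(mean_matrix_chord)    # matrix chord to next grain
    b = grain_radius * np.sqrt(rng.random())       # area-uniform impact parameter
    grain_chord = 2.0 * np.sqrt(grain_radius**2 - b**2)
    print(f"matrix flight {flight:.3f} cm, chord through grain {grain_chord:.4f} cm")
```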
10. Seriation in paleontological data using markov chain Monte Carlo methods.
Directory of Open Access Journals (Sweden)
Kai Puolamäki
2006-02-01
Full Text Available Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for the sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95.
11. Limit theorems for weighted samples with applications to sequential Monte Carlo methods
OpenAIRE
Douc, R.; Moulines, E.
2008-01-01
In the last decade, sequential Monte Carlo methods (SMC) emerged as a key tool in computational statistics [see, e.g., Sequential Monte Carlo Methods in Practice (2001) Springer, New York, Monte Carlo Strategies in Scientific Computing (2001) Springer, New York, Complex Stochastic Systems (2001) 109–173]. These algorithms approximate a sequence of distributions by a sequence of weighted empirical measures associated to a weighted population of particles, which are generated recursively.
12. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for peta scale platforms and beyond
International Nuclear Information System (INIS)
Various strategies to efficiently implement quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)
13. Continuous-energy Monte Carlo methods for calculating generalized response sensitivities using TSUNAMI-3D
International Nuclear Information System (INIS)
This work introduces a new approach for calculating the sensitivity of generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The GEneralized Adjoint Responses in Monte Carlo (GEAR-MC) method has enabled the calculation of high resolution sensitivity coefficients for multiple, generalized neutronic responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here and proof of principle is demonstrated by calculating sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications. (author)
14. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models
Science.gov (United States)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
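The effect the authors describe is easy to probe by comparing a first-order explicit fixed-step integrator against an accurate adaptive solver on a toy water-balance equation. The flux law below is hypothetical, not the conceptual model used in the case study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single nonlinear reservoir dS/dt = P - k*S**1.5 (an assumed flux law, for illustration only).
P, k, S0, T = 2.0, 0.3, 1.0, 20.0
f = lambda t, S: P - k * S**1.5

def euler_fixed(dt):
    """First-order, explicit, fixed-step integration, as in traditional implementations."""
    S, t = S0, 0.0
    while t < T:
        S, t = S + dt * f(t, S), t + dt
    return S

# Adaptive-step reference solution with tight tolerances (RK45 by default).
accurate = solve_ivp(f, (0, T), [S0], rtol=1e-10, atol=1e-12).y[0, -1]
for dt in (2.0, 0.5, 0.1):
    print(f"dt={dt:4}: Euler error = {abs(euler_fixed(dt) - accurate):.2e}")
```

The fixed-step error decays only linearly in `dt`, which is the kind of numerical noise that can distort posterior surfaces explored by MCMC.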
15. Direct simulation Monte Carlo calculation of rarefied gas drag using an immersed boundary method
Science.gov (United States)
Jin, W.; Kleijn, C. R.; van Ommen, J. R.
2016-06-01
For simulating rarefied gas flows around a moving body, an immersed boundary method is presented here in conjunction with the Direct Simulation Monte Carlo (DSMC) method in order to allow the movement of a three-dimensional immersed body on top of a fixed background grid. The simulated DSMC particles are reflected exactly at the landing points on the surface of the moving immersed body, while the effective cell volumes are taken into account when calculating the collisions between molecules. The effective cell volumes are computed by utilizing the Lagrangian intersection points between the immersed boundary and the fixed background grid with a simple polyhedra regeneration algorithm. This method has been implemented in OpenFOAM and validated by computing the drag forces exerted on steady and moving spheres and comparing the results to those from conventional body-fitted mesh DSMC simulations and to analytical approximations.
16. A Monte Carlo simulation based inverse propagation method for stochastic model updating
Science.gov (United States)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters were selected by implementing F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) was then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational effort and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization methods, thus achieving a better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
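The forward half of this workflow (pushing random parameter draws through a cheap response surface instead of the full model) can be sketched as follows; the quadratic surface and the input distributions are invented, and the inverse optimization step is omitted:

```python
import numpy as np

# Hypothetical fitted response surface for one output feature (e.g. a natural
# frequency) as a polynomial in two model parameters; a real RSM would be fitted
# to designed-experiment runs of the finite-element model.
def response_surface(k, m):
    return 1.2 + 0.8 * k - 0.5 * m + 0.15 * k**2 + 0.05 * k * m

rng = np.random.default_rng(0)
n = 100_000                                   # cheap: no FE solve inside the loop
k = rng.normal(loc=1.0, scale=0.1, size=n)    # assumed parameter mean and spread
m = rng.normal(loc=2.0, scale=0.2, size=n)

y = response_surface(k, m)                    # Monte Carlo propagation through the RSM
print(f"output mean ~ {y.mean():.3f}, std ~ {y.std(ddof=1):.3f}")
```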
17. Diffusion Monte Carlo methods applied to Hamaker Constant evaluations
CERN Document Server
Hongo, Kenta
2016-01-01
We applied diffusion Monte Carlo (DMC) methods to evaluate Hamaker constants of liquids for wettability studies, using a liquid molecule of practical size, Si$_6$H$_{12}$ (cyclohexasilane). The evaluated constant appears justified in the sense that it lies within the expected dependence on molecular weight among similar kinds of molecules, though no reference experimental value is available for this molecule. Comparing the DMC with vdW-DFT evaluations, we clarified that some of the vdW-DFT evaluations could not describe the correct asymptotic decays, and hence Hamaker constants, even though they gave reasonable binding lengths and energies, and vice versa for the rest of the vdW-DFTs. We also found an advantage of DMC over CCSD(T) for this practical purpose because of the large BSSE/CBS corrections required for the latter under the limitation of the basis-set size applicable to a liquid molecule of practical size, while the former is free from such limitations to the extent that only the nodal structure of...
18. Dose calculation of 6 MV Truebeam using Monte Carlo method
International Nuclear Information System (INIS)
The purpose of this work is to simulate the dosimetric characteristics of a 6 MV Varian Truebeam linac using the Monte Carlo method and to investigate the availability of the phase space file and the accuracy of the simulation. With the phase space file at the linac window supplied by Varian as a source, the patient-dependent part was simulated. Dose distributions in a water phantom with a 10 cm × 10 cm field were calculated and compared with measured data for validation. An evident time reduction was obtained: a whole simulation that cost 4-5 h on the same computer was reduced to around 48 minutes. Good agreement between simulations and measurements in water was observed. Dose differences are less than 3% for depth doses in the build-up region and for dose profiles inside the 80% field size, and agreement in the penumbra is also good. This demonstrates that simulation using the existing phase space file as the EGSnrc source is efficient. Dose differences between calculated data and measured data meet the requirements for dose calculation. (authors)
19. Medical Imaging Image Quality Assessment with Monte Carlo Methods
Science.gov (United States)
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated by a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper)parameter values. MTF values were found to increase up to the 12th iteration, whereas they remain almost constant thereafter. MTF improves with lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
20. Gas Swing Options: Introduction and Pricing using Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Václavík Tomáš
2016-02-01
Motivated by the changing nature of the natural gas industry in the European Union, driven by the liberalisation process, we focus on the introduction and pricing of gas swing options. These options are embedded in typical gas sales agreements in the form of offtake flexibility concerning volume and time. The gas swing option is actually a set of several American puts on a spread between the prices of two or more energy commodities. This fact, together with the fact that energy markets are fundamentally different from traditional financial security markets, is important for our choice of valuation technique. Due to the specific features of the energy markets, the existing analytic approximations for spread option pricing are hardly applicable to our framework. That is why we employ Monte Carlo methods to model the spot price dynamics of the underlying commodities. The price of an arbitrarily chosen gas swing option is then computed in accordance with the concept of risk-neutral expectations. Finally, our result is compared with the real payoff from the option realised at the time of the option execution and with the maximum ex-post payoff that the buyer could have generated had he known the future, discounted to the original time of the option pricing.
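As a hedged illustration of the risk-neutral Monte Carlo pricing step, the sketch below prices a single European put on a spread between two correlated lognormal forward prices. A real swing option adds American-style exercise and volume constraints, which this sketch deliberately omits; all parameter values are made up.

```python
import numpy as np

def mc_spread_put(F1, F2, K, sigma1, sigma2, rho, r, T, n_paths=200_000, seed=0):
    """Risk-neutral Monte Carlo price of a European put on the spread S1 - S2,
    payoff max(K - (S1_T - S2_T), 0), with correlated lognormal terminal prices."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)  # correlate draws
    s1 = F1 * np.exp(-0.5 * sigma1**2 * T + sigma1 * np.sqrt(T) * z1)
    s2 = F2 * np.exp(-0.5 * sigma2**2 * T + sigma2 * np.sqrt(T) * z2)
    payoff = np.maximum(K - (s1 - s2), 0.0)
    disc = np.exp(-r * T)                          # discount back to pricing time
    return disc * payoff.mean(), disc * payoff.std(ddof=1) / np.sqrt(n_paths)

price, stderr = mc_spread_put(F1=25.0, F2=22.0, K=3.0, sigma1=0.4, sigma2=0.3,
                              rho=0.7, r=0.02, T=1.0)
print(f"price = {price:.3f} +/- {stderr:.3f}")
```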
1. Quantum Monte Carlo methods and lithium cluster properties
Energy Technology Data Exchange (ETDEWEB)
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self-consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance-sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of the D-QMC time-step bias is made, and the bias is found to be at least linear with respect to the time step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in parentheses. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity to the anisotropic harmonic oscillator model shape for the given number of valence electrons.
3. Development of 3d reactor burnup code based on Monte Carlo method and exponential Euler method
International Nuclear Information System (INIS)
Burnup analysis plays a key role in fuel breeding, transmutation and post-processing in nuclear reactors. Burnup codes based on one-dimensional and two-dimensional transport methods have difficulties in meeting the accuracy requirements. A three-dimensional burnup analysis code based on the Monte Carlo method and the exponential Euler method has been developed. The coupled code combines the advantage of the Monte Carlo method in neutron transport calculations for complex geometry with that of FISPACT in fast and precise inventory calculations; meanwhile, the resonance self-shielding effect in the inventory calculation can also be considered. The IAEA benchmark test problem was adopted for code validation. Good agreement was shown in the comparison with other participants' results. (authors)
4. Applications of Monte Carlo methods in nuclear science and engineering
International Nuclear Information System (INIS)
5. Simple recursive implementation of fast multipole method
International Nuclear Information System (INIS)
In this paper we present an implementation of the well-known 'fast multipole' method (FMM) for the efficient calculation of dipole fields. The main advantage of the present implementation is simplicity: we believe that a major reason for the lack of use of FMMs is their complexity. One of the simplifications is the use of polynomials in the Cartesian coordinates rather than spherical harmonics. We have implemented it in the context of an arbitrary hierarchical system of cells; no periodic mesh is required, as it is for FFT (fast Fourier transform) methods. The implementation is in terms of recursive functions. Results are given for an application to micromagnetic simulation. Complete source code is provided for an open-source implementation of this method, as well as an installer for the resulting program.
6. Theory and applications of the fission matrix method for continuous-energy Monte Carlo
International Nuclear Information System (INIS)
Highlights: • The fission matrix method is implemented into the MCNP Monte Carlo code. • Eigenfunctions and eigenvalues of power distributions are shown and studied. • Source convergence acceleration is demonstrated for a fuel storage vault problem. • Forward flux eigenmodes and relative uncertainties are shown for a reactor problem. • Eigenmodes expansions are performed during source convergence for a reactor problem. - Abstract: The fission matrix method can be used to provide estimates of the fundamental mode fission distribution, the dominance ratio, the eigenvalue spectrum, and higher mode forward and adjoint eigenfunctions of the fission distribution. It can also be used to accelerate the convergence of power method iterations and to provide basis functions for higher-order perturbation theory. The higher-mode fission sources can be used to determine higher-mode forward fluxes and tallies, and work is underway to provide higher-mode adjoint-weighted fluxes and tallies. These aspects of the method are here both theoretically justified and demonstrated, and then used to investigate fundamental properties of the transport equation for a continuous-energy physics treatment. Implementation into the MCNP6 Monte Carlo code is also discussed, including a sparse representation of the fission matrix, which permits much larger and more accurate representations. Properties of the calculated eigenvalue spectrum of a 2D PWR problem are discussed: for a fine enough mesh and a sufficient degree of sampling, the spectrum both converges and has a negligible imaginary component. Calculation of the fundamental mode of the fission matrix for a fuel storage vault problem shows how convergence can be accelerated by over a factor of ten given a flat initial distribution. Forward fluxes and the relative uncertainties for a 2D PWR are shown, both of which qualitatively agree with expectation. Lastly, eigenmode expansions are performed during source convergence of the 2D PWR
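The eigenvalue analysis the abstract describes can be illustrated on a toy fission matrix. The kernel below is hypothetical (a real matrix would be tallied by the Monte Carlo code, e.g. MCNP); the sketch only shows how k-eff, the dominance ratio, and the fundamental mode follow from an eigendecomposition.

```python
import numpy as np

# Hypothetical 1D fission matrix: F[i, j] ~ expected fission neutrons born in
# cell i per fission neutron born in cell j (a toy kernel, not MCNP tallies).
n = 50
x = np.arange(n)
F = 1.02 * np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)
F /= F.sum(axis=0).max()                          # arbitrary normalization

vals, vecs = np.linalg.eig(F)
order = np.argsort(-vals.real)
k_eff = vals.real[order[0]]                       # fundamental eigenvalue
dominance_ratio = vals.real[order[1]] / k_eff     # governs power-iteration convergence
fundamental = np.abs(vecs[:, order[0]].real)
fundamental /= fundamental.sum()                  # fundamental-mode fission distribution
print(f"k_eff ~ {k_eff:.4f}, dominance ratio ~ {dominance_ratio:.4f}")
```

A dominance ratio close to 1 signals slow source convergence, which is precisely where acceleration by the fission matrix pays off.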
7. Monte Carlo methods for direct calculation of 3D dose distributions for photon fields in radiotherapy
International Nuclear Information System (INIS)
Even with state-of-the-art treatment planning systems, the photon dose calculation can be erroneous under certain circumstances. In these cases Monte Carlo methods promise higher accuracy. We have used the photon transport code CHILD of the GSF-Forschungszentrum, which was developed to calculate dose in diagnostic radiation protection matters. The code was refined for application in radiotherapy with high-energy photon irradiation and should serve for dose verification in individual cases. The irradiation phantom can be entered as any desired 3D matrix or be generated automatically from an individual CT database. The particle transport takes into account pair production and the photo and Compton effects with certain approximations. Efficiency is increased by the method of 'fractional photons'. The generated secondary electrons are followed by the unscattered continuous-slowing-down approximation (CSDA). The developed Monte Carlo code Monaco Matrix was tested on simple homogeneous and heterogeneous phantoms through comparisons with simulations of the well-known but slower EGS4 code. The use of a point source with a direction-independent energy spectrum as the simplest model of the radiation field from the accelerator head is shown to be sufficient for simulating actual accelerator depth dose curves. Good agreement (<2%) was found for depth dose curves in water and in bone. With complex test phantoms and comparisons with EGS4-calculated dose profiles, some drawbacks in the code were found. Thus, the implementation of electron multiple-scattering should lead to a step-by-step improvement of the algorithm. (orig.)
8. Simulating Compton scattering using Monte Carlo method: COSMOC library
Czech Academy of Sciences Publication Activity Database
Opava: Silesian University, 2014 (Stuchlík, Z., ed.), pp. 1-10. (Publications of the Institute of Physics, 7). ISBN 9788075101266. ISSN 2336-5668. [RAGtime 14-16, Opava (CZ), 18.09.2012-22.09.2012]. Institutional support: RVO:67985815. Keywords: Monte Carlo; Compton scattering; C++. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
9. Analysis of some splitting and roulette algorithms in shield calculations by the Monte Carlo method
International Nuclear Information System (INIS)
Different schemes for using the splitting and roulette methods in calculations of radiation transport in nuclear facility shields by the Monte Carlo method are considered. The efficiency of the considered schemes is estimated on example test calculations.
10. Review of quantum Monte Carlo methods and results for Coulombic systems
Energy Technology Data Exchange (ETDEWEB)
Ceperley, D.
1983-01-27
The various Monte Carlo methods for calculating ground state energies are briefly reviewed. Then a summary of the charged systems that have been studied with Monte Carlo is given. These include the electron gas, small molecules, a metal slab and many-body hydrogen.
11. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D
Energy Technology Data Exchange (ETDEWEB)
Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
12. Parallel implementation of the Monte Carlo transport code EGS4 on the hypercube
International Nuclear Information System (INIS)
Monte Carlo transport codes are commonly used in the study of particle interactions. The CALOR89 code system is a combination of several Monte Carlo transport and analysis programs. In order to produce good results, a typical Monte Carlo run will have to produce many particle histories. On a single processor computer, the transport calculation can take a huge amount of time. However, if the transport of particles were divided among several processors in a multiprocessor machine, the time can be drastically reduced
13. BREESE-II: auxiliary routines for implementing the albedo option in the MORSE Monte Carlo code
International Nuclear Information System (INIS)
The routines in the BREESE package implement the albedo option in the MORSE Monte Carlo Code by providing (1) replacements for the default routines ALBIN and ALBDO in the MORSE Code, (2) an estimating routine ALBDOE compatible with the SAMBO package in MORSE, and (3) a separate program that writes a tape of albedo data in the proper format for ALBIN. These extensions of the package initially reported in 1974 were performed jointly by ORNL, Bechtel Power Corporation, and Science Applications, Inc. The first version of BREESE had a fixed number of outgoing polar angles and the number of outgoing azimuthal angles was a function of the value of the outgoing polar angle only. An examination of differential albedo data led to this modified version which allows the number of outgoing polar angles to be dependent upon the value of the incoming polar angle and the number of outgoing azimuthal angles to be a function of the value of both incoming and outgoing polar angles
14. The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy
CERN Document Server
Parodi, K; Cerutti, F; Ferrari, A; Mairani, A; Paganetti, H; Sommerer, F
2010-01-01
Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC applications are outlined. The value of MC modeling to promote accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH), Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron-Emission-Tomography / Computed-Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the β+-activity measurement in order to infer indirect infor...
15. Monte Carlo Method for Calculating Oxygen Abundances and Their Uncertainties from Strong-Line Flux Measurements
CERN Document Server
Bianco, Federica B; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or
2015-01-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions. In additi...
16. A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport
International Nuclear Information System (INIS)
Residual Monte Carlo provides exponential convergence of the statistical error with respect to the number of particle histories. In the past, residual Monte Carlo has been applied to a variety of angularly discrete radiation-transport problems. Here, we apply residual Monte Carlo to spatially discrete, angularly continuous transport. By maintaining angular continuity, our method avoids the deficiencies of angular discretizations, such as ray effects. For planar geometry and step differencing, we use the corresponding integral transport equation to calculate an angularly independent residual from the scalar flux in each stage of residual Monte Carlo. We then demonstrate that the resulting residual Monte Carlo method does indeed converge exponentially to within machine precision of the exact step-differenced solution.
17. Monte Carlo method for calculating oxygen abundances and their uncertainties from strong-line flux measurements
Science.gov (United States)
Bianco, F. B.; Modjaz, M.; Oh, S. M.; Fierroz, D.; Liu, Y. Q.; Kewley, L.; Graur, O.
2016-07-01
We present the open-source Python code pyMCZ that determines the oxygen abundance and its distribution from strong emission lines in the standard metallicity calibrators, based on the original IDL code of Kewley and Dopita (2002) with updates from Kewley and Ellison (2008), and expanded to include more recently developed calibrators. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios (referred to as indicators) in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo sampling, better characterizes the statistical oxygen abundance confidence region, including the effect of the propagation of observational uncertainties. These uncertainties are likely to dominate the error budget in the case of distant galaxies, hosts of cosmic explosions. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 15 metallicity calibrators simultaneously, as well as for E(B-V), and estimates their median values and their 68% confidence regions. We provide the option of outputting the full Monte Carlo distributions and their kernel density estimates. We test our code on emission line measurements from a sample of nearby supernova host galaxies; the code is available at https://github.com/nyusngroup/pyMCZ.
18. Genetic algorithms: An evolution from Monte Carlo Methods for strongly non-linear geophysical optimization problems
Science.gov (United States)
Gallagher, Kerry; Sambridge, Malcolm; Drijkoningen, Guy
In providing a method for solving non-linear optimization problems Monte Carlo techniques avoid the need for linearization but, in practice, are often prohibitive because of the large number of models that must be considered. A new class of methods known as Genetic Algorithms have recently been devised in the field of Artificial Intelligence. We outline the basic concept of genetic algorithms and discuss three examples. We show that, in locating an optimal model, the new technique is far superior in performance to Monte Carlo techniques in all cases considered. However, Monte Carlo integration is still regarded as an effective method for the subsequent model appraisal.
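A bare-bones real-coded genetic algorithm conveys the basic concept: a population evolves by selection, crossover, and mutation rather than by independent random sampling. This sketch is generic, not the authors' implementation; operator choices and rates are illustrative.

```python
import numpy as np

def genetic_minimize(f, bounds, pop=50, gens=100, mut=0.1, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    uniform crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    P = rng.uniform(lo, hi, size=(pop, len(bounds)))       # initial random population
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, P)
        # tournament selection: each parent is the better of two random individuals
        a, b = rng.integers(0, pop, (2, pop))
        parents = P[np.where(fit[a] < fit[b], a, b)]
        mask = rng.random(P.shape) < 0.5                   # uniform crossover
        children = np.where(mask, parents, parents[rng.permutation(pop)])
        children += rng.normal(0, mut, P.shape) * (hi - lo)  # Gaussian mutation
        P = np.clip(children, lo, hi)
    fit = np.apply_along_axis(f, 1, P)
    return P[fit.argmin()], fit.min()

# Rastrigin function: a standard strongly non-linear, multi-modal test problem.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
best, val = genetic_minimize(rastrigin, bounds=[(-5.12, 5.12)] * 2)
print(best, val)
```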
19. Gamma ray energy loss spectra simulation in NaI detectors with the Monte Carlo method
International Nuclear Information System (INIS)
With the aim of studying and applying the Monte Carlo method, a computer code was developed to calculate pulse-height spectra and detector efficiencies for gamma rays incident on NaI(Tl) crystals. The basic detection processes in NaI(Tl) detectors are given, together with an outline of Monte Carlo methods and a general review of relevant published work. A detailed description of the application of Monte Carlo methods to γ-ray detection in NaI(Tl) detectors is given. Comparisons are made with published calculated and experimental data. (Author)
20. Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues
International Nuclear Information System (INIS)
The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL
1. SU-E-T-277: Raystation Electron Monte Carlo Commissioning and Clinical Implementation
International Nuclear Information System (INIS)
Purpose: To evaluate the Raystation v4.0 Electron Monte Carlo algorithm for an Elekta Infinity linear accelerator and commission for clinical use. Methods: A total of 199 tests were performed (75 Export and Documentation, 20 PDD, 30 Profiles, 4 Obliquity, 10 Inhomogeneity, 55 MU Accuracy, and 5 Grid and Particle History). Export and documentation tests were performed with respect to MOSAIQ (Elekta AB) and RadCalc (Lifeline Software Inc). Mechanical jaw parameters and cutout magnifications were verified. PDD and profiles for open cones and cutouts were extracted and compared with water tank measurements. Obliquity and inhomogeneity for bone and air calculations were compared to film dosimetry. MU calculations for open cones and cutouts were performed and compared to both RadCalc and simple hand calculations. Grid size and particle histories were evaluated per energy for statistical uncertainty performance. Acceptability was categorized as follows: performs as expected, negligible impact on workflow, marginal impact, critical impact or safety concern, and catastrophic impact of safety concern. Results: Overall results are: 88.8% perform as expected, 10.2% negligible, 2.0% marginal, 0% critical and 0% catastrophic. Results per test category are as follows: Export and Documentation: 100% perform as expected, PDD: 100% perform as expected, Profiles: 66.7% perform as expected, 33.3% negligible, Obliquity: 100% marginal, Inhomogeneity 50% perform as expected, 50% negligible, MU Accuracy: 100% perform as expected, Grid and particle histories: 100% negligible. To achieve distributions with satisfactory smoothness level, 5,000,000 particle histories were used. Calculation time was approximately 1 hour. Conclusion: Raystation electron Monte Carlo is acceptable for clinical use. All of the issues encountered have acceptable workarounds. Known issues were reported to Raysearch and will be resolved in upcoming releases
2. An Implementation of the Frequency Matching Method
DEFF Research Database (Denmark)
Lange, Katrine; Frydendall, Jan; Hansen, Thomas Mejer
... aspects of the implementation of the Frequency Matching method and the techniques adopted to make it computationally feasible also for large-scale inverse problems. The source code is publicly available at GitHub and this paper also provides an example of how to apply the Frequency Matching method to a...
3. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method
International Nuclear Information System (INIS)
This work is based on the determination of the detection efficiency of 125I and 131I in the thyroid with the identiFINDER detector using the Monte Carlo method. The suitability of the calibration method is analyzed by comparing the results of the direct Monte Carlo method with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Finally, simulations of the detector and point source were performed to find the correction factors at 5 cm, 15 cm and 25 cm, as well as those corresponding to the detector-simulator arrangement, for the validation of the method and the final calculation of the efficiency. This demonstrated that if the Monte Carlo simulation is performed at a greater distance than that used in the laboratory measurements, the efficiency is overestimated, while at a shorter distance it is underestimated; the simulation should therefore be performed at the same distance at which the measurement will actually be made. Efficiency curves and the minimum detectable activity for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method represents an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the iodine measurement in the thyroid. (author)
4. Quasi-Monte Carlo methods for lattice systems. A first look
International Nuclear Information System (INIS)
We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillators and verified an improved error scaling.
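The claimed error scaling is easy to probe numerically on a smooth toy integrand: plain Monte Carlo error shrinks like N^(-1/2), while scrambled Sobol points approach N^(-1). A minimal comparison using SciPy's `qmc` module, not the lattice-QMC machinery of the paper:

```python
import numpy as np
from scipy.stats import qmc

# Estimate I = integral of exp(x+y) over [0,1]^2, which equals (e-1)^2 exactly.
exact = (np.e - 1) ** 2
f = lambda p: np.exp(p[:, 0] + p[:, 1])

rng = np.random.default_rng(0)
for m in (8, 12, 16):                               # N = 2^m sample points
    n = 2 ** m
    mc = f(rng.random((n, 2))).mean()               # plain pseudo-random MC
    qp = qmc.Sobol(d=2, scramble=True, seed=0).random(n)
    q = f(qp).mean()                                # scrambled Sobol quasi-MC
    print(f"N=2^{m}: MC error {abs(mc - exact):.1e}, QMC error {abs(q - exact):.1e}")
```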
5. A method of simulating dynamic multileaf collimators using Monte Carlo techniques for intensity-modulated radiation therapy
International Nuclear Information System (INIS)
A method of modelling the dynamic motion of multileaf collimators (MLCs) for intensity-modulated radiation therapy (IMRT) was developed and implemented in Monte Carlo simulation. The simulation of the dynamic MLCs (DMLCs) was based on randomizing leaf positions during a simulation so that the number of particle histories simulated for each possible leaf position was proportional to the monitor units (MUs) delivered at that position. This approach was incorporated into an EGS4 Monte Carlo program and was evaluated in simulating the DMLCs for Varian accelerators (Varian Medical Systems, Palo Alto, CA, USA). The MU index of each segment, which is specified in the DMLC control data, was used to compute the cumulative probability distribution function (CPDF) for the leaf positions. This CPDF was then used to sample the leaf positions during a real-time simulation, which allowed for either step-and-shoot or sweeping-leaf motion in the beam delivery. Dose intensity maps for IMRT fields were computed using the above Monte Carlo method, with the accuracy verified by film measurements. The DMLC simulation improved the operational efficiency by eliminating the need to simulate multiple segments individually. More importantly, the dynamic motion of the leaves could be simulated more faithfully by using the above leaf-position sampling technique in the Monte Carlo simulation. (author)
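The leaf-position sampling idea reduces to inverting a cumulative-MU distribution. A minimal sketch for one leaf, with hypothetical control points (a real DMLC file has many leaf pairs and segments):

```python
import numpy as np

# Hypothetical DMLC control points: cumulative MU index and one leaf's position (cm).
mu_index = np.array([0.0, 0.2, 0.5, 0.8, 1.0])       # fraction of total MU delivered
leaf_pos = np.array([-5.0, -3.0, 0.0, 2.0, 4.0])     # leaf position at each control point

def sample_leaf_positions(n_histories, rng):
    """Sample one leaf position per particle history so that the number of histories
    simulated at each position is proportional to the MU delivered there
    (sweeping-leaf motion: linear interpolation between control points)."""
    u = rng.random(n_histories)                      # uniform draw = cumulative MU fraction
    return np.interp(u, mu_index, leaf_pos)          # invert the CPDF by interpolation

rng = np.random.default_rng(0)
pos = sample_leaf_positions(100_000, rng)
print(np.percentile(pos, [10, 50, 90]))              # histories spread along the sweep
```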
6. Implementation and the choice of evaluation methods
DEFF Research Database (Denmark)
Flyvbjerg, Bent
1984-01-01
The development of evaluation and implementation processes has been closely interrelated in both theory and practice. Today, two major paradigms of evaluation and implementation exist: the programmed paradigm, with its approach based on the natural science model, and the adaptive paradigm, with an approach founded more in phenomenology and social science. The role of analytical methods is viewed very differently in the two paradigms, as is the conception of the policy process in general. Although analytical methods have come to play a prominent (and often dominant) role in transportation evaluation ... the programmed paradigm. By emphasizing the importance of the process of social interaction and subordinating analysis to this process, the adaptive paradigm reduces the likelihood of analytical methods narrowing and biasing implementation. To fulfil this subordinate role and to aid social interaction ...
7. Frequency-domain deviational Monte Carlo method for linear oscillatory gas flows
Science.gov (United States)
2015-10-01
Oscillatory non-continuum low Mach number gas flows are often generated by nanomechanical devices in ambient conditions. These flows can be simulated using a range of particle based Monte Carlo techniques, which in their original form operate exclusively in the time-domain. Recently, a frequency-domain weight-based Monte Carlo method was proposed [D. R. Ladiges and J. E. Sader, "Frequency-domain Monte Carlo method for linear oscillatory gas flows," J. Comput. Phys. 284, 351-366 (2015)] that exhibits superior statistical convergence when simulating oscillatory flows. This previous method used the Bhatnagar-Gross-Krook (BGK) kinetic model and contains a "virtual-time" variable to maintain the inherent time-marching nature of existing Monte Carlo algorithms. Here, we propose an alternative frequency-domain deviational Monte Carlo method that facilitates the use of a wider range of molecular models and more efficient collision/relaxation operators. We demonstrate this method with oscillatory Couette flow and the flow generated by an oscillating sphere, utilizing both the BGK kinetic model and hard sphere particles. We also discuss how oscillatory motion of arbitrary time-dependence can be simulated using computationally efficient parallelization. As in the weight-based method, this deviational frequency-domain Monte Carlo method is shown to offer improved computational speed compared to the equivalent time-domain technique.
8. Growing lattice animals and Monte-Carlo methods
Science.gov (United States)
Reich, G. R.; Leath, P. L.
1980-01-01
We consider the search problems which arise in Monte-Carlo studies involving growing lattice animals. A new periodic hashing scheme (based on a periodic cell) especially suited to these problems is presented which takes advantage both of the connected geometric structure of the animals and the traversal-oriented nature of the search. The scheme is motivated by a physical analogy and tested numerically on compact and on ramified animals. In both cases the performance is found to be more efficient than random hashing, and to a degree depending on the compactness of the animals
9. Study of the quantitative analysis approach of maintenance by the Monte Carlo simulation method
International Nuclear Information System (INIS)
This study examines the quantitative evaluation of maintenance activities of a nuclear power plant by the Monte Carlo simulation method. To this end, the concept of quantitative evaluation of maintenance, whose examination was advanced in the Japan Society of Maintenology and the International Institute of Universality (IUU), was organized. A basic examination for the quantitative evaluation of maintenance was carried out on a simple feed-water system by the Monte Carlo simulation method. (author)
10. Spectral method and its high performance implementation
KAUST Repository
Wu, Zedong
2014-01-01
We have presented a new method that is dispersion-free and unconditionally stable, so the computational cost and memory requirements are greatly reduced. Based on this feature, we have implemented this algorithm in GPU-based CUDA for anisotropic reverse time migration. There is almost no communication between CPU and GPU. For prestack wavefield extrapolation, all the shots can be combined for migration. However, this requires solving a problem of larger dimension and with more memory than fits into one GPU card. In this situation, we implement it based on a domain decomposition method and MPI for distributed memory systems.
11. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
Science.gov (United States)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
12. Implementation of Mobility Management Methods for MANET
Directory of Open Access Journals (Sweden)
Jiri Hosek
2012-12-01
Mobile ad hoc networks represent a very promising way of communication. Mobility management is one of the most often discussed research issues within these networks. Many methods and algorithms have been designed to control and predict the movement of mobile nodes, but each method has a different functional principle and is suitable for different environments and network circumstances. Therefore, it is advantageous to use a simulation tool in order to model and evaluate a mobile network together with the mobility management method. The aim of this paper is to present the implementation of movement control methods in the simulation environment OPNET Modeler based on the TRJ file. The described trajectory control procedure utilizes route information stored in the GPX file, which is used to store GPS coordinates. The developed conversion tool, the implementation of the proposed method in OPNET Modeler, and the final evaluation are presented in this paper.
13. An irreversible Markov-chain Monte Carlo method with skew detailed balance conditions
International Nuclear Information System (INIS)
An irreversible Markov-chain Monte Carlo (MCMC) method based on a skew detailed balance condition is discussed. Some recent theoretical works concerning the irreversible MCMC method are reviewed, and the irreversible Metropolis-Hastings algorithm for the method is described. We apply the method to ferromagnetic Ising models in two and three dimensions. Relaxation dynamics of the order parameter and the dynamical exponent are studied in comparison with those of the conventional reversible MCMC method with the detailed balance condition. We also examine how the efficiency of the exchange Monte Carlo method is affected by the combined use of the irreversible MCMC method.
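A standard toy example of the reversible/irreversible contrast is the lifted random walk on a ring, which carries a direction variable and reverses it only rarely, so it explores the state space ballistically rather than diffusively. The sketch below targets the uniform distribution and only illustrates the idea; it is not the paper's Ising application.

```python
import numpy as np

def tv_from_uniform(n=50, steps=5000, eps=None, lifted=True, seed=0):
    """Compare a reversible random walk on a ring of n states (target: uniform)
    with a lifted irreversible walk satisfying a skew detailed balance condition.
    Returns the total-variation distance of the empirical visit distribution
    from uniform after a fixed number of steps."""
    rng = np.random.default_rng(seed)
    eps = eps if eps is not None else 1.0 / n
    i, d = 0, 1
    visits = np.zeros(n)
    for _ in range(steps):
        if lifted:
            if rng.random() < eps:
                d = -d                          # rare direction flip keeps ergodicity
            else:
                i = (i + d) % n                 # sweep ballistically in current direction
        else:
            i = (i + rng.choice([-1, 1])) % n   # reversible walk: detailed balance
        visits[i] += 1
    p = visits / steps
    return np.abs(p - 1.0 / n).sum() / 2

print("reversible   TV:", round(tv_from_uniform(lifted=False), 3))
print("irreversible TV:", round(tv_from_uniform(lifted=True), 3))
```

The irreversible chain reaches a near-uniform occupancy in O(n) steps, whereas the reversible walk needs O(n^2), mirroring the relaxation speed-up reported for more realistic targets.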
14. Buildup factors for multilayer shieldings in deterministic methods and their comparison with Monte Carlo
International Nuclear Information System (INIS)
In general there are two ways to calculate effective doses. The first is to use deterministic methods such as the point-kernel method, which is implemented in Visiplan or Microshield. These calculations are very fast, but in terms of result precision they are not well suited to complex geometries with shielding composed of more than one material. Such programs are nevertheless sufficient for ALARA optimisation calculations. On the other side there are Monte Carlo methods, which are quite precise in comparison with reality, but the calculation time is usually very long. Deterministic-method programs have one disadvantage: usually the buildup factor (BUF) can be chosen for only one material in multilayer stratified-slab shielding problems, even if the shielding is composed of different materials. Different formulas for multilayer BUF approximation have been proposed in the literature. The aim of this paper was to examine these formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled: a point source behind single- and double-slab shielding. The Geometric Progression method (a feature of the newest version of Visiplan) was chosen for the buildup calculations because it shows lower deviations than Taylor fitting. (authors)
15. Verification of the spectral history correction method with fully coupled Monte Carlo code BGCore
International Nuclear Information System (INIS)
Recently, a new method for accounting for burnup history effects on few-group cross sections was developed and implemented in the reactor dynamics code DYN3D. The method relies on tracking the local Pu-239 density, which serves as an indicator of burnup spectral history. The validity of the method was demonstrated in PWR and VVER applications. However, the spectrum variation in a BWR core is more pronounced due to the stronger coolant density change. Therefore, the purpose of the current work is to further investigate the applicability of the method to BWR analysis. The proposed methodology was verified against the recently developed BGCore system, which couples Monte Carlo neutron transport with depletion and thermal-hydraulic solvers and is thus capable of providing a reference solution for 3D simulations. The results clearly show that neglecting the spectral history effects leads to a very large deviation (e.g. 2000 pcm in reactivity) from the reference solution. However, a very good agreement between DYN3D and BGCore is observed (on the order of 200 pcm in reactivity) when the Pu-correction method is applied. (author)
16. MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks
Directory of Open Access Journals (Sweden)
Zhaoyan Jin
2013-10-01
Hyperlink Induced Topic Search (HITS) is the most authoritative and most widely used personalized ranking algorithm on networks. The HITS algorithm ranks nodes on networks by power iteration, which has high computational complexity. This paper models the HITS algorithm with the Monte Carlo method and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that the Monte Carlo based approximate computing of the HITS ranking greatly reduces the required computing resources while maintaining high accuracy, and is significantly better than related work.
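For intuition, the sketch below first runs classic HITS power iteration on a tiny link graph, then replaces the iteration with random alternating walks and visit counting. With uniform transition probabilities this walk actually yields SALSA-style normalized scores, a close cousin of HITS; it is shown only to illustrate trading iteration for sampling, not the exact MCHITS estimator.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],            # A[i, j] = 1: page i links to page j
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], float)

# Deterministic HITS: authority scores are the principal eigenvector of A^T A,
# computed by power iteration (the costly step a Monte Carlo variant replaces).
a = np.ones(A.shape[0])
for _ in range(100):
    a = A.T @ (A @ a)
    a /= a.sum()
print("power-iteration authorities:", np.round(a, 3))

# Monte Carlo flavour: many short alternating walks (out-link forward, then
# in-link backward) and visit counting on the authority side.
rng = np.random.default_rng(0)
out_links = [np.flatnonzero(A[i]) for i in range(A.shape[0])]
in_links = [np.flatnonzero(A[:, j]) for j in range(A.shape[0])]
visits = np.zeros(A.shape[0])
for _ in range(20_000):
    h = rng.integers(A.shape[0])                  # random starting hub
    for _ in range(3):                            # short alternating walk
        if len(out_links[h]) == 0:
            break
        auth = rng.choice(out_links[h])           # forward along a random out-link
        visits[auth] += 1
        h = rng.choice(in_links[auth])            # backward along a random in-link
print("random-walk authorities:   ", np.round(visits / visits.sum(), 3))
```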
17. Analysis of possibility to apply new mathematical methods (R-function theory) in Monte Carlo simulation of complex geometry
International Nuclear Information System (INIS)
This analysis is part of the report on 'Implementation of the geometry module of the 05R code in another Monte Carlo code', chapter 6.0: establishment of future activity related to geometry in the Monte Carlo method. The introduction points out some problems in solving complex three-dimensional models, which motivate the need for more efficient geometry modules in Monte Carlo calculations. The second part includes the formulation of the problem and of the geometry module. Two fundamental questions to be solved are defined: (1) for a given point, it is necessary to determine the material region or boundary to which it belongs, and (2) for a given direction, all intersection points with material regions should be determined. The third part deals with possible connections with Monte Carlo calculations for computer simulation of geometry objects. R-function theory enables the creation of a geometry module based on the same logic (complex regions are constructed by set operations on elementary regions) as constructive geometry codes. R-functions can efficiently replace functions of three-valued logic in all significant models. They are even more appropriate for application since three-valued logic is not typical for digital computers, which operate in two-valued logic. This shows that there is a need for work in this field. It is shown that there is a possibility to develop an interactive code for computer modeling of geometry objects in parallel with the development of the geometry module.
18. Radiation-hydrodynamical simulations of massive star formation using Monte Carlo radiative transfer - I. Algorithms and numerical methods
Science.gov (United States)
Harries, Tim J.
2015-04-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelization method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion on to, and the growth of, the protostars. We detail the results of extensive testing and benchmarking of the new algorithms.
19. Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method
Institute of Scientific and Technical Information of China (English)
XI Jia-mi; YANG Geng-she
2008-01-01
Discussed advantages of improved Monte-Carlo method and feasibility aboutproposed approach applying in reliability analysis for tunnel surrounding rock stability. Onthe basis of deterministic parsing for tunnel surrounding rock, reliability computing methodof surrounding rock stability was derived from improved Monte-Carlo method. The com-puting method considered random of related parameters, and therefore satisfies relativityamong parameters. The proposed method can reasonably determine reliability of sur-rounding rock stability. Calculation results show that this method is a scientific method indiscriminating and checking surrounding rock stability.
20. Correlation between vacancies and magnetoresistance changes in FM manganites using the Monte Carlo method
International Nuclear Information System (INIS)
The Metropolis algorithm and the classical Heisenberg approximation were implemented by the Monte Carlo method to design a computational approach to the magnetization and resistivity of La2/3Ca1/3MnO3, which depends on the Mn ion vacancies as the external magnetic field increases. This compound is ferromagnetic, and it exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, and it had L=30 umc (units of magnetic cells) for its dimension in the x–y plane and was d=12 umc in thickness. The Hamiltonian that was used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the external applied magnetic field response. The system that was considered contains mixed-valence bonds: Mn3+eg’–O–Mn3+eg, Mn3+eg–O–Mn4+d3 and Mn3+eg’–O–Mn4+d3. The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies, the transitions TC (Curie temperature) and TMI (metal–insulator temperature) are similar, whereas with the increase in the vacancy percentage, TMI presented lower values than TC. This situation is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below TMI. Resistivity loops were also observed, which shows a direct correlation with the hysteresis loops of magnetization at temperatures below TC. - Highlights: • Changes in the resistivity of FM materials as a function of the temperature and external magnetic field can be obtained by the Monte Carlo method, Metropolis algorithm, classical Heisenberg and Kronig–Penney approximation for magnetic clusters. • Increases in the magnetoresistive effect were observed at temperatures below TMI by the vacancies effect. • The resistive hysteresis loop presents two peaks that are directly associated with the coercive field in the magnetic
1. Application of a Monte Carlo method for modeling debris flow run-out
Science.gov (United States)
Luna, B. Quan; Cepeda, J.; Stumpf, A.; van Westen, C. J.; Malet, J. P.; van Asch, T. W. J.
2012-04-01
A probabilistic framework based on a Monte Carlo method for the modeling of debris flow hazards is presented. The framework is based on a dynamic model, which is combined with an explicit representation of the different parameter uncertainties. The probability distribution of these parameters is determined from an extensive database of back-calibrated past events collected from different authors. The uncertainty in these inputs can be simulated and used to increase confidence in certain extreme run-out distances. In the Monte Carlo procedure, the input parameters of the numerical models simulating propagation and stoppage of debris flows are randomly selected. Model runs are performed using the randomly generated input values. This allows estimating the probability density function of the output variables characterizing the destructive power of the debris flow (for instance depth, velocities and impact pressures) at any point along the path. To demonstrate the implementation of this method, a continuum two-dimensional dynamic simulation model that solves the conservation equations of mass and momentum was applied (MassMov2D). This general methodology facilitates the consistent combination of physical models with the available observations. The probabilistic model presented can be considered as a framework able to accommodate any existing one- or two-dimensional dynamic model. The resulting probabilistic spatial model can serve as a basis for hazard mapping and spatial risk assessment. The outlined procedure provides a useful way for experts to produce hazard or risk maps for the typical case where historical records are poorly documented or completely lacking, as well as to derive confidence limits on the proposed zoning.
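The propagation loop itself is simple once a run-out model and input distributions are fixed. Below, a hypothetical angle-of-reach relation stands in for the dynamic model (MassMov2D is far richer), and the prior distributions are invented for illustration:

```python
import numpy as np

def runout_model(volume, reach_angle_deg):
    """Hypothetical stand-in for the dynamic model: the angle-of-reach relation
    L = H / tan(alpha), with fall height H assumed to scale weakly with volume."""
    H = 100.0 * (volume / 1e4) ** 0.2            # assumed height-volume scaling, illustrative
    return H / np.tan(np.radians(reach_angle_deg))

rng = np.random.default_rng(0)
n = 10_000
volume = rng.lognormal(np.log(2e4), 0.5, n)               # priors from back-analysed events
reach = np.clip(rng.normal(12.0, 2.0, n), 3.0, None)      # reach angle in degrees

runout = runout_model(volume, reach)                       # one model run per random draw
print(np.round(np.percentile(runout, [50, 90, 99])))       # run-out distance quantiles (m)
```

The upper quantiles are exactly the "confidence in extreme run-out distances" the abstract refers to; a hazard map would repeat this at each point along the path.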
2. Calculation of gamma-ray families by Monte Carlo method
International Nuclear Information System (INIS)
An extensive Monte Carlo calculation on gamma-ray families was carried out under appropriate model parameters which are currently used in high energy cosmic ray phenomenology. Characteristics of gamma-ray families are systematically investigated by the comparison of calculated results with experimental data obtained at mountain altitudes. The main point of discussion is devoted to examining the validity of Feynman scaling in the fragmentation region of multiple meson production. It is concluded that the experimental data cannot be reproduced under the assumption of the scaling law when primary cosmic rays are dominated by protons. Other possibilities concerning the primary composition and an increase of the interaction cross section are also examined. These assumptions are consistent with experimental data only when we introduce an intense dominance of heavy primaries in the E_0 > 10^15 eV region and a very strong increase of the interaction cross section (say, σ ∝ E_0^0.06) simultaneously
3. New methods for the Monte Carlo simulation of neutron noise experiments in ADS
International Nuclear Information System (INIS)
This paper presents two improvements to speed up the Monte Carlo simulation of neutron noise experiments. The first is to separate the actual Monte Carlo transport calculation from the digital signal processing routines, while the second is to introduce non-analogue techniques to improve the efficiency of the Monte Carlo calculation. For the latter method, adaptations to the theory of neutron noise experiments were made to account for the distortion of the higher moments of the calculated neutron noise. Calculations were performed to test the feasibility of the above outlined scheme and to demonstrate the advantages of the application of the track length estimator. It is shown that the modifications considerably improve the efficiency of these calculations, which turns the Monte Carlo method into a powerful tool for the development and design of on-line reactivity measurement systems for ADS
4. Quantum trajectory Monte Carlo method describing the coherent dynamics of highly charged ions
International Nuclear Information System (INIS)
We present a theoretical framework for studying dynamics of open quantum systems. Our formalism gives a systematic path from Hamiltonians constructed by first principles to a Monte Carlo algorithm. Our Monte Carlo calculation can treat the build-up and time evolution of coherences. We employ a reduced density matrix approach in which the total system is divided into a system of interest and its environment. An equation of motion for the reduced density matrix is written in the Lindblad form using an additional approximation to the Born-Markov approximation. The Lindblad form allows the solution of this multi-state problem in terms of Monte Carlo sampling of quantum trajectories. The Monte Carlo method is advantageous in terms of computer storage compared to direct solutions of the equation of motion. We apply our method to discuss coherence properties of the internal state of a Kr35+ ion subject to spontaneous radiative decay. Simulations exhibit clear signatures of coherent transitions
5. Convex-based void filling method for CAD-based Monte Carlo geometry modeling
International Nuclear Information System (INIS)
Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe convex-based void description and quality-based space subdivision. • The results showed improvements provided by CVF for both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions in CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need the entire problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides the problem space into disjoint regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time.
6. An energy transfer method for 4D Monte Carlo dose calculation.
Science.gov (United States)
Siebers, Jeffrey V; Zhong, Hualiang
2008-09-01
This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy deposited per unit mass in the reference image. ETM has been implemented into DOSXYZnrc and compared with a conventional dose interpolation method (DIM) on deformable phantoms. For voxels whose contents merge in the deforming phantom, the doses calculated by ETM are exactly the same as an analytical solution, in contrast to the DIM, which has an average 1.1% dose discrepancy in the beam direction with a maximum error of 24.9% found in the penumbra of a 6 MV beam. The DIM error persists even if voxel subdivision is used. The ETM is computationally efficient and will be useful for 4D dose addition and for benchmarking alternative 4D dose addition algorithms. PMID:18841862
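To make the distinction concrete, here is a deliberately minimal numerical sketch (not the DOSXYZnrc implementation) of two source voxels whose contents merge into one reference voxel; the energy and mass numbers are invented for illustration.

    import numpy as np

    energy = np.array([1.0, 9.0])    # energy deposited in two source-image voxels
    mass = np.array([1.0, 3.0])      # mass of those voxels
    mapping = np.array([0, 0])       # registration maps both into reference voxel 0

    # ETM-style scoring: transfer the energy through the mapping first,
    # then divide once by the merged reference-voxel mass.
    ref_energy = np.zeros(1)
    np.add.at(ref_energy, mapping, energy)
    dose_etm = ref_energy[0] / mass.sum()        # (1 + 9) / (1 + 3) = 2.5

    # Naive dose interpolation: average the doses of the merging source voxels.
    dose_dim = (energy / mass).mean()            # (1 + 3) / 2 = 2.0

    print(dose_etm, dose_dim)  # ETM conserves energy; plain interpolation does not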
7. The all particle method: Coupled neutron, photon, electron, charged particle Monte Carlo calculations
International Nuclear Information System (INIS)
At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of: neutrons, photons, electrons and light charged particles as well as the coupling between all species of particles, e.g., photon-induced electron emission. Since this code is being designed to handle all particles this approach is called the "All Particle Method". The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models "hard wired" into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to directly control the execution of the program. In addition this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig
8. Consideration of convergence judgment method with source acceleration in Monte Carlo criticality calculation
International Nuclear Information System (INIS)
Theoretical consideration is given to the possibility of accelerating and judging the convergence of a conventional Monte Carlo iterative calculation when it is used for a weak neutron interaction problem, and clues for this consideration are provided by several application analyses using the OECD/NEA source convergence benchmark problems. Some practical procedures are proposed to realize these acceleration and judgment methods in practical applications using a Monte Carlo code. (author)
9. Hybrid Monte-Carlo method for simulating neutron and photon radiography
International Nuclear Information System (INIS)
We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation for calculating incident film radiation and a statistical post-processing routine to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs.
10. Hybrid Monte-Carlo method for simulating neutron and photon radiography
Science.gov (United States)
Wang, Han; Tang, Vincent
2013-11-01
We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation for calculating incident film radiation and a statistical post-processing routine to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs.
11. Combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation
Energy Technology Data Exchange (ETDEWEB)
Saleur, H.; Derrida, B.
1985-07-01
In this paper we develop a method which combines the transfer matrix and Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.
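For readers unfamiliar with the Monte Carlo side of this combination, the sketch below estimates the 2D site-percolation spanning probability on an L×L lattice. It is plain Monte Carlo only, without the transfer-matrix strip construction of the paper, and the lattice size and trial counts are arbitrary choices.

    import numpy as np
    from scipy.ndimage import label

    def spans(p, L, rng):
        grid = rng.random((L, L)) < p          # occupy each site with probability p
        labels, _ = label(grid)                # 4-connected cluster labels
        top = set(labels[0][labels[0] > 0])
        bottom = set(labels[-1][labels[-1] > 0])
        return bool(top & bottom)              # does a cluster touch both edges?

    rng = np.random.default_rng(0)
    L, trials = 64, 200
    for p in (0.55, 0.59, 0.63):               # around the known p_c ≈ 0.5927
        frac = sum(spans(p, L, rng) for _ in range(trials)) / trials
        print(f"p = {p:.2f}: spanning fraction = {frac:.2f}")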
12. Spin-orbit interactions in electronic structure quantum Monte Carlo methods
Science.gov (United States)
Melton, Cody A.; Zhu, Minyi; Guo, Shi; Ambrosetti, Alberto; Pederiva, Francesco; Mitas, Lubos
2016-04-01
We develop a generalization of the fixed-phase diffusion Monte Carlo method for Hamiltonians that depend explicitly on particle spins, such as those with spin-orbit interactions. The method is formulated in a zero-variance manner and is similar to the treatment of nonlocal operators in commonly used static-spin calculations. Tests on atomic and molecular systems show that it is very accurate, on par with the fixed-node method. This opens electronic structure quantum Monte Carlo methods to a vast research area of quantum phenomena in which spin-related interactions play an important role.
13. Automating methods to improve precision in Monte-Carlo event generation for particle colliders
Energy Technology Data Exchange (ETDEWEB)
Gleisberg, Tanju
2008-07-01
The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key to the systematic improvement of precision and confidence for theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Further, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter, the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from those new developments, improving the precision and the efficiency. Part II was addressed to the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove
14. Automating methods to improve precision in Monte-Carlo event generation for particle colliders
International Nuclear Information System (INIS)
The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key to the systematic improvement of precision and confidence for theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Further, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter, the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from those new developments, improving the precision and the efficiency. Part II was addressed to the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove
15. The S_N/Monte Carlo response matrix hybrid method
International Nuclear Information System (INIS)
A hybrid method has been developed to iteratively couple S_N and Monte Carlo regions of the same problem. This technique avoids many of the restrictions and limitations of previous attempts to do the coupling and results in a general and relatively efficient method. We demonstrate the method with some simple examples
16. Acceptance and implementation of a system of planning computerized based on Monte Carlo
International Nuclear Information System (INIS)
The acceptance for clinical use of the Monaco computerized planning system has been carried out. The system is based on a virtual model of the energy yield of the head of the linear electron accelerator and performs the dose calculation with an X-ray algorithm (XVMC) based on Monte Carlo. (Author)
17. Progress on burnup calculation methods coupling Monte Carlo and depletion codes
Energy Technology Data Exchange (ETDEWEB)
Leszczynski, Francisco [Comision Nacional de Energia Atomica, San Carlos de Bariloche, RN (Argentina). Centro Atomico Bariloche]. E-mail: [email protected]
2005-07-01
Several methods of burnup calculation coupling Monte Carlo and depletion codes, investigated and applied by the author in recent years, are described here, and some benchmark results and future possibilities are analyzed as well. The methods are: depletion calculations at cell level with WIMS or other cell codes, using the resulting concentrations of fission products, poisons and actinides in Monte Carlo calculations for fixed burnup distributions obtained from diffusion codes; the same as the first but using a method of coupling Monte Carlo (MCNP) and a depletion code (ORIGEN) at cell level to obtain the concentrations of nuclides to be used in a full reactor calculation with a Monte Carlo code; and full calculation of the system with Monte Carlo and depletion codes in several steps. All these methods were used for different research reactor problems, and some comparisons with experimental results for regular lattices were performed. In this work, a summary of these studies is presented, and the advantages and problems found are discussed. A brief description of the methods adopted and of the MCQ system for coupling the MCNP and ORIGEN codes is also included. (author)
18. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
CERN Document Server
Wollaeger, Ryan T; Graziani, Carlo; Couch, Sean M; Jordan, George C; Lamb, Donald Q; Moses, Gregory A
2013-01-01
We explore the application of Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) to radiation transport in strong fluid outflows with structured opacity. The IMC method of Fleck & Cummings is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking Monte Carlo particles through optically thick materials. The DDMC method of Densmore accelerates an IMC computation where the domain is diffusive. Recently, Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent neutrino transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally grey DDMC method. In this article we rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. The method described is suitable for a large variety of non-mono...
19. Calculation of extended shields in the Monte Carlo method using importance function (BRAND and DD code systems)
International Nuclear Information System (INIS)
Consideration is given to a technique and algorithms for constructing neutron trajectories in the Monte Carlo method that take into account data on the adjoint transport equation solution. When simulating the transport part of the transfer kernel, use is made of a piecewise-linear approximation of the free path length density along the particle motion direction. The approach has been implemented in programs within the framework of the BRAND code system. The importance is calculated in the multigroup P1 approximation within the framework of the DD-30 code system. The efficiency of the developed computation technique is demonstrated by means of the solution of two model problems. 4 refs.; 2 tabs
20. MCVIEW: a radiation view factor computer program for three dimensional geometries using Monte Carlo method
International Nuclear Information System (INIS)
The computer program MCVIEW calculates the radiation view factor between surfaces for three-dimensional geometries. MCVIEW was developed to calculate view factors as input data for heat transfer analysis programs such as TRUMP, HEATING-5 and HEATING-6. The paper briefly illustrates the Monte Carlo calculation method for view factors. The second section presents comparisons between the Monte Carlo method and other methods such as area integration, line integration and crossed strings, with respect to calculation error and computer execution time. The third section provides a user's input guide for MCVIEW. (author)
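The core of such a view-factor Monte Carlo calculation is short enough to sketch: emit diffuse (cosine-weighted) rays from one surface and count the fraction that strike the other. The geometry below — two directly opposed unit squares at unit separation, whose analytic view factor is close to 0.200 — is an illustrative choice, not MCVIEW's own test case.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 200_000
    h = 1.0                                  # separation of the two unit squares

    # Emission points uniform on the lower square (z = 0).
    x0, y0 = rng.random(N), rng.random(N)

    # Cosine-weighted directions for diffuse emission: sin(theta) = sqrt(u).
    phi = 2.0 * np.pi * rng.random(N)
    sin_t = np.sqrt(rng.random(N))
    cos_t = np.sqrt(1.0 - sin_t**2)
    dx, dy = sin_t * np.cos(phi), sin_t * np.sin(phi)

    # Where each ray crosses the plane z = h of the upper square.
    t = h / cos_t
    x1, y1 = x0 + t * dx, y0 + t * dy
    hit = (x1 >= 0) & (x1 <= 1) & (y1 >= 0) & (y1 <= 1)

    print(f"F_12 ≈ {hit.mean():.4f}")        # analytic value ≈ 0.1998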
1. Metric conjoint segmentation methods : A Monte Carlo comparison
NARCIS (Netherlands)
Vriens, M; Wedel, M; Wilms, T
1996-01-01
The authors compare nine metric conjoint segmentation methods. Four methods concern two-stage procedures in which the estimation of conjoint models and the partitioning of the sample are performed separately; in five, the estimation and segmentation stages are integrated. The methods are compared co
2. Methods Used in Criticality Calculations; Monte Carlo Method, Neutron Interaction, Programmes for IBM-7094
International Nuclear Information System (INIS)
Computer development has a bearing on the choice of methods and their possible uses. The authors discuss the possible uses of the diffusion and transport theories and their limitations. Most of the problems encountered in regard to criticality involve fissile materials in simple or multiple assemblies. These entail the use of methods of calculation based on different principles. There are approximate methods of calculation, but very often, for economic reasons or with a view to practical application, a high degree of accuracy is required in determining the reactivity of the assemblies in question, and the methods based on the Monte Carlo principle are then the most valid. When these methods are used, accuracy is linked with the calculation time, so that the usefulness of the codes derives from their speed. With a view to carrying out the work in the best conditions, depending on the geometry and the nature of the materials involved, various codes must be used. Four principal codes are described, as are their variants; some typical possibilities and certain fundamental results are presented. Finally the accuracies of the various methods are compared. (author)
3. The factorization method for Monte Carlo simulations of systems with a complex action
Science.gov (United States)
Ambjørn, J.; Anagnostopoulos, K. N.; Nishimura, J.; Verbaarschot, J. J. M.
2004-03-01
We propose a method for Monte Carlo simulations of systems with a complex action. The method has the advantages of being in principle applicable to any such system and provides a solution to the overlap problem. In some cases, like in the IKKT matrix model, a finite size scaling extrapolation can provide results for systems whose size would make it prohibitive to simulate directly.
4. Remarkable moments in the history of neutron transport Monte Carlo methods
International Nuclear Information System (INIS)
I highlight a few results from the past of the neutron and photon transport Monte Carlo methods which have caused me a great pleasure for their ingenuity and wittiness and which certainly merit to be remembered even when tricky methods are not needed anymore. (orig.)
5. Implementation of 3D Lattice Monte Carlo Simulation on a Cluster of Symmetric Multiprocessors
Institute of Scientific and Technical Information of China (English)
LEI Yongmei; JIANG Ying; et al.
2002-01-01
This paper presents a new approach to parallelizing 3D lattice Monte Carlo algorithms used in the numerical simulation of polymers on ZiQiang 2000, a cluster of symmetric multiprocessors (SMPs). The combined load for cell and energy calculations over the time step is balanced together to form a single spatial decomposition. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are studied. The different steps involved in porting the software to a parallel architecture based on ZiQiang 2000 running under Linux and MPI are described briefly. It is found that parallelization becomes more advantageous when either the lattice is very large or the model contains many cells and chains.
6. A GPU-based Large-scale Monte Carlo Simulation Method for Systems with Long-range Interactions
CERN Document Server
Liang, Yihao; Li, Yaohang
2016-01-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures. It adopts the sequential updating scheme of Metropolis algorithm, and makes no approximation in the computation of energy. It reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We use this method to simulate primitive model electrolytes. We measure very precisely all ion-ion pair correlation functions at high concentrations, and extract renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
7. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
International Nuclear Information System (INIS)
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU—GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU—GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU—GPU parallelization includes dipole—dipole and Mie—Jones classic potentials.
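Stripped of the GPU machinery, the MMC kernel this abstract refers to is the standard Metropolis acceptance loop. A one-particle toy version (harmonic potential, arbitrary constants; not the molecular engine of the paper) looks like this:

    import numpy as np

    rng = np.random.default_rng(7)
    kT, step, n_steps = 1.0, 0.5, 100_000

    def energy(x):
        return 0.5 * x * x                    # harmonic potential, k = 1

    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        # Metropolis rule: accept downhill moves always, uphill with Boltzmann prob.
        if rng.random() < np.exp(-(energy(x_new) - energy(x)) / kT):
            x = x_new
        samples.append(x)

    print(f"<x^2> = {np.var(samples):.3f}  (equipartition predicts kT/k = {kT})")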
8. ANALYSIS OF NEIGHBORHOOD IMPACTS ARISING FROM IMPLEMENTATION OF SUPERMARKETS IN CITY OF SÃO CARLOS
OpenAIRE
Pedro Silveira Gonçalves Neto; José Augusto de Lollo
2010-01-01
The study included supermarkets of different sizes (small, medium and large, defined based on the area occupied by the project and the volume of activity) located in São Carlos (São Paulo state, Brazil) to evaluate the influence of the size of the enterprise on the neighborhood impacts generated by these supermarkets. Factors such as the location of the enterprises, the size of the building, and their areas of influence were considered to contribute to the increased population density and change of use of ...
9. Zone modeling of radiative heat transfer in industrial furnaces using adjusted Monte-Carlo integral method for direct exchange area calculation
International Nuclear Information System (INIS)
This paper proposes the Monte-Carlo Integral method for the direct exchange area calculation in the zone method for the first time. This method is simple and able to handle complex-geometry zone problems and the self-zone radiation problem. The Monte-Carlo Integral method is adjusted to improve its efficiency, so that an acceptable accuracy can be achieved within a reasonable computation time. The zone method with the adjusted Monte-Carlo Integral method is used for the modeling and simulation of radiation transfer in an industrial furnace. The simulation result is compared with industrial data and shows good agreement. It also shows that the high-temperature flue gas heats the furnace wall, which reflects the radiant heat to the reactor tubes. The highest temperature of the flue gas and the side wall appears at roughly one third of the furnace height from the bottom, which corresponds with the industrial measurement data. The simulation result indicates that the zone method is comprehensive and easy to implement for radiative phenomena in the furnace. - Highlights: • The Monte Carlo Integral method for evaluating direct exchange areas. • Adjustment from the MCI method to the AMCI method for efficiency. • Examination of the performance of the MCI and AMCI methods. • Development of the 3D zone model with the AMCI method. • The simulation results show good accordance with the industrial data
10. Improving Power System Risk Evaluation Method Using Monte Carlo Simulation and Gaussian Mixture Method
Directory of Open Access Journals (Sweden)
GHAREHPETIAN, G. B.
2009-06-01
The analysis of the risk of partial and total blackouts plays a crucial role in determining safe limits in power system design, operation and upgrade. Due to the huge cost of blackouts, it is very important to improve risk assessment methods. In this paper, Monte Carlo simulation (MCS) was used to analyze the risk, and the Gaussian Mixture Method (GMM) has been used to estimate the probability density function (PDF) of the load curtailment, in order to improve the power system risk assessment method. In this improved method, the PDF and a suggested index have been used to analyze the risk of loss of load. The effect of considering the number of generation units of power plants in the risk analysis has been studied too. The improved risk assessment method has been applied to the IEEE 118 bus system and the network of Khorasan Regional Electric Company (KREC), and the PDF of the load curtailment has been determined for both systems. The effect of various network loadings, transmission unavailability, transmission capacity and generation unavailability conditions on blackout risk has been investigated too.
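A minimal sketch of the MCS-plus-GMM idea follows, with an invented four-unit system, outage rate and load. It draws unit availabilities, computes load curtailment, and fits a Gaussian mixture to the resulting samples; scikit-learn's GaussianMixture stands in for the paper's GMM step.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    N = 5000
    cap = np.array([200.0, 200.0, 300.0, 300.0])    # unit capacities, MW (assumed)
    q, load = 0.08, 700.0                           # forced outage rate, demand (MW)

    up = (rng.random((N, cap.size)) >= q).astype(float)
    curtailment = np.maximum(load - up @ cap, 0.0)  # MW of load lost per sample

    # PDF estimate of the load curtailment via a Gaussian mixture.
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(curtailment.reshape(-1, 1))
    print("P(any load lost) ≈", np.mean(curtailment > 0))
    print("mixture means (MW):", gmm.means_.ravel())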
11. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
International Nuclear Information System (INIS)
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations
12. External individual monitoring: experiments and simulations using Monte Carlo Method
International Nuclear Information System (INIS)
In this work, we have evaluated the possibility of applying the Monte Carlo simulation technique to photon dosimetry in external individual monitoring. The GEANT4 toolkit was employed to simulate experiments with radiation monitors containing TLD-100 and CaF2:NaCl thermoluminescent detectors. As a first step, X-ray spectra were generated by impinging electrons on a tungsten target. Then, the produced photon beam was filtered through a beryllium window and additional filters to obtain radiation with the desired qualities. This procedure, used to simulate the radiation fields produced by an X-ray tube, was validated by comparing characteristics such as the half value layer, which was also experimentally measured, the mean photon energy and the spectral resolution of the simulated spectra with those of reference spectra established by international standards. In the construction of the thermoluminescent dosimeter, two improvements have been introduced. The first was the inclusion of 6% of air in the composition of the CaF2:NaCl detector, due to the difference between the measured and calculated values of its density. Also, comparison between simulated and experimental results showed that the self-attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account. In the second improvement, therefore, the light attenuation coefficient of the CaF2:NaCl compound, estimated by simulation to be 2.20(25) mm^-1, was introduced. Conversion coefficients Cp from air kerma to personal dose equivalent were calculated using a slab water phantom with polymethyl methacrylate (PMMA) walls, for the reference narrow and wide X-ray spectrum series [ISO 4037-1], and also for the wide spectra implemented and used routinely at the Laboratorio de Dosimetria. Simulations of radiation backscattered by the PMMA slab water phantom and by a slab phantom of ICRU tissue-equivalent material produced very similar results. Therefore, the PMMA slab water phantom that can be easily constructed with low price can
13. Quasi-Monte Carlo methods for lattice systems. A first look
Energy Technology Data Exchange (ETDEWEB)
Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Leovey, H.; Griewank, A. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Nube, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Mueller-Preussker, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik
2013-02-15
We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
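The claimed N^(-1/2) → N^(-1) improvement is easy to observe on a toy integral. The sketch below compares plain Monte Carlo with a scrambled Sobol sequence (scipy.stats.qmc, available in SciPy ≥ 1.7) on the stand-in integrand x², not the lattice systems of the paper.

    import numpy as np
    from scipy.stats import qmc

    f = lambda x: x**2                     # toy integrand, exact integral = 1/3
    exact = 1.0 / 3.0
    rng = np.random.default_rng(5)

    for m in (8, 10, 12, 14):              # N = 2^m samples
        n = 2**m
        err_mc = abs(f(rng.random(n)).mean() - exact)
        sobol = qmc.Sobol(d=1, scramble=True, seed=5).random_base2(m)
        err_qmc = abs(f(sobol[:, 0]).mean() - exact)
        print(f"N = 2^{m:2d}: MC error {err_mc:.2e}, QMC error {err_qmc:.2e}")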
14. Monte Carlo boundary methods for RF-heating of fusion plasma
International Nuclear Information System (INIS)
A fusion plasma can be heated by launching an electromagnetic wave into the plasma with a frequency close to the cyclotron frequency of a minority ion species. This heating process creates a non-Maxwellian distribution function, which is difficult to solve for numerically in toroidal geometry. Solutions have previously been found using the Monte Carlo code FIDO, but the computations are rather time-consuming. Therefore, methods to speed up the computations using Monte Carlo boundary methods have been studied. Ion cyclotron frequency heating mainly perturbs the high-velocity distribution, while the low-velocity distribution remains approximately Maxwellian. A hybrid model is therefore proposed, assuming a Maxwellian at low velocities and calculating the high-velocity distribution with a Monte Carlo method. Three different methods to treat the boundary between the low- and high-velocity regimes are presented. A Monte Carlo code HYBRID has been developed to test the most promising method, the 'Modified differential equation' method, for a one-dimensional problem. The results show good agreement with analytical solutions
15. Implementation of SMED method in wood processing
Directory of Open Access Journals (Sweden)
Vukićević Milan R.
2007-01-01
The solution of problems in production is mainly tackled by management on the basis of the hardware component, i.e. by the introduction of work centres of the latest generation. In this way, continuity of quality, reduced energy consumption, humanization of work, etc. are ensured. However, the interaction between the technical-technological and organizational-economic aspects of production is neglected. This means that new-generation equipment requires a modern approach to the planning, organization, and management of production, as well as to the economy of production. Consequently it is very important to ensure the implementation of modern organizational methods in wood processing. This paper deals with the implementation of the SMED method (SMED - Single Digit Minute Exchange of Die) with the aim of rationalizing set-up-end-up operations. It is known that in conditions of discontinuous production, set-up-end-up time is a significant limiting factor in increasing the flexibility of production systems.
16. Correlation between vacancies and magnetoresistance changes in FM manganites using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Agudelo-Giraldo, J.D. [PCM Computational Applications, Universidad Nacional de Colombia-Sede Manizales, Km. 9 vía al aeropuerto, Manizales (Colombia); Restrepo-Parra, E., E-mail: [email protected] [PCM Computational Applications, Universidad Nacional de Colombia-Sede Manizales, Km. 9 vía al aeropuerto, Manizales (Colombia); Restrepo, J. [Grupo de Magnetismo y Simulación, Instituto de Física, Universidad de Antioquia, A.A. 1226, Medellín (Colombia)
2015-10-01
The Metropolis algorithm and the classical Heisenberg approximation were implemented by the Monte Carlo method to design a computational approach to the magnetization and resistivity of La_{2/3}Ca_{1/3}MnO_3, which depends on the Mn ion vacancies as the external magnetic field increases. This compound is ferromagnetic, and it exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, and it had L=30 umc (units of magnetic cells) for its dimension in the x–y plane and was d=12 umc in thickness. The Hamiltonian that was used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the external applied magnetic field response. The system that was considered contains mixed-valence bonds: Mn^{3+eg'}–O–Mn^{3+eg}, Mn^{3+eg}–O–Mn^{4+d3} and Mn^{3+eg'}–O–Mn^{4+d3}. The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies, the transitions T_C (Curie temperature) and T_MI (metal–insulator temperature) are similar, whereas with the increase in the vacancy percentage, T_MI presented lower values than T_C. This situation is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below T_MI. Resistivity loops were also observed, which shows a direct correlation with the hysteresis loops of magnetization at temperatures below T_C. - Highlights: • Changes in the resistivity of FM materials as a function of the temperature and external magnetic field can be obtained by the Monte Carlo method, Metropolis algorithm, classical Heisenberg and Kronig–Penney approximation for magnetic clusters. • Increases in the magnetoresistive effect were observed at temperatures below T_MI by the vacancies effect. • The resistive hysteresis loop presents two peaks that are directly associated with the coercive field in the magnetic
17. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: [email protected] [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)
2014-08-15
This work is based on the determination of the detection efficiency of the identiFINDER detector for ^125I and ^131I in the thyroid using the Monte Carlo method. The suitability of the calibration method is analyzed by comparing the results of the direct Monte Carlo method with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Finally, simulations of the detector geometry with a point source were performed to find the correction factors at 5 cm, 15 cm and 25 cm, as well as those corresponding to the detector-simulator arrangement, for the method validation and the final calculation of the efficiency. It is demonstrated that if the Monte Carlo implementation simulates at a greater distance than that used in the laboratory measurements, the efficiency will be overestimated, while if it simulates at a shorter distance it will be underestimated; the simulation should therefore be performed at the same distance at which the real measurement will be made. The efficiency curves and the minimum detectable activity for the measurement of ^131I and ^125I are also obtained. In general, the Monte Carlo methodology is implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method represents an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the iodine measurement in thyroid. (author)
18. TH-A-19A-08: Intel Xeon Phi Implementation of a Fast Multi-Purpose Monte Carlo Simulation for Proton Therapy
International Nuclear Information System (INIS)
Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms, adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast and yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first one is the most conservative and accurate. The method of fictitious interactions handles the interfaces and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth dose and transversal profiles computed by MCsquare and Geant4 are within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare but this is unlikely to have any clinical impact. The computation time varies between 90 seconds for the most conservative settings to merely 59 seconds in the fastest configuration. Finally prompt gamma profiles are also in very good agreement with PENH results. Conclusion: Our new, fast, and multi-purpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time
19. Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO
International Nuclear Information System (INIS)
The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time-dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O-operations except in the input and output stages. 7 references. (U.S.)
20. Methods of Monte Carlo biasing using two-dimensional discrete ordinates adjoint flux
Energy Technology Data Exchange (ETDEWEB)
Tang, J.S.; Stevens, P.N.; Hoffman, T.J.
1976-06-01
Methods of biasing three-dimensional deep penetration Monte Carlo calculations using importance functions obtained from a two-dimensional discrete ordinates adjoint calculation have been developed. The important distinction was made between the applications of the point value and the event value to alter the random walk in Monte Carlo analysis of radiation transport. The biasing techniques developed are the angular probability biasing which alters the collision kernel using the point value as the importance function and the path length biasing which alters the transport kernel using the event value as the importance function. Source location biasings using the step importance function and the scalar adjoint flux obtained from the two-dimensional discrete ordinates adjoint calculation were also investigated. The effects of the biasing techniques to Monte Carlo calculations have been investigated for neutron transport through a thick concrete shield with a penetrating duct. Source location biasing, angular probability biasing, and path length biasing were employed individually and in various combinations. Results of the biased Monte Carlo calculations were compared with the standard Monte Carlo and discrete ordinates calculations.
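Source location biasing is the easiest of these techniques to show in miniature. The sketch below oversamples source positions near a toy "detector" and carries the weight p/q so the estimate stays unbiased; the attenuation kernel and the exponential biasing density are invented for illustration and are unrelated to the concrete-duct problem above.

    import numpy as np

    rng = np.random.default_rng(11)
    N = 100_000

    def response(x):                 # toy transmission kernel to a detector at x = 10
        return np.exp(-2.0 * (10.0 - x))

    # Analog sampling: source positions uniform on [0, 10], pdf p(x) = 1/10.
    analog = response(rng.uniform(0.0, 10.0, N))

    # Biased sampling: q(x) ∝ exp(lam*x) drawn by CDF inversion, weight p(x)/q(x).
    lam = 1.5
    u = rng.random(N)
    xb = np.log(1.0 + u * (np.exp(10.0 * lam) - 1.0)) / lam
    qpdf = lam * np.exp(lam * xb) / (np.exp(10.0 * lam) - 1.0)
    biased = response(xb) * (1.0 / 10.0) / qpdf

    # Same mean (~0.05), far smaller statistical error for the biased estimator.
    print(f"analog: {analog.mean():.5f} ± {analog.std(ddof=1) / np.sqrt(N):.5f}")
    print(f"biased: {biased.mean():.5f} ± {biased.std(ddof=1) / np.sqrt(N):.5f}")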
1. Markov Chain Monte Carlo methods in computational statistics and econometrics
Czech Academy of Sciences Publication Activity Database
Volf, Petr
Plzeň : University of West Bohemia in Pilsen, 2006 - (Lukáš, L.), pp. 525-530 ISBN 978-80-7043-480-2. [Mathematical Methods in Economics 2006. Plzeň (CZ), 13.09.2006-15.09.2006] R&D Projects: GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords: Random search * MCMC * optimization Subject RIV: BB - Applied Statistics, Operational Research
2. The application of Monte Carlo method to electron and photon beams transport
International Nuclear Information System (INIS)
The application of a Monte Carlo method to the study of electron and photon beam transport in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. It was assumed that the electron beam is monodirectional and monoenergetic. Arbitrary user-defined complex geometries made of any element or material can be used in the calculation. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The use of the calculation for therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs
3. Infinite dimensional integrals beyond Monte Carlo methods: yet another approach to normalized infinite dimensional integrals
International Nuclear Information System (INIS)
An approach to (normalized) infinite dimensional integrals, including normalized oscillatory integrals, through a sequence of evaluations in the spirit of the Monte Carlo method for probability measures is proposed. In this approach the normalization through the partition function is included in the definition. For suitable sequences of evaluations, the ('classical') expectation values of cylinder functions are recovered.
4. Infinite dimensional integrals beyond Monte Carlo methods: yet another approach to normalized infinite dimensional integrals
OpenAIRE
Magnot, Jean-Pierre
2012-01-01
An approach to (normalized) infinite dimensional integrals, including normalized oscillatory integrals, through a sequence of evaluations in the spirit of the Monte Carlo method for probability measures is proposed. In this approach the normalization through the partition function is included in the definition. For suitable sequences of evaluations, the ("classical") expectation values of cylinder functions are recovered.
5. Lowest-order relativistic corrections of helium computed using Monte Carlo methods
International Nuclear Information System (INIS)
We have calculated the lowest-order relativistic effects for the three lowest states of the helium atom with symmetry ^1S, ^1P, ^1D, ^3S, ^3P, and ^3D using variational Monte Carlo methods and compact, explicitly correlated trial wave functions. Our values are in good agreement with the best results in the literature.
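Variational Monte Carlo itself fits in a few lines. The sketch below treats the hydrogen atom (not helium) with the trial function psi = exp(-alpha*r), whose local energy -alpha^2/2 + (alpha-1)/r is known in closed form; the step size and alpha are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(13)
    alpha, step, n_steps = 0.8, 0.6, 200_000

    def local_energy(r):
        # E_L = -(1/2)(laplacian psi)/psi - 1/r for psi = exp(-alpha r), atomic units
        return -0.5 * alpha**2 + (alpha - 1.0) / r

    pos = np.array([1.0, 0.0, 0.0])
    r = np.linalg.norm(pos)
    energies = []
    for _ in range(n_steps):
        trial = pos + step * rng.uniform(-1.0, 1.0, 3)
        r_t = np.linalg.norm(trial)
        if rng.random() < np.exp(-2.0 * alpha * (r_t - r)):   # Metropolis on |psi|^2
            pos, r = trial, r_t
        energies.append(local_energy(r))

    # Variational energy alpha^2/2 - alpha = -0.48 at alpha = 0.8; minimum -0.5 at alpha = 1.
    print(f"E({alpha}) ≈ {np.mean(energies[20_000:]):.4f}")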
6. The information-based complexity of approximation problem by adaptive Monte Carlo methods
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this paper, we study the complexity of information of the approximation problem on the multivariate Sobolev space with bounded mixed derivative MW^r_{p,α}(T^d), 1 < p < ∞, in the norm of L_q(T^d), 1 < q < ∞, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of pseudo-s-scale, we determine the exact asymptotic orders of this problem.
7. On the use of the continuous-energy Monte Carlo method for lattice physics applications
International Nuclear Information System (INIS)
This paper is a general overview of the Serpent Monte Carlo reactor physics burnup calculation code. The Serpent code is a project carried out at VTT Technical Research Centre of Finland, in an effort to extend the use of the continuous-energy Monte Carlo method to lattice physics applications, including group constant generation for coupled full-core reactor simulator calculations. The main motivation of going from deterministic transport methods to Monte Carlo simulation is the capability to model any fuel or reactor type using the same fundamental neutron interaction data without major approximations. This capability is considered important especially for the development of next-generation reactor technology, which often lies beyond the modeling capabilities of conventional LWR codes. One of the main limiting factors for the Monte Carlo method is still today the prohibitively long computing time, especially in burnup calculation. The Serpent code uses certain dedicated calculation techniques to overcome this limitation. The overall running time is reduced significantly, in some cases by almost two orders of magnitude. The main principles of the calculation methods and the general capabilities of the code are introduced. The results section presents a collection of validation cases in which Serpent calculations are compared to reference MCNP4C and CASMO-4E results. (author)
8. A Monte Carlo Green's function method for three-dimensional neutron transport
International Nuclear Information System (INIS)
This paper describes a Monte Carlo transport kernel capability, which has recently been incorporated into the RACER continuous-energy Monte Carlo code. The kernels represent a Green's function method for neutron transport from a fixed-source volume out to a particular volume of interest. This is a very powerful transport technique. Also, since the kernels are evaluated numerically by Monte Carlo, the problem geometry can be arbitrarily complex, yet exact. This method is intended for problems where an ex-core neutron response must be determined for a variety of reactor conditions. Two examples are ex-core neutron detector response and vessel critical weld fast flux. The response is expressed in terms of neutron transport kernels weighted by a core fission source distribution. In these types of calculations, the response must be computed for hundreds of source distributions, but the kernels only need to be calculated once. The advance described in this paper is that the kernels are generated with a highly accurate three-dimensional Monte Carlo transport calculation instead of an approximate method such as line-of-sight attenuation theory or a synthesized three-dimensional discrete ordinates solution
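The payoff of the kernel idea is that the expensive transport is done once; folding in a new source distribution is then just a matrix-vector product. A schematic sketch follows, with random numbers standing in for Monte Carlo-generated kernels.

    import numpy as np

    rng = np.random.default_rng(17)
    n_src, n_det = 100, 3

    # K[d, s]: detector-d response per unit fission source in volume s.
    # In practice these come from one expensive Monte Carlo calculation;
    # random values stand in for them here.
    K = rng.random((n_det, n_src)) * 1e-6

    # Hundreds of reactor conditions can then be evaluated with no new transport.
    for cycle in range(3):
        S = rng.random(n_src)
        S /= S.sum()                     # normalized fission source distribution
        print(f"cycle {cycle}: detector responses = {K @ S}")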
12. Transport properties of electrons in GaAs using random techniques (Monte-Carlo Method)
International Nuclear Information System (INIS)
We study the transport properties of electrons in GaAs using random techniques (the Monte-Carlo method). With a simple nonparabolic band model for this semiconductor, we obtain the stationary electron transport characteristics as a function of the electric field in this material, checking these theoretical results against the experimental ones given by several authors. (Author)
10. An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.
Science.gov (United States)
Kim, Seock-Ho
2001-01-01
Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…
11. An NCME Instructional Module on Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods
Science.gov (United States)
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
12. Stability of few-body systems and quantum Monte-Carlo methods
International Nuclear Information System (INIS)
Quantum Monte-Carlo methods are well suited to study the stability of few-body systems. Their capabilities are illustrated by studying the critical stability of the hydrogen molecular ion whose nuclei and electron interact through the Yukawa potential, and the stability of small helium clusters. Refs. 16 (author)
13. A Monte-Carlo-Based Network Method for Source Positioning in Bioluminescence Tomography
OpenAIRE
Zhun Xu; Xiaolei Song; Xiaomeng Zhang; Jing Bai
2007-01-01
We present an approach based on the improved Levenberg Marquardt (LM) algorithm of backpropagation (BP) neural network to estimate the light source position in bioluminescent imaging. For solving the forward problem, the table-based random sampling algorithm (TBRS), a fast Monte Carlo simulation method ...
14. Analysis of the distribution of X-ray characteristic production using the Monte Carlo methods
International Nuclear Information System (INIS)
The Monte Carlo method has been applied to the simulation of electron trajectories in a bulk sample, and therefore to the distribution of signals produced in an electron microprobe. Results for the function φ(ρz) are compared with experimental data. Some conclusions are drawn with respect to the parameters involved in the Gaussian model. (Author)
15. A variance-reduced electrothermal Monte Carlo method for semiconductor device simulation
Energy Technology Data Exchange (ETDEWEB)
Muscato, Orazio; Di Stefano, Vincenza [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) Leibniz-Institut im Forschungsverbund Berlin e.V., Berlin (Germany)
2012-11-01
This paper is concerned with electron transport and heat generation in semiconductor devices. An improved version of the electrothermal Monte Carlo method is presented. This modification has better approximation properties due to reduced statistical fluctuations. The corresponding transport equations are provided and results of numerical experiments are presented.
16. Detailed balance method for chemical potential determination in Monte Carlo and molecular dynamics simulations
International Nuclear Information System (INIS)
We present a new, nondestructive method for determining chemical potentials in Monte Carlo and molecular dynamics simulations. The method estimates a value for the chemical potential such that one has a balance between fictitious successful creation and destruction trials, in which the Monte Carlo method is used to determine the success or failure of the creation/destruction attempts; we thus call the method a detailed balance method. The method allows one to obtain estimates of the chemical potential for a given species in any closed ensemble simulation; the closed ensemble is paired with a 'natural' open ensemble for the purpose of obtaining creation and destruction probabilities. We present results for the Lennard-Jones system and also for an embedded atom model of liquid palladium, and compare to previous results in the literature for these two systems. We are able to obtain an accurate estimate of the chemical potential for the Lennard-Jones system at higher densities than reported in the literature.
17. Sequential Monte Carlo methods for nonlinear discrete-time filtering
CERN Document Server
Bruno, Marcelo GS
2013-01-01
In these notes, we introduce particle filtering as a recursive importance sampling method that approximates the minimum-mean-square-error (MMSE) estimate of a sequence of hidden state vectors in scenarios where the joint probability distribution of the states and the observations is non-Gaussian and, therefore, closed-form analytical expressions for the MMSE estimate are generally unavailable. We begin the notes with a review of Bayesian approaches to static (i.e., time-invariant) parameter estimation. In the sequel, we describe the solution to the problem of sequential state estimation in line
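A minimal bootstrap particle filter in this spirit is sketched below for a standard toy nonlinear state-space model; the model, noise levels, and particle count are assumptions for illustration, not taken from the notes. Particles are propagated through the state dynamics, reweighted by the observation likelihood, and resampled to approximate the MMSE estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonlinear state-space model (assumed for illustration):
#   x_t = 0.9 x_{t-1} + w_t,   w_t ~ N(0, 1)
#   y_t = x_t^2 / 20 + v_t,    v_t ~ N(0, 0.5)
T = 100
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t-1] + rng.normal()
    y[t] = x[t]**2 / 20 + rng.normal(scale=np.sqrt(0.5))

# Bootstrap particle filter: propagate with the prior dynamics,
# weight by the observation likelihood, resample to fight degeneracy.
N = 1000
particles = rng.normal(size=N)
est = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(size=N)     # propagate
    logw = -((y[t] - particles**2 / 20) ** 2)            # Gaussian log-lik, var 0.5
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * particles)                       # approximate MMSE estimate
    particles = particles[rng.choice(N, size=N, p=w)]    # multinomial resampling

print("RMSE of filtered mean:", round(float(np.sqrt(np.mean((est - x)**2))), 3))
```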
18. Markov chain Monte Carlo methods in directed graphical models
DEFF Research Database (Denmark)
Højbjerre, Malene
Directed graphical models represent data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models...... tendency to foetal loss is heritable. The data possess a complicated dependence structure due to replicate pregnancies for the same woman, and a given family pattern. We conclude that a tendency to foetal loss is heritable. The model is of great interest in genetic epidemiology, because it considers both...
19. An energy transfer method for 4D Monte Carlo dose calculation
OpenAIRE
Siebers, Jeffrey V; Zhong, Hualiang
2008-01-01
This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: Particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy ...
20. Constrained-Realization Monte-Carlo Method for Hypothesis Testing
CERN Document Server
Theiler, J; Theiler, James; Prichard, Dean
1996-01-01
We compare two theoretically distinct approaches to generating artificial (or "surrogate") data for testing hypotheses about a given data set. The first and more straightforward approach is to fit a single "best" model to the original data, and then to generate surrogate data sets that are "typical realizations" of that model. The second approach concentrates not on the model but directly on the original data; it attempts to constrain the surrogate data sets so that they exactly agree with the original data for a specified set of sample statistics. Examples of these two approaches are provided for two simple cases: a test for deviations from a gaussian distribution, and a test for serial dependence in a time series. Additionally, we consider tests for nonlinearity in time series based on a Fourier transform (FT) method and on more conventional autoregressive moving-average (ARMA) fits to the data. The comparative performance of hypothesis testing schemes based on these two approaches is found to depend ...
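The constrained-realization idea can be sketched in a few lines: Fourier-transform surrogates keep the sample power spectrum of the original series exactly while randomizing phases, so any statistic computed on them provides a null distribution under a linear-Gaussian hypothesis. The test series and the discriminating statistic below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Original series: a linear AR(1) process, so the null hypothesis holds.
n = 512
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t-1] + rng.normal()

def ft_surrogate(x, rng):
    """Constrained realization: keep |FFT| of the data, randomize phases,
    so every surrogate matches the sample power spectrum exactly."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2*np.pi, X.size)
    phases[0] = 0.0    # keep the mean real
    phases[-1] = 0.0   # keep the Nyquist bin real (n is even)
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)

def stat(x):
    """Toy discriminating statistic: skewness of the increments."""
    d = np.diff(x)
    return np.mean(d**3) / np.mean(d**2)**1.5

obs = stat(x)
null = np.array([stat(ft_surrogate(x, rng)) for _ in range(999)])
p = (1 + np.sum(np.abs(null) >= np.abs(obs))) / 1000.0
print(f"two-sided surrogate p-value: {p:.3f}")   # should not be small here
```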
1. The future of new calculation concepts in dosimetry based on the Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)
2009-01-15
Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion about some other computing solutions is carried out; solutions not only based on the enhancement of computer power, or on the 'biasing' used for relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - or other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)
2. MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM
Directory of Open Access Journals (Sweden)
Gabriela Ižaríková
2015-12-01
The article is an example of using the simulation software @Risk, designed for simulation in a Microsoft Excel spreadsheet, and demonstrates the possibility of its usage as a universal method of solving problems. Simulation means experimenting with computer models based on a real production process in order to optimize the production processes or the system. A simulation model allows performing a number of experiments, analysing them, evaluating, optimizing and afterwards applying the results to the real system. A simulation model in general represents the modelled system by means of mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance, investment costs) and random inputs (for instance, demand), which are transformed by the model into outputs (for instance, the mean value of profit). In a simulation experiment, the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulations belong to the quantitative tools which can be used as a support for decision making.
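A spreadsheet-free analogue of the @Risk workflow described above can be sketched in a few lines; the profit model and all distribution parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# A minimal @Risk-style simulation (hypothetical figures):
# profit = demand * (price - unit_cost) - investment
n_trials = 100_000
demand = rng.normal(10_000, 1_500, n_trials)          # stochastic input
unit_cost = rng.triangular(8.0, 9.0, 11.0, n_trials)  # stochastic input
price, investment = 12.0, 25_000.0                    # controlled inputs

profit = demand * (price - unit_cost) - investment

print(f"mean profit     : {profit.mean():10.0f}")
print(f"P(loss)         : {(profit < 0).mean():10.3f}")
print(f"5%-95% interval : {np.percentile(profit, [5, 95]).round(0)}")
```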
3. Application of Monte Carlo methods for dead time calculations for counting measurements; Anwendung von Monte-Carlo-Methoden zur Berechnung der Totzeitkorrektion fuer Zaehlmessungen
Energy Technology Data Exchange (ETDEWEB)
Henniger, Juergen; Jakobi, Christoph [Technische Univ. Dresden (Germany). Arbeitsgruppe Strahlungsphysik (ASP)
2015-07-01
From a mathematical point of view, Monte Carlo methods are the numerical solution of certain integrals and integral equations using a random experiment. There are several advantages compared to classical stepwise integration. The time required for computing multi-dimensional problems increases only moderately with increasing dimension. The only requirements for the integral kernel are its capability of being integrated in the considered integration area and the possibility of an algorithmic representation. These are the important properties of Monte Carlo methods that allow their application in every scientific area. Besides that, Monte Carlo algorithms are often more intuitive than conventional numerical integration methods. The contribution demonstrates these facts using the example of dead time corrections for counting measurements.
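A minimal sketch of the idea, assuming a non-paralyzable dead-time model and illustrative rate figures (not the entry's setup): simulate Poisson arrivals, drop events falling inside the dead window, and compare the measured rate with the classical correction n/(1 - n·tau).

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo estimate of counting losses for a non-paralyzable dead time.
true_rate = 5.0e4      # true event rate [1/s] (assumed)
tau = 2.0e-6           # detector dead time [s] (assumed)
T = 2.0                # measurement time [s]

# Homogeneous Poisson process: exponential inter-arrival times.
gaps = rng.exponential(1.0 / true_rate, int(2 * true_rate * T))
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals < T]

# Non-paralyzable model: events inside the dead window are simply lost.
recorded, t_free = 0, 0.0
for t in arrivals:
    if t >= t_free:
        recorded += 1
        t_free = t + tau

n = recorded / T                    # measured count rate
corrected = n / (1.0 - n * tau)     # classical non-paralyzable correction
print(f"true {true_rate:.0f}/s  measured {n:.0f}/s  corrected {corrected:.0f}/s")
```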
4. ANALYSIS OF NEIGHBORHOOD IMPACTS ARISING FROM IMPLEMENTATION OF SUPERMARKETS IN CITY OF SÃO CARLOS
Directory of Open Access Journals (Sweden)
Pedro Silveira Gonçalves Neto
2010-12-01
The study included supermarkets of different sizes (small, medium and large, defined based on the area occupied by the project and the volume of activity) located in São Carlos (São Paulo state, Brazil), to evaluate the influence of project size on the neighborhood impacts generated by these supermarkets. Factors such as the location of the enterprises, the size of the buildings, and the areas of influence were considered; they contribute to increased population density and changes in the use of buildings, since the analysis was carried out post-deployment. Relating the variables of the spatial impacts was made possible by the use of a geographic information system. It was noted that the legislation does not provide suitable conditions to guide studies of urban impacts, due to the complex integration between the urban and impacting components.
5. GPU-accelerated inverse identification of radiative properties of particle suspensions in liquid by the Monte Carlo method
Science.gov (United States)
Ma, C. Y.; Zhao, J. M.; Liu, L. H.; Zhang, L.; Li, X. C.; Jiang, B. C.
2016-03-01
Inverse identification of radiative properties of participating media is usually time consuming. In this paper, a GPU-accelerated inverse identification model is presented to obtain the radiative properties of particle suspensions. The sample medium is placed in a cuvette and a narrow light beam is irradiated normally from the side. The forward three-dimensional radiative transfer problem is solved using a massively parallel Monte Carlo method implemented on a graphics processing unit (GPU), and a particle swarm optimization algorithm is applied to inversely identify the radiative properties of particle suspensions based on the measured bidirectional scattering distribution function (BSDF). The GPU-accelerated Monte Carlo simulation significantly reduces the solution time of the radiative transfer simulation and hence greatly accelerates the inverse identification process. A speedup of several hundred times is achieved compared to the CPU implementation. It is demonstrated, using both simulated BSDFs and the experimentally measured BSDF of microalgae suspensions, that the radiative properties of particle suspensions can be effectively identified with the GPU-accelerated algorithm and three-dimensional radiative transfer modelling.
6. A Method for Estimating Annual Energy Production Using Monte Carlo Wind Speed Simulation
Directory of Open Access Journals (Sweden)
Birgir Hrafnkelsson
2016-04-01
A novel Monte Carlo (MC) approach is proposed for the simulation of wind speed samples to assess the wind energy production potential of a site. The Monte Carlo approach is based on historical wind speed data and preserves the effects of autocorrelation and seasonality in wind speed observations. No distributional assumptions are made, and this approach is relatively simple in comparison to simulation methods that aim at including the autocorrelation and seasonal effects. Annual energy production (AEP) is simulated by transforming the simulated wind speed values via the power curve of the wind turbine at the site. The proposed Monte Carlo approach is generic and is applicable to all sites, provided that a sufficient amount of wind speed data and information on the power curve are available. The simulated AEP values based on the Monte Carlo approach are compared both to actual AEP and to simulated AEP values based on a modified Weibull approach for wind speed simulation, using data from the Burfell site in Iceland. The comparison reveals that the simulated AEP values based on the proposed Monte Carlo approach have a distribution that is in close agreement with actual AEP from two test wind turbines at the Burfell site, while the simulated AEP of the Weibull approach is such that the P50 and the scale are substantially lower and the P90 is higher. Thus, the Weibull approach yields AEP that is not in line with the actual variability in AEP, while the Monte Carlo approach gives a realistic estimate of the distribution of AEP.
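The sketch below imitates the approach on synthetic data: week-long blocks of "historical" wind speeds are resampled within a seasonal neighbourhood (so autocorrelation and seasonality are retained), passed through a turbine power curve, and summed to an AEP distribution. The data, power curve, and block scheme are assumptions, not the Burfell setup.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in "historical" hourly wind speeds with a seasonal cycle (m/s);
# a real application would load measured data for the site instead.
hours = np.arange(365 * 24)
v_hist = 8 + 2*np.sin(2*np.pi*hours/(365*24)) + rng.gamma(2.0, 1.0, hours.size)

def power_curve(v):
    """Simplified power curve in kW (assumed, not a real turbine):
    cut-in 3 m/s, rated 2000 kW at 12 m/s, cut-out 25 m/s."""
    p = np.clip(2000 * ((v - 3) / 9) ** 3, 0, 2000)
    p[(v < 3) | (v > 25)] = 0.0
    return p

# Monte Carlo AEP: resample week-long blocks near their original position
# so that autocorrelation and seasonality in the record are retained.
block = 7 * 24
n_blocks = hours.size // block
aep = []
for _ in range(1000):
    idx = np.concatenate([rng.integers(max(0, b-4), min(n_blocks, b+5)) * block
                          + np.arange(block) for b in range(n_blocks)])
    aep.append(power_curve(v_hist[idx]).sum() / 1e6)   # kWh -> GWh
aep = np.array(aep)
# P90 is the value exceeded 90% of the time, i.e. the 10th percentile.
print(f"AEP P50 = {np.percentile(aep, 50):.2f} GWh, "
      f"P90 = {np.percentile(aep, 10):.2f} GWh")
```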
7. Modeling radiation from the atmosphere of Io with Monte Carlo methods
Science.gov (United States)
Gratiy, Sergey
Conflicting observations regarding the dominance of either sublimation or volcanism as the source of the atmosphere on Io and disparate reports on the extent of its spatial distribution and the absolute column abundance invite the development of detailed computational models capable of improving our understanding of Io's unique atmospheric structure and origin. To validate a global numerical model of Io's atmosphere against astronomical observations requires a 3-D spherical-shell radiative transfer (RT) code to simulate disk-resolved images and disk-integrated spectra from the ultraviolet to the infrared spectral region. In addition, comparison of simulated and astronomical observations provides important information to improve existing atmospheric models. In order to achieve this goal, a new 3-D spherical-shell forward/backward photon Monte Carlo code capable of simulating radiation from absorbing/emitting and scattering atmospheres with an underlying emitting and reflecting surface was developed. A new implementation of calculating atmospheric brightness in scattered sunlight is presented utilizing the notion of an "effective emission source" function. This allows for the accumulation of the scattered contribution along the entire path of a ray and the calculation of the atmospheric radiation when both scattered sunlight and thermal emission contribute to the observed radiation---which was not possible in previous models. A "polychromatic" algorithm was developed for application with the backward Monte Carlo method and was implemented in the code. It allows one to calculate radiative intensity at several wavelengths simultaneously, even when the scattering properties of the atmosphere are a function of wavelength. The application of the "polychromatic" method improves the computational efficiency because it reduces the number of photon bundles traced during the simulation. A 3-D gas dynamics model of Io's atmosphere, including both sublimation and volcanic
8. A recursive Monte Carlo method for estimating importance functions in deep penetration problems
International Nuclear Information System (INIS)
A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep-penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems.
9. Quantile Mechanics II: Changes of Variables in Monte Carlo methods and GPU-Optimized Normal Quantiles
OpenAIRE
Shaw, W. T.; Luu, T.; Brickman, N.
2009-01-01
With financial modelling requiring a better understanding of model risk, it is helpful to be able to vary assumptions about underlying probability distributions in an efficient manner, preferably without the noise induced by resampling distributions managed by Monte Carlo methods. This paper presents differential equations and solution methods for the functions of the form Q(x) = F^{-1}(G(x)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of Mont...
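The recycling idea behind Q(x) = F^{-1}(G(x)) can be demonstrated directly with library quantile functions; the Student-t target with 5 degrees of freedom is an arbitrary illustrative choice. A single fixed set of normal draws is deterministically mapped to the target distribution, with no resampling noise.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Recycle one fixed set of normal draws into Student-t samples via
# Q(x) = F^{-1}(G(x)), with G the standard normal CDF and F the t(5) CDF.
z = rng.standard_normal(100_000)    # fixed normal driver, reusable
u = stats.norm.cdf(z)               # G(z), uniform on [0, 1]
t5 = stats.t.ppf(u, df=5)           # F^{-1}(G(z)): exact t(5) samples

# Variance of t(df) is df/(df-2) = 5/3 for df = 5.
print("sample variance:", round(float(t5.var()), 3), " theory:", round(5/3, 3))
```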
10. Construction of the Jacobian matrix for fluorescence diffuse optical tomography using a perturbation Monte Carlo method
Science.gov (United States)
Zhang, Xiaofeng
2012-03-01
Image formation in fluorescence diffuse optical tomography is critically dependent on construction of the Jacobian matrix. For clinical and preclinical applications, because of the highly heterogeneous characteristics of the medium, Monte Carlo methods are frequently adopted to construct the Jacobian. Conventional adjoint Monte Carlo methods typically compute the Jacobian by multiplying the photon density fields radiated from the source at the excitation wavelength and from the detector at the emission wavelength. Nonetheless, this approach assumes that the source and the detector in the Green's function are reciprocal, which is not valid in general. This assumption is particularly questionable in small animal imaging, where the mean free path length of photons is typically only one order of magnitude smaller than the representative dimension of the medium. We propose a new method that does not rely on the reciprocity of the source and the detector, by tracing photon propagation entirely from the source to the detector. This method relies on perturbation Monte Carlo theory to account for the differences in optical properties of the medium at the excitation and emission wavelengths. Compared to the adjoint methods, the proposed method is more faithful to the physical process of photon transport in diffusive media and is more efficient in constructing the Jacobian matrix for densely sampled configurations.
11. A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport
Science.gov (United States)
Tautz, R. C.
2016-05-01
A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.
12. Estimation of magnetocaloric properties by using Monte Carlo method for AMRR cycle
Science.gov (United States)
Arai, R.; Tamura, R.; Fukuda, H.; Li, J.; Saito, A. T.; Kaji, S.; Nakagome, H.; Numazawa, T.
2015-12-01
In order to achieve a wide refrigerating temperature range in magnetic refrigeration, it is effective to layer multiple materials with different Curie temperatures. It is crucial to have a detailed understanding of physical properties of materials to optimize the material selection and the layered structure. In the present study, we discuss methods for estimating a change in physical properties, particularly the Curie temperature when some of the Gd atoms are substituted for non-magnetic elements for material design, based on Gd as a ferromagnetic material which is a typical magnetocaloric material. For this purpose, whilst making calculations using the S=7/2 Ising model and the Monte Carlo method, we made a specific heat measurement and a magnetization measurement of Gd-R alloy (R = Y, Zr) to compare experimental values and calculated ones. The results showed that the magnetic entropy change, specific heat, and Curie temperature can be estimated with good accuracy using the Monte Carlo method.
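For orientation, the sketch below applies the same Metropolis Monte Carlo machinery to the spin-1/2 two-dimensional Ising model (not the S = 7/2 model of the entry): the mean absolute magnetization collapses near the critical temperature, which is how a Curie point is located in such simulations. Lattice size and sweep counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def metropolis_ising(L, T, sweeps=400):
    """Metropolis sampling of the 2D nearest-neighbour Ising model;
    returns the mean |magnetization| per spin after equilibration."""
    s = rng.choice([-1, 1], size=(L, L))
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2 * s[i, j] * nn                    # units with J = k_B = 1
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
        if sweep >= sweeps // 2:                     # discard equilibration half
            mags.append(abs(s.mean()))
    return np.mean(mags)

# <|m|> drops sharply near the exact critical temperature T_c ~ 2.269.
for T in (1.5, 2.0, 2.27, 3.0):
    print(f"T = {T:4.2f}   <|m|> = {metropolis_ising(16, T):.3f}")
```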
13. Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method
International Nuclear Information System (INIS)
The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the σ-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, g_p(E_p), in the configurational space, in terms of the potential energy of the system, with good and well controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called "corrected EAM" (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at, and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S_ij are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone, but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature T_m is plotted in terms of the cluster atom number N_at. The standard N_at^{-1/3} linear dependence (Pawlow law) is observed for N_at > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For N_at < 150, a strong divergence from the Pawlow law is observed. The melting transition, which begins at the surface, is characterized by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I
14. Sequential Monte Carlo Methods for Joint Detection and Tracking of Multiaspect Targets in Infrared Radar Images
Directory of Open Access Journals (Sweden)
Bruno, Marcelo G. S.
2008-01-01
We present in this paper a sequential Monte Carlo methodology for joint detection and tracking of a multiaspect target in image sequences. Unlike the traditional contact/association approach found in the literature, the proposed methodology enables integrated, multiframe target detection and tracking incorporating the statistical models for target aspect, target motion, and background clutter. Two implementations of the proposed algorithm are discussed using, respectively, a resample-move (RS) particle filter and an auxiliary particle filter (APF). Our simulation results suggest that the APF configuration slightly outperforms the RS filter in scenarios with stealthy targets.
15. Research of Monte Carlo method used in simulation of different maintenance processes
International Nuclear Information System (INIS)
The paper introduces two kinds of Monte Carlo methods used in equipment life-process simulation under the least-maintenance condition: the method of producing the lifetime interval, and the method of time-scale conversion. The paper also analyzes the characteristics and the scope of application of the two methods. By using the concept of a service-age reduction factor, the model of the equipment's life process under the incomplete-maintenance condition is established, and a life-process simulation method applicable to this situation is developed. (authors)
16. Contributon Monte Carlo
International Nuclear Information System (INIS)
The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables
17. Application of the subgroup method to multigroup Monte Carlo calculation
Science.gov (United States)
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation, although this model preserves the quality of the physical laws present in the ENDF format. Due to its cheap computational cost, the multigroup Monte Carlo approach is usually at the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes, generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables over the whole energy range permits self-shielding effects to be taken into account directly, and such tables can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) the consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code; (2) the combination of the probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm; (3) the derivation of a model for taking into account anisotropic
18. Determining the optimum confidence interval based on the hybrid Monte Carlo method and its application in financial calculations
OpenAIRE
Kianoush Fathi Vajargah
2014-01-01
The accuracy of Monte Carlo and quasi-Monte Carlo methods decreases in problems of high dimension. Therefore, the objective of this study was to present an optimal method to increase the accuracy of the answer; as the problem gets larger, the gain in accuracy becomes higher. To this end, this study combined the two previous methods, QMC and MC, and presented a hybrid method with efficiency higher than that of either method alone.
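The MC-versus-QMC contrast underlying such hybrid schemes is easy to reproduce; the sketch below compares plain Monte Carlo with a scrambled Sobol' sequence on a 10-dimensional integral whose exact value is 1. The integrand is an arbitrary illustrative choice, not taken from the paper.

```python
import numpy as np
from scipy.stats import qmc

# Integrand on [0,1]^10 with E[f(U)] = 1, since each factor has mean 1.
d, n = 10, 2**12
f = lambda u: np.prod(1 + (u - 0.5) / np.arange(1, d + 1), axis=1)

rng = np.random.default_rng(8)
mc = f(rng.random((n, d))).mean()              # plain Monte Carlo

sobol = qmc.Sobol(d, scramble=True, seed=8)    # quasi-Monte Carlo points
qmc_est = f(sobol.random(n)).mean()

print(f"MC error  : {abs(mc - 1):.2e}")
print(f"QMC error : {abs(qmc_est - 1):.2e}")   # typically much smaller
```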
19. The application of the Monte Carlo method to electron and photon beam transport
Energy Technology Data Exchange (ETDEWEB)
Zychor, I. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)
1994-12-31
The application of a Monte Carlo method to the study of electron and photon beam transport in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. It was assumed that the electron beam is monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculation. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The use of the calculations for therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs.
20. A step beyond the Monte Carlo method in economics: Application of multivariate normal distribution
Science.gov (United States)
Kabaivanov, S.; Malechkova, A.; Marchev, A.; Milev, M.; Markovska, V.; Nikolova, K.
2015-11-01
In this paper we discuss the numerical algorithm of Milev-Tagliani [25] used for pricing discrete double barrier options. The problem can be reduced to the accurate valuation of an n-dimensional path integral with the probability density function of a multivariate normal distribution. The efficient solution of this problem with the Milev-Tagliani algorithm is a step beyond the classical application of Monte Carlo for option pricing. We explore continuous and discrete monitoring of asset path pricing, compare the error of frequently applied quantitative methods such as the Monte Carlo method, and finally analyze the accuracy of the Milev-Tagliani algorithm by presenting the profound research and important results of Hong, S. Lee and T. Li [16].
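For context, a plain Monte Carlo benchmark for a discretely monitored double-barrier knock-out call is sketched below; the multivariate-normal structure appears because the log-price increments are jointly Gaussian. All market parameters are illustrative, and the Milev-Tagliani algorithm itself is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(9)

# Discretely monitored double-barrier knock-out call under Black-Scholes.
S0, K, L, U = 100.0, 100.0, 80.0, 120.0       # spot, strike, barriers (assumed)
r, sigma, T, m = 0.05, 0.2, 0.5, 25           # rate, vol, maturity, dates
n_paths = 200_000

dt = T / m
# Log-price increments are i.i.d. normal, so each path is a cumulative sum:
# the price is an n-dimensional integral against a multivariate normal density.
z = rng.standard_normal((n_paths, m))
logS = np.log(S0) + np.cumsum((r - 0.5*sigma**2)*dt + sigma*np.sqrt(dt)*z, axis=1)
S = np.exp(logS)

alive = ((S > L) & (S < U)).all(axis=1)       # knocked out if any date breaches
payoff = np.where(alive, np.maximum(S[:, -1] - K, 0.0), 0.0)
price = np.exp(-r*T) * payoff.mean()
stderr = np.exp(-r*T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"price = {price:.4f} +/- {1.96*stderr:.4f} (95% CI)")
```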
1. Polarization imaging of multiply-scattered radiation based on integral-vector Monte Carlo method
International Nuclear Information System (INIS)
A new integral-vector Monte Carlo method (IVMCM) is developed to analyze the transfer of polarized radiation in 3D multiple scattering particle-laden media. The method is based on a 'successive order of scattering series' expression of the integral formulation of the vector radiative transfer equation (VRTE) for application of efficient statistical tools to improve convergence of Monte Carlo calculations of integrals. After validation against reference results in plane-parallel layer backscattering configurations, the model is applied to a cubic container filled with uniformly distributed monodispersed particles and irradiated by a monochromatic narrow collimated beam. 2D lateral images of effective Mueller matrix elements are calculated in the case of spherical and fractal aggregate particles. Detailed analysis of multiple scattering regimes, which are very similar for unpolarized radiation transfer, allows identifying the sensitivity of polarization imaging to size and morphology.
2. Monte Carlo Methods Development and Applications in Conformational Sampling of Proteins
DEFF Research Database (Denmark)
Tian, Pengfei
sampling methods to address these two problems. First of all, a novel technique has been developed for reliably estimating diffusion coefficients for use in the enhanced sampling of molecular simulations. A broad applicability of this method is illustrated by studying various simulation problems such as...... sufficient to provide an accurate structural and dynamical description of certain properties of proteins, (2), it is difficult to obtain correct statistical weights of the samples generated, due to lack of equilibrium sampling. In this dissertation I present several new methodologies based on Monte Carlo...... protein folding and aggregation. Second, by combining Monte Carlo sampling with a flexible probabilistic model of NMR chemical shifts, a series of simulation strategies are developed to accelerate the equilibrium sampling of free energy landscapes of proteins. Finally, a novel approach is presented to...
3. Monte Carlo method of macroscopic modulation of small-angle charged particle reflection from solid surfaces
CERN Document Server
Bratchenko, M I
2001-01-01
A novel method for the Monte Carlo simulation of small-angle reflection of charged particles from solid surfaces has been developed. Instead of atomic-scale simulation of particle-surface collisions, the method treats the reflection macroscopically as a 'condensed history' event. Statistical parameters of the reflection are sampled from theoretical distributions over energy and angles. An efficient sampling algorithm based on a combination of the inverse probability distribution function method and the rejection method has been proposed and tested. As an example of application, the results of statistical modeling of the particle flux enhancement near the bottom of a vertical Wehner cone are presented and compared with a simple geometrical model of specular reflection.
4. A vectorized Monte Carlo method with pseudo-scattering for neutron transport analysis
International Nuclear Information System (INIS)
A vectorized Monte Carlo method has been developed for neutron transport analysis on the vector supercomputer HITAC S810. In this method, a multi-particle tracking algorithm is adopted and fundamental processing such as pseudo-random number generation is modified to use the vector processor effectively. The flight analysis of this method is characterized by a new algorithm with pseudo-scattering. This algorithm was verified by comparing its results with those of the conventional one. The method realized a speed-up by a factor of about 10: roughly 7 times from vectorization and 1.5 times from the new flight-analysis algorithm.
5. Monte-Carlo method for electron transport in a material with an electromagnetic field
International Nuclear Information System (INIS)
The precise mathematical and physical foundations of the Monte-Carlo method for electron transport in an electromagnetic field are established. The condensed-histories method given by M.J. Berger is generalized to the case where an electromagnetic field exists in the material region. The full continuous-slowing-down method and the method coupling continuous slowing-down with catastrophic collisions are compared. Using the approximation of a homogeneous electric field, the thickness of material required to shield the supra-thermal electrons produced by a laser-irradiated target is evaluated.
6. A study of orientational disorder in ND4Cl by the reverse Monte Carlo method
International Nuclear Information System (INIS)
The total structure factor for deuterated ammonium chloride measured by neutron diffraction has been modeled using the reverse Monte Carlo method. The results show that the orientational disorder of the ammonium ions consists of a local librational motion with an average angular amplitude α = 17 deg and reorientations of ammonium ions by 90 deg jumps around two-fold axes. Reorientations around three-fold axes have a very low probability
7. The massive Schwinger model on the lattice studied via a local Hamiltonian Monte-Carlo method
International Nuclear Information System (INIS)
A local Hamiltonian Monte-Carlo method is used to study the massive Schwinger model. A non-vanishing quark condensate is found and the dependence of the condensate and the string tension on the background field is calculated. These results reproduce well the expected continuum results. We study also the first-order phase transition which separates the weak and strong coupling regimes and find evidence for the behaviour conjectured by Coleman. (author)
8. Study of the tritium production in a 1-D blanket model with Monte Carlo methods
OpenAIRE
Cubí Ricart, Álvaro
2015-01-01
In this work a method to collapse a 3D geometry into a one-dimensional model of a fusion reactor blanket is developed and tested. Using this model, neutron and photon fluxes and their energy deposition will be obtained with a Monte Carlo code. These results will allow the TBR and the thermal power of the blanket to be calculated, and can then be integrated into the AINA code.
9. Application of Monte Carlo method in determination of secondary characteristic X radiation in XFA
International Nuclear Information System (INIS)
Secondary characteristic radiation is excited by primary radiation from the X-ray tube and by the secondary radiation of other elements, so that excitations of several orders result. The Monte Carlo method was used to consider all these possibilities, and the resulting flux of characteristic radiation was simulated for samples of silicate raw materials. A comparison of the results of these computations with experiments allows one to determine the effect of sample preparation on the characteristic radiation flux. (M.D.)
10. R and D on automatic modeling methods for Monte Carlo codes FLUKA
International Nuclear Information System (INIS)
FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create geometry models before calculation. However, it is time-consuming and error-prone to describe geometry models manually. This study developed an automatic modeling method which can automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)
11. Multilevel markov chain monte carlo method for high-contrast single-phase flow problems
KAUST Repository
Efendiev, Yalchin R.
2014-12-19
In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
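The multilevel idea itself fits in a short sketch: couple coarse and fine discretizations of a toy SDE through shared Brownian increments and sum the level corrections, so most samples are drawn on cheap coarse levels. The SDE, level count, and sample allocation below are illustrative assumptions, not the GMsFEM setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

# Minimal multilevel Monte Carlo for E[X_T] of the SDE dX = -X dt + dW,
# X_0 = 1, discretized by Euler-Maruyama; level l uses 2**l time steps.
T, L_max = 1.0, 5

def level_samples(l, n):
    """Fine-level terminal values and, for l > 0, coupled coarse ones
    driven by the same Brownian increments."""
    nf = 2**l
    dW = rng.normal(scale=np.sqrt(T / nf), size=(n, nf))
    Xf = np.ones(n)
    for k in range(nf):
        Xf = Xf - Xf * (T / nf) + dW[:, k]
    if l == 0:
        return Xf, np.zeros(n)
    Xc = np.ones(n)
    dWc = dW[:, 0::2] + dW[:, 1::2]          # same path on the coarser grid
    for k in range(nf // 2):
        Xc = Xc - Xc * (T / (nf // 2)) + dWc[:, k]
    return Xf, Xc

# Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
estimate = 0.0
for l in range(L_max + 1):
    n = 20_000 // 2**l + 100                 # crude decreasing allocation
    Xf, Xc = level_samples(l, n)
    estimate += (Xf - Xc).mean() if l > 0 else Xf.mean()

# Exact value is e^{-1} ~ 0.3679; a small Euler bias remains at level L_max.
print(f"MLMC estimate of E[X_T]: {estimate:.4f}")
```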
12. Calculation of neutron cross-sections in the unresolved resonance region by the Monte Carlo method
International Nuclear Information System (INIS)
The Monte-Carlo method is used to produce neutron cross-sections and cross-section probability functions in the unresolved energy region, and a corresponding Fortran programme (ONERS) is described. Using average resonance parameters, the code generates statistical distributions of level widths and spacings between resonances for s- and p-waves. Some neutron cross-sections for 238U and 235U are shown as examples.
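Conceptually, sampling from such a probability table reduces to drawing a cross-section band per history; the table below is invented for illustration and is not evaluated nuclear data.

```python
import numpy as np

rng = np.random.default_rng(13)

# Toy probability table for one energy group in the unresolved region:
# each band has a probability and a representative total cross section.
band_prob = np.array([0.15, 0.35, 0.30, 0.15, 0.05])
band_xs = np.array([5.0, 9.0, 14.0, 25.0, 60.0])     # barns (illustrative)

def sample_xs(n):
    """Pick a band per history according to the table probabilities."""
    bands = rng.choice(band_prob.size, size=n, p=band_prob)
    return band_xs[bands]

xs = sample_xs(1_000_000)
print(f"mean sigma = {xs.mean():.2f} b "
      f"(table average {np.dot(band_prob, band_xs):.2f} b)")
```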
13. A ''local'' exponential transform method for global variance reduction in Monte Carlo transport problems
International Nuclear Information System (INIS)
Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, while these methods reduce the variance in the problem area of interest, they tend to increase the variance in other, presumably less important, regions. As such, these methods tend not to be as effective in Monte Carlo calculations which require the minimization of the variance everywhere. Recently, 'Local' Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine 'local' biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation.
14. Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
Science.gov (United States)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
15. Quantifying and reducing uncertainty in life cycle assessment using the Bayesian Monte Carlo method
International Nuclear Information System (INIS)
The traditional life cycle assessment (LCA) does not perform quantitative uncertainty analysis. However, without characterizing the associated uncertainty, the reliability of assessment results cannot be understood or ascertained. In this study, the Bayesian method, in combination with the Monte Carlo technique, is used to quantify and update the uncertainty in LCA results. A case study of applying the method to comparison of alternative waste treatment options in terms of global warming potential due to greenhouse gas emissions is presented. In the case study, the prior distributions of the parameters used for estimating emission inventory and environmental impact in LCA were based on the expert judgment from the intergovernmental panel on climate change (IPCC) guideline and were subsequently updated using the likelihood distributions resulting from both national statistic and site-specific data. The posterior uncertainty distribution of the LCA results was generated using Monte Carlo simulations with posterior parameter probability distributions. The results indicated that the incorporation of quantitative uncertainty analysis into LCA revealed more information than the deterministic LCA method, and the resulting decision may thus be different. In addition, in combination with the Monte Carlo simulation, calculations of correlation coefficients facilitated the identification of important parameters that had major influence to LCA results. Finally, by using national statistic data and site-specific information to update the prior uncertainty distribution, the resultant uncertainty associated with the LCA results could be reduced. A better informed decision can therefore be made based on the clearer and more complete comparison of options
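The sketch below illustrates the two-step pattern described in this entry, on a single invented parameter: a guideline prior for an emission factor is updated with hypothetical site measurements (conjugate normal-normal), and the posterior is propagated through a toy impact model by Monte Carlo. All numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

# Bayesian Monte Carlo sketch for one LCA parameter: a CH4 emission factor.
mu0, sd0 = 50.0, 15.0                              # prior from a guideline [kg/t]
obs = np.array([41.0, 44.5, 39.8, 46.2, 42.9])     # site measurements (assumed)
sd_obs = 4.0                                       # known measurement sd (assumed)

# Conjugate normal-normal update of the mean emission factor.
prec = 1/sd0**2 + obs.size/sd_obs**2
mu_post = (mu0/sd0**2 + obs.sum()/sd_obs**2) / prec
sd_post = np.sqrt(1/prec)

# Monte Carlo propagation: GWP = factor * throughput * GWP100(CH4) = 28.
n = 100_000
factor = rng.normal(mu_post, sd_post, n)           # posterior draws [kg CH4/t]
throughput = rng.lognormal(np.log(1000), 0.1, n)   # t/yr, uncertain (assumed)
gwp = factor * throughput * 28 / 1000              # t CO2-eq per year

print(f"posterior factor: {mu_post:.1f} +/- {sd_post:.1f} (prior {mu0} +/- {sd0})")
print(f"GWP mean {gwp.mean():.0f} t CO2-eq, 95% CI "
      f"[{np.percentile(gwp, 2.5):.0f}, {np.percentile(gwp, 97.5):.0f}]")
```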
16. Investigation of neutral particle leakages in lacunary media to speed up Monte Carlo methods
International Nuclear Information System (INIS)
This research aims at optimizing the calculation methods used for long-duration penetration problems in radiation protection when vacuum media are involved. After recalling the main notions of transport theory, the various numerical methods used to solve the transport equations, the fundamentals of the Monte Carlo method, and problems related to long-duration penetration, the report focuses on the problem of leaks through vacuum. It describes the bias introduced in the TRIPOLI code, reports the search for an optimal bias in cylindrical configurations using the JANUS code, and reports the application to a simple straight tube.
17. Mass attenuation coefficient calculations of different detector crystals by means of FLUKA Monte Carlo method
Science.gov (United States)
Ermis, Elif Ebru; Celiktas, Cuneyt
2015-07-01
Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close agreement with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
18. Analysis over Critical Issues of Implementation or Non-implementation of the ABC Method in Romania
Directory of Open Access Journals (Sweden)
Sorinel Cãpusneanu
2009-12-01
This article analyses the critical issues regarding implementation or non-implementation of the Activity-Based Costing (ABC) method in Romania. It highlights the views and opinions of specialists in the field, together with the authors' own point of view, on the informational, technical, behavioral, financial, managerial, property and competitive issues surrounding implementation or non-implementation of the ABC method in Romania.
19. Numerical methods design, analysis, and computer implementation of algorithms
CERN Document Server
Greenbaum, Anne
2012-01-01
Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or c
20. TH-A-19A-11: Validation of GPU-Based Monte Carlo Code (gPMC) Versus Fully Implemented Monte Carlo Code (TOPAS) for Proton Radiation Therapy: Clinical Cases Study
International Nuclear Information System (INIS)
Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to those of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) versus a fully implemented proton therapy Monte Carlo code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases, due to anatomical geometrical complexity (air cavities and density heterogeneities) that makes dose calculation very challenging, and prostate cases, due to the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS were used to calculate three-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed by less than 2% between the TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed a gamma index passing rate for the target of more than 99%, the fifth having a passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and a fully implemented proton therapy Monte Carlo code for a group of dosimetrically challenging patient cases
1. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Chen Chaobin; Huang Qunying; Wu Yican
2005-01-01
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.
3. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method
International Nuclear Information System (INIS)
Quantitative scintigraphic images, obtained by NaI(Tl) scintillation cameras, are limited by photon attenuation and by the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulation of parameters in scintillation camera systems, stationary as well as SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57Co flood source, and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF correction method was also evaluated by simulation studies for (1) a myocardial source, (2) a uniform source in the lungs and (3) a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)
4. Acceptance and commissioning of a computerized treatment planning system based on Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.
2013-07-01
The acceptance for clinical use of the Monaco computerized planning system has been carried out. The system is based on a virtual model of the energy yield of the head of the linear electron accelerator, and it performs the dose calculation with an X-ray algorithm (XVMC) based on Monte Carlo. (Author)
5. An implementation of Runge's method for Diophantine equations
OpenAIRE
Beukers, F.; Tengely, Sz.
2005-01-01
In this paper we suggest an implementation of Runge's method for solving Diophantine equations satisfying Runge's condition. In this implementation we avoid the use of Puiseux series and algebraic coefficients.
6. Ant colony algorithm implementation in electron and photon Monte Carlo transport: Application to the commissioning of radiosurgery photon beams
Energy Technology Data Exchange (ETDEWEB)
Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario ''Carlos Haya'', Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2010-07-15
Purpose: In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ~3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. Conclusions: The new approach is competitive with those previously used in this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
8. A Monte-Carlo method for calculations of the distribution of angular deflections due to multiple scattering
International Nuclear Information System (INIS)
A Monte Carlo method for calculating the distribution of angular deflections of fast charged particles passing through a thin layer of matter is described on the basis of the Moliere theory of multiple scattering. The distribution of angular deflections obtained as the result of the calculations is compared with the Moliere theory. The proposed method is useful for calculating electron transport in matter by the Monte Carlo method. (author)
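As a toy illustration (assumed here, not taken from the paper), the Gaussian leading term of the Moliere distribution, f0(x) = 2x exp(-x^2) in the reduced angle x, can be sampled exactly by inverting its cumulative distribution; a full implementation must add the higher-order tail terms f1 and f2 that produce large-angle deflections.

    import math, random

    def sample_reduced_angle(rng=random):
        # Inverse-CDF sampling of f0(x) = 2x exp(-x^2): F(x) = 1 - exp(-x^2),
        # so x = sqrt(-ln(1 - u)) for u uniform in [0, 1).
        return math.sqrt(-math.log(1.0 - rng.random()))

    # Quick check: the sample mean should approach sqrt(pi)/2 ~ 0.886.
    print(sum(sample_reduced_angle() for _ in range(100000)) / 100000)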
9. Monte Carlo simulations of Higgs-boson production at the LHC with the KrkNLO method
CERN Document Server
Jadach, S; Placzek, W; Sapeta, S; Siodmok, A; Skrzypek, M
2016-01-01
We present numerical tests and predictions of the KrkNLO method for matching of NLO QCD corrections to hard processes with LO parton shower Monte Carlo generators. This method was described in detail in our previous publications, where its advantages over other approaches, such as MCatNLO and POWHEG, were pointed out. Here we concentrate on presenting some numerical results (cross sections and distributions) for $Z/\gamma^*$ (Drell-Yan) and Higgs-boson production processes at the LHC. The Drell-Yan process is used mainly to validate the KrkNLO implementation in the Herwig 7 program with respect to the previous implementation in Sherpa. We also show predictions for this process with the new, complete, MC-scheme parton distribution functions and compare them with our previously published results. Then, we present the first results of the KrkNLO method for the Higgs production in gluon-gluon fusion at the LHC and compare them with the predictions of other programs, such as MCFM, MCatNLO, POWHEG and HNNLO, as w...
10. Simulation of clinical X-ray tube using the Monte Carlo Method - PENELOPE code
International Nuclear Information System (INIS)
Breast cancer is the most common type of cancer among women. The main strategy to increase the long-term survival of patients with this disease is the early detection of the tumor, and mammography is the most appropriate method for this purpose. Despite the reduction of cancer deaths, there is great concern about the damage caused by ionizing radiation to the breast tissue. To evaluate this damage, a mammography unit was modeled and the depth spectra were obtained using the Monte Carlo method - PENELOPE code. The average energies of the spectra in depth and the half-value layer of the mammography output spectrum were calculated. (author)
11. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems
International Nuclear Information System (INIS)
The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we have formulated this method in inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and have derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method
12. Comparing Subspace Methods for Closed Loop Subspace System Identification by Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
David Di Ruscio
2009-10-01
Full Text Available A novel promising bootstrap subspace system identification algorithm for both open and closed loop systems is presented. An outline of the SSARX algorithm by Jansson (2003) is given and a modified SSARX algorithm is presented. Some methods which are consistent for closed loop subspace system identification presented in the literature are discussed and compared to a recently published subspace algorithm which works for both open and closed loop data, i.e., the DSR_e algorithm as well as the bootstrap method. Experimental comparisons are performed by Monte Carlo simulations.
13. Experimental results and Monte Carlo simulations of a landmine localization device using the neutron backscattering method
Energy Technology Data Exchange (ETDEWEB)
Datema, C.P. E-mail: [email protected]; Bom, V.R.; Eijk, C.W.E. van
2002-08-01
Experiments were carried out to investigate the possible use of neutron backscattering for the detection of landmines buried in the soil. Several landmines, buried in a sand-pit, were positively identified. A series of Monte Carlo simulations were performed to study the complexity of the neutron backscattering process and to optimize the geometry of a future prototype. The results of these simulations indicate that this method shows great potential for the detection of non-metallic landmines (with a plastic casing), for which so far no reliable method has been found.
14. Mass attenuation coefficient calculations of different detector crystals by means of FLUKA Monte Carlo method
OpenAIRE
Ermis Elif Ebru; Celiktas Cuneyt
2015-01-01
Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen in the calculations. Calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained through this method were in close accordance with the NIST values. It was concluded f...
15. Comparison of approximative Markov and Monte Carlo simulation methods for reliability assessment of crack containing components
International Nuclear Information System (INIS)
Reliability assessments based on probabilistic fracture mechanics can give insight into the effects of changes in design parameters, operational conditions and maintenance schemes. Although they are often not capable of providing absolute reliability values, these methods at least allow the ranking of different solutions among alternatives. Due to the variety of possible solutions for design, operation and maintenance problems, numerous probabilistic reliability assessments have to be carried out. This is a laborious task, especially for crack-containing welds of nuclear pipes subjected to fatigue. The objective of this paper is to compare the Monte Carlo simulation method and a newly developed approximative approach using the Markov process ansatz for this task
16. A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF)
Science.gov (United States)
Hansson, Marie; Isaksson, Mats
2007-04-01
X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in the cases when the measurement situation largely differs from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to get an estimate of the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by exclusion of electrons and by implementation of interaction forcing was conducted. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients while the simulation time involved in an individual calibration was low enough to be clinically feasible.
17. On the Calculation of Reactor Time Constants Using the Monte Carlo Method
International Nuclear Information System (INIS)
Full-core reactor dynamics calculation involves the coupled modelling of thermal hydraulics and the time-dependent behaviour of core neutronics. The reactor time constants include prompt neutron lifetimes, neutron reproduction times, effective delayed neutron fractions and the corresponding decay constants, typically divided into six or eight precursor groups. The calculation of these parameters is traditionally carried out using deterministic lattice transport codes, which also produce the homogenised few-group constants needed for resolving the spatial dependence of neutron flux. In recent years, there has been a growing interest in the production of simulator input parameters using the stochastic Monte Carlo method, which has several advantages over deterministic transport calculation. This paper reviews the methodology used for the calculation of reactor time constants. The calculation techniques are put to practice using two codes, the PSG continuous-energy Monte Carlo reactor physics code and MORA, a new full-core Monte Carlo neutron transport code entirely based on homogenisation. Both codes are being developed at the VTT Technical Research Centre of Finland. The results are compared to other codes and experimental reference data in the CROCUS reactor kinetics benchmark calculation. (author)
18. Uncertainty Assessment of the Core Thermal-Hydraulic Analysis Using the Monte Carlo Method
Energy Technology Data Exchange (ETDEWEB)
Choi, Sun Rock; Yoo, Jae Woon; Hwang, Dae Hyun; Kim, Sang Ji [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2010-10-15
In the core thermal-hydraulic design of a sodium cooled fast reactor, the uncertainty factor analysis is a critical issue in order to assure safe and reliable operation. The deviations from the nominal values need to be quantitatively considered by statistical thermal design methods. The hot channel factors (HCF) were employed to evaluate the uncertainty in early designs such as the CRBRP. The improved thermal design procedure (ISTP) calculates the overall uncertainty based on the Root Sum Square technique and sensitivity analyses of each design parameter. Another way to consider the uncertainties is to use the Monte Carlo method (MCM). In this method, all the input uncertainties are randomly sampled according to their probability density functions and the resulting distribution for the output quantity is analyzed. It is able to directly estimate the uncertainty effects and propagation characteristics for the present thermal-hydraulic model. However, it requires a huge computation time to get a reliable result because the accuracy is dependent on the sampling size. In this paper, the analysis of uncertainty factors using the Monte Carlo method is described. As a benchmark model, the ORNL 19 pin test is employed to validate the current uncertainty analysis method. The thermal-hydraulic calculation is conducted using the MATRA-LMR program which was developed at KAERI based on the subchannel approach. The results are compared with those of the hot channel factors and the improved thermal design procedure
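In outline, the MCM procedure described above reduces to the sketch below: draw every uncertain input from its probability density, run the thermal-hydraulic model, and read percentiles off the output sample. The surrogate model and the input distributions here are illustrative placeholders; the actual analysis calls the MATRA-LMR subchannel code.

    import random, statistics

    def model(t_inlet, peaking, flow):
        # Hypothetical stand-in for the subchannel calculation's hot-spot output.
        return t_inlet + 120.0 * peaking / flow

    def mc_uncertainty(n=10000, rng=random):
        out = sorted(
            model(rng.gauss(390.0, 2.0),   # inlet temperature, assumed PDF
                  rng.gauss(1.2, 0.05),    # power peaking factor, assumed PDF
                  rng.gauss(1.0, 0.03))    # relative flow rate, assumed PDF
            for _ in range(n)
        )
        return statistics.mean(out), out[int(0.95 * n)]  # mean and 95th percentile

    print(mc_uncertainty())

The accuracy of the tail percentile is governed by the sample size n, which is exactly the computational cost the abstract points out.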
19. A CNS calculation line based on a Monte-Carlo method
International Nuclear Information System (INIS)
The neutronic design of the moderator cell of a Cold Neutron Source (CNS) involves many different considerations regarding geometry, location, and materials. The decisions taken in this sense affect not only the neutron flux in the source neighbourhood, which can be evaluated by a standard deterministic method, but also the neutron flux values in experimental positions far away from the neutron source. At long distances from the CNS, very time consuming 3D deterministic methods or Monte Carlo transport methods are necessary in order to get accurate figures of standard and typical magnitudes such as average neutron flux, neutron current, angular flux, and luminosity. The Monte Carlo method is a unique and powerful tool to calculate the transport of neutrons and photons. Its use in a bootstrap scheme appears to be an appropriate solution for this type of systems. The use of MCNP as the main neutronic design tool leads to a fast and reliable method to perform calculations in a relatively short time with low statistical errors, if the proper scheme is applied. The design goal is to evaluate the performance of the CNS, its beam tubes and neutron guides, at specific experimental locations in the reactor hall and in the neutron or experimental hall. In this work, the calculation methodology used to design a CNS and its associated Neutron Beam Transport Systems (NBTS), based on the use of the MCNP code, is presented. (author)
20. Research on Reliability Modelling Method of Machining Center Based on Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
Chuanhai Chen
2013-03-01
Full Text Available The aim of this study is to obtain the reliability of a series system and analyze the reliability of a machining center. A modified method of reliability modelling based on Monte Carlo simulation for series systems is therefore proposed. The reliability function, which is built by the classical statistics method based on the assumption that machine tools are repaired as good as new, may be biased in the real case. The reliability functions of the subsystems are established respectively, and the reliability model is then built according to the reliability block diagram. The fitted reliability function of the machine tools is established using the failure data of a sample generated by Monte Carlo simulation, whose inverse reliability function is solved by the linearization technique based on radial basis functions. Finally, an example of the machining center is presented using the proposed method to show its potential application. The analysis results show that the proposed method can provide an accurate reliability model compared with the conventional method.
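A bare-bones version of the Monte Carlo part is sketched below: sample a lifetime for every subsystem and count the trials in which the whole series system survives. The Weibull scales and shapes are invented placeholders, not the machining-centre data of the study.

    import random

    SUBSYSTEMS = [(4000.0, 1.5), (9000.0, 0.9), (6500.0, 2.1)]  # (scale h, shape)

    def series_reliability(t, n=100000):
        # A series system survives to time t only if every subsystem does.
        ok = sum(
            1 for _ in range(n)
            if all(random.weibullvariate(scale, shape) > t
                   for scale, shape in SUBSYSTEMS)
        )
        return ok / n

    print(series_reliability(1000.0))  # estimate of R(1000 h)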
1. Online Health Management for Complex Nonlinear Systems Based on Hidden Semi-Markov Model Using Sequential Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qinming Liu
2012-01-01
Full Text Available Health management for a complex nonlinear system is becoming more important for condition-based maintenance and minimizing the related risks and costs over its entire life. However, a complex nonlinear system often operates under dynamic operational and environmental conditions, and it is subject to high levels of uncertainty and unpredictability, so that few effective methods for online health management exist. This paper combines the hidden semi-Markov model (HSMM) with sequential Monte Carlo (SMC) methods. HSMM is used to obtain the transition probabilities among health states and the health state durations of a complex nonlinear system, while the SMC method is adopted to decrease the computational and space complexity and describe the probability relationships between multiple health states and monitored observations of a complex nonlinear system. This paper proposes a novel method of multistep-ahead health recognition based on the joint probability distribution for health management of a complex nonlinear system. Moreover, a new online health prognostic method is developed. A real case study is used to demonstrate the implementation and potential applications of the proposed methods for online health management of complex nonlinear systems.
2. Towards testing a two-Higgs-doublet model with maximal CP symmetry at the LHC: Monte Carlo event generator implementation
International Nuclear Information System (INIS)
A Monte Carlo event generator is implemented for a two-Higgs-doublet model with maximal CP symmetry, the MCPM. The model contains five physical Higgs bosons; the ρ', behaving similarly to the standard-model Higgs boson, two extra neutral bosons h' and h'', and a charged pair H±. The special feature of the MCPM is that, concerning the Yukawa couplings, the bosons h', h'' and H± couple directly only to the second-generation fermions but with strengths given by the third-generation-fermion masses. Our event generator allows the simulation of the Drell-Yan-type production processes of h', h'' and H± in proton-proton collisions at LHC energies. Also the subsequent leptonic decays of these bosons into the μ+ μ-, μ+νμ and μ- anti νμ channels are studied as well as the dominant background processes. We estimate the integrated luminosities needed in pp collisions at center-of-mass energies of 8 and 14 TeV for significant observations of the Higgs bosons h', h'' and H± in these muonic channels. (orig.)
3. Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems
Science.gov (United States)
Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark
2016-03-01
The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to more effectively locally explore the configuration space of the posterior measure. However, obtaining such geometric quantities usually requires extensive computational effort which, despite their effectiveness, limits the applicability of these geometrically-based Monte Carlo methods. In this paper we explore one way to address this issue by the construction of an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian Process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper provide a demonstration of the significant improvement possible in terms of computational loading, suggesting this is a promising avenue of further development.
4. A 'local' exponential transform method for global variance reduction in Monte Carlo transport problems
International Nuclear Information System (INIS)
We develop a 'Local' Exponential Transform method which distributes the particles nearly uniformly across the system in Monte Carlo transport calculations. An exponential approximation to the continuous transport equation is used in each mesh cell to formulate biasing parameters. The biasing parameters, which resemble those of the conventional exponential transform, tend to produce a uniform sampling of the problem geometry when applied to a forward Monte Carlo calculation, and thus they help to minimize the maximum variance of the flux. Unlike the conventional exponential transform, the biasing parameters are spatially dependent, and are automatically determined from a forward diffusion calculation. We develop two versions of the forward Local Exponential Transform method, one with spatial biasing only, and one with spatial and angular biasing. The method is compared to conventional geometry splitting/Russian roulette for several sample one-group problems in X-Y geometry. The forward Local Exponential Transform method with angular biasing is found to produce better results than geometry splitting/Russian roulette in terms of minimizing the maximum variance of the flux. (orig.)
5. EVALUATION OF AGILE METHODS AND IMPLEMENTATION
OpenAIRE
Hossain, Arif
2015-01-01
The concepts of agile development were introduced when programmers were experiencing various obstacles in building software. The waterfall model had become obsolete and was no longer an adequate process for developing software. Consequently, new development methods were introduced to mitigate its defects. The purpose of this thesis is to study different agile methods and find out the best one for software development. Each important agile method offers ...
6. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
Shaoyun Ge
2014-01-01
Full Text Available In this paper we treat the reliability assessment problem of active distribution systems at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a program of low-penetration simulation and a program of high-penetration simulation. The load shedding strategy and the simulation process are introduced in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
7. Application of direct simulation Monte Carlo method for analysis of AVLIS evaporation process
International Nuclear Information System (INIS)
A computation code based on the direct simulation Monte Carlo (DSMC) method was developed in order to analyze atomic vapor evaporation in atomic vapor laser isotope separation (AVLIS). The atomic excitation temperatures of the gadolinium atom were calculated for a model with five low-lying states. The calculation results were compared with experiments obtained by laser absorption spectroscopy. Two types of DSMC simulations, which differed in the inelastic collision procedure, were carried out. It was concluded that energy transfer is forbidden unless the total energy of the colliding atoms exceeds a threshold value. (author)
8. Integration of the adjoint gamma quantum transport equation by the Monte Carlo method
International Nuclear Information System (INIS)
A comparative description and analysis of the direct and adjoint algorithms for calculating gamma-quantum transmission in shielding using the Monte Carlo method have been carried out. Adjoint estimations for a number of monoenergetic sources have been considered. A brief description of the ''COMETA'' program for the BESM-6 computer, realizing both the direct and adjoint algorithms, is presented. The program has a modular structure, which allows it to be extended by adding new module-units. Results of the solution by the adjoint branch of two analog problems, as compared to the analytical data, are presented. These results confirm the high efficiency of the ''COMETA'' program
9. Microlens assembly error analysis for light field camera based on Monte Carlo method
Science.gov (United States)
Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping
2016-08-01
This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images affected by the coupling distance error, movement error and rotation error that can appear during microlens installation. By examining these images, the sub-aperture images and the refocus images, we found that the images present different degrees of blur and deformation for different microlens assembly errors, while the sub-aperture image presents aliasing, obscured images and other distortions that result in unclear refocus images.
10. Using Markov Chain Monte Carlo methods to solve full Bayesian modeling of PWR vessel flaw distributions
International Nuclear Information System (INIS)
We present a hierarchical Bayesian method for estimating the density and size distribution of subclad-flaws in French Pressurized Water Reactor (PWR) vessels. This model takes into account in-service inspection (ISI) data, a flaw size-dependent probability of detection (different functions are considered) with a threshold of detection, and a flaw sizing error distribution (different distributions are considered). The resulting model is identified through a Markov Chain Monte Carlo (MCMC) algorithm. The article includes discussion for choosing the prior distribution parameters and an illustrative application is presented highlighting the model's ability to provide good parameter estimates even when a small number of flaws are observed
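For readers unfamiliar with the MCMC machinery through which the model above is identified, a random-walk Metropolis step is sketched below on a toy one-parameter log-posterior; the paper's actual posterior (flaw density and size with detection threshold and sizing error) is of course far richer, and the toy target here is an assumption for illustration.

    import math, random

    def log_post(theta):
        # Hypothetical log-posterior: a standard normal, for illustration only.
        return -0.5 * theta * theta

    def metropolis(n_steps=50000, step=1.0, rng=random):
        theta, chain = 0.0, []
        for _ in range(n_steps):
            proposal = theta + rng.gauss(0.0, step)
            # Accept with probability min(1, post(proposal) / post(theta)).
            if math.log(rng.random()) < log_post(proposal) - log_post(theta):
                theta = proposal
            chain.append(theta)
        return chain

    samples = metropolis()[5000:]  # discard burn-in before estimating moments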
11. Percolation conductivity of Penrose tiling by the transfer-matrix Monte Carlo method
Science.gov (United States)
Babalievski, Filip V.
1992-03-01
A generalization of the Derrida and Vannimenus transfer-matrix Monte Carlo method has been applied to calculations of the percolation conductivity in a Penrose tiling. Strips with a length of ~10^4 and widths from 3 to 19 have been used. Disregarding the differences for smaller strip widths (up to 7), the results show that the percolative conductivity of a Penrose tiling has a value very close to that of a square lattice. The estimate for the percolation transport exponent once more confirms the universality conjecture for the 0-1 distribution of resistors.
12. Forward-walking Green's function Monte Carlo method for correlation functions
International Nuclear Information System (INIS)
The forward-walking Green's Function Monte Carlo method is used to compute expectation values for the transverse Ising model in (1 + 1)D, and the results are compared with exact values. The magnetisation Mz and the correlation function pz (n) are computed. The algorithm reproduces the exact results, and convergence for the correlation functions seems almost as rapid as for local observables such as the magnetisation. The results are found to be sensitive to the trial wavefunction, however, especially at the critical point. Copyright (1999) CSIRO Australia
13. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy
International Nuclear Information System (INIS)
Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for the prescribed dose is made manually. A Monte-Carlo method Python library written at Madagascar INSTN is used experimentally to calculate the dose distribution on the tumour and around it. The first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs is set up with a listener patch running on each PC. The library will be used to model the dose distribution in the CT scan patient picture for individual, more accurate time calculation for a prescribed dose.
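The core of any such library is a photon random walk. The sketch below is a deliberately crude, assumption-laden caricature (one energy, one total attenuation coefficient, no scattering kinematics) of how free paths and local energy deposition are sampled; it is not the INSTN code.

    import math, random

    MU = 0.0857      # 1/cm, approx. total attenuation of 662 keV photons in water
    P_ABSORB = 0.3   # illustrative per-interaction absorption probability

    def track_photon(rng=random):
        """Follow one 662 keV photon along a 1-D depth axis and record
        (depth, energy) deposition events until absorption or cut-off."""
        depth, energy, deposits = 0.0, 0.662, []
        while energy > 0.01:
            depth += -math.log(rng.random()) / MU    # exponential free path
            if rng.random() < P_ABSORB:
                deposits.append((depth, energy))      # absorbed: deposit all energy
                break
            lost = energy * rng.uniform(0.1, 0.5)     # crude "scatter" energy loss
            deposits.append((depth, lost))
            energy -= lost
        return deposits

Averaging the deposits of many such histories over a depth grid yields a depth-dose curve, which is the quantity the library compares against the vendor's curves.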
14. Linewidth of Cyclotron Absorption in Band-Gap Graphene: Relaxation Time Approximation vs. Monte Carlo Method
Directory of Open Access Journals (Sweden)
S.V. Kryuchkov
2015-03-01
Full Text Available The power of the elliptically polarized electromagnetic radiation absorbed by band-gap graphene in presence of constant magnetic field is calculated. The linewidth of cyclotron absorption is shown to be non-zero even if the scattering is absent. The calculations are performed analytically with the Boltzmann kinetic equation and confirmed numerically with the Monte Carlo method. The dependence of the linewidth of the cyclotron absorption on temperature applicable for a band-gap graphene in the absence of collisions is determined analytically.
15. Investigation of the optimal parameters for laser treatment of leg telangiectasia using the Monte Carlo method
Science.gov (United States)
Kienle, Alwin; Hibst, Raimund
1996-05-01
Treatment of leg telangiectasia with a pulsed laser is investigated theoretically. The Monte Carlo method is used to calculate light propagation and absorption in the epidermis, dermis and the ectatic blood vessel. Calculations are made for different diameters and depths of the vessel in the dermis. In addition, the scattering and the absorption coefficients of the dermis are varied. On the basis of the considered damage model it is found that for vessels with diameters between 0.3 mm and 0.5 mm, wavelengths of about 600 nm are optimal to achieve selective photothermolysis.
16. Enhanced least squares Monte Carlo method for real-time decision optimizations for evolving natural hazards
DEFF Research Database (Denmark)
Anders, Annett; Nishijima, Kazuyoshi
The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering. The approach takes basis in the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option prices. In Anders & Nishijima (2011) the LSM is adapted for a real-time operational decision problem; however it is found that further improvement is required in regard to the computational efficiency, in order to facilitate it for practice. This is the focus in the present paper. The idea behind the...
17. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
Science.gov (United States)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in accurate clinical radiotherapy is hindered by its slow convergence and long computation time. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the accurate radiotherapy dose calculation time on a normal computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by FDS Team, a fast MC method for electron-photon coupled transport was presented with focus on two aspects: firstly, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction of calculation accuracy; secondly, a variety of MC calculation acceleration methods were used, for example, making use of information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on many simple physical models and clinical cases including nasopharyngeal carcinoma, peripheral lung tumor, cervical carcinoma, etc. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System ARTS as a MC dose verification module.
18. NASA astronaut dosimetry: Implementation of scalable human phantoms and benchmark comparisons of deterministic versus Monte Carlo radiation transport
Science.gov (United States)
19. Numerical simulation of C/O spectroscopy in logging by Monte-Carlo method
International Nuclear Information System (INIS)
Numerical simulation of the C/O spectroscopy ratio in logging by the Monte-Carlo method is presented in this paper. Agreeing well with the measured spectra, the simulated spectra can meet the requirements of logging practice. The C/O ratios as affected by different formation oil saturations, borehole oil fractions, casing sizes and concrete ring thicknesses are investigated. In order to achieve accurate results when processing the spectra, this paper presents a new method for unfolding the C/O inelastic gamma spectroscopy; analysis of the spectra using this method agrees with the facts. These rules and this method can be used for calibration and logging interpretation. (authors)
20. Spin kinetic Monte Carlo method for nanoferromagnetism and magnetization dynamics of nanomagnets with large magnetic anisotropy
Institute of Scientific and Technical Information of China (English)
LIU Bang-gui; ZHANG Kai-cheng; LI Ying
2007-01-01
The Kinetic Monte Carlo (KMC) method based on the transition-state theory, powerful and famous for simulating atomic epitaxial growth of thin films and nanostructures, was used recently to simulate the nanoferromagnetism and magnetization dynamics of nanomagnets with giant magnetic anisotropy. We present a brief introduction to the KMC method and show how to reformulate it for nanoscale spin systems. Large enough magnetic anisotropy, observed experimentally and shown theoretically in terms of first-principle calculation, is not only essential to stabilize spin orientation but also necessary in making the transition-state barriers during spin reversals for spin KMC simulation. We show two applications of the spin KMC method to monatomic spin chains and spin-polarized-current controlled composite nanomagnets with giant magnetic anisotropy. This spin KMC method can be applied to other anisotropic nanomagnets and composite nanomagnets as long as their magnetic anisotropy energies are large enough.
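The transition-state bookkeeping mentioned above is the standard rejection-free KMC loop: compute an Arrhenius rate for every possible spin reversal, pick one event with probability proportional to its rate, and advance the clock by an exponential waiting time. A generic single step is sketched below; the barrier values and attempt frequency are invented for illustration.

    import math, random

    KB = 8.617e-5  # Boltzmann constant, eV/K

    def kmc_step(barriers_eV, temperature_K, attempt_freq=1e13, rng=random):
        """Pick one event (index) and the elapsed time for a single KMC step."""
        rates = [attempt_freq * math.exp(-eb / (KB * temperature_K))
                 for eb in barriers_eV]
        total = sum(rates)
        r, acc = rng.random() * total, 0.0
        for i, rate in enumerate(rates):      # event i chosen with prob. rate/total
            acc += rate
            if r <= acc:
                break
        dt = -math.log(rng.random()) / total  # exponentially distributed residence time
        return i, dt

    event, dt = kmc_step([0.45, 0.52, 0.60], temperature_K=300.0)  # barriers assumed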
1. Differential Monte Carlo method for computing seismogram envelopes and their partial derivatives
Science.gov (United States)
Takeuchi, Nozomu
2016-05-01
We present an efficient method that is applicable to waveform inversions of seismogram envelopes for structural parameters describing scattering properties in the Earth. We developed a differential Monte Carlo method that can simultaneously compute synthetic envelopes and their partial derivatives with respect to structural parameters, which greatly reduces the required CPU time. Our method has no theoretical limitations to apply to the problems with anisotropic scattering in a heterogeneous background medium. The effects of S wave polarity directions and phase differences between SH and SV components are taken into account. Several numerical examples are presented to show that the intrinsic and scattering attenuation at the depth range of the asthenosphere have different impacts on the observed seismogram envelopes, thus suggesting that our method can potentially be applied to inversions for scattering properties in the deep Earth.
2. Paediatric CT exposures: comparison between CTDIvol and SSDE methods using measurements and Monte Carlo simulations
International Nuclear Information System (INIS)
Computed tomography (CT) is one of the most used techniques in medical diagnosis, and its use has become one of the main sources of exposure of the population to ionising radiation. This work concentrates on paediatric patients, since children exhibit higher radiosensitivity than adults. Nowadays, patient doses are estimated using two standard CT dose index (CTDI) phantoms as a reference to calculate CTDI volume (CTDIvol) values. This study aims at improving the knowledge about the radiation exposure to children and at better assessing the accuracy of the CTDIvol method. The effectiveness of the CTDIvol method for patient dose estimation was investigated through a sensitivity study, taking into account the doses obtained by three methods: CTDIvol measured, CTDIvol values simulated with the Monte Carlo (MC) code MCNPX, and the recently proposed Size-Specific Dose Estimate (SSDE) method. In order to assess organ doses, MC simulations were executed with paediatric voxel phantoms. (authors)
3. Biases in approximate solution to the criticality problem and alternative Monte Carlo method
International Nuclear Information System (INIS)
The solution to the problem of criticality for the neutron transport equation using the source iteration method is addressed. In particular, the question of convergence of the iterations is examined. It is concluded that slow convergence problems will occur in cases where the optical thickness of the space region in question is large. Furthermore it is shown that in general, the final result of the iterative process is strongly affected by an insufficient accuracy of the individual iterations. To avoid these problems, a modified method of the solution is suggested. This modification is based on the results of the theory of positive operators. The criticality problem is solved by means of the Monte Carlo method by constructing special random variables so that the differences between the observed and exact results are arbitrarily small. The efficiency of the method is discussed and some numerical results are presented
4. Recent advances in the microscopic calculations of level densities by the shell model Monte Carlo method
International Nuclear Information System (INIS)
The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (1) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (2) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes 59-64Ni and of a heavy deformed rare-earth nucleus 162Dy and found them to be in close agreement with various experimental data sets. (author)
5. On solution to the problem of criticality by alternative MONTE CARLO method
International Nuclear Information System (INIS)
The contribution deals with the solution to the problem of criticality for the neutron transport equation. The problem is transformed to an equivalent one in a suitable set of complex functions, and the existence and uniqueness of its solution are shown. Then the source iteration method of solution is discussed. It is pointed out that the final result of the iterative process is strongly affected by the fact that individual iterations are not computed with sufficient accuracy. To avoid this problem a modified method of solution is suggested and presented. The modification is based on results of the theory of positive operators, and the problem of criticality is solved by the Monte Carlo method, constructing a special random process and variable so that the differences between the results obtained and the exact ones are arbitrarily small. The efficiency of this alternative method is analysed as well (Author)
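The source iteration analysed in this and the related entry above is, in discretized form, a power iteration on the transport operator, and its slow convergence for optically thick regions corresponds to a dominance ratio close to one. A toy matrix version is sketched below; the 2x2 operator is a stand-in for illustration, not a transport discretization.

    def power_iteration(H, n_iter=500):
        """Source (power) iteration for the dominant eigenvalue of operator H."""
        n = len(H)
        phi = [1.0] * n
        k = 1.0
        for _ in range(n_iter):
            psi = [sum(H[i][j] * phi[j] for j in range(n)) for i in range(n)]
            k = sum(psi) / sum(phi)        # eigenvalue (criticality) estimate
            phi = [p / k for p in psi]     # renormalized source for the next sweep
        return k, phi

    H = [[0.6, 0.3], [0.2, 0.7]]           # toy positive operator, eigenvalues 0.9, 0.4
    k_eff, flux = power_iteration(H)       # k_eff converges to 0.9

The closer the second eigenvalue is to the first (here 0.4 vs. 0.9), the more iterations are needed, which is the convergence difficulty both abstracts address.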
6. A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry
International Nuclear Information System (INIS)
The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aiming to solve the modeling challenges of multi-physics coupling simulation. An automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Process, was recently developed and integrated in MCAM5.2. This method can bi-convert between a CAD model and a SuperMC input file. When converting from CAD model to SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC convex basic solids are then generated and output. When converting from SuperMC model to CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. This method was benchmarked with the ITER Benchmark model. The results showed that the method is correct and effective. (author)
7. Recent Advances in the Microscopic Calculations of Level Densities by the Shell Model Monte Carlo Method
CERN Document Server
Alhassid, Y; Liu, S; Mukherjee, A; Nakada, H
2014-01-01
The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes $^{59-64}$Ni and of a heavy deformed rare-earth nucleus $^{162}$Dy and found them to be in close agreement with various experimental data sets.
International Nuclear Information System (INIS)
Light transfer in gradient-index media generally follows curved ray trajectories, which will cause light beam to converge or diverge during transfer and induce the rotation of polarization ellipse even when the medium is transparent. Furthermore, the combined process of scattering and transfer along curved ray path makes the problem more complex. In this paper, a Monte Carlo method is presented to simulate polarized radiative transfer in gradient-index media that only support planar ray trajectories. The ray equation is solved to the second order to address the effect induced by curved ray trajectories. Three types of test cases are presented to verify the performance of the method, which include transparent medium, Mie scattering medium with assumed gradient index distribution, and Rayleigh scattering with realistic atmosphere refractive index profile. It is demonstrated that the atmospheric refraction has significant effect for long distance polarized light transfer. - Highlights: • A Monte Carlo method for polarized radiative transfer in gradient index media. • Effect of curved ray paths on polarized radiative transfer is considered. • Importance of atmospheric refraction for polarized light transfer is demonstrated
9. The applicability of certain Monte Carlo methods to the analysis of interacting polymers
Energy Technology Data Exchange (ETDEWEB)
Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States)
1998-05-01
The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metropolis rejection to locate the phase transition, which is known to occur at β_crit ≈ 0.99, and to recalculate the known value of the critical exponent ν ≈ 0.58 of the system for β = β_crit. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of β_crit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions, can in practice fail to sample phase space accurately and thus not allow accurate estimations of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
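A compact version of the pivot-plus-Metropolis machinery is sketched below, on a square lattice for brevity (the study uses a hexagonal one) and with energy defined as minus the number of non-bonded nearest-neighbour contacts; it illustrates the moves themselves, not the authors' ergodicity diagnostics.

    import math, random

    # Lattice symmetries used as pivot operations (rotations and reflections).
    OPS = [lambda x, y: (-y, x), lambda x, y: (-x, -y), lambda x, y: (y, -x),
           lambda x, y: (x, -y), lambda x, y: (-x, y)]

    def energy(walk):
        occ = set(walk)
        pairs = set()
        for x, y in walk:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if (x + dx, y + dy) in occ:
                    pairs.add(frozenset(((x, y), (x + dx, y + dy))))
        return -(len(pairs) - (len(walk) - 1))  # subtract the chain's own bonds

    def pivot_step(walk, beta, rng=random):
        k = rng.randrange(1, len(walk) - 1)
        px, py = walk[k]
        op = rng.choice(OPS)
        tail = []
        for x, y in walk[k + 1:]:
            rx, ry = op(x - px, y - py)
            tail.append((px + rx, py + ry))
        if set(walk[:k + 1]) & set(tail):       # pivot broke self-avoidance
            return walk
        new = walk[:k + 1] + tail
        if rng.random() < math.exp(-beta * (energy(new) - energy(walk))):
            return new                          # Metropolis acceptance
        return walk

    walk = [(i, 0) for i in range(40)]          # straight initial configuration
    for _ in range(10000):
        walk = pivot_step(walk, beta=0.5)

At large beta most proposed pivots either self-intersect or raise the energy, which is exactly the small acceptance fraction and trapping behaviour the abstract reports.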
10. Analysis of uncertainty quantification method by comparing Monte-Carlo method and Wilk's formula
International Nuclear Information System (INIS)
An analysis of the uncertainty quantification related to LBLOCA using the Monte-Carlo calculation has been performed and compared with the tolerance level determined by the Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LOCA phenomena were determined based on previous PIRT results and documentation during the BEMUSE project. Calculations were conducted on 3,500 cases within a 2-week CPU time on a 14-PC cluster system. The Monte-Carlo exercise shows that the 95% upper limit PCT value can be obtained well, with a 95% confidence level, using the Wilks' formula, although we have to endure a 5% risk of PCT under-prediction. The results also show that the statistical fluctuation of the limit value using Wilks' first-order formula is as large as the uncertainty value itself. It is therefore desirable to increase the order of the Wilks' formula beyond the second order to estimate a reliable safety margin for the design features. It is also shown that, with its ever-increasing computational capability, the Monte-Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame.
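The trade-off discussed above is easy to reproduce: the Wilks' sample size is the smallest N for which the k-th largest of N runs bounds the 95th percentile with 95% confidence. The snippet below computes the familiar first- and second-order values (59 and 93).

    from math import comb

    def wilks_confidence(n, k=1, gamma=0.95):
        """Confidence that the k-th largest of n runs exceeds the gamma-quantile."""
        return 1.0 - sum(comb(n, j) * gamma**j * (1.0 - gamma)**(n - j)
                         for j in range(n - k + 1, n + 1))

    def smallest_n(k=1, gamma=0.95, beta=0.95):
        n = k
        while wilks_confidence(n, k, gamma) < beta:
            n += 1
        return n

    print(smallest_n(k=1), smallest_n(k=2))  # -> 59 93

Raising the order k increases N but shrinks the statistical fluctuation of the limit value, which is exactly the motivation given above for going beyond the lowest orders.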
11. Simulation of the nucleation of the precipitate Al3Sc in an aluminum scandium alloy using the kinetic monte carlo method
OpenAIRE
Moura, Alfredo de; Esteves, António
2013-01-01
This paper describes the simulation of the phenomenon of nucleation of the precipitate Al3Sc in an Aluminum Scandium alloy using the kinetic Monte Carlo (kMC) method and the density-based clustering with noise (DBSCAN) method to filter the simulation data. To conduct this task, kMC and DBSCAN algorithms were implemented in C language. The study covers a range of temperatures, concentrations, and dimensions, going from 573K to 873K, 0.25% to 5%, and 50x50x50 to 100x100x100. The Al3Sc precipita...
12. Self-optimizing Monte Carlo method for nuclear well logging simulation
Science.gov (United States)
Liu, Lianyan
1997-09-01
In order to increase the efficiency of Monte Carlo simulation for nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated in the regular Monte Carlo calculation as a by-product, and the importance map is later used to conduct the splitting and Russian roulette for particle population control. By adopting a spatial mesh system which is independent of the physical geometrical configuration, the method allows superior user-friendliness. This new method is incorporated into the general purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test the performance of this new method. The calculations are sped up over analog simulation by 120 and 2600 times, for the neutron porosity tool and for the gamma-ray lithology density log, respectively. The new method performs better than MCNP's cell-based weight window by a factor of 4-6, as measured by the converged figures of merit. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes quite some time to generate a reasonable importance map from an analog run, a good initial map can create significant CPU time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for the neutron and gamma-ray cases, respectively. The learning ability towards a correct importance map is also demonstrated. Although false learning may happen, physical judgement can help diagnose it with contributon maps. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Due to the fact that a very
13. Monte Carlo simulation methods of determining red bone marrow dose from external radiation
International Nuclear Information System (INIS)
Objective: To provide evidence for a more reasonable method of determining red bone marrow dose by analyzing and comparing existing simulation methods. Methods: Using the Monte Carlo simulation software MCNPX, the absorbed doses to the red bone marrow of the Rensselaer Polytechnic Institute (RPI) adult female voxel phantom were calculated by 4 different methods: direct energy deposition, dose response function (DRF), King-Spiers factor method, and mass-energy absorption coefficient (MEAC). The radiation sources were defined as infinite plate sources with energies ranging from 20 keV to 10 MeV, and 23 sources with different energies were simulated in total. The source was placed right next to the front of the RPI model to achieve a homogeneous anteroposterior radiation scenario. The results of the different simulated photon energy sources through the different methods were compared. Results: When the photon energy was lower than 100 keV, the direct energy deposition method gave the highest result, while the MEAC and King-Spiers factor methods showed more reasonable results. When the photon energy was higher than 150 keV, taking into account the higher absorption ability of red bone marrow at higher photon energies, the result of the King-Spiers factor method was larger than those of the other methods. Conclusions: The King-Spiers factor method might be the most reasonable method to estimate the red bone marrow dose from external radiation. (authors)
14. Wind Turbine Placement Optimization by means of the Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
S. Brusca
2014-01-01
Full Text Available This paper defines a new procedure for optimising wind farm turbine placement by means of the Monte Carlo simulation method. To verify the algorithm's accuracy, an experimental wind farm was tested in a wind tunnel. On the basis of the experimental measurements, the error on wind farm power output was less than 4%. The optimization maximises the energy production criterion; the wind turbines' ground positions were used as independent variables. Moreover, the mathematical model takes into account annual wind intensities and directions as well as wind turbine interaction. The optimization of a wind farm on a real site was carried out using measured wind data, dominant wind direction, and intensity data as inputs to run the Monte Carlo simulations. There were 30 turbines in the wind park, each rated at 20 kW. This choice was based on wind farm economics. The site was proportionally divided into 100 square cells, taking into account a minimum windward and crosswind distance between the turbines. The results highlight that the dominant wind intensity factor tends to overestimate the annual energy production by about 8%. Thus, the proposed method leads to a more precise annual energy evaluation and to a more optimal placement of the wind turbines.
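In outline, the procedure amounts to the random search sketched below: generate feasible layouts that respect a minimum spacing, score each with an energy model, and keep the best. The grid size, spacing rule and the wake-penalty scoring here are invented placeholders for the paper's wind-statistics-based model.

    import random

    GRID, N_TURB, MIN_D = 20, 30, 2   # cells per side, turbines, Chebyshev spacing

    def random_layout(rng=random):
        cells = []
        while len(cells) < N_TURB:
            c = (rng.randrange(GRID), rng.randrange(GRID))
            if all(max(abs(c[0] - x), abs(c[1] - y)) >= MIN_D for x, y in cells):
                cells.append(c)
        return cells

    def energy(layout):
        # Crude wake proxy: a turbine loses output for each machine directly
        # upwind of it in the same column (dominant wind blowing along +y).
        return sum(1.0 / (1.0 + 0.5 * sum(1 for x2, y2 in layout
                                          if x2 == x and y2 < y))
                   for x, y in layout)

    best = max((random_layout() for _ in range(2000)), key=energy)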
15. Monteray Mark-I: Computer program (PC-version) for shielding calculation with Monte Carlo method
International Nuclear Information System (INIS)
A computer program for gamma-ray shielding calculation using the Monte Carlo method has been developed. The program is written in the WATFOR77 language. MONTERAY MARK-I was originally developed by James Wood; the program was modified by the authors so that the modified version is easily executed. Applying the Monte Carlo method, the program follows gamma-photon transport in an infinite planar shield of various thicknesses. A photon is followed until it escapes from the shield or its energy falls below the cut-off energy. The pair production process is treated as a pure absorption process, in that the annihilation photons generated in the process are neglected in the calculation. The output data calculated by the program are the total albedo, the build-up factor, and the photon spectra. The calculated build-up factors for lead and water slabs with a 6 MeV parallel-beam gamma source agree with published data. Hence the program is adequate as a shielding design tool for studying gamma radiation transport in various media
16. Inconsistencies in widely used Monte Carlo methods for precise calculation of radial resonance captures in uranium fuel rods
International Nuclear Information System (INIS)
Although resonance neutron captures for 238U in water-moderated lattices are known to occur near moderator-fuel interfaces, the sharply attenuated spatial captures here have not been calculated by multigroup transport or Monte Carlo methods. Advances in computer speed and capacity have restored interest in applying Monte Carlo methods to evaluate spatial resonance captures in fueled lattices. Recently published studies have placed complete reliance on the ostensible precision of the Monte Carlo approach without auxiliary confirmation that resonance processes were followed adequately or that the Monte Carlo method was applied appropriately. Other methods of analysis that have evolved from early resonance integral theory have provided a basis for an alternative approach to determine radial resonance captures in fuel rods. A generalized method has been formulated and confirmed by comparison with published experiments of high spatial resolution for radial resonance captures in metallic uranium rods. The same analytical method has been applied to uranium-oxide fuels. The generalized method defined a spatial effective resonance cross section that is a continuous function of distance from the moderator-fuel interface and enables direct calculation of precise radial resonance capture distributions in fuel rods. This generalized method is used as a reference for comparison with two recent independent studies that have employed different Monte Carlo codes and cross-section libraries. Inconsistencies in the Monte Carlo application or in how pointwise cross-section libraries are sampled may exist. It is shown that refined Monte Carlo solutions with improved spatial resolution would not asymptotically approach the reference spatial capture distributions
17. Derivation of a Monte Carlo method for modeling heterodyne detection in optical coherence tomography systems
DEFF Research Database (Denmark)
Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.
2002-01-01
A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach to this… from the sample will have a finite spatial coherence that cannot be accounted for by MC simulation. To estimate this intensity distribution adequately we have developed a novel method for modeling a focused Gaussian beam in MC simulation. This approach is valid for a softly as well as for a strongly focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently…
18. Simulating rotationally inelastic collisions using a Direct Simulation Monte Carlo method
CERN Document Server
Schullian, O; Vaeck, N; van der Avoird, A; Heazlewood, B R; Rennick, C J; Softley, T P
2015-01-01
A new approach to simulating rotational cooling using a direct simulation Monte Carlo (DSMC) method is described and applied to the rotational cooling of ammonia seeded into a helium supersonic jet. The method makes use of ab initio rotational state changing cross sections calculated as a function of collision energy. Each particle in the DSMC simulations is labelled with a vector of rotational populations that evolves with time. Transfer of energy into translation is calculated from the mean energy transfer for this population at the specified collision energy. The simulations are compared with a continuum model for the on-axis density, temperature and velocity; rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH$_3$ by He.
19. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.
Directory of Open Access Journals (Sweden)
Kaarina Matilainen
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with the corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of the mixed model equations, once per parameter to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible, and the results encourage testing of these methods with different kinds of large-scale problem settings.
20. Concerned items on variance reduction method of monte carlo calculation written in published literatures. A logic of monte carlo calculation=from experience to science
International Nuclear Information System (INIS)
In fixed source problems, such as neutron deep penetration calculations with the Monte Carlo method, the application of a variance reduction method is most important for achieving a high figure of merit (FOM) and the most reliable calculation. However, the MCNP calculation inputs written in the published literature are not always the best solution. The items of most concern are the method of setting the lower weight bound in the weight window method and the exclusion radius for a point estimator. In those publications, the lower weight bound is estimated either by engineering judgement or by the weight window generator in MCNP; in the latter case, the lower weight bound is used with no tuning process. Because of abnormally large lower weight bounds, many neutrons are killed meaninglessly by Russian roulette. The adjoint flux method for setting the lower weight bound should be adopted as a standard variance reduction method. Monte Carlo calculation should thus move from experience, such as engineering judgement, to science, such as the adjoint method. (author)
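A minimal sketch of the Russian roulette step this abstract criticizes. The function name, weight values and both bounds below are illustrative choices of ours, not taken from MCNP or the abstract: the game preserves the mean weight, but an abnormally large lower bound kills almost every particle, wasting the work already spent on them.

```python
import random

def russian_roulette(weight, w_low, w_survive):
    """Kill sub-threshold particles at random; survivors get a boosted weight.

    The game is fair on average: E[new weight] == weight.
    """
    if weight >= w_low:
        return weight                  # above the window: no roulette
    if random.random() < weight / w_survive:
        return w_survive               # survivor carries the lost weight
    return 0.0                         # particle killed

random.seed(1)
weights = [0.05] * 100_000
for w_low in (0.1, 10.0):              # a sane and an abnormally large bound
    survivors = [russian_roulette(w, w_low, w_survive=2 * w_low)
                 for w in weights]
    alive = sum(1 for w in survivors if w > 0)
    print(f"w_low={w_low:5}: {alive} of {len(weights)} particles survive, "
          f"mean weight {sum(survivors) / len(survivors):.4f}")
```

Both runs keep the same mean weight, but the large bound leaves only a handful of very heavy survivors, which is exactly the wasted-history, high-variance situation the abstract warns about.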
1. Use of Monte Carlo Methods for Evaluating Probability of False Positives in Archaeoastronomy Alignments
Science.gov (United States)
Hull, Anthony B.; Ambruster, C.; Jewell, E.
2012-01-01
Simple Monte Carlo simulations can assist both the cultural astronomy researcher while the research design is developed and the eventual evaluators of research products. Following the method we describe allows assessment of the probability of false positives associated with a site. Even seemingly evocative alignments may be meaningless, depending on the site characteristics and the number of degrees of freedom the researcher allows. In many cases, an observer may have to limit comments to "it is nice and it might be culturally meaningful", rather than saying "it is impressive so it must mean something". We describe a basic language with an associated set of attributes to be cataloged. These can be used to set up simple Monte Carlo simulations for a site. Without corroborating cultural evidence, or trends with similar attributes (for example, a number of sites showing the same anticipatory date), the Monte Carlo simulation can be used as a filter to establish the likelihood that the observed alignment phenomena are the result of random factors. Such analysis may temper any eagerness to prematurely attribute cultural meaning to an observation. For the most complete description of an archaeological site, we urge researchers to capture the site attributes in a manner which permits statistical analysis. We also encourage cultural astronomers to record that which does not work, and that which may seem to align but has no discernible meaning. Properly reporting situational information as tenets of the research design will reduce the subjective nature of archaeoastronomical interpretation. Examples from field work will be discussed.
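A sketch of the kind of filter the authors describe: random sight-line azimuths scored against a target list. Every number below (targets, tolerance, line count, observed hits) is a made-up example, not data from any site.

```python
import random

random.seed(42)

# Hypothetical site: 12 sight lines (azimuths in degrees) and 8 astronomically
# meaningful target azimuths (solstices, lunar standstills, ...); all invented.
targets = [58.2, 61.0, 90.0, 119.5, 238.2, 241.0, 270.0, 299.5]
tolerance = 2.0            # degrees within which we call it an "alignment"
n_lines = 12
observed_hits = 3          # alignments actually found at the (fictional) site

def count_hits(azimuths):
    """Number of sight lines within the tolerance of any target azimuth."""
    return sum(any(abs((a - t + 180) % 360 - 180) <= tolerance
                   for t in targets) for a in azimuths)

# Monte Carlo: how often do purely random sight lines do at least as well?
trials = 20_000
at_least = sum(
    count_hits([random.uniform(0, 360) for _ in range(n_lines)]) >= observed_hits
    for _ in range(trials))
print(f"P(>= {observed_hits} chance alignments) ~ {at_least / trials:.3f}")
```

If this chance probability is not small, the "evocative" alignments are compatible with random factors, which is the filter the abstract proposes.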
2. Application of Monte Carlo method for dose calculation in thyroid follicle
International Nuclear Information System (INIS)
The Monte Carlo method is an important tool to simulate the interaction of radioactive particles with biological media. The principal advantage of the method, when compared with deterministic methods, is the ability to simulate complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport, and they have the capacity to simulate energy deposition in models of organs and/or tissues, as well as in models of cells of the human body. Thus, the calculation of the absorbed dose to thyroid follicles (composed of the colloid and the follicle cells) is of fundamental importance to dosimetry, because these cells are radiosensitive to ionizing radiation exposure, in particular exposure due to radioisotopes of iodine, since a great amount of radioiodine may be released into the environment in the case of a nuclear accident. The goal of this work was thus to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles, for Auger electrons, internal conversion electrons and beta particles, from iodine-131 and short-lived iodines (131, 132, 133, 134 and 135), with follicle diameters varying from 30 to 500 μm. The results obtained from the simulation with the MCNP4C code showed that, on average, 25% of the total dose absorbed by the colloid is due to iodine-131 and 75% to the short-lived iodines. For the follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from particles with low energies, like Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare doses obtained with the codes MCNP4C, EPOTRAN and EGS4 and with deterministic methods. (author)
3. A combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation
OpenAIRE
Saleur, H.; Derrida, B.
1985-01-01
In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.
4. A combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation
International Nuclear Information System (INIS)
In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.
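For readers who want a feel for the plain Monte Carlo side of the comparison in these two records, here is a minimal site-percolation spanning estimator on a square lattice; the transfer-matrix half of the authors' hybrid method is not reproduced. Lattice size, trial count and the quoted threshold are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def spans(grid):
    """True if occupied sites connect the top row to the bottom row
    (4-neighbour connectivity, depth-first flood fill)."""
    L = grid.shape[0]
    frontier = [(0, j) for j in range(L) if grid[0, j]]
    seen = set(frontier)
    while frontier:
        i, j = frontier.pop()
        if i == L - 1:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < L and 0 <= nj < L and grid[ni, nj] \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                frontier.append((ni, nj))
    return False

L, trials = 32, 400
for p in (0.55, 0.593, 0.65):          # around the 2D site threshold ~0.5927
    hits = sum(spans(rng.random((L, L)) < p) for _ in range(trials))
    print(f"p={p:.3f}: spanning probability ~ {hits / trials:.2f}")
```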
5. The effect of a number of selective points in modeling of polymerization reacting Monte Carlo method: studying the initiation reaction
CERN Document Server
2003-01-01
The Monte Carlo method is one of the most powerful techniques for modelling different processes, such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. The number of algorithm repetitions (the number of initial molecules in the reactor volume selected for modelling) is very important in this method. In the Monte Carlo method, calculations are based on random number generation and reaction probability determination, so the number of algorithm repetitions is very important. In this paper, the initiation reaction was considered alone and the influence of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not large enough, because in that case the selected volume would not be representative of the whole system.
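The abstract's point, that too few initial molecules gives inaccurate results, can be seen with the initiation step alone. In this sketch the rate constant, time and counts are invented; each initiator molecule decomposes with probability 1 − exp(−kt).

```python
import math
import random

random.seed(0)
k = 0.5                    # hypothetical initiation rate constant (1/s)
t_end = 2.0                # observation time (s)

def mc_fraction_decomposed(n_initiator):
    """One Bernoulli trial per molecule: P(decomposed by t_end) = 1 - e^-kt."""
    p = 1.0 - math.exp(-k * t_end)
    hits = sum(random.random() < p for _ in range(n_initiator))
    return hits / n_initiator

exact = 1.0 - math.exp(-k * t_end)
for n in (10, 100, 10_000, 1_000_000):
    mc = mc_fraction_decomposed(n)
    print(f"N={n:>9}: MC={mc:.4f}  exact={exact:.4f}  error={abs(mc - exact):.4f}")
```

The statistical error shrinks roughly as 1/sqrt(N), so a small "selected volume" can be badly unrepresentative of the whole system.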
6. Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings.
Science.gov (United States)
Sadeghi, K; Gauthier, J L; Field, G D; Greschner, M; Agne, M; Chichilnisky, E J; Paninski, L
2013-01-01
It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier "greedy" computational approaches. PMID:23194406
7. Business Scenario Evaluation Method Using Monte Carlo Simulation on Qualitative and Quantitative Hybrid Model
Science.gov (United States)
Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa
We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect at a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using a contribution degree and sum all effects. Through results of application to practical models, it is confirmed that, at the 5% risk rate, there are no differences between results obtained from quantitative relations and results obtained by the proposed method.
8. Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot
International Nuclear Information System (INIS)
This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot, the IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), of which six DOF are contributed by the parallel mechanism and the rest by the serial mechanism. In this paper, a kinematic error model which involves 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of a Markov Chain Monte Carlo (MCMC) approach. The computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results for the marginal posterior distributions of the estimated model parameters indicate that our method is reliable and robust.
9. Calculation of the radiation transport in rock salt using Monte Carlo methods. Final report. HAW project
International Nuclear Information System (INIS)
This report provides absorbed dose rate and photon fluence rate distributions in rock salt around 30 testwise emplaced canisters containing high-level radioactive material (HAW project) and around a single canister containing radioactive material of a lower activity level (INHAW experiment). The site of this test emplacement was located in test galleries at the 800-m-level in the Asse salt mine. The data given were calculated using a Monte Carlo method simulating photon transport in complex geometries of differently composed materials. The aim of these calculations was to enable determination of the dose absorbed in any arbitrary sample of salt to be further examined in the future with sufficient reliability. The geometry of the test arrangement, the materials involved and the calculational method are characterised and the results are shortly described and some figures presenting selected results are shown. In the appendices, the results for emplacement of the highly radioactive canisters are given in tabular form. (orig.)
10. Using neutron source distinguish mustard gas bomb from the others with Monte Carlo simulation method
International Nuclear Information System (INIS)
The chemical weapons abandoned in China after Japan's defeat have continually injured people, causing grave losses to the Chinese because people cannot recognize them. Among these accidents, mustard gas bombs account for the majority. It is difficult to distinguish a mustard gas bomb from an ordinary bomb by external examination because, after being buried in the earth for a long time, leakage, erosion and rust are very serious. A non-contact measurement method, neutron-induced γ spectroscopy, is therefore very important. In this paper the Monte Carlo method was used to compute the γ spectrum produced when a neutron source irradiates a mustard gas bomb. The characteristic lines of Cl, S, Fe and other elements can be picked out clearly. The results provide a useful reference for analyzing γ spectra. (authors)
11. Heat-Flux Analysis of Solar Furnace Using the Monte Carlo Ray-Tracing Method
International Nuclear Information System (INIS)
An understanding of the concentrated solar flux is critical for the analysis and design of solar-energy-utilization systems. The current work focuses on the development of an algorithm that uses the Monte Carlo ray-tracing method with excellent flexibility and expandability; this method considers both solar limb darkening and the surface slope error of reflectors, thereby analyzing the solar flux. A comparison of the modeling results with measurements at the solar furnace in Korea Institute of Energy Research (KIER) show good agreement within a measurement uncertainty of 10%. The model evaluates the concentration performance of the KIER solar furnace with a tracking accuracy of 2 mrad and a maximum attainable concentration ratio of 4400 sun. Flux variations according to measurement position and flux distributions depending on acceptance angles provide detailed information for the design of chemical reactors or secondary concentrators
12. Intra-operative radiation therapy optimization using the Monte Carlo method
International Nuclear Information System (INIS)
The problem addressed with reference to the treatment head optimization has been the choice of the proper design of the head of a new 12 MeV linear accelerator in order to have the required dose uniformity on the target volume while keeping the dose rate sufficiently high and the photon production and the beam impact with the head walls within acceptable limits. The second part of the optimization work, concerning the TPS, is based on the rationale that the TPSs generally used in radiotherapy use semi-empirical algorithms whose accuracy can be inadequate particularly when irregular surfaces and/or inhomogeneities, such as air cavities or bone, are present. The Monte Carlo method, on the contrary, is capable of accurately calculating the dose distribution under almost all circumstances. Furthermore it offers the advantage of allowing to start the simulation of the radiation transport in the patient from the beam data obtained with the transport through the specific treatment head used. Therefore the Monte Carlo simulations, which at present are not yet widely used for routine treatment planning due to the required computing time, can be employed as a benchmark and as an optimization tool for conventional TPSs. (orig.)
13. Intra-operative radiation therapy optimization using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Rosetti, M. [ENEA, Bologna (Italy); Benassi, M.; Bufacchi, A.; D' Andrea, M. [Ist. Regina Elena, Rome (Italy); Bruzzaniti, V. [ENEA, S. Maria di Galeria (Rome) (Italy)
2001-07-01
The problem addressed with reference to the treatment head optimization has been the choice of the proper design of the head of a new 12 MeV linear accelerator in order to have the required dose uniformity on the target volume while keeping the dose rate sufficiently high and the photon production and the beam impact with the head walls within acceptable limits. The second part of the optimization work, concerning the TPS, is based on the rationale that the TPSs generally used in radiotherapy use semi-empirical algorithms whose accuracy can be inadequate particularly when irregular surfaces and/or inhomogeneities, such as air cavities or bone, are present. The Monte Carlo method, on the contrary, is capable of accurately calculating the dose distribution under almost all circumstances. Furthermore it offers the advantage of allowing to start the simulation of the radiation transport in the patient from the beam data obtained with the transport through the specific treatment head used. Therefore the Monte Carlo simulations, which at present are not yet widely used for routine treatment planning due to the required computing time, can be employed as a benchmark and as an optimization tool for conventional TPSs. (orig.)
14. Improvement of the neutron flux calculations in thick shield by conditional Monte Carlo and deterministic methods
Energy Technology Data Exchange (ETDEWEB)
Ghassoun, Jillali; Jehoauni, Abdellatif [Nuclear physics and Techniques Lab., Faculty of Science, Semlalia, Marrakech (Morocco)
2000-01-01
In practice, the estimation of the flux obtained from the Fredholm integral equation needs a truncation of the Neumann series. The order N of the truncation must be large in order to get a good estimation, but a large N induces a very long computation time, so the conditional Monte Carlo method is used to reduce the time without affecting the estimation quality. In previous works, in order to have rapid convergence of the calculations, only weakly diffusing media were considered, which permitted truncating the Neumann series after 20 terms. But in the most practical shields, such as water, graphite and beryllium, the scattering probability is high, and if we truncate the series at 20 terms we get a bad estimation of the flux; it therefore becomes useful to use high orders in order to have a good estimation. We suggest two simple techniques based on the conditional Monte Carlo. We have proposed a simple density for sampling the steps of the random walk, as well as a modified stretching factor density depending on a biasing parameter which affects the sample vector by stretching or shrinking the original random walk in order to have a chain that ends at a given point of interest. We also obtained a simple empirical formula which gives the neutron flux for a medium characterized only by its scattering probability. The results are compared to the exact analytic solution; we obtained good agreement of the results, with a good acceleration of the convergence of the calculations. (author)
15. Improvement of the neutron flux calculations in thick shield by conditional Monte Carlo and deterministic methods
International Nuclear Information System (INIS)
In practice, the estimation of the flux obtained from the Fredholm integral equation needs a truncation of the Neumann series. The order N of the truncation must be large in order to get a good estimation, but a large N induces a very long computation time, so the conditional Monte Carlo method is used to reduce the time without affecting the estimation quality. In previous works, in order to have rapid convergence of the calculations, only weakly diffusing media were considered, which permitted truncating the Neumann series after 20 terms. But in the most practical shields, such as water, graphite and beryllium, the scattering probability is high, and if we truncate the series at 20 terms we get a bad estimation of the flux; it therefore becomes useful to use high orders in order to have a good estimation. We suggest two simple techniques based on the conditional Monte Carlo. We have proposed a simple density for sampling the steps of the random walk, as well as a modified stretching factor density depending on a biasing parameter which affects the sample vector by stretching or shrinking the original random walk in order to have a chain that ends at a given point of interest. We also obtained a simple empirical formula which gives the neutron flux for a medium characterized only by its scattering probability. The results are compared to the exact analytic solution; we obtained good agreement of the results, with a good acceleration of the convergence of the calculations. (author)
16. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method
KAUST Repository
Li, Jun
2014-01-01
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibria quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
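The sample-size/sampling-interval trade-off described here can be reproduced with any correlated chain. In the sketch below an AR(1) process stands in for an MCMC observable; the correlation coefficient, cycle budget and repetition count are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = 0.99                  # cycle-to-cycle correlation of the observable

def correlated_chain(n):
    """AR(1) process as a stand-in for correlated MCMC samples."""
    x = np.empty(n)
    x[0] = rng.normal()
    sd = np.sqrt(1.0 - phi * phi)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + sd * rng.normal()
    return x

cycles = 20_000             # fixed CPU budget: total cycles per run
for interval in (1, 10, 100):
    # Keep every `interval`-th sample; the sample size shrinks accordingly.
    means = [correlated_chain(cycles)[::interval].mean() for _ in range(50)]
    print(f"interval={interval:4d}  kept={cycles // interval:6d} samples  "
          f"std of estimate={np.std(means):.4f}")
```

With these numbers, all three intervals give nearly the same estimator spread even though interval 100 keeps only 1% of the samples, which is exactly the abstract's point about enlarging the sampling interval at fixed CPU time.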
17. Advantages and weakness of the Monte Carlo method used in studies for safety-criticality in nuclear installations
International Nuclear Information System (INIS)
The choice of the Monte Carlo method by the criticality service of the CEA is justified by the advantages of this method with regard to analytical codes. In this paper the authors present the advantages and the weaknesses of this method. Some studies undertaken to remedy these weaknesses are also presented.
18. Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
Energy Technology Data Exchange (ETDEWEB)
Peplow, Douglas E. [ORNL; Miller, Thomas Martin [ORNL; Patton, Bruce W [ORNL; Wagner, John C [ORNL
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
19. Coherent-wave Monte Carlo method for simulating light propagation in tissue
Science.gov (United States)
Kraszewski, Maciej; Pluciński, Jerzy
2016-03-01
Simulating the propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require a large amount of computer memory and long computation times, which makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, allows simulating only the propagation of light averaged over the ensemble of turbid medium realizations, which makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. This method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.
20. Treatment of the Schrödinger equation through a Monte Carlo method based upon the generalized Feynman-Kac formula
International Nuclear Information System (INIS)
We present a new Monte Carlo method based upon the theoretical proposal of Claverie and Soto. By contrast with other quantum Monte Carlo methods used so far, the present approach uses a pure diffusion process without any branching. The many-fermion problem (with the specific constraint due to the Pauli principle) receives a natural solution in the framework of this method: in particular, there is neither the fixed-node approximation nor the nodal release problem which occur in other approaches (see, e.g., Ref. 8 for a recent account). We give some numerical results concerning simple systems in order to illustrate the numerical feasibility of the proposed algorithm.
1. Development of synthetic velocity - depth damage curves using a Weighted Monte Carlo method and Logistic Regression analysis
Science.gov (United States)
Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.
2014-05-01
Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the responders' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution
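A compressed illustration of the weighted-resampling-plus-logistic-regression pipeline this record describes, with everything invented: a handful of expert answers, confidence weights, and a one-variable damage curve (the real study uses several inundation parameters and far more responses). The regression is fitted by plain gradient descent to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy questionnaire data: floodwater depth (m) and expert verdict
# "severe damage" (1) or not (0). Values invented for illustration.
depth  = np.array([0.1, 0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 2.5])
damage = np.array([0,   0,   0,   1,   0,   1,   1,   1])
weight = np.array([1.0, 1.0, 2.0, 1.0, 1.0, 2.0, 1.0, 1.0])  # expert confidence

# Weighted Monte Carlo: resample the sparse answers in proportion to their
# weights to build a larger synthetic dataset with the same statistics.
idx = rng.choice(len(depth), size=2000, p=weight / weight.sum())
x, y = depth[idx], damage[idx]

# Logistic regression (depth -> damage probability) by gradient descent.
a, b = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(a * x + b)))
    a -= 0.01 * np.mean((p - y) * x)
    b -= 0.01 * np.mean(p - y)

for d in (0.25, 0.75, 1.5):
    print(f"depth {d:.2f} m -> P(severe damage) = "
          f"{1.0 / (1.0 + np.exp(-(a * d + b))):.2f}")
```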
2. Application of multi-stage Monte Carlo method for solving machining optimization problems
Directory of Open Access Journals (Sweden)
2014-08-01
Enhancing the overall machining performance implies optimization of machining processes, i.e. determination of the optimal machining parameter combination. Optimization of machining processes is an active field of research where different optimization methods are being used to determine an optimal combination of different machining parameters. In this paper, the multi-stage Monte Carlo (MC) method was employed to determine optimal combinations of machining parameters for six machining processes, i.e. drilling, turning, turn-milling, abrasive waterjet machining, electrochemical discharge machining and electrochemical micromachining. Optimization solutions obtained by using the multi-stage MC method were compared with the optimization solutions of past researchers obtained by using meta-heuristic optimization methods, e.g. genetic algorithm, simulated annealing algorithm, artificial bee colony algorithm and teaching-learning-based optimization algorithm. The obtained results prove the applicability and suitability of the multi-stage MC method for solving machining optimization problems with up to four independent variables. Specific features, merits and drawbacks of the MC method are also discussed.
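A minimal multi-stage MC optimizer in the spirit of this abstract: uniform random sampling followed by repeated contraction of the search region around the incumbent best. The two-variable "machining cost" function and all bounds below are hypothetical, not one of the paper's six case studies.

```python
import random

random.seed(3)

def cost(v, f):
    """Hypothetical machining cost: higher cutting speed v and feed f reduce
    machining time but worsen surface finish. Purely illustrative."""
    return 100.0 / (v * f) + 0.002 * v ** 2 * f

lo, hi = [50.0, 0.05], [400.0, 0.50]      # search bounds on (v, f)
best_x, best_c = None, float("inf")
for stage in range(5):                    # multi-stage: shrink around the best
    for _ in range(2000):
        x = [random.uniform(l, h) for l, h in zip(lo, hi)]
        c = cost(*x)
        if c < best_c:
            best_x, best_c = x, c
    width = [h - l for l, h in zip(lo, hi)]
    # Contract each bound to half its width, centred on the best point so far.
    lo = [max(l, b - 0.25 * w) for l, b, w in zip(lo, best_x, width)]
    hi = [min(h, b + 0.25 * w) for h, b, w in zip(hi, best_x, width)]
    print(f"stage {stage}: best cost {best_c:.3f} "
          f"at v={best_x[0]:.1f}, f={best_x[1]:.3f}")
```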
3. Calculation of neutron importance function in fissionable assemblies using Monte Carlo method
International Nuclear Information System (INIS)
The purpose of the present work is to develop an efficient solution method for calculating the neutron importance function in fissionable assemblies, for all criticality conditions, using the Monte Carlo method. The neutron importance function plays an important role in perturbation theory and reactor dynamics calculations. Usually this function is determined by calculating the adjoint flux through solving the adjoint-weighted transport equation with deterministic methods. However, in complex geometries these calculations are very difficult. In this article, considering the capabilities of the MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance is introduced for calculating the neutron importance function in subcritical, critical and supercritical conditions. To this end, a computer program has been developed. The results of the method have been benchmarked against ANISN code calculations in 1- and 2-group modes for simple geometries, and their correctness has been confirmed for all three criticality conditions. Finally, the efficiency of the method for complex geometries is shown by the calculation of neutron importance in the MNSR research reactor.
4. Generation of organic scintillators response function for fast neutrons using the Monte Carlo method
International Nuclear Information System (INIS)
A computer program (DALP), written in the Fortran-4-G language, has been developed using the Monte Carlo method to simulate the experimental techniques leading to the distribution of pulse heights due to monoenergetic neutrons reaching an organic scintillator. The calculation of the pulse height distribution has been done for two different systems: 1) monoenergetic neutrons from a point source reaching the flat face of a cylindrical organic scintillator; 2) environmental monoenergetic neutrons randomly reaching either the flat or the curved face of the cylindrical organic scintillator. The computer program has been developed to be applied to the NE-213 liquid organic scintillator, but can be easily adapted to any other kind of organic scintillator. With this program one can determine the pulse height distribution for neutron energies ranging from 15 keV to 10 MeV. (Author)
5. Markov Chain Monte Carlo methods applied to measuring the fine structure constant from quasar spectroscopy
Science.gov (United States)
King, Julian; Mortlock, Daniel; Webb, John; Murphy, Michael
2010-11-01
Recent attempts to constrain cosmological variation in the fine structure constant, α, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated to not pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for Δα/α, the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.
6. Markov Chain Monte Carlo methods applied to measuring the fine structure constant from quasar spectroscopy
CERN Document Server
King, Julian A; Webb, John K; Murphy, Michael T
2009-01-01
Recent attempts to constrain cosmological variation in the fine structure constant, alpha, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated to not pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for (Delta alpha)/(alpha), the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.
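The generic machinery being compared against VPFIT in these two records is easy to sketch. Below, a bare Metropolis-Hastings sampler fits a single parameter to toy "Δα/α" measurements; the data values, error bars, proposal width and chain lengths are all invented for illustration (the real analysis fits multi-parameter absorption-line profiles, not a constant).

```python
import math
import random

random.seed(5)

# Toy data: independent measurements of da/a (in units of 1e-5) with
# Gaussian errors. Numbers are invented.
data  = [-0.4, -0.7, -0.3, -0.8, -0.5, -0.6]
sigma = [0.3, 0.4, 0.3, 0.5, 0.2, 0.4]

def log_like(theta):
    """Gaussian log-likelihood of a common offset theta (flat prior)."""
    return sum(-0.5 * ((d - theta) / s) ** 2 for d, s in zip(data, sigma))

theta, samples = 0.0, []
for step in range(50_000):
    prop = theta + random.gauss(0.0, 0.2)          # symmetric proposal
    if math.log(random.random()) < log_like(prop) - log_like(theta):
        theta = prop                               # Metropolis acceptance
    if step > 5_000:                               # discard burn-in
        samples.append(theta)

mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
print(f"da/a = {mean:.3f} +/- {std:.3f} (x 1e-5)")
```

The posterior mean and spread from such a chain are what the authors compare against the point estimates and uncertainties returned by VPFIT.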
7. Determination of dosimetric characteristics of 125I-103Pd brachytherapy source with Monte-Carlo method
International Nuclear Information System (INIS)
From the seed source dose parameter formalism recommended by AAPM TG-43U1, the dose parameter calculation formulas for a 125I-103Pd seed source, and more generally for composite seed sources containing a variety of radionuclides, can be obtained. The dose rate constant, radial dose function and anisotropy function of the 125I-103Pd composite seed source are calculated by the Monte Carlo method, and empirical equations are obtained for the radial dose function and the anisotropy function by curve fitting. Comparisons with the corresponding data recommended by the AAPM are performed. For the single source, the dose rate constant is 0.959 cGy·h-1·U-1, deviating by 0.6093% from the AAPM value. (authors)
8. Monte Carlo study of living polymers with the bond-fluctuation method
Science.gov (United States)
Rouault, Yannick; Milchev, Andrey
1995-06-01
The highly efficient bond-fluctuation method for Monte Carlo simulations of both static and dynamic properties of polymers is applied to a system of living polymers. In parallel to the stochastic movements of monomers, which result in Rouse dynamics of the macromolecules, the polymer chains break, or associate at chain ends with other chains and single monomers, in the process of equilibrium polymerization. We study the changes in equilibrium properties, such as the molecular-weight distribution, average chain length, radius of gyration, and specific heat, with varying density and temperature of the system. The results of our numerical experiments indicate very good agreement with the recently suggested description in terms of the mean-field approximation. The coincidence of the specific heat maximum position at kBT=V/4 in both theory and simulation suggests the use of calorimetric measurements for the determination of the scission-recombination energy V in real experiments.
9. Electric conduction in semiconductors: a pedagogical model based on the Monte Carlo method
International Nuclear Information System (INIS)
We present a pedagogic approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier transport in moderate electric fields. The semiconductor behaviour is described by substituting the traditional statistical approach (requiring a deep mathematical background) with microscopic models, based on the Monte Carlo method, in which simple rules applied to microscopic particles and quasi-particles determine the macroscopic properties. We compare measurements of electric properties of matter with 'virtual experiments' built by using some models where the physical concepts can be presented at different formalization levels
10. Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods
CERN Document Server
Ferraioli, Luigi; Plagnol, Eric
2012-01-01
We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy but with one moving in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to...
11. MAMONT program for neutron field calculation by the Monte Carlo method
International Nuclear Information System (INIS)
The MAMONT program (MAthematical MOdelling of Neutron Trajectories), designed for three-dimensional calculation of neutron transport by analogue and non-analogue Monte Carlo methods in the energy range from 15 MeV down to thermal, is described. The program is written in FORTRAN and runs on the BESM-6 computer. The group constants of the library module are compiled from the ENDL-83, ENDF/B-4 and JENDL-2 files. The possibility of calculations for layered spherical, cylindrical and rectangular configurations is envisaged. Accumulation and averaging of slowing-down kinetics functionals (averaged logarithmic energy losses, slowing-down time, free paths, number of collisions, age), diffusion parameters, leakage spectra and fluxes, as well as formation of separate isotopes over zones, are realized in the process of calculation. 16 tabs
12. Absorbed dose measurements in mammography using Monte Carlo method and ZrO2+PTFE dosemeters
International Nuclear Information System (INIS)
Mammography is a central tool for breast cancer diagnosis. In addition, screening programs are conducted periodically to examine asymptomatic women in certain age groups; these programs have shown a reduction in breast cancer mortality. Early detection of breast cancer is achieved through a mammogram, which contrasts the glandular and adipose tissue with a probable calcification. The parameters used for mammography are based on the thickness and density of the breast; their values depend on the voltage, current, focal spot and anode-filter combination. To achieve a clear image with a minimum dose, appropriate irradiation conditions must be chosen. The risk associated with mammography should not be ignored. This study was performed at General Hospital No. 1 of IMSS in Zacatecas. A glucose phantom was used, and the air kerma at the entrance of the breast was measured with ZrO2+PTFE thermoluminescent dosemeters and calculated using Monte Carlo methods; the calculation was completed by computing the absorbed dose. (author)
13. Investigation of Reliabilities of Bolt Distances for Bolted Structural Steel Connections by Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
Ertekin Öztekin
2015-12-01
The distances of bolts to each other and the distances of bolts to the edges of connection plates are designed based on the minimum and maximum boundary values proposed by structural codes. In this study, the reliabilities of those distances were investigated. For this purpose, loading types, bolt types and plate thicknesses were taken as variable parameters. The Monte Carlo Simulation (MCS) method was used in the reliability computations performed for all combinations of those parameters. At the end of the study, all reliability index values for those distances are presented in graphics and tables. The results obtained from this study were compared with the values proposed by some structural codes, and some evaluations of those comparisons were made. Finally, it is emphasized that using the same bolt distances in both traditional designs and higher-reliability designs would be incorrect.
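The core of such a reliability computation is a simple limit-state Monte Carlo loop. In the sketch below the limit state, the resistance and load distributions and their parameters are all assumed by us, not taken from the paper; the reliability index is recovered from the failure probability via the inverse normal CDF.

```python
import math
import random
import statistics

random.seed(9)

# Illustrative limit state for one bolt-distance design check: bearing
# resistance R versus load effect S (both in kN, distributions assumed).
N = 200_000
failures = 0
for _ in range(N):
    R = random.lognormvariate(math.log(120.0), 0.10)  # resistance (kN)
    S = random.gauss(80.0, 12.0)                      # load effect (kN)
    if R - S < 0.0:                                   # limit state g = R - S
        failures += 1

p_f = failures / N
beta = -statistics.NormalDist().inv_cdf(p_f)          # reliability index
print(f"P_f = {p_f:.4f}, reliability index beta = {beta:.2f}")
```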
14. Efficiency determination of whole-body counter by Monte Carlo method, using a microcomputer
International Nuclear Information System (INIS)
The purpose of this investigation was the development of an analytical microcomputer model to evaluate whole-body counter efficiency. The model is based on a modified Snyder's model. A stretcher-type geometry along with the Monte Carlo method and a Sinclair-type microcomputer were used. Experimental measurements were performed using two phantoms, one as an adult and the other as a 5-year-old child. The phantoms were made of acrylic, and 99mTc, 131I and 42K were the radioisotopes utilized. Results showed a close relationship between experimental and predicted data for energies ranging from 250 keV to 2 MeV, but some discrepancies were found for lower energies. (author)
15. Investigation of physical regularities in gamma gamma logging of oil wells by Monte Carlo method
International Nuclear Information System (INIS)
Some results are given of calculations by the Monte Carlo method of specific problems of gamma-gamma density logging. The paper considers the influence of probe length and volume density of the rocks; the angular distribution of the scattered radiation incident on the instrument; the spectra of the radiation being recorded and of the source radiation; depths of surveys, the effect of the mud cake, the possibility of collimating the source radiation; the choice of source, initial collimation angles, the optimum angle of recording scattered gamma-radiation and the radiation discrimination threshold; and the possibility of determining the mineralogical composition of rocks in sections of oil wells and of identifying once-scattered radiation. (author)
16. Application of Monte Carlo method in modelling physical and physico-chemical processes
International Nuclear Information System (INIS)
The seminar was held on September 9 and 10, 1982 at the Faculty of Nuclear Science and Technical Engineering of the Czech Technical University in Prague. The participants heard 11 papers, of which 7 were input into INIS. The papers dealt with the use of the Monte Carlo method for modelling the transport and scattering of gamma radiation in layers of materials, the application of low-energy gamma radiation for the determination of secondary X radiation flux, the determination of self-absorption corrections for a 4π chamber, modelling the response function of a scintillation detector, and the optimization of geometrical configuration in measuring material density using backscattered gamma radiation. The possibility of optimizing modelling with regard to computer time was studied, and the participants were informed about computerized nuclear data libraries. (M.D.)
17. Simulation of nuclear material identification system based on Monte Carlo sampling method
International Nuclear Information System (INIS)
Background: Because of the danger of radioactivity, nuclear material identification is sometimes a difficult problem. Purpose: To reflect the particle transport processes in nuclear fission and to present the effectiveness of the signatures of the Nuclear Materials Identification System (NMIS), based on physical principles and experimental statistical data. Methods: We established a Monte Carlo simulation model of the nuclear material identification system and then acquired three channels of time-domain pulse signals. Results: Auto-Correlation Functions (AC), Cross-Correlation Functions (CC), Auto Power Spectral Densities (APSD) and Cross Power Spectral Densities (CPSD) between channels yield several signatures, which can show some characteristics of the nuclear material. Conclusions: The simulation results indicate that this approach can help in further studying the features of the system. (authors)
18. An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
Science.gov (United States)
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk are under-sampled, such as the optically thick disk interior, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations, at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
19. Calculation of narrow beam γ ray mass attenuation coefficients of absorbing medium by Monte Carlo method
International Nuclear Information System (INIS)
A mathematical model of particle transport was built by sampling the interaction tracks of narrow-beam γ photons in the medium, according to the principles of the interaction between γ photons and matter. A computer program was written in LabWindows/CVI to simulate the transport of γ photons in the medium and to record the exit probability of γ photons together with the corresponding thickness of the medium, which was then used to calculate the narrow-beam γ-ray mass attenuation coefficients of the absorbing medium. The results show that the Monte Carlo method is feasible for calculating narrow-beam γ-ray mass attenuation coefficients of absorbing media. (authors)
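The essence of this simulation, for the simplest narrow-beam case, fits in a few lines; the attenuation coefficient, thickness and photon count below are arbitrary choices of ours. Dividing the estimated linear coefficient by the material density would give the mass attenuation coefficient.

```python
import math
import random

random.seed(2)

mu_true = 0.20          # assumed linear attenuation coefficient (1/cm)
thickness = 5.0         # absorber thickness (cm)
n_photons = 1_000_000

# Narrow-beam geometry: any interaction removes the photon from the beam,
# so a photon is transmitted iff its sampled free path exceeds the slab.
transmitted = sum(
    random.expovariate(mu_true) > thickness for _ in range(n_photons))

mu_est = -math.log(transmitted / n_photons) / thickness
print(f"transmitted: {transmitted}/{n_photons}")
print(f"estimated mu = {mu_est:.4f} 1/cm (true {mu_true})")
```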
20. A Monte Carlo method for critical systems in infinite volume: the planar Ising model
CERN Document Server
Herdeiro, Victor
2016-01-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
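As a baseline for what this paper improves on, here is the ordinary finite-lattice Metropolis simulation of the planar Ising model at its exact critical temperature, with the usual periodic boundaries; the "holographic" boundary construction itself is not attempted here, and the lattice size and sweep counts are arbitrary.

```python
import math
import random

random.seed(4)
L = 16
T_c = 2.0 / math.log(1.0 + math.sqrt(2.0))   # exact planar Ising critical T
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def sweep(T):
    """One Metropolis sweep: L*L single-spin-flip attempts."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb          # energy cost of flipping (J=1)
        if dE <= 0.0 or random.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

for _ in range(2000):                        # equilibrate
    sweep(T_c)
mags = []
for _ in range(500):                         # measure
    sweep(T_c)
    M = sum(sum(row) for row in spins)
    mags.append(abs(M) / (L * L))
print(f"<|m|> at T_c on a {L}x{L} periodic lattice: {sum(mags)/len(mags):.3f}")
```

On such a finite lattice the observables still feel the boundary and the finite size, which is precisely the long-range effect the paper's boundary condition is designed to remove.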
1. Development of a software package for solid-angle calculations using the Monte Carlo method
International Nuclear Information System (INIS)
Solid-angle calculations play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources, and they are often complicated. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, in which a new type of variance reduction technique was integrated. The package, developed under the environment of Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface, in which the visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate the solid angle subtended by a detector with different geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) to a point, circular or cylindrical source without any difficulty. The results obtained from the proposed software package were compared with those obtained from previous studies and calculated using Geant4. The comparison shows that the proposed software package can produce accurate solid-angle values with a greater computation speed than Geant4. -- Highlights: • This software package (SAC) can give accurate solid-angle values. • SAC calculates solid angles using the Monte Carlo method and has a higher computation speed than Geant4. • A simple but effective variance reduction technique put forward by the authors has been applied in SAC. • A visualization function and a graphical user interface are also integrated in SAC
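The core Monte Carlo estimate inside such a package reduces, in the simplest case, to counting isotropic directions that hit the detector. The sketch below handles only a coaxial disc, a case with a closed-form answer to check against; the geometry values are arbitrary and no variance reduction is applied.

```python
import math
import random

random.seed(6)

# Solid angle subtended by a coaxial disc detector (radius r) at a point
# source on its axis, a distance d away.
r, d = 2.0, 5.0
n = 1_000_000
hits = 0
for _ in range(n):
    cos_t = random.uniform(-1.0, 1.0)   # isotropic: cos(theta) uniform
    if cos_t <= 0.0:
        continue                        # emitted away from the detector plane
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    if d * sin_t / cos_t <= r:          # ray meets plane z=d inside the disc
        hits += 1

omega_mc = 4.0 * math.pi * hits / n
omega_exact = 2.0 * math.pi * (1.0 - d / math.sqrt(d * d + r * r))
print(f"MC: {omega_mc:.5f} sr, analytic: {omega_exact:.5f} sr")
```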
2. Energy conservation in radiation hydrodynamics. Application to the Monte-Carlo method used for photon transport in the fluid frame
International Nuclear Information System (INIS)
The description of the equations in the fluid frame has been carried out recently. A simplification of the collision term is obtained, but the streaming term now has to include angular deviation and the Doppler shift. We choose the latter description, which is more convenient for our purpose. We introduce some notation and recall some facts about stochastic kernels and the Monte Carlo method. We show how to apply the Monte Carlo method to a transport equation with an arbitrary streaming term; in particular, we show that the track length estimator is unbiased. We review some properties of the radiation hydrodynamics equations and show how energy conservation is obtained. Then we apply the Monte Carlo method explained in section 2 to the particular case of the transfer equation in the fluid frame. Finally, we describe a physical example and give some numerical results.
3. Method to implement the CCD timing generator based on FPGA
Science.gov (United States)
Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin
2010-07-01
With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which is the controller of this generator. Some test results are presented in the end.
4. Exposure-response modeling methods and practical implementation
CERN Document Server
Wang, Jixian
2015-01-01
Discover the Latest Statistical Approaches for Modeling Exposure-Response Relationships. Written by an applied statistician with extensive practical experience in drug development, Exposure-Response Modeling: Methods and Practical Implementation explores a wide range of topics in exposure-response modeling, from traditional pharmacokinetic-pharmacodynamic (PKPD) modeling to other areas in drug development and beyond. It incorporates numerous examples and software programs for implementing novel methods. The book describes using measurement
5. A Monte-Carlo Method for Estimating Stellar Photometric Metallicity Distributions
CERN Document Server
Gu, Jiayin; Jing, Yingjie; Zuo, Wenbo
2016-01-01
Based on the Sloan Digital Sky Survey (SDSS), we develop a new monte-carlo based method to estimate the photometric metallicity distribution function (MDF) for stars in the Milky Way. Compared with other photometric calibration methods, this method enables a more reliable determination of the MDF, in particular at the metal-poor and metal-rich ends. We present a comparison of our new method with a previous polynomial-based approach, and demonstrate its superiority. As an example, we apply this method to main-sequence stars with $0.2 6. Time-Varying Noise Estimation for Speech Enhancement and Recognition Using Sequential Monte Carlo Method Directory of Open Access Journals (Sweden) Kaisheng Yao 2004-11-01 Full Text Available We present a method for sequentially estimating time-varying noise parameters. Noise parameters are sequences of time-varying mean vectors representing the noise power in the log-spectral domain. The proposed sequential Monte Carlo method generates a set of particles in compliance with the prior distribution given by clean speech models. The noise parameters in this model evolve according to random walk functions and the model uses extended Kalman filters to update the weight of each particle as a function of observed noisy speech signals, speech model parameters, and the evolved noise parameters in each particle. Finally, the updated noise parameter is obtained by means of minimum mean square error (MMSE estimation on these particles. For efficient computations, the residual resampling and Metropolis-Hastings smoothing are used. The proposed sequential estimation method is applied to noisy speech recognition and speech enhancement under strongly time-varying noise conditions. In both scenarios, this method outperforms some alternative methods. 7. An Evaluation of the Adjoint Flux Using the Collision Probability Method for the Hybrid Monte Carlo Radiation Shielding Analysis International Nuclear Information System (INIS) It is noted that the analog Monte Carlo method has low calculation efficiency at deep penetration problems such as radiation shielding analysis. In order to increase the calculation efficiency, variance reduction techniques have been introduced and applied for the shielding calculation. To optimize the variance reduction technique, the hybrid Monte Carlo method was introduced. For the determination of the parameters using the hybrid Monte Carlo method, the adjoint flux should be calculated by the deterministic methods. In this study, the collision probability method is applied to calculate adjoint flux. The solution of integration transport equation in the collision probability method is modified to calculate the adjoint flux approximately even for complex and arbitrary geometries. For the calculation, C++ program is developed. By using the calculated adjoint flux, importance parameters of each cell in shielding material are determined and used for variance reduction of transport calculation. In order to evaluate calculation efficiency with the proposed method, shielding calculations are performed with MCNPX 2.7. In this study, a method to calculate the adjoint flux in using the Monte Carlo variance reduction was proposed to improve Monte Carlo calculation efficiency of thick shielding problem. The importance parameter for each cell of shielding material is determined by calculating adjoint flux with the modified collision probability method. In order to calculate adjoint flux with the proposed method, C++ program is developed. 
7. An Evaluation of the Adjoint Flux Using the Collision Probability Method for the Hybrid Monte Carlo Radiation Shielding Analysis
International Nuclear Information System (INIS)
It is noted that the analog Monte Carlo method has low calculation efficiency for deep penetration problems such as radiation shielding analysis. In order to increase the calculation efficiency, variance reduction techniques have been introduced and applied to shielding calculations. To optimize the variance reduction technique, the hybrid Monte Carlo method was introduced. To determine the parameters used by the hybrid Monte Carlo method, the adjoint flux should be calculated by deterministic methods. In this study, the collision probability method is applied to calculate the adjoint flux. The solution of the integral transport equation in the collision probability method is modified to calculate the adjoint flux approximately, even for complex and arbitrary geometries. For these calculations, a C++ program was developed. Using the calculated adjoint flux, importance parameters for each cell in the shielding material are determined and used for variance reduction of the transport calculation. In order to evaluate the calculation efficiency of the proposed method, shielding calculations are performed with MCNPX 2.7. In summary, a method to calculate the adjoint flux for use in Monte Carlo variance reduction was proposed to improve the Monte Carlo calculation efficiency of thick shielding problems. The results show that the proposed method can efficiently increase the FOM of the transport calculation. It is expected that the proposed method can be utilized to improve calculation efficiency in thick shielding calculations.
8. Technical Note: Implementation of biological washout processes within GATE/GEANT4—A Monte Carlo study in the case of carbon therapy treatments
International Nuclear Information System (INIS)
Purpose: The imaging of positron emitting isotopes produced during patient irradiation is the only in vivo method used for hadrontherapy dose monitoring in clinics nowadays. However, the accuracy of this method is limited by the loss of signal due to the metabolic decay processes (biological washout). In this work, a generic modeling of washout was incorporated into the GATE simulation platform. Additionally, the influence of the washout on the β+ activity distributions in terms of absolute quantification and spatial distribution was studied. Methods: First, the irradiation of a human head phantom with a 12C beam, so that a homogeneous dose distribution was achieved in the tumor, was simulated. The generated 11C and 15O distribution maps were used as β+ sources in a second simulation, where the PET scanner was modeled following a detailed Monte Carlo approach. The activity distributions obtained in the presence and absence of washout processes for several clinical situations were compared. Results: Results show that activity values are highly reduced (by a factor of 2) in the presence of washout. These processes have a significant influence on the shape of the PET distributions. Differences in the distal activity falloff position of 4 mm are observed for a tumor dose deposition of 1 Gy (Tini = 0 min). However, in the case of high doses (3 Gy), the washout processes do not have a large effect on the position of the distal activity falloff (differences lower than 1 mm). The important role of the tumor washout parameters on the activity quantification was also evaluated. Conclusions: With this implementation, GATE/GEANT4 is the only open-source code able to simulate the full chain from the hadrontherapy irradiation to the PET dose monitoring including biological effects. Results show the strong impact of the washout processes, indicating that the development of better models and measurement of biological washout data are essential.
9. Report of the AAPM Task Group No. 105: Issues associated with clinical implementation of Monte Carlo-based photon and electron external beam treatment planning
International Nuclear Information System (INIS)
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and ...
10. Studying stellar binary systems with the Laser Interferometer Space Antenna using delayed rejection Markov chain Monte Carlo methods
International Nuclear Information System (INIS)
Bayesian analysis of Laser Interferometer Space Antenna (LISA) data sets based on Markov chain Monte Carlo methods has been shown to be a challenging problem, in part due to the complicated structure of the likelihood function, consisting of several isolated local maxima that dramatically reduces the efficiency of the sampling techniques. Here we introduce a new fully Markovian algorithm, a delayed rejection Metropolis-Hastings Markov chain Monte Carlo method, to efficiently explore these kinds of structures, and we demonstrate its performance on selected LISA data sets containing a known number of stellar-mass binary signals embedded in Gaussian stationary noise.
11. Criticality analysis of thermal reactors for two energy groups applying Monte Carlo and neutron Albedo method
International Nuclear Information System (INIS)
The Albedo method, applied to criticality calculations for nuclear reactors, is characterized by following the neutron currents, allowing detailed analyses of the physical phenomena governing the interactions of the neutrons with the core-reflector set through the determination of the probabilities of reflection, absorption, and transmission. This in turn allows detailed appreciation of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by the excellent results presented in dissertations on thermal reactors and shieldings, the methodology of the Albedo method for the criticality analysis of thermal reactors is described using two energy groups, admitting variable core coefficients for each re-entrant current. Using the Monte Carlo KENO IV code, the relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons that never entered the reflector but were absorbed in the core was analyzed. As references for comparison and analysis of the results obtained by the Albedo method, the one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the Diffusion method were used. The keff results determined by the Albedo method, for the type of reactor analyzed, showed excellent agreement: relative errors in keff smaller than 0.78% were obtained between the Albedo method and the ANISN code. In relation to the Diffusion method, errors smaller than 0.35% were obtained, showing the effectiveness of the Albedo method applied to criticality analysis. The ease of application, simplicity and clarity of the Albedo method constitute a valuable instrument for neutronic calculations applied to nonmultiplying and multiplying media. (author)
12. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.
Science.gov (United States)
Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J
2008-06-01
Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single-site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655
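The Gibbs-sampling idea above can be illustrated on a much simpler variance-component model than the pedigree model of the paper. The sketch below runs a single-site Gibbs sampler for a one-way random-effects model y_ij = mu + a_i + e_ij with weak inverse-gamma priors; every number in it is invented for the demonstration, and none of the paper's blocking or pretransformation tricks is attempted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic one-way random-effects data: y_ij = mu + a_i + e_ij.
m, n = 30, 10                                  # groups, observations per group
a_true = rng.normal(0.0, np.sqrt(2.0), m)      # true sigma_a^2 = 2, sigma_e^2 = 1
y = 5.0 + a_true[:, None] + rng.normal(0.0, 1.0, (m, n))

mu, sa2, se2 = y.mean(), 1.0, 1.0
draws = []
for it in range(5000):
    # Group effects: normal full conditionals.
    prec = n / se2 + 1.0 / sa2
    mean = (y - mu).sum(axis=1) / se2 / prec
    a = rng.normal(mean, np.sqrt(1.0 / prec))
    # Grand mean (flat prior).
    mu = rng.normal((y - a[:, None]).mean(), np.sqrt(se2 / y.size))
    # Variances: inverse-gamma full conditionals with weak IG(0.01, 0.01) priors.
    sa2 = 1.0 / rng.gamma(0.01 + m / 2, 1.0 / (0.01 + 0.5 * (a ** 2).sum()))
    resid = y - mu - a[:, None]
    se2 = 1.0 / rng.gamma(0.01 + y.size / 2, 1.0 / (0.01 + 0.5 * (resid ** 2).sum()))
    if it >= 1000:                             # discard burn-in
        draws.append((sa2, se2))

print("posterior means of (sigma_a^2, sigma_e^2):", np.mean(draws, axis=0))
```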
13. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees
Science.gov (United States)
Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.
2008-01-01
Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single-site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655
14. Report on some methods of determining the state of convergence of Monte Carlo risk estimates
International Nuclear Information System (INIS)
The Department of the Environment is developing a methodology for assessing potential sites for the disposal of low and intermediate level radioactive wastes. Computer models are used to simulate the groundwater transport of radioactive materials from a disposal facility back to man. Monte Carlo methods are being employed to conduct a probabilistic risk assessment (PRA) of potential sites. The models calculate time histories of annual radiation dose to the critical group population. The annual radiation dose to the critical group in turn specifies the annual individual risk. The distribution of dose is generally highly skewed, and many simulation runs are required to predict the level of confidence in the risk estimate, i.e. to determine whether the risk estimate is converged. This report describes some statistical methods for determining the state of convergence of the risk estimate. The methods described include the Shapiro-Wilk test, calculation of skewness and kurtosis, and normal probability plots. A method for forecasting the number of samples needed before the risk estimate is converged is presented. Three case studies were conducted to examine the performance of some of these techniques. (author)
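The diagnostics named in the report above are all available off the shelf. The sketch below probes a deliberately skewed, synthetic "risk" sample with the Shapiro-Wilk test, skewness and kurtosis (applied to batch means, since the quantity of interest is the distribution of the sample mean), and adds a crude forecast of the sample size needed for a target relative error; all numbers are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
risk = rng.lognormal(mean=-20.0, sigma=1.5, size=2000)   # skewed annual-risk sample

mean = risk.mean()
sem = risk.std(ddof=1) / np.sqrt(risk.size)
print(f"risk estimate = {mean:.3e} +/- {sem:.3e}")

# Normality of the sample-mean distribution, probed on 40 batch means of 50 values.
batches = risk.reshape(40, 50).mean(axis=1)
W, p = stats.shapiro(batches)
print(f"Shapiro-Wilk on batch means: W={W:.3f}, p={p:.3f}")
print(f"skewness={stats.skew(batches):.2f}, excess kurtosis={stats.kurtosis(batches):.2f}")

# Forecast of samples needed for a 5 % relative standard error on the mean.
target = 0.05
print("samples needed:", int((risk.std(ddof=1) / (target * mean)) ** 2))
```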
15. Multiple-scaling methods for Monte Carlo simulations of radiative transfer in cloudy atmosphere
International Nuclear Information System (INIS)
Two multiple-scaling methods for Monte Carlo simulations were derived from the integral radiative transfer equation for calculating radiance in cloudy atmosphere accurately and rapidly. The first is to truncate the sharp forward peaks of the phase functions adaptively for each order of scattering. The truncated functions for the forward peaks are approximated as quadratic functions; only one prescribed parameter is used to set the maximum truncation fraction for the various phase functions. The second is to increase the extinction coefficients in optically thin regions adaptively for each order of scattering, which enhances the collision chance in regions where samples are rare. Several one-dimensional and three-dimensional cloud fields were selected to validate the methods. The numerical results demonstrate that the bias errors were below 0.2% for almost all directions, except for the glory direction (less than 0.4%), and that higher numerical efficiency could be achieved when quadratic functions were used. The second method could decrease the radiance noise to 0.60% for cumulus and accelerate convergence in optically thin regions. In general, the main advantage of the proposed methods is that the atmospheric optical quantities can be modified adaptively for each order of scattering and important contributions can be sampled according to the specific atmospheric conditions.
16. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison
International Nuclear Information System (INIS)
Physical analyses of the LWR potential performances with regard to fuel utilization require an important part of the work to be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4 to describe a whole 3D large-scale and highly-heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4 in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile (depleted uranium) zones. Furthermore, a tight-pitch lattice is selected (to increase the conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. This benchmark shows two main points. First, independent replicas are an appropriate method to achieve a fair variance estimation when the dominance ratio is near 1. Second, the diffusion operator with 2 energy groups gives satisfactory results compared to TRIPOLI-4, even with a highly heterogeneous neutron flux map and a harder spectrum.
17. Non-Pilot-Aided Sequential Monte Carlo Method to Joint Signal, Phase Noise, and Frequency Offset Estimation in Multicarrier Systems
Directory of Open Access Journals (Sweden)
Christelle Garnier
2008-05-01
Full Text Available We address the problem of phase noise (PHN) and carrier frequency offset (CFO) mitigation in multicarrier receivers. In multicarrier systems, phase distortions cause two effects: the common phase error (CPE) and the intercarrier interference (ICI), which severely degrade the accuracy of the symbol detection stage. Here, we propose a non-pilot-aided scheme to jointly estimate PHN, CFO, and the multicarrier signal in the time domain. Unlike existing methods, non-pilot-based estimation is performed without any decision-directed scheme. Our approach to the problem is based on Bayesian estimation using sequential Monte Carlo filtering, commonly referred to as particle filtering. The particle filter is efficiently implemented by combining the principles of the Rao-Blackwellization technique and an approximate optimal importance function for phase distortion sampling. Moreover, in order to fully benefit from time-domain processing, we propose a multicarrier signal model which includes the redundancy information induced by the cyclic prefix, thus leading to a significant performance improvement. Simulation results are provided in terms of bit error rate (BER) and mean square error (MSE) to illustrate the efficiency and the robustness of the proposed algorithm.
18. A New Monte Carlo Photon Transport Code for Research Reactor Hotcell Shielding Calculation using Splitting and Russian Roulette Methods
International Nuclear Information System (INIS)
The Monte Carlo method was used to build a new code for the simulation of particle transport. Several verification calculations were performed afterwards, using different sources; the source term was obtained using the ORIGEN-S code. Water and lead shields were used with spherical geometry, and the tally results were obtained on the external surface of the shield; afterwards, the results were compared with those of MCNPX for verification of the new code. The variance reduction techniques of splitting and Russian roulette were implemented in the code to make it more efficient, reducing the amount of custom programming required, by artificially increasing the number of particles being tallied while decreasing their weight. The code shows lower results than those of MCNPX; this can be interpreted as the effect of the secondary gamma radiation produced by electrons ejected by the primary radiation. In the future, a more detailed study will be made of the effect of electron production and transport, either by real transport of the electrons or by simply using an approximation such as the thick-target bremsstrahlung (TTB) option used in MCNPX.
19. A New Monte Carlo Photon Transport Code for Research Reactor Hotcell Shielding Calculation using Splitting and Russian Roulette Methods
Energy Technology Data Exchange (ETDEWEB)
Alnajjar, Alaaddin [Univ. of Science and Technology, Daejeon (Korea, Republic of); Park, Chang Je; Lee, Byunchul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
The Monte Carlo method was used to build a new code for the simulation of particle transport. Several verification calculations were performed afterwards, using different sources; the source term was obtained using the ORIGEN-S code. Water and lead shields were used with spherical geometry, and the tally results were obtained on the external surface of the shield; afterwards, the results were compared with those of MCNPX for verification of the new code. The variance reduction techniques of splitting and Russian roulette were implemented in the code to make it more efficient, reducing the amount of custom programming required, by artificially increasing the number of particles being tallied while decreasing their weight. The code shows lower results than those of MCNPX; this can be interpreted as the effect of the secondary gamma radiation produced by electrons ejected by the primary radiation. In the future, a more detailed study will be made of the effect of electron production and transport, either by real transport of the electrons or by simply using an approximation such as the thick-target bremsstrahlung (TTB) option used in MCNPX.
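Splitting and Russian roulette, as used in the two records above, both preserve the expected total weight of the particle population, which is what keeps the tallies unbiased. A minimal, code-agnostic sketch of the two weight games (the particle state is reduced to a position and a weight, and the importance ratio is an assumed input, not anything computed by the authors' code):

```python
import numpy as np

rng = np.random.default_rng(3)

def adjust_population(particles, importance_ratio):
    """Splitting / Russian roulette on a list of (position, weight) pairs.

    importance_ratio > 1: split into ~ratio copies of weight w/ratio;
    importance_ratio < 1: kill with probability 1 - ratio, boosting survivors.
    Either way the expected total weight is unchanged (unbiasedness)."""
    out = []
    for x, w in particles:
        if importance_ratio >= 1.0:
            n = int(importance_ratio)
            n += rng.random() < importance_ratio - n   # expected copy count = ratio
            out.extend([(x, w / importance_ratio)] * n)
        elif rng.random() < importance_ratio:          # Russian roulette survival
            out.append((x, w / importance_ratio))
    return out

# Example: 100 unit-weight particles crossing into a region 2.5x more important.
pop = adjust_population([(0.0, 1.0)] * 100, 2.5)
print(len(pop), "particles, total weight", sum(w for _, w in pop))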
20. Analysis of communication costs for domain decomposed Monte Carlo methods in nuclear reactor analysis
International Nuclear Information System (INIS)
A domain decomposed Monte Carlo communication kernel is used to carry out performance tests to establish the feasibility of using Monte Carlo techniques for practical Light Water Reactor (LWR) core analyses. The results of the prototype code are interpreted in the context of simplified performance models which elucidate key scaling regimes of the parallel algorithm.
1. Coarse-grained computation for particle coagulation and sintering processes by linking Quadrature Method of Moments with Monte-Carlo
International Nuclear Information System (INIS)
The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
2. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code
International Nuclear Information System (INIS)
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for the characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for the simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for a more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class ''u'' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V and V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to ...
3. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code
Energy Technology Data Exchange (ETDEWEB)
Morgan C. White
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for the characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for the simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for a more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class ''u'' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second ...
4. Monte Carlo implementation of Schiff's approximation for estimating radiative properties of homogeneous, simple-shaped and optically soft particles: Application to photosynthetic micro-organisms
Science.gov (United States)
Charon, Julien; Blanco, Stéphane; Cornet, Jean-François; Dauchet, Jérémi; El Hafi, Mouna; Fournier, Richard; Abboud, Mira Kaissar; Weitz, Sebastian
2016-03-01
In the present paper, Schiff's approximation is applied to the study of light scattering by large and optically-soft axisymmetric particles, with special attention to cylindrical and spheroidal photosynthetic micro-organisms. This approximation is similar to the anomalous diffraction approximation but includes a description of phase functions. The resulting formulations for the radiative properties are multidimensional integrals, the numerical resolution of which requires close attention. It is argued here that strong benefits can be expected from a statistical resolution by the Monte Carlo method. But designing such efficient Monte Carlo algorithms requires the development of non-standard algorithmic tricks using careful mathematical analysis of the integral formulations: the codes that we develop (and make available) include an original treatment of the nonlinearity in the differential scattering cross-section (squared modulus of the scattering amplitude) thanks to a double sampling procedure. This approach makes it possible to take advantage of recent methodological advances in the field of Monte Carlo methods, illustrated here by the estimation of sensitivities to parameters. Comparison with reference solutions provided by the T-Matrix method is presented whenever possible. The required geometric calculations are closely similar to those used in standard Monte Carlo codes for geometric optics by the computer-graphics community, i.e. calculation of intersections between rays and surfaces, which opens interesting perspectives for the treatment of particles with complex shapes.
5. Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks
KAUST Repository
Ben Hammouda, Chiheb
2015-05-12
In biochemical systems, stochastic effects can be caused by the presence of small numbers of certain reactant molecules. In this setting, discrete state-space and stochastic simulation approaches have proved to be more relevant than continuous state-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases the dynamics of fast and slow time scales can be well separated, which is characterized by what is called stiffness. For such problems, the existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap method, can be very slow. Therefore, implicit tau-leap approximations were developed to improve the numerical stability and provide more efficient simulation algorithms for these systems. One of the interesting tasks for SRNs is to approximate the expected values of some observables of the process at a certain fixed time T. This can be achieved using Monte Carlo (MC) techniques. However, in a recent work, Anderson and Higham (2013) proposed a more computationally efficient method which combines the multi-level Monte Carlo (MLMC) technique with explicit tau-leap schemes. In this MSc thesis, we propose a new fast stochastic algorithm, particularly designed to address stiff systems, for approximating the expected values of some observables of SRNs. In fact, we take advantage of the idea of MLMC techniques and the drift-implicit tau-leap approximation to construct a drift-implicit MLMC tau-leap estimator. In addition to accurately estimating the expected values of a given observable of SRNs at a final time T, our proposed estimator ensures numerical stability with a lower cost than the MLMC explicit tau-leap algorithm, for systems including simultaneously fast and slow species. The key contribution of our work is the coupling of two drift-implicit tau-leap paths, which is the basic brick for ...
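For readers unfamiliar with tau-leaping, the explicit variant mentioned in the thesis abstract fits in a few lines: propensities are frozen over a step of length tau and each reaction channel fires a Poisson number of times. The sketch below applies it to a toy birth-death network with made-up rate constants; it is the plain explicit scheme, not the drift-implicit MLMC estimator the thesis constructs.

```python
import numpy as np

rng = np.random.default_rng(4)

# Birth-death network: 0 -> X with rate k1, X -> 0 with propensity k2 * x.
k1, k2, tau, T = 10.0, 0.1, 0.05, 50.0
stoich = np.array([+1, -1])            # state change per firing of each reaction

x, t = 0, 0.0
while t < T:
    propensities = np.array([k1, k2 * x])
    fires = rng.poisson(propensities * tau)   # explicit tau-leap: Poisson counts
    x = max(0, x + int(stoich @ fires))       # crude clamp against negative counts
    t += tau

print("X(T) =", x, " (stationary mean k1/k2 =", k1 / k2, ")")
```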
6. Comparison of ISO-GUM and Monte Carlo Method for Evaluation of Measurement Uncertainty
International Nuclear Information System (INIS)
To supplement the ISO-GUM method for the evaluation of measurement uncertainty, a simulation program using the Monte Carlo method (MCM) was developed, and the MCM and GUM methods were compared. The results are as follows: (1) Even under a non-normal probability distribution of the measurement, MCM provides an accurate coverage interval; (2) Even if a probability distribution that emerged from combining a few non-normal distributions looks normal, there are cases in which the actual distribution is not normal, and the non-normality can be determined from the probability distribution of the combined variance; and (3) If type-A standard uncertainties are involved in the evaluation of measurement uncertainty, GUM generally offers an under-valued coverage interval. However, this problem can be solved by the Bayesian evaluation of the type-A standard uncertainty. In this case, the effective degrees of freedom for the combined variance are not required in the evaluation of expanded uncertainty, and the appropriate coverage factor for a 95% level of confidence was determined to be 1.96.
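The GUM-versus-MCM comparison above is easy to reproduce on a toy measurand. The sketch below propagates Y = X1 * X2 both ways: MCM takes the percentile interval of a million simulated values, while the GUM route linearizes with sensitivity coefficients and k = 1.96. The model and all input distributions are invented; for this nearly normal case the two intervals come out close, which is exactly the regime where GUM is adequate.

```python
import numpy as np

rng = np.random.default_rng(5)

# Measurand Y = X1 * X2 with X1 ~ N(10, 0.2^2) and X2 ~ U(0.9, 1.1).
N = 10**6
x1 = rng.normal(10.0, 0.2, N)
x2 = rng.uniform(0.9, 1.1, N)
y = x1 * x2

lo, hi = np.percentile(y, [2.5, 97.5])            # MCM 95 % coverage interval
print(f"MCM interval: [{lo:.3f}, {hi:.3f}]")

# First-order GUM: u^2 = (dY/dx1 * u1)^2 + (dY/dx2 * u2)^2 at the means,
# with u(U(a, b)) = half-width / sqrt(3) for the rectangular input.
u = np.hypot(1.0 * 0.2, 10.0 * (0.1 / np.sqrt(3)))
print(f"GUM interval: [{10.0 - 1.96 * u:.3f}, {10.0 + 1.96 * u:.3f}]")
```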
7. Testing planetary transit detection methods with grid-based Monte-Carlo simulations.
Science.gov (United States)
Bonomo, A. S.; Lanza, A. F.
The detection of extrasolar planets by means of the transit method is a rapidly growing field of modern astrophysics. The periodic light dips produced by the passage of a planet in front of its parent star can be used to reveal the presence of the planet itself, to measure its orbital period and relative radius, as well as to perform studies on the outer layers of the planet by analysing the light of the star passing through the planet's atmosphere. We have developed a new method to detect transits of Earth-sized planets in front of solar-like stars that allows us to reduce the impact of stellar microvariability on transit detection. A large Monte Carlo numerical experiment has been designed to test the performance of our approach in comparison with other transit detection methods for stars of different magnitudes and planets of different radius and orbital period, as will be observed by the space experiments CoRoT and Kepler. The large computational load of this experiment has been managed by means of the Grid infrastructure of the COMETA consortium.
8. Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
Science.gov (United States)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, which is applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in a discrete appropriate form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
9. Practical implementation of hyperelastic material methods in FEA models
OpenAIRE
Elgström, Eskil
2014-01-01
This thesis focuses on studies of the hyperelastic material method and how best to implement it in a FEA model. It looks more specifically at the Mooney-Rivlin method, but also gives a shorter explanation of the different methods. This is due to problems Roxtec has today: simulating rubber takes a long time and is unstable and unfortunately not completely trustworthy, which is why a deep study of the hyperelastic material method was chosen to try to address these issues. The ...
10. Implementing the Open Method of Co-ordination in Pensions
Directory of Open Access Journals (Sweden)
Jarosław POTERAJ
2009-01-01
Full Text Available The article presents an insight into the European Union Open Method of Co-ordination (OMC) in the area of pensions. The author's goal was to present the development and the effects of implementing the OMC. The introduction is followed by three topic paragraphs: 1. the OMC – step by step, 2. the evaluation of the OMC, and 3. the effects of OMC implementation. In the summary, the author highlights that, besides advantages, there are also disadvantages to the implementation of the OMC, and many doubts exist in the context of the efficiency of performing that method in the future.
11. Implementation of the Maximum Entropy Method for Analytic Continuation
CERN Document Server
Levy, Ryan; Gull, Emanuel
2016-01-01
We present $\texttt{Maxent}$, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv2 and extensively documented. This paper shows the use of the programs in detail.
12. Implementing Collaborative Learning Methods in the Political Science Classroom
Science.gov (United States)
Wolfe, Angela
2012-01-01
Collaborative learning is one active learning method, among others, widely acclaimed in higher education. Consequently, instructors in fields that lack pedagogical training often implement new learning methods such as collaborative learning on the basis of trial and error. Moreover, even though the benefits in academic circles are broadly touted,…
13. Evaluation of the NHS R & D implementation methods programme
OpenAIRE
Hanney, S; Soper, B; Buxton, MJ
2010-01-01
Chapter 1: Background and introduction • Concern with research implementation was a major factor behind the creation of the NHS R&D Programme in 1991. In 1994 an Advisory Group was established to identify research priorities in this field. The Implementation Methods Programme (IMP) flowed from this and its Commissioning Group funded 36 projects. Funding for the IMP was capped before the second round of commissioning. The Commissioning Group was disbanded and eventually responsibility for t...
14. A Model Based Security Testing Method for Protocol Implementation
OpenAIRE
Yu Long Fu; Xiao Long Xin
2014-01-01
The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them together to generate suitable test cases to verify the security of ...
15. Efficiency of rejection-free methods for dynamic Monte Carlo studies of off-lattice interacting particles
KAUST Repository
Guerra, Marta L.
2009-02-23
We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential $r^{-p}$. Theoretically we find that the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to $\rho^{(p+2)/2}\,T^{-d/2}$, with particle density ρ and temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.
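The core of any rejection-free (n-fold way / BKL-style) scheme is that, instead of proposing moves and rejecting most of them, one draws an event directly in proportion to its rate and advances the clock by an exponential waiting time. A generic single step, independent of the paper's off-lattice specifics, looks like the sketch below; the rates array is an assumed input.

```python
import numpy as np

rng = np.random.default_rng(6)

def rejection_free_step(rates, t):
    """One BKL / n-fold-way move: pick an event with probability proportional
    to its rate, then advance time by an exponential jump with the total rate."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    t += rng.exponential(1.0 / total)
    return event, t

# Example: three competing events with made-up rates.
event, t = rejection_free_step(np.array([0.5, 2.0, 0.1]), t=0.0)
print("fired event", event, "at time", t)
```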
16. Verification of Transformer Restricted Earth Fault Protection by using the Monte Carlo Method
Directory of Open Access Journals (Sweden)
KRSTIVOJEVIC, J. P.
2015-08-01
Full Text Available The results of a comprehensive investigation of the influence of current transformer (CT) saturation on restricted earth fault (REF) protection during power transformer magnetization inrush are presented. Since the inrush current during switch-on of an unloaded power transformer is stochastic, its values are obtained by: (i) laboratory measurements and (ii) calculations based on input data obtained by Monte Carlo (MC) simulation. To make a detailed assessment of the current transformer performance, the uncertain input data for the CT model were obtained by applying the MC method. In this way, different levels of remanent flux in the CT core are taken into consideration. With the generated CT secondary currents, the algorithm for REF protection based on phase comparison in the time domain is tested. On the basis of the obtained results, a method of adjusting the triggering threshold in order to ensure safe operation during transients, and thereby improve the algorithm's security, has been proposed. The obtained results indicate that power transformer REF protection would be enhanced by using the proposed adjustment of the triggering threshold in the algorithm based on phase comparison in the time domain.
17. Monte Carlo Methods for Top-k Personalized PageRank Lists and Name Disambiguation
CERN Document Server
Avrachenkov, Konstantin; Nemirovsky, Danil A; Smirnova, Elena; Sokol, Marina
2010-01-01
We study a problem of quick detection of top-k Personalized PageRank lists. This problem has a number of important applications such as finding local cuts in large graphs, estimation of similarity distance and name disambiguation. In particular, we apply our results to construct efficient algorithms for the person name disambiguation problem. We argue that when finding top-k Personalized PageRank lists two observations are important. Firstly, it is crucial that we detect fast the top-k most important neighbours of a node, while the exact order in the top-k list as well as the exact values of PageRank are by far not so crucial. Secondly, a little number of wrong elements in top-k lists do not really degrade the quality of top-k lists, but it can lead to significant computational saving. Based on these two key observations we propose Monte Carlo methods for fast detection of top-k Personalized PageRank lists. We provide performance evaluation of the proposed methods and supply stopping criteria. Then, we apply ...
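The Monte Carlo estimator behind such methods is simple: run many short random walks from the seed node, stopping at each step with the teleport probability, and count where the walks end; terminal frequencies estimate the Personalized PageRank vector, and the counter's largest entries give the top-k list. A minimal sketch on an adjacency-list graph (the example graph, restart-to-seed handling of dangling nodes, and all parameters are assumptions for illustration, not the paper's exact algorithm or stopping criteria):

```python
import random
from collections import Counter

def topk_ppr(graph, seed, alpha=0.15, walks=100_000, k=10, rng=random.Random(0)):
    """Estimate top-k Personalized PageRank of `seed` by terminal-node counts
    of random walks that stop (teleport) with probability alpha per step."""
    hits = Counter()
    for _ in range(walks):
        node = seed
        while rng.random() > alpha:
            nbrs = graph.get(node)
            if not nbrs:            # dangling node: restart the walk at the seed
                node = seed
                continue
            node = rng.choice(nbrs)
        hits[node] += 1
    return [(n, c / walks) for n, c in hits.most_common(k)]

# Tiny example graph as a dict of adjacency lists.
g = {0: [1, 2], 1: [2], 2: [0], 3: [0]}
print(topk_ppr(g, seed=0, k=3))
```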
18. Use of Monte Carlo Bootstrap Method in the Analysis of Sample Sufficiency for Radioecological Data
International Nuclear Information System (INIS)
There are operational difficulties in obtaining samples for radioecological studies. Population data may no longer be available during the study, and obtaining new samples may not be possible. These problems sometimes force the researcher to work with a small number of data. It is therefore difficult to know whether the number of samples will be sufficient to estimate the desired parameter, and the analysis of sample sufficiency becomes critical. The classical statistical methods are not well suited to analysing sample sufficiency in radioecology, because naturally occurring radionuclides have a random distribution in soil, and outliers and gaps with missing values usually arise. The present work was developed with the aim of applying the Monte Carlo bootstrap method to the analysis of sample sufficiency, with quantitative estimation of a single variable such as the specific activity of a natural radioisotope present in plants. The pseudo-population was a small sample with 14 values of specific activity of 226Ra in forage palm (Opuntia spp.). Using the R software, a computational procedure was implemented to calculate the number of sample values. The resampling process with replacement took the 14 values of the original sample and produced 10,000 bootstrap samples for each round. The estimated average θ was then calculated for samples with 2, 5, 8, 11 and 14 values randomly selected. The results showed that if the researcher works with only 11 sample values, the average parameter will be within a confidence interval with 90% probability. (Author)
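The bootstrap procedure described above is a few lines in any language. A sketch in Python (rather than the R used by the authors) follows; the 14 activity values are made-up stand-ins for the 226Ra data, and the subsample sizes and 10,000 replicates mirror the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

# Pseudo-population: 14 specific-activity values (arbitrary units), invented
# stand-ins for the 226Ra forage-palm data described in the abstract.
sample = np.array([3.1, 2.8, 4.0, 3.5, 2.9, 5.2, 3.3,
                   3.0, 4.4, 2.7, 3.8, 3.6, 4.1, 3.2])

B = 10_000                                       # bootstrap replicates per round
for n in (2, 5, 8, 11, 14):
    means = rng.choice(sample, size=(B, n), replace=True).mean(axis=1)
    lo, hi = np.percentile(means, [5, 95])       # 90 % percentile interval
    print(f"n={n:2d}: mean={means.mean():.2f}, 90% CI=({lo:.2f}, {hi:.2f})")
```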
19. Systematic hierarchical coarse-graining with the inverse Monte Carlo method
International Nuclear Information System (INIS)
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile
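The inverse Monte Carlo update itself needs the cross-correlations of the radial distribution functions, which is beyond a short sketch. A simpler relative that conveys the same fixed-point idea is iterative Boltzmann inversion, shown below: the pair potential is nudged until the simulated g(r) matches the target. This is explicitly not the IMC/Newton update of the paper or of the MagiC package; the damping factor and clipping are ad hoc choices.

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT=1.0, alpha=0.2):
    """One iterative Boltzmann inversion step: move the tabulated pair
    potential V(r) toward reproducing the target RDF g_target(r).
    (IMC instead builds a Newton-like update from RDF cross-correlations.)"""
    ratio = np.clip(g_current, 1e-8, None) / np.clip(g_target, 1e-8, None)
    return V + alpha * kT * np.log(ratio)

# Usage: after each coarse-grained simulation, measure g_current on the same
# r-grid as g_target and call ibi_update until the two RDFs agree.
```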
20. Statistical Modification Analysis of Helical Planetary Gears based on Response Surface Method and Monte Carlo Simulation
Institute of Scientific and Technical Information of China (English)
ZHANG Jun; GUO Fan
2015-01-01
Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of uncertainties in the tooth modification amount on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process onto tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
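The response-surface-plus-Monte-Carlo workflow above has a simple skeleton: fit a cheap polynomial surrogate on a handful of design points, then push a large random sample through the surrogate instead of the expensive dynamic model. The sketch below uses an invented one-dimensional "DTE fluctuation vs. modification amount" response; the quadratic surrogate, the normal scatter on the modification amount, and all constants are assumptions, and the skewness printout mirrors the paper's observation that the output need not be normal.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical "true" response: DTE fluctuation vs. modification amount x (um).
def dte_fluctuation(x):
    return 1.0 + 0.002 * (x - 25.0) ** 2 + rng.normal(0.0, 0.02, np.shape(x))

design = np.linspace(0.0, 50.0, 11)              # design points for the surrogate
response = dte_fluctuation(design)
coef = np.polyfit(design, response, deg=2)       # quadratic response surface

# Shift manufacturing/installation scatter onto the modification amount.
x_mc = rng.normal(25.0, 3.0, 100_000)
y_mc = np.polyval(coef, x_mc)                    # cheap surrogate evaluations

skew = ((y_mc - y_mc.mean()) ** 3).mean() / y_mc.std() ** 3
print(f"mean={y_mc.mean():.3f}, std={y_mc.std():.3f}, skewness={skew:.2f}")
```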
1. Systematic hierarchical coarse-graining with the inverse Monte Carlo method
Science.gov (United States)
Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto
2015-12-01
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730-3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile.
2. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This largely traded financial product allows us to clearly identify the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA, for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA to avoid the double counting present in the existing bibliographies, where several copula functions are adopted to describe the dependence of the two first-to-default times.
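The key trick named in the abstract, replacing nested "inner scenario" simulation with a cross-sectional regression, can be shown on a deliberately simplified instrument. Below, the exposure of a forward contract under GBM is learned by regressing discounted terminal cash flows on the current spot (the least-squares Monte Carlo step), and CVA is accumulated against a flat hazard rate. The GBM dynamics, quadratic basis, hazard rate and all parameters are illustrative assumptions, not the Hull-White / mean-reverting-intensity setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 12, 50_000
dt = T / steps
increments = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum(increments, axis=1))   # column i is the spot at t=(i+1)dt
payoff = S[:, -1] - K                            # forward-contract cash flow at T

lam, lgd = 0.02, 0.6                             # flat hazard rate, loss given default
cva = 0.0
for i in range(steps - 1):
    t = (i + 1) * dt
    X = np.column_stack([np.ones(paths), S[:, i], S[:, i] ** 2])   # regression basis
    disc_payoff = np.exp(-r * (T - t)) * payoff
    beta, *_ = np.linalg.lstsq(X, disc_payoff, rcond=None)
    V = X @ beta                                 # regressed contract value at t
    ee = np.exp(-r * t) * np.maximum(V, 0.0).mean()     # discounted expected exposure
    pd = np.exp(-lam * (t - dt)) - np.exp(-lam * t)     # default prob in (t-dt, t]
    cva += lgd * ee * pd

print(f"unilateral CVA ~ {cva:.4f}")
```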
3. Simulation of Watts Bar initial startup tests with continuous energy Monte Carlo methods
International Nuclear Information System (INIS)
The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominately as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulations tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients. (author)
4. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models
KAUST Repository
2011-06-06
Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical one. A molecular simulation approach, particularly Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the critical temperature of methane. Results were collected and compared to experimental data existing in the literature; both models showed an elegant agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding some statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence further applications to more complicated systems are being considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid all kinds of problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
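The canonical-ensemble part of such a study reduces to Metropolis sampling of particle positions under the chosen potential. A bare-bones NVT Metropolis loop for Lennard-Jones particles in a periodic box is sketched below, in reduced units and with illustrative settings rather than the thesis's methane parameters; recomputing one particle's pair energy per trial move is the standard single-move update.

```python
import numpy as np

rng = np.random.default_rng(10)

N, rho, T_red, nsweeps, dmax = 64, 0.5, 2.0, 200, 0.15
L = (N / rho) ** (1 / 3)                       # box length from density
pos = rng.random((N, 3)) * L

def pair_energy(i, coords):
    """L-J energy of particle i with all others, minimum-image convention."""
    d = coords - coords[i]
    d -= L * np.round(d / L)
    r2 = (d ** 2).sum(axis=1)
    r2[i] = np.inf                             # exclude self-interaction
    inv6 = 1.0 / r2 ** 3
    return np.sum(4.0 * (inv6 ** 2 - inv6))

for sweep in range(nsweeps):
    for i in range(N):
        old_e = pair_energy(i, pos)
        old_pos = pos[i].copy()
        pos[i] = (pos[i] + rng.uniform(-dmax, dmax, 3)) % L
        dE = pair_energy(i, pos) - old_e
        if dE > 0 and rng.random() >= np.exp(-dE / T_red):
            pos[i] = old_pos                   # Metropolis rejection: restore

u = sum(pair_energy(i, pos) for i in range(N)) / (2 * N)
print("potential energy per particle:", u)
```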
5. Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions
Science.gov (United States)
Ricketson, Lee
2013-10-01
We discuss the use of multi-level Monte Carlo (MLMC) schemes--originally introduced by Giles for financial applications--for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from $O(\varepsilon^{-3})$--for standard Langevin methods and binary collision algorithms--to the theoretically optimal scaling $O(\varepsilon^{-2})$ for the Milstein discretization, and to $O(\varepsilon^{-2}(\log \varepsilon)^{2})$ with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
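The essential MLMC construction is the telescoping sum over grid levels, with the fine and coarse paths on each level driven by the same Brownian increments so that their difference has small variance. The sketch below applies it with Euler-Maruyama to an Ornstein-Uhlenbeck process, a stand-in for the Langevin collision dynamics of the abstract; the level count, sample allocation and all coefficients are illustrative choices rather than the paper's tuned schedule.

```python
import numpy as np

rng = np.random.default_rng(11)

# dX = a(mu - X)dt + s dW;  exact E[X_T] = mu + (X0 - mu) exp(-aT).
a, mu, s, X0, T = 1.0, 0.0, 0.5, 1.0, 1.0

def level_estimator(level, nsamples, M=2):
    """Mean of P_0 on level 0, mean of (P_l - P_{l-1}) on finer levels,
    with the coarse path driven by summed fine Brownian increments."""
    nf = M ** (level + 1)                       # fine-grid steps
    dtf = T / nf
    dW = np.sqrt(dtf) * rng.standard_normal((nsamples, nf))
    Xf = np.full(nsamples, X0)
    for k in range(nf):
        Xf += a * (mu - Xf) * dtf + s * dW[:, k]
    if level == 0:
        return Xf.mean()
    nc = nf // M                                # coupled coarse grid
    dWc = dW.reshape(nsamples, nc, M).sum(axis=2)
    Xc = np.full(nsamples, X0)
    for k in range(nc):
        Xc += a * (mu - Xc) * (T / nc) + s * dWc[:, k]
    return (Xf - Xc).mean()

estimate = sum(level_estimator(l, n)
               for l, n in enumerate([100_000, 25_000, 6_250, 1_560]))
print(f"MLMC estimate of E[X_T]: {estimate:.4f} (exact {X0 * np.exp(-a * T):.4f})")
```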
6. Adjoint-based deviational Monte Carlo methods for phonon transport calculations
Science.gov (United States)
Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.
2015-06-01
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.
7. Systematic hierarchical coarse-graining with the inverse Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Lyubartsev, Alexander P., E-mail: [email protected] [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); Naômé, Aymeric, E-mail: [email protected] [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); UCPTS Division, University of Namur, 61 Rue de Bruxelles, B 5000 Namur (Belgium); Vercauteren, Daniel P., E-mail: [email protected] [UCPTS Division, University of Namur, 61 Rue de Bruxelles, B 5000 Namur (Belgium); Laaksonen, Aatto, E-mail: [email protected] [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); Science for Life Laboratory, 17121 Solna (Sweden)
2015-12-28
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile.
8. Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
Energy Technology Data Exchange (ETDEWEB)
Godfrey, Andrew T [ORNL; Gehin, Jess C [ORNL; Bekar, Kursat B [ORNL; Celik, Cihangir [ORNL
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.
9. Application of the Monte Carlo method for investigation of dynamical parameters of rotors supported by magnetorheological squeeze film damping devices
Czech Academy of Sciences Publication Activity Database
Zapoměl, Jaroslav; Ferfecki, Petr; Kozánek, Jan
2014-01-01
Vol. 8, No. 1 (2014), pp. 129-138. ISSN 1802-680X. Institutional support: RVO:61388998. Keywords: uncertain parameters of rigid rotors * magnetorheological dampers * force transmission * Monte Carlo method. Subject RIV: BI - Acoustics. http://www.kme.zcu.cz/acm/acm/article/view/247/275
10. Studies of criticality Monte Carlo method convergence: use of a deterministic calculation and automated detection of the transient
International Nuclear Information System (INIS)
Monte Carlo criticality calculation allows one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burn-up profile, complete reactor core, ...) may induce biased estimations for $k_{eff}$ or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo. The initial guess is then automated, the sampling of fission sites is modified, and the random walk of neutrons is modified using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to the initialization in an output series, applied here to $k_{eff}$ and Shannon entropy. It relies on modeling stationary series by an order-1 autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to every output of an iterative Monte Carlo. Methods developed in this thesis are tested on different test cases. (author)
11. Monte-Carlo methods for pricing European-style options
Institute of Scientific and Technical Information of China (English)
张丽虹
2015-01-01
We discuss Monte-Carlo methods for pricing European-style options. Based on the famous Black-Scholes option pricing model and risk-neutral valuation, we first discuss in detail how to use the Monte-Carlo method to price standard European options. We then discuss how to introduce control variates and antithetic variates to improve the accuracy of the Monte-Carlo method. Finally, we apply the proposed Monte-Carlo methods to price standard European options, European binary options, European lookback options, and European Asian options, and discuss the advantages and disadvantages of the relevant methods.
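As a rough illustration of the antithetic-variate idea mentioned in the abstract, here is a small Black-Scholes pricing sketch in Python; the parameter values are arbitrary and the payoff is a standard European call:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n=200_000, antithetic=True):
    z = rng.standard_normal(n)
    if antithetic:
        z = np.concatenate([z, -z])   # antithetic variates: pair each draw with its negative
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST - K, 0.0)
    disc = np.exp(-r * T) * payoff    # discounted payoff samples
    return disc.mean(), disc.std(ddof=1) / np.sqrt(len(disc))

print(mc_call(antithetic=False))  # plain Monte-Carlo estimate and standard error
print(mc_call(antithetic=True))   # smaller standard error for the same number of draws
```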
12. Algorithms for modeling radioactive decays of π-and μ-mesons by the Monte-Carlo method
International Nuclear Information System (INIS)
Effective algorithms for modeling the decays $\mu \to e\nu\nu\gamma$ and $\pi \to e\nu\gamma$ by the Monte-Carlo method are described. The algorithms developed allowed us to considerably reduce the time needed to calculate the efficiency of decay detection. They were used for modeling in experiments on the study of rare decays of pions and muons.
13. SEMI-BLIND CHANNEL ESTIMATION OF MULTIPLE-INPUT/MULTIPLE-OUTPUT SYSTEMS BASED ON MARKOV CHAIN MONTE CARLO METHODS
Institute of Scientific and Technical Information of China (English)
Jiang Wei; Xiang Haige
2004-01-01
This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms can work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.
14. Verification of Burned Core Modeling Method for Monte Carlo Simulation of HANARO
International Nuclear Information System (INIS)
The reactor core has been managed well by the HANARO core management system called HANAFMS. The heterogeneity of the irradiation device and core made the neutronic analysis difficult and sometimes doubtful. To overcome the deficiency, MCNP was utilized in neutron transport calculation of the HANARO. For the most part, a MCNP model with the assumption that all fuels are filled with fresh fuel assembly showed acceptable analysis results for a design of experimental devices and facilities. However, it sometimes revealed insufficient results in designs that require good accuracy, like neutron transmutation doping (NTD), because it didn't consider the flux variation induced by depletion of the fuel. In this study, a previously proposed depleted-core modeling method was applied to build a burned-core model of HANARO and verified through a comparison of the calculated result from the depleted-core model and that from an experiment. The modeling method to establish a depleted-core model for the Monte Carlo simulation was verified by comparing the neutron flux distribution obtained by the zirconium activation method and the reaction rate of $^{30}$Si(n,γ)$^{31}$Si obtained by a resistivity measurement method. As a result, the reaction rate of $^{30}$Si(n,γ)$^{31}$Si also agreed well, with about a 3% difference. It was therefore concluded that the modeling method and resulting depleted-core model developed in this study can be a very reliable tool for the design of the planned experimental facility and a prediction of its performance in HANARO.
15. Verification of Burned Core Modeling Method for Monte Carlo Simulation of HANARO
Energy Technology Data Exchange (ETDEWEB)
Cho, Dongkeun; Kim, Myongseop [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-05-15
The reactor core has been managed well by the HANARO core management system called HANAFMS. The heterogeneity of the irradiation device and core made the neutronic analysis difficult and sometimes doubtful. To overcome the deficiency, MCNP was utilized in neutron transport calculation of the HANARO. For the most part, a MCNP model with the assumption that all fuels are filled with fresh fuel assembly showed acceptable analysis results for a design of experimental devices and facilities. However, it sometimes revealed insufficient results in designs that require good accuracy, like neutron transmutation doping (NTD), because it didn't consider the flux variation induced by depletion of the fuel. In this study, a previously proposed depleted-core modeling method was applied to build a burned-core model of HANARO and verified through a comparison of the calculated result from the depleted-core model and that from an experiment. The modeling method to establish a depleted-core model for the Monte Carlo simulation was verified by comparing the neutron flux distribution obtained by the zirconium activation method and the reaction rate of $^{30}$Si(n,γ)$^{31}$Si obtained by a resistivity measurement method. As a result, the reaction rate of $^{30}$Si(n,γ)$^{31}$Si also agreed well, with about a 3% difference. It was therefore concluded that the modeling method and resulting depleted-core model developed in this study can be a very reliable tool for the design of the planned experimental facility and a prediction of its performance in HANARO.
16. Application of the measurement-based Monte Carlo method in nasopharyngeal cancer patients for intensity modulated radiation therapy
International Nuclear Information System (INIS)
This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for nasopharyngeal carcinoma (NPC) patients treated with intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed at the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting.
Highlights:
• The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses.
• 3D dose distributions for NPC patients have been verified by the Monte Carlo method.
• Doses predicted by the Monte Carlo method matched closely with those by the TPS.
• The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS.
• Critical organ doses should be confirmed to avoid overdose to normal organs.
17. Enhancing Dissemination and Implementation Research Using Systems Science Methods
Science.gov (United States)
Lich, Kristen Hassmiller; Neal, Jennifer Watling; Meissner, Helen I.; Yonas, Michael; Mabry, Patricia L.
2015-01-01
PURPOSE: Dissemination and implementation (D&I) research seeks to understand and overcome barriers to adoption of behavioral interventions that address complex problems, specifically interventions that arise from multiple interacting influences crossing socio-ecological levels. It is often difficult for research to accurately represent and address the complexities of the real world, and traditional methodological approaches are generally inadequate for this task. Systems science methods, expressly designed to study complex systems, can be effectively employed for an improved understanding about dissemination and implementation of evidence-based interventions.
METHODS: Case examples of three systems science methods – system dynamics modeling, agent-based modeling, and network analysis – are used to illustrate how each method can be used to address D&I challenges.
RESULTS: The case studies feature relevant behavioral topical areas: chronic disease prevention, community violence prevention, and educational intervention. To emphasize consistency with D&I priorities, the discussion of the value of each method is framed around the elements of the established Reach Effectiveness Adoption Implementation Maintenance (RE-AIM) framework.
CONCLUSIONS: Systems science methods can help researchers, public health decision makers and program implementers to understand the complex factors influencing successful D&I of programs in community settings, and to identify D&I challenges imposed by system complexity. PMID:24852184
18. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
Science.gov (United States)
2014-07-01
A particle motion considering thermophoretic force is simulated by using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem with thermophoresis simulation is the computation time, which is proportional to the collision frequency. Note that the time step interval becomes very small for simulations considering the motion of large particles. Thermophoretic forces calculated by the DSMC method were reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor means the number of molecules colliding with a particle in a collision event. A large time step interval is adopted by considering the collision weight factor. Furthermore, the large time step interval is about a million times longer than the conventional time step interval of the DSMC method when the particle size is 1 μm. Therefore, the computation time becomes about one-millionth. We simulate the graphite particle motion considering thermophoretic force by DSMC-Neutrals (Particle-PLUS neutral module) with the above collision weight factor, where DSMC-Neutrals is commercial software adopting the DSMC method. The size and the shape of the particle are 1 μm and a sphere, respectively. The particle-particle collision is ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result for the continuum limit is the same as Waldmann's result.
19. Quantifying uncertainties in pollutant mapping studies using the Monte Carlo method
Science.gov (United States)
Tan, Yi; Robinson, Allen L.; Presto, Albert A.
2014-12-01
Routine air monitoring provides accurate measurements of annual average concentrations of air pollutants, but the low density of monitoring sites limits its capability in capturing intra-urban variation. Pollutant mapping studies measure air pollutants at a large number of sites during short periods. However, their short duration can cause substantial uncertainty in reproducing annual mean concentrations. In order to quantify this uncertainty for existing sampling strategies and investigate methods to improve future studies, we conducted Monte Carlo experiments with nationwide monitoring data from the EPA Air Quality System. Typical fixed sampling designs have much larger uncertainties than previously assumed, and produce accurate estimates of annual average pollution concentrations approximately 80% of the time. Mobile sampling has difficulties in estimating long-term exposures for individual sites, but performs better for site groups. The accuracy and the precision of a given design decrease when data variation increases, indicating challenges at sites intermittently impacted by local sources such as traffic. Correcting measurements with reference sites does not completely remove the uncertainty associated with short duration sampling. Using reference sites with the addition method can better account for temporal variations than the multiplication method. We propose feasible methods for future mapping studies to reduce uncertainties in estimating annual mean concentrations. Future fixed sampling studies should conduct two separate 1-week long sampling periods in all 4 seasons. Mobile sampling studies should estimate annual mean concentrations for exposure groups with five or more sites. Fixed and mobile sampling designs have comparable probabilities in ordering two sites, so they may have similar capabilities in predicting pollutant spatial variations. Simulated sampling designs have large uncertainties in reproducing seasonal and diurnal variations at individual sites.
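A toy version of such a Monte Carlo subsampling experiment, assuming a synthetic year of daily concentrations rather than the EPA Air Quality System data used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic year of daily concentrations: seasonal cycle plus skewed noise.
days = np.arange(365)
conc = 10 + 3 * np.sin(2 * np.pi * days / 365) + rng.gamma(2.0, 2.0, 365)
annual_mean = conc.mean()

# Monte Carlo over random 1-week sampling windows: how often does the
# short-term mean land within 25% of the annual mean?
trials = 10_000
starts = rng.integers(0, 365 - 7, trials)
week_means = np.array([conc[s:s + 7].mean() for s in starts])
hit = np.mean(np.abs(week_means - annual_mean) / annual_mean < 0.25)
print(f"annual mean {annual_mean:.2f}; within 25%: {100 * hit:.1f}% of weeks")
```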
20. Analysis of the Tandem Calibration Method for Kerma Area Product Meters Via Monte Carlo Simulations
International Nuclear Information System (INIS)
The IAEA recommends that uncertainties of dosimetric measurements in diagnostic radiology for risk assessment and quality assurance should be less than 7% on the confidence level of 95%. This accuracy is difficult to achieve with kerma area product (KAP) meters currently used in clinics. The reasons range from the high energy dependence of KAP meters to the wide variety of configurations in which KAP meters are used and calibrated. The tandem calibration method introduced by Poeyry, Komppa and Kosunen in 2005 has the potential to make the calibration procedure simpler and more accurate compared to the traditional beam-area method. In this method, two positions of the reference KAP meter are of interest: (a) a position close to the field KAP meter and (b) a position 20 cm above the couch. In the close position, the distance between the two KAP meters should be at least 30 cm to reduce the effect of back scatter. For the other position, which is recommended for the beam-area calibration method, the distance of 70 cm between the KAP meters was used in this study. The aim of this work was to complement existing experimental data comparing the two configurations with Monte Carlo (MC) simulations. In a geometry consisting of a simplified model of the VacuTec 70157 type KAP meter, the MCNP code was used to simulate the kerma area product, $P_{KA}$, for the two (close and distant) reference planes. It was found that $P_{KA}$ values for the tube voltage of 40 kV were about 2.5% lower for the distant plane than for the close one. For higher tube voltages, the difference was smaller. The difference was mainly caused by attenuation of the X ray beam in air. Since the problem with high uncertainties in $P_{KA}$ measurements is also caused by the current design of X ray machines, possible solutions are discussed. (author)
http://math.stackexchange.com/questions/440557/for-what-integers-n-does-phi2n-phin/440561 | # For what integers $n$ does $\phi(2n) = \phi(n)$?
Could anyone help me start this problem off? I'm new to elementary number theory and such, and I can't really get a grasp of the totient function.
I know that $$\phi(n) = n\left(1-\frac1{p_1}\right)\left(1-\frac1{p_2}\right)\cdots\left(1-\dfrac1{p_k}\right)$$ but I don't know how to apply this to the problem. I also know that $$\phi(n) = (p_1^{a_1} - p_1^{a_1-1})(p_2^{a_2} - p_2^{a_2 - 1})\cdots$$
Help
Euler's $\phi$ function is multiplicative: for $a,b\in \mathbb{N}$ with $(a,b)=1$ we have $\phi (ab)=\phi (a)\phi (b)$. So let $n=2^km$ with $m$ being odd. If $k\ge 1$, we have \begin{align} \phi (n)&=\phi(2^k)\phi(m)=2^{k-1}\phi(m) \\ \phi(2n)&=\phi(2^{k+1})\phi(m)=2^{k}\phi(m)\end{align} so $\phi (n)\ne \phi(2n)$. Hence $k<1\Rightarrow k=0\Rightarrow n$ must be odd.
Another easy proof: Let $n=2^k\prod_{i=1}^{r}p_i^{\alpha_i}$ with $k\ge 1$ and the $p_i \ne 2$ prime. Then we have $\phi (n)=\frac{n}{2}\prod_{i=1}^{r}(1-\frac{1}{p_i})$ and $\phi (2n)=\frac{2n}{2}\prod_{i=1}^{r}(1-\frac{1}{p_i})$. Can $\phi (n)$ be equal to $\phi(2n)$? Now consider $n=2k+1$ and find $\phi (n)$ and $\phi (2n)$. What do you see?
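A quick brute-force check of the conclusion ($\phi(2n)=\phi(n)$ exactly for odd $n$), using a naive totient; this is only a sanity check, not a proof:

```python
from math import gcd

def phi(n):
    # brute-force Euler totient: count 1 <= k <= n with gcd(k, n) = 1
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# phi(2n) == phi(n) holds exactly for the odd n
print([n for n in range(1, 30) if phi(2 * n) == phi(n)])
# -> [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29]
```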
I know that the function is multiplicative, but I don't understand how to use that information. Sorry. – Ozera Jul 10 '13 at 15:46
I guess the 2nd proof will clear things up. – Abhra Abir Kundu Jul 10 '13 at 15:47
Actually I'm not really sure what it says, but I'm going to continue thinking about the importance of it being multiplicative. – Ozera Jul 10 '13 at 15:56
Now is it clear @Ozera – Abhra Abir Kundu Jul 10 '13 at 16:08
Hint: If $n$ is odd, then $\gcd(n,2)=1$, thus
$$\phi(2n)=\phi(2) \phi(n) \,.$$
If $n$ is even, write $n=2^km$ with $m$ odd and $k \geq 1$.
$$\phi(n)=\phi(2^k) \phi(m) \,.$$ $$\phi(2n)=\phi(2^{k+1}) \phi(m) \,.$$
Hint: You may also prove in general that
$$\varphi(mn)=\frac{d\varphi(m)\varphi(n)}{\varphi(d)}$$
where $d=\gcd(m,n).$
+1 Nice remarking the fact. :-) – Babak S. Aug 22 '13 at 7:36
The formula you gave is incorrect. It should have been $$\phi(mn) = \frac{d\phi(m)\phi(n)}{\phi(d)}$$ – Balarka Sen Jan 17 at 15:32
Dear @BalarkaSen, yes, thanks for the correction. – Ehsan M. Kermani Jan 18 at 20:15
$\displaystyle{\left\lfloor n + 1 \over 2\right\rfloor}$ is a solution.
@TMM Sorry. I was not careful when I checked it. I'll delete it after you read this comment. – Felix Marin Jan 18 at 21:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9833847284317017, "perplexity": 463.46036940835876}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772398.133/warc/CC-MAIN-20141217075252-00064-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://planetmath.org/alternativedefinitionsofcountable | # alternative definitions of countable
The following are alternative ways of characterizing a countable set.
###### Proposition 1.
Let $A$ be a set and $\mathbb{N}$ the set of natural numbers. The following are equivalent:
1. there is a surjection from $\mathbb{N}$ to $A$.
2. there is an injection from $A$ to $\mathbb{N}$.
3. either $A$ is finite or there is a bijection between $A$ and $\mathbb{N}$.
###### Proof.
First notice that if $A$ were the empty set, then any map to or from $A$ is empty, so $(1)\Leftrightarrow(2)\Leftrightarrow(3)$ vacuously. Now, suppose that $A\neq\varnothing$.
$(1)\Rightarrow(2)$. Suppose $f:\mathbb{N}\to A$ is a surjection. For each $a\in A$, let $f^{-1}(a)$ be the set $\{n\in\mathbb{N}\mid f(n)=a\}$. Since $f^{-1}(a)$ is a subset of $\mathbb{N}$, which is well-ordered, $f^{-1}(a)$ itself is well-ordered, and thus has a least element (keep in mind that $A\neq\varnothing$, so elements $a\in A$ exist, and $f^{-1}(a)\neq\varnothing$ by surjectivity). Let $g(a)$ be this least element. Then $a\mapsto g(a)$ is a well-defined mapping from $A$ to $\mathbb{N}$. It is one-to-one, for if $g(a)=g(b)=n$, then $a=f(n)=b$.
$(2)\Rightarrow(1)$. Suppose $g:A\to\mathbb{N}$ is one-to-one. So $g^{-1}(n)$ is at most a singleton for every $n\in\mathbb{N}$. If it is a singleton, identify $g^{-1}(n)$ with that element. Otherwise, identify $g^{-1}(n)$ with a designated element $a_{0}\in A$ (remember $A$ is non-empty). Define a function $f:\mathbb{N}\to A$ by $f(n):=g^{-1}(n)$. By the discussion above, $g^{-1}(n)$ is a well-defined element of $A$, and therefore $f$ is well-defined. $f$ is onto because for every $a\in A$, $f(g(a))=a$.
$(3)\Rightarrow(2)$ is clear.
$(2)\Rightarrow(3)$. Let $g:A\to\mathbb{N}$ be an injection. Then $g(A)$ is either finite or infinite. If $g(A)$ is finite, so is $A$, since they are equinumerous. Suppose $g(A)$ is infinite. Since $g(A)\subseteq\mathbb{N}$, it is well-ordered. The (induced) well-ordering on $g(A)$ implies that $g(A)=\{n_{1},n_{2},\ldots\}$, where $n_{1}<n_{2}<\cdots$.
Now, define $h:\mathbb{N}\to A$ as follows, for each $i\in\mathbb{N}$, $h(i)$ is the element in $A$ such that $g(h(i))=n_{i}$. So $h$ is well-defined. Next, $h$ is injective. For if $h(i)=h(j)$, then $n_{i}=g(h(i))=g(h(j))=n_{j}$, implying $i=j$. Finally, $h$ is a surjection, for if we pick any $a\in A$, then $g(a)\in g(A)$, meaning that $g(a)=n_{i}$ for some $i$, so $h(i)=g(a)$. ∎
Therefore, countability can be defined in terms of either of the above three statements.
Note that the axiom of choice is not needed in the proof of $(1)\Rightarrow(2)$, since the selection of an element in $f^{-1}(a)$ is definite, not arbitrary.
For example, we show that $\mathbb{N}^{2}$ is countable. By the proposition above, we either need to find a surjection $f:\mathbb{N}\to\mathbb{N}^{2}$, or an injection $g:\mathbb{N}^{2}\to\mathbb{N}$. Actually, in this case, we can find both:
1. the function $f:\mathbb{N}\to\mathbb{N}^{2}$ given by $f(a)=(m,n)$ where $a=2^{m}(2n+1)$ is surjective. First, the function is well-defined, for every positive integer has a unique representation as the product of a power of $2$ and an odd number. It is surjective because for every $(m,n)$, we see that $f(2^{m}(2n+1))=(m,n)$.
2. the function $g:\mathbb{N}^{2}\to\mathbb{N}$ given by $g(m,n)=2^{m}3^{n}$ is clearly injective.
Note that the injectivity of $g$, as well as $f$ being well-defined, rely on the unique factorization of integers by prime numbers. In this entry (http://planetmath.org/ProductOfCountableSets), we actually find a bijection between $\mathbb{N}$ and $\mathbb{N}^{2}$.
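A small sketch checking both maps from this example; the names f and g follow the entry, and the loop bounds are arbitrary:

```python
def f(a):
    # write a = 2^m * (2n + 1) and return the pair (m, n)
    m = 0
    while a % 2 == 0:
        a //= 2
        m += 1
    return (m, (a - 1) // 2)

def g(m, n):
    # injective map N^2 -> N from the entry
    return 2 ** m * 3 ** n

assert all(f(2 ** m * (2 * n + 1)) == (m, n) for m in range(5) for n in range(5))
assert len({g(m, n) for m in range(5) for n in range(5)}) == 25  # no collisions
print("both checks pass")
```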
As a corollary, we record the following:
###### Corollary 1.
Let $A,B$ be sets, $f:A\to B$ a function.
• If $f$ is an injection, and $B$ is countable, so is $A$.
• If $f$ is a surjection, and $A$ countable, so is $B$.
The proof is left to the reader.
http://mathhelpforum.com/advanced-math-topics/30845-infinity-equation-help-print.html | infinity equation,,,,help!
• March 12th 2008, 07:05 PM
cgoplin
infinity equation,,,,help!
don't know how to show this equation on screen E smallx2 right hand corner then space dx +infinity symbol on top - infinity symbol on bottom with two fishhooks? oposing what is it? please help!
• March 12th 2008, 07:11 PM
TheEmptySet
Quote:
Originally Posted by cgoplin
don't know how to show this equation on screen E smallx2 right hand corner then space dx +infinity symbol on top - infinity symbol on bottom with two fishhooks? oposing what is it? please help!
$\int_{\gamma-i \infty}^{\gamma+i \infty}f(z)dz$
If this is the case it is called a contour integral. It requires the use of complex variables.
• March 12th 2008, 07:36 PM
ThePerfectHacker
Fishhooks (Rofl)
http://infinitefuture.blogspot.com/2011/01/vector-dot-products-and-cos-and.html | Vector dot products and cos and Pythagorus
\newcommand{\abs}[1]{\lvert#1\rvert}
\newcommand{\norm}[1]{\lVert#1\rVert}
\abs{\vec{X}}\ \ \ \ \ \
\norm{\vec{X}}
This illustrates how LaTeX instantiates a new command. The format starts with \newcommand, then the name of the new command, the number of arguments, and how the command is built from previously defined methods, with #(n) standing for the n-th argument passed in curly brackets. It is a lot like a function in shell, which uses \$1 ... as passed args.
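For concreteness, here is a minimal compilable document using the same pattern; the two-argument \inner macro is an extra illustration not in the original post:

```latex
\documentclass{article}
\usepackage{amsmath} % provides \lvert, \rVert, etc.
\newcommand{\abs}[1]{\lvert#1\rvert}           % one argument, referenced as #1
\newcommand{\norm}[1]{\lVert#1\rVert}
\newcommand{\inner}[2]{\langle #1, #2 \rangle} % two arguments: #1 and #2
\begin{document}
$\abs{\vec{x}} \quad \norm{\vec{x}} \quad \inner{\vec{v}}{\vec{w}}$
\end{document}
```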
\cos\theta = \frac
{\vec{v}\cdot \vec{w}}
{|\vec{v}|\cdot|\vec{w}|}
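A quick numeric check of this identity in Python, using an arbitrary 45-degree pair of vectors:

```python
import numpy as np

v = np.array([3.0, 0.0])
w = np.array([1.0, 1.0])

# cos(theta) = (v . w) / (|v| |w|)
cos_theta = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
print(cos_theta, np.cos(np.pi / 4))  # both ~0.7071 for this 45-degree pair
```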
It does seem that the concepts of sin and cos are defined in a rather circular way (no pun intended). The values at an angle are defined by the relationship of dimension itself, and so long as a standard is defined, the numbers are the same in relationship to scale.
http://math.stackexchange.com/questions/120027/standard-deviation-why-divide-by-n-1-rather-than-n | # Standard Deviation: Why divide by $(N-1)$ rather than $N$?
The formula for the standard deviation is the square root of the sum of the squared deviations from the mean divided by $N-1$.
Why isn't it simply the square root of the mean of the squared deviations from the mean, i.e., divided by $N$?
Why is it divided by $N-1$ rather than $N$?
To prevent bias, as explained here and here. – William DeMeo Mar 14 '12 at 11:43
This might help: stats.stackexchange.com/questions/3931/… – Byron Schmuland Mar 14 '12 at 12:26
The reason is because it gives you an unbiased estimator. But, do not confuse this with giving the best estimator. In my time series class, my professor tells me that in time series you usually divide by n instead, because it's actually a better approximation. I couldn't explain to you why or anything. – Graphth Mar 14 '12 at 12:42
Well, one thing is that the samples are not independent in time series... I'm sure that has something to do with it. – Graphth Mar 14 '12 at 12:51
If you have n samples, the variance is defined as: $$s^2=\frac{\sum_{i=1}^n (X_i-m)^2}{n}$$ where $m$ is the average of the distribution. In order to have an unbiased estimator you have to have: $$E(s^2)=\sigma^2$$ where $\sigma^2$ is the real unknown value of the variance. It's possible to show that $$E(s^2)=E\left(\frac{\sum_{i=1}^n (X_i-m)^2}{n} \right)=\frac{n}{n-1}\sigma^2$$ So if you want to estimate the 'real' value of $\sigma^2$ you must divide by $n-1$
The sample variance uses the sample average, not $m$. – Byron Schmuland Mar 14 '12 at 12:34
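A small simulation, for illustration only, of the point made in the answer and the comment above: with the sample average, dividing by $n-1$ comes out unbiased, while dividing by $n$ is biased low by a factor $(n-1)/n$:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2 = 4.0              # true variance
n, trials = 5, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), (trials, n))
xbar = x.mean(axis=1, keepdims=True)        # sample average per trial
ss = ((x - xbar) ** 2).sum(axis=1)          # sum of squared deviations

print("divide by n-1:", (ss / (n - 1)).mean())  # ~4.0 (unbiased)
print("divide by n:  ", (ss / n).mean())        # ~3.2 = (n-1)/n * 4 (biased low)
```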
http://www.ck12.org/physics/Velocity-and-Acceleration/lesson/user:a3Jvc2VuYXVAcGVyaGFtLmsxMi5tbi51cw../Velocity-and-Acceleration/ | Velocity and Acceleration | Physics | CK-12 Foundation
# Velocity and Acceleration
Students will learn the meaning of acceleration, how it is different than velocity and how to calculate average acceleration.
### Key Equations
$v =$ velocity (m/s)
$v_i =$ initial velocity
$v_f =$ final velocity
$\Delta v =$ change in velocity $= v_f - v_i$
$v_{avg} = \frac{\Delta x}{\Delta t}$
$a =$ acceleration $(m/s^2)$
$a_{avg} = \frac{\Delta v}{\Delta t}$
Guidance
• Acceleration is the rate of change of velocity. So in other words, acceleration tells you how quickly the velocity is increasing or decreasing. An acceleration of $5 \ m/s^2$ indicates that the velocity is increasing by $5 m/s$ in the positive direction every second.
• Deceleration is the term used when an object’s speed (i.e. magnitude of its velocity) is decreasing due to acceleration in the opposite direction of its velocity.
#### Example 1
A Top Fuel dragster can accelerate from 0 to 100 mph (160 km/hr) in 0.8 seconds. What is the average acceleration in $m/s^2$ ?
Question: $a_{avg} = ? \ [m/s^2]$
Given: $v_i = 0 \ m/s$
${\;} \qquad \ \ v_f = 160 \ km/hr$
${\;} \qquad \ \quad t = 0.8 \ s$
Equation: $a_{avg} = \frac{\Delta v }{t}$
Plug n’ Chug: Step 1: Convert km/hr to m/s
$v_f = \left( 160 \frac{km}{hr} \right ) \left( \frac{1,000 \ m}{1 \ km} \right ) \left ( \frac{1 \ hr}{3,600 \ s} \right ) = 44.4 \ m/s$
Step 2: Solve for average acceleration:
$a_{avg} = \frac{\Delta v}{t} = \frac{v_f - v_i}{t} = \frac{44.4 \ m/s - 0 \ m/s}{0.8 \ s} = 56 \ m/s^2$
Answer: $\boxed {\mathbf{56 \ m/s^2}}$
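The same Plug n' Chug steps as a quick check (values from Example 1):

```python
# unit conversion and average acceleration from Example 1
v_f = 160 * 1000 / 3600      # 160 km/hr -> 44.4 m/s
a_avg = (v_f - 0.0) / 0.8    # (v_f - v_i) / t
print(round(v_f, 1), round(a_avg), "m/s^2")   # 44.4, 56 m/s^2
```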
### Time for Practice
1. Ms. Reitman’s scooter starts from rest and accelerates at $2.0 m/s^2$ . What is the scooter's velocity after 1s? after 2s? after 7s?
1. 2 m/s, 4 m/s, 14 m/s
http://civilservicereview.com/2015/09/ | ## The Ratio Word Problems Tutorial Series
This is a series of tutorials regarding ratio word problems. A ratio is a relationship between two numbers that tells how many times one number contains the other. In this series of problems, we will learn about the different types of ratio word problems.
How to Solve Word Problems Involving Ratio Part 1 details the intuitive meaning of ratio. It uses arithmetic calculations in order to explain its meaning. After the explanation, the algebraic solution to the problem is also discussed.
How to Solve Word Problems Involving Ratio Part 2 is a continuation of the first part. In this part, the ratio of three quantities is described. An algebraic method is used to solve the problem.
How to Solve Word Problems Involving Ratio Part 3: in this post, the ratio of two quantities is given. Then, both quantities are increased, resulting in another ratio.
How to Solve Word Problems Involving Ratio Part 4 involves the difference of two numbers whose ratio is given.
If you have more math word problems involving ratio that are different from the ones mention above, feel free to comment below and let us see if we can solve them.
## How to Solve Word Problems Involving Ratio Part 4
This is the fourth and the last part of the solving problems involving ratio series. In this post, we are going to solve another ratio word problem.
Problem
The ratio of two numbers is 1:3. Their difference is 36. What is the larger number?
Solution and Explanation
Let x be the smaller number and 3x be the larger number.
3x – x = 36
2x = 36
x = 18
So, the smaller number is 18 and the larger number is 3(18) = 54.
Check:
Is the ratio 18:54 equal to 1:3? Yes, since 3 times 18 equals 54.
Is their difference 36? Yes, 54 – 18 = 36.
Therefore, we are correct.
## How to Solve Word Problems Involving Ratio Part 3
In the previous two posts, we have learned how to solve word problems involving ratios of two and three quantities. In this post, we are going to learn how to solve a slightly different problem, where both numbers are increased.
Problem
The ratio of two numbers is 3:5 and their sum is 48. What must be added to both numbers so that the ratio becomes 3:4?
Solution and Explanation
First, let us solve the first sentence. We need to find the two numbers whose ratio is 3:5 and whose sum is 48.
Now, let x be the number of sets of 3 and 5.
3x + 5x = 48
8x = 48
x = 6
Now, this means that the numbers are 3(6) = 18 and 5(6) = 30.
Now if the same number is added to both numbers, then the ratio becomes 3:4.
Recall that in the previous posts, we have discussed that a ratio can also be represented as a fraction. So, we can represent 18:30 as $\frac{18}{30}$. Now, if we add the same number to both numbers (the numerator and the denominator), we get $\frac{3}{4}$. If we let that number be y, then
$\dfrac{18 + y}{30 + y} = \dfrac{3}{4}$.
Cross multiplying, we have
$4(18 + y) = 3(30 + y)$.
By the distributive property,
$72 + 4y = 90 + 3y$
$4y - 3y = 90 - 72$
$y = 18$.
So, we add 18 to both the numerator and denominator of $\frac{18}{30}$. That is,
$\dfrac{18 + 18}{30 + 18} = \dfrac{36}{48}$.
Now, to check, is $\dfrac{36}{48} = \frac{3}{4}$? Yes, it is. Divide both the numerator and the denominator by 12 to reduce the fraction to lowest terms.
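The whole solution can be checked in a few lines; the variable names mirror the post (x for the number of groups, y for the number added to both terms):

```python
from fractions import Fraction

# numbers in ratio 3:5 summing to 48
x = 48 // (3 + 5)
a, b = 3 * x, 5 * x            # 18 and 30

# solve (a + y)/(b + y) = 3/4  =>  4(a + y) = 3(b + y)  =>  y = 3b - 4a
y = 3 * b - 4 * a
print(a, b, y, Fraction(a + y, b + y))   # 18 30 18 Fraction(3, 4)
```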
## How to Solve Word Problems Involving Ratio Part 2
This is the second part of a series of posts on Solving Ratio Problems. In the first part, we learned how to solve, intuitively and algebraically, problems involving the ratio of two quantities. In this post, we are going to learn how to solve a ratio problem involving 3 quantities.
Problem 2
The ratio of the red, green, and blue balls in a box is 2:3:1. If there are 36 balls in the box, how many green balls are there?
Solution and Explanation
From the previous post, we have already learned the algebraic solutions of problems like the one shown above. So, we can have the following:
Let $x$ be the number of groups of balls per color.
$2x + 3x + x = 36$
$6x = 36$
$x = 6$
So, there are 6 groups. Now, since we are looking for the number of green balls, we multiply x by 3.
So, there are 6 groups (3 green balls per group) = 18 green balls.
Check:
From above, $x = 6(1)$ is the number of blue balls. The expression 2x represents the number of red balls, so we have 2x = 2(6) = 12 balls. Therefore, we have 12 red balls, 18 green balls, and 6 blue balls.
We can check by adding them: 12 + 18 + 6 = 36.
This satisfies the condition above that there are 36 balls in all. Therefore, we are correct.
## How to Solve Word Problems Involving Ratio Part 1
In a dance school, 18 girls and 8 boys are enrolled. We can say that the ratio of girls to boys is 18:8 (read as 18 is to 8). Ratio can also be expressed as fraction so we can say that the ratio is 18/8. Since we can reduce fractions to lowest terms, we can also say that the ratio is 9/4 or 9:4. So, ratio can be a relationship between two quantities. It can also be ratio between two numbers like 4:3 which is the ratio of the width and height of a television screen.
Problem 1
The ratio of boys and girls in a dance club is 4:5. The total number of students is 63. How many girls and boys are there in the club?
Solution and Explanation
The ratio of boys to girls being 4:5 means that for every 4 boys, there are 5 girls. That means that if there are 2 groups of 4 boys, there are also 2 groups of 5 girls. So by calculating them and adding, we have
4 + 5 = 9
4(2) +5(2) =18
4(3) +5(3) =27
4(4) +5(4) = 36
4(5) +5(5) = 45
4(6) +5(6) =54
4(7) +5(7) =63
As we can see, we are looking for the number of groups of 4 and, and the answer is 7 groups of each. So there are 4(7) = 28 boys and 5(7) = 35 girls.
As you can observe, the number of groups of 4 is the same as the number of groups of 5. Therefore, the question above is equivalent to finding the number of groups (of 4 and 5), whose total number of persons add up to 63.
Algebraically, if we let x be the number of groups of 4, then it is also the number of groups of 5. So, we can make the following equation.
4 x (number of groups of 4) + 5 x (number of groups of 5) = 63
Or
4x + 5x = 63.
Simplifying, we have
9x = 63
x = 7.
So there are 4(7) = 28 boys and 5(7) = 35 girls. As we can see, we confirmed the answer above using algebraic methods.
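The grouping equation 4x + 5x = 63 in code form, as a quick check of the answer above:

```python
# 4x + 5x = 63  =>  x groups of 4 boys and x groups of 5 girls
total, b_ratio, g_ratio = 63, 4, 5
x = total // (b_ratio + g_ratio)
boys, girls = b_ratio * x, g_ratio * x
print(x, boys, girls)   # 7 28 35
```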
## How to Solve Investment Word Problems in Algebra
Investment word problems in Algebra are one of the types of problems that usually come out in the Civil Service Exam. In solving investment word problems, you should know the basic terms used. Some of these terms are the principal (P) or the money invested, the rate (R) or the percent of interest, the interest (I) or the return on investment (profit), and the time (T) or how long the money is invested. Interest is the product of the principal, the rate, and the time, and therefore we have the formula
I = PRT.
This tutorial series discusses the different types of problems in investment and discussed the method and strategies used in solving them.
How to Solve Investment Problems Part 1 discusses the common terminology used in investment problems. It also discusses an investment problem where the principal is invested at two different interest rates.
How to Solve Investment Problems Part 2 is a discussion of another investment problem just like in part 1. In the problem, the principal is invested at two different interest rates and the interest in one investment is larger than the other.
How to Solve Investment Problems Part 3 is very similar to part 2, only that the smaller interest amount is described.
How to Solve Investment Problems Part 4 discusses an investment problem with a given interest in one investment and an unknown amount of investment at another rate to satisfy a percentage of interest for the entire investment.
## How to Solve Investment Problems Part 4
This is the fourth part of the Solving Investment Problems Series. In this part, we discuss a problem which is very similar to the third part. We discuss an investment at two different interest rates.
Problem
Mr. Garett invested a part of $20,000 at a bank at 4% yearly interest. How much does he have to invest at another bank at 8% yearly interest so that the total interest on the money is 7%?

Solution and Explanation

Let x be the money invested at 8%.

(1) We know that the interest on $20,000 invested at 4% yearly interest is 20,000(0.04).

(2) We also know that the interest on the money invested at 8% is (0.08)(x).

(3) The interest on the total amount of money invested is 7% of it. So, (20,000 + x)(0.07).

Now, the interest in (1) added to the interest in (2) is equal to the interest in (3). Therefore,

20,000(0.04) + (0.08)(x) = (20,000 + x)(0.07)

Simplifying, we have

800 + 0.08x = 1400 + 0.07x

To eliminate the decimal point, we multiply both sides by 100. That is

80000 + 8x = 140000 + 7x

8x – 7x = 140000 – 80000

x = 60000

This means that he has to invest $60,000 at 8% interest in order for the total to be 7% of the entire investment.
Check:
$20,000 x 0.04 = $800

$60,000 x 0.08 = $4,800

Adding the two interests, we have $5,600. We check if this is really 7% of the total investment.

Our total investment is $80,000. Now, $80,000 x 0.07 = $5,600.
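The same solution in code form; solving 20,000(0.04) + 0.08x = (20,000 + x)(0.07) for x gives the closed form used below:

```python
# 20000*0.04 + 0.08*x = (20000 + x)*0.07  =>  x = p*(r_t - r_l)/(r_h - r_t)
p, r_low, r_high, r_target = 20_000, 0.04, 0.08, 0.07
x = p * (r_target - r_low) / (r_high - r_target)
print(x)  # 60000.0

total_interest = p * r_low + x * r_high
print(total_interest, (p + x) * r_target)  # both 5600.0, i.e. 7% of 80,000
```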
https://brilliant.org/problems/negate-the-roots/ | # Negate the Roots
Algebra Level 3
The roots of the monic polynomial $x^5 + a x^4 + b x^3 + c x^2 + d x + e$ are $$-r_1$$, $$-r_2$$, $$-r_3$$, $$-r_4$$, and $$-r_5$$, where $$r_1$$, $$r_2$$, $$r_3$$, $$r_4$$, and $$r_5$$ are the roots of the polynomial $x^5 + 9x^4 + 13x^3 - 57 x^2 - 86 x + 120.$ Find $$|a+b+c+d+e|.$$
Details and assumptions
A root of a polynomial is a number where the polynomial is zero. For example, 6 is a root of the polynomial $$2x - 12$$.
A polynomial is monic if its leading coefficient is 1. For example, the polynomial $$x^3 + 3x - 5$$ is monic but the polynomial $$-x^4 + 2x^3 - 6$$ is not.
The notation $$| \cdot |$$ denotes the absolute value. The function is given by $|x | = \begin{cases} x & x \geq 0 \\ -x & x < 0 \\ \end{cases}$ For example, $$|3| = 3, |-2| = 2$$.
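A numeric cross-check (not the intended pen-and-paper route, which is to note that negating the roots of a degree-5 monic polynomial gives $q(x) = -p(-x)$, i.e., the even-power coefficients change sign): numpy's roots/poly round-trip recovers the coefficients of the new monic polynomial.

```python
import numpy as np

p = [1, 9, 13, -57, -86, 120]
q = np.poly(-np.roots(p)).real     # monic polynomial whose roots are negated
print(np.round(q, 6))              # coefficients [1, a, b, c, d, e]
print(round(abs(q[1:].sum())))     # |a + b + c + d + e|
```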
http://mathhelpforum.com/advanced-statistics/209716-kullback-lieber-divergence.html | 1. ## Kullback-Leibler divergence
The Kullback-Leibler divergence between two distributions with pdfs f(x) and g(x) is defined
by
$KL(F;G) = \int_{-\infty}^{\infty} ln \left(\frac{f(x)}{g(x)}\right)f(x)dx$
Compute the Kullback-Leibler divergence when F is the standard normal distribution and G is the normal distribution with mean $\mu$ and variance 1. For what value of $\mu$ is the divergence minimized?
I was never instructed on this kind of divergence so I am a bit lost on how to solve this kind of integral. I get that I can simplify my two normal equations in the natural log but my guess is that I should wait until after I take the integral. Any help is appreciated.
2. ## Re: Kullback-Leibler divergence
Hi WUrunner,
If we simplify what's inside the logarithm we should get
$KL(F;G)=\int_{-\infty}^{\infty}\left(\frac{1}{2}-x\right)\left(\frac{1}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}}\right)dx.$
Now multiply this out to get
$KL(F;G)=\frac{1}{2}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\frac{x^{2}}{2}}\,dx + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}(-x)\,e^{-\frac{x^{2}}{2}}\,dx.$
Now we know that the first integral is $\sqrt{2\pi}$ (see Gaussian integral - Wikipedia, the free encyclopedia). The second integral can be computed using a u-substitution or by noting that we're integrating an odd function about a symmetric interval. When we put all the pieces together (if I've done the computations correctly) we should get
$KL(F;G)=\frac{1}{2}.$
Does this straighten things out? Let me know if anything is unclear. Good luck!
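A Monte Carlo cross-check of this, for illustration: with G = N(mu, 1), the log-ratio inside the integral simplifies to mu^2/2 - mu*x, so sampling x from the standard normal should give roughly mu^2/2 (about 1/2 for the mu = 1 case worked above, and 0 at the minimizing mu = 0):

```python
import numpy as np

rng = np.random.default_rng(4)

def kl_mc(mu, n=1_000_000):
    # KL(F;G) = E_f[ln f(X) - ln g(X)] with F = N(0,1), G = N(mu,1);
    # the log-ratio is (x - mu)^2/2 - x^2/2 = mu^2/2 - mu*x
    x = rng.standard_normal(n)
    return np.mean(0.5 * (x - mu) ** 2 - 0.5 * x ** 2)

print(kl_mc(1.0))  # ~0.5, matching the 1/2 computed above
print(kl_mc(0.0))  # ~0.0: the divergence is minimized at mu = 0
```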
https://forum.allaboutcircuits.com/threads/question-dealing-with-gain-bandwidth-of-inverting-op-amp.58857/ | # Question dealing with gain bandwidth of inverting op-amp
#### uofmx12
Joined Mar 8, 2011
55
In this circuit:
http://i812.photobucket.com/albums/zz41/uofmx12/E31ckt.jpg
A) what two estimates of the gain bandwidth product of your circuit can you make
B) Needed to make a circuit with bandwidth of 50kHz with gain of 1000V/V, would this circuit be chosen?
-No?
C) What if requirements were 10k Hz and 100V/V, would this circuit be chosen?
-Yes?
#### steveb
Joined Jul 3, 2008
2,436
In this circuit:
http://i812.photobucket.com/albums/zz41/uofmx12/E31ckt.jpg
A) what two estimates of the gain bandwidth product of your circuit can you make
B) Needed to make a circuit with bandwidth of 50kHz with gain of 1000V/V, would this circuit be chosen?
-No?
C) What if requirements were 10k Hz and 100V/V, would this circuit be chosen?
-Yes?
Is there any more information provided? For example, did they mention which OPAMP it is, or give the gain-bandwidth product for the OPAMP?
#### uofmx12
Joined Mar 8, 2011
55
inverting op-amp circuit and 100Hz frequency. That was provided.
#### steveb
Joined Jul 3, 2008
2,436
inverting op-amp circuit and 100Hz frequency. That was provided.
OK, that helps. Do you understand what that means? In other words do you know the meaning of gain bandwidth product? We need to have a baseline of whether your trouble is that you don't understand the definition, or if you just don't know how to apply the definition to that particular problem.
Your above information is still a little vague though. So is the 100 Hz frequency referenced to a particular inverting opamp, or to the one in your circuit? Or, is it referenced to the OPAMP itself, without any components (open loop)?
#### Audioguru
Joined Dec 20, 2007
11,249
The input resistors attenuate the input 101 times then the feedback resistors cause the opamp to amplify 1001 times.
The result is an amplification of only about 9.9 times. The output will probably be very noisy (hiss).
An audio opamp (something better and newer than a lousy old 741 opamp) that is open loop has a gain of 33,000 to about one million at 100Hz.
#### steveb
Joined Jul 3, 2008
2,436
... the feedback resistors cause the opamp to amplify 1001 times.
The result is an amplification of only about 9.9 times ...
Isn't it 1000 times for the 1M and 1K feedback portion, and net gain of -9.8 due to voltage attenuation and source resistance of the input voltage divider?
#### Audioguru
Joined Dec 20, 2007
11,249
Isn't it 1000 times for the 1M and 1K feedback portion, and net gain of -9.8 due to voltage attenuation and source resistance of the input voltage divider?
The input resistors attenuate 101 times and the feedback causes 1001 times gain.
So the result is a gain of about only -9.9 times.
#### steveb
Joined Jul 3, 2008
2,436
The input resistors attenuate 101 times and the feedback causes 1001 times gain.
So the result is a gain of about only -9.9 times.
How are you defining your feedback portion? If it's the two resistors 1M and 1K, then this is an inverting amplifier configuration and the gain of that portion is -R2/R1=-1000.
Looking at the whole circuit, you can make a Thevenin equivalent source of Vi*10/1010 and a source resistance of 10 in parallel with 1000, which is about 9.9 Ohms. Or Vi/101 and 9.9 ohm source resistance. Now the effective input resistance on the inverting amplifier (looking in from the ideal Thevenin voltage of Vi/101) is 1009.9 ohms due to the source resistance, and the net gain of the inverting Opamp is -1000000/1009.9=-990. Now combine this with the attenuated voltage and the net gain is -990/101 which is about equal to 9.8. Isn't it?
Alternatively, you can analyze the full circuit with equations and you come up with Av=-R3*R2/(R4*R1+R3*(R4+R1)) where R2 and R1 are the OPAMP feedback resistors 1M and 1K, respectively; and R4 and R3 are the voltage divider resistors 1000 and 10, respectively. Again, the gain is about -9.8.
#### The Electrician
Joined Oct 9, 2007
2,786
Another way of calculating the signal gain is to note that Ra, Rb and R1 form a T network. Use the delta-wye transformation to convert it into a pi network. Then the shunt resistors of the pi network can be ignored because the one across the input voltage has no effect if the input voltage source has zero output impedance, and the shunt across the - input of the opamp has no effect because that terminal is a virtual ground.
We are left with the series element of the pi network, which is 102000 ohms. Then the signal gain is 1000000/102000 = 500/51 = 9.80392
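Both routes to the gain can be checked numerically. The snippet below is my addition; the resistor labels follow steveb's formula, with R1 = 1 kOhm and R2 = 1 MOhm as the feedback pair and R4 = 1 kOhm (series) and R3 = 10 Ohm (shunt) as the input divider, values assumed from the thread's discussion:

```python
# Numerical check of the two gain derivations above.
R1, R2 = 1e3, 1e6    # op-amp feedback pair
R3, R4 = 10.0, 1e3   # input divider: R4 in series, R3 to ground

# steveb's full-circuit formula
Av = -R3 * R2 / (R4 * R1 + R3 * (R4 + R1))
print(Av)            # -9.80392...

# The Electrician's route: series element of the pi network obtained from
# the wye -> delta transformation of the R4/R3/R1 "T"
series = (R4 * R1 + R4 * R3 + R3 * R1) / R3
print(series)        # 102000 ohms
print(-R2 / series)  # -1000000/102000 = -9.80392...
```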
#### steveb
Joined Jul 3, 2008
2,436
Another way of calculating the signal gain is to note that Ra, Rb and R1 form a T network. Use the delta-wye transformation to convert it into a pi network. Then the shunt resistors of the pi network can be ignored because the one across the input voltage has no effect if the input voltage source has zero output impedance, and the shunt across the - input of the opamp has no effect because that terminal is a virtual ground.
We are left with the series element of the pi network, which is 102000 ohms. Then the signal gain is 1000000/102000 = 500/51 = 9.80392
Very elegant method !
I'm still a little confused on the OPs question. I have the gist of what he's asking, but the precise question and the precise information he was given is not fully clear to me.
For example, "What are the two estimates one can make?". I'm not sure what this means. It seems we need to have the OPAMPs gain-bandwidth product to make any estimates, and once we have that, shouldn't we just have one estimate for the entire circuit GB product?
Maybe I'm missing something?
#### The Electrician
Joined Oct 9, 2007
2,786
#### uofmx12
Joined Mar 8, 2011
55
Very elegant method !
I'm still a little confused on the OPs question. I have the gist of what he's asking, but the precise question and the precise information he was given is not fully clear to me.
For example, "What are the two estimates one can make?". I'm not sure what this means. It seems we need to have the OPAMPs gain-bandwidth product to make any estimates, and once we have that, shouldn't we just have one estimate for the entire circuit GB product?
Maybe I'm missing something?
That is all that was asked. I left off no other information. Not an important question now.
#### t_n_k
Joined Mar 6, 2009
5,455
Thought I'd simulate the results for a couple of different op-amps ...
#### ftsolutions
Joined Nov 21, 2009
48
sometimes I think that professors need to actually look at a databook once in a while so that their questions are not quite so open to variable interpretation (or generate more questions). Maybe that was the intent?
https://edurev.in/course/quiz/attempt/7813_Electromagnetic-Wave-Propagation-MCQ-Test/24b22aa6-bd8a-4c7c-8a35-a584e18cc314
# Electromagnetic Wave Propagation - MCQ Test
## 20 Questions MCQ Test GATE ECE (Electronics) 2022 Mock Test Series | Electromagnetic Wave Propagation - MCQ Test
Description
This mock test on Electromagnetic Wave Propagation contains 20 multiple choice questions with solutions, giving a mix of easy and tough questions. It is part of the GATE ECE (Electronics) 2022 Mock Test Series on EduRev and is intended as practice for Railways entrance exams.
QUESTION: 1
Solution:
QUESTION: 2
Solution:
QUESTION: 3
### A non-magnetic medium has an intrinsic impedance 360∠30° Ω

Que: The loss tangent is
Solution:
QUESTION: 4
A non-magnetic medium has an intrinsic impedance 360∠30° Ω
Que: The Dielectric constant is
Solution:
QUESTION: 5
The amplitude of a wave traveling through a lossy nonmagnetic medium reduces by 18% every meter. The wave operates at 10 MHz and the electric field leads the magnetic field by 24°.
Que: The propagation constant is
Solution:
QUESTION: 6
The amplitude of a wave traveling through a lossy nonmagnetic medium reduces by 18% every meter. The wave operates at 10 MHz and the electric field leads the magnetic field by 24°.
Que: The skin depth is
Solution:
QUESTION: 7
A 60 m long aluminium (σ = 3.5 × 10⁷ S/m, μr = 1, εr = 1) pipe with inner and outer radii 9 mm and 12 mm carries a total current of 16 sin(10⁶πt) A. The effective resistance of the pipe is
Solution:
QUESTION: 8
A silver-plated brass waveguide operates at 12 GHz. If the thickness of the silver plating (σ = 6.1 × 10⁷ S/m, μr = εr = 1) must be at least 5δ, the minimum plating thickness required for the waveguide is
Solution:
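The solution field is empty in the extracted page, but this one can be reconstructed from the standard skin-depth formula δ = 1/√(πfμσ) (my computation, not EduRev's):

```python
# Skin depth of silver at 12 GHz, and the 5*delta minimum plating thickness.
import math

f     = 12e9             # Hz
sigma = 6.1e7            # S/m
mu    = 4e-7 * math.pi   # H/m (mu_r = 1)

delta = 1.0 / math.sqrt(math.pi * f * mu * sigma)
print(delta)      # ~5.9e-7 m  (about 0.59 micrometres)
print(5 * delta)  # ~2.9e-6 m  (about 2.9 micrometres minimum thickness)
```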
QUESTION: 9
A uniform plane wave in a lossy nonmagnetic medium has
The magnitude of the wave at z = 4 m and t = T/8 is
Solution:
QUESTION: 10
A uniform plane wave in a lossy nonmagnetic medium has
Que: The loss suffered by the wave in the interval 0 < z < 3 m is
Solution:
1 Np = 8.686 dB, 0.6 Np = 5.21 dB.
QUESTION: 11
Region 1, z < 0 and region 2, z > 0, are both perfect dielectrics. A uniform plane wave traveling in the uz direction has a frequency of 3 × 10¹⁰ rad/s. Its wavelengths in the two regions are λ1 = 5 cm and λ2 = 3 cm.
Que: On the boundary the reflected energy is
Solution:
QUESTION: 12
Region 1, z < 0 and region 2, z > 0, are both perfect dielectrics. A uniform plane wave traveling in the uz direction has a frequency of 3 × 10¹⁰ rad/s. Its wavelengths in the two regions are λ1 = 5 cm and λ2 = 3 cm.
Que: The SWR is
Solution:
QUESTION: 13
A uniform plane wave is incident from region 1 (μr = 1, σ = 0) onto free space. If the amplitude of the reflected wave is one-half that of the incident wave in region 1, then the value of εr is
Solution:
QUESTION: 14
A 150 MHz uniform plane wave is normally incident from air onto a material. Measurements yield a SWR of 3 and the appearance of an electric field minimum at 0.3λ in front of the interface. The impedance of material is
Solution:
QUESTION: 15
A plane wave is normally incident from air onto a semi-infinite slab of perfect dielectric (εr = 3.45) . The fraction of transmitted power is
Solution:
QUESTION: 16
Consider three lossless regions:
Que: The lowest frequency, at which a uniform plane wave incident from region 1 onto the boundary at z = 0 will have no reflection, is
Solution:
This frequency gives the condition
QUESTION: 17
Consider three lossless regions:
Que: If frequency is 50 MHz, the SWR in region 1 is
Solution:
At 50 MHz
QUESTION: 18
A uniform plane wave in air is normally incident onto a lossless dielectric plate of thickness λ/8 , and of intrinsic impedance η = 260 Ω. The SWR in front of the plate is
Solution:
QUESTION: 19
The E-field of a uniform plane wave propagating in a dielectric medium is given by
The dielectric constant of medium is
Solution:
QUESTION: 20
An electromagnetic wave from an underwater source with perpendicular polarization is incident on a water-air interface at an angle of 20° with the normal to the surface. For water assume εr = 81, μr = 1. The critical angle θc is
Solution:
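Question 20's solution is also missing; the critical angle follows directly from Snell's law with n = √εr = 9 for water (again my computation, not the page's):

```python
# Critical angle for a water-air interface with eps_r = 81 (n = 9).
import math

n_water = math.sqrt(81)
theta_c = math.degrees(math.asin(1.0 / n_water))
print(theta_c)  # ~6.4 degrees; a ray at 20 degrees from the normal therefore
                # undergoes total internal reflection
```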
https://mathoverflow.net/questions/186196/finding-a-norm-on-mathbbrx-such-that-the-natural-embedding-of-a-metric/186205

# Finding a norm on $\mathbb{R}^X$ such that the "natural" embedding of a metric space $X$ in $\mathbb{R}^X$ becomes an isometry
Let $(X,d)$ be a metric space and consider the function $T:X \to \mathbb{R}^X$ such that $T(x)(y) = 1$ if $y = x$ and $0$ for all other $y$. Is there a norm on $\mathbb{R}^X$ such that $T$ is an isometry? That is, $||T(a) - T(b)|| = d(a,b)$ for all $a,b \in X$.
I'm at a loss to know how to approach this. I didn't come up with any good ideas on how to define a proper norm, and I have absolutely no clue how to begin trying to prove such a norm could not exist. Any ideas?
• It seems to me like a far more natural choice of $T$ would be $T(x)(y)=d(x,y)$. If $X$ is bounded, this is an isometry with respect to the sup norm on the space of bounded functions from $X$ to $\mathbb{R}$. In general, if $X$ is infinite, I would not expect there to be any natural norm that is well-defined on all of $\mathbb{R}^X$ (in particular, there does not exist a norm that makes every projection continuous). Nov 4, 2014 at 17:51
• Yes, d(x,y) is a much better embedding of X in $R^X$, but my initial idea was the embedding described above, and even though it wasn't the best one, I found the question of the existence of a suitable norm (no matter how "weird" it could potentially be) quite interesting in itself.
– Ormi
Nov 4, 2014 at 18:17
• The answer to your question is "almost yes" but I'm curious to know in what context this question arose. Were you asked to find such a norm, or did you read somewhere that such a norm exists? Nov 4, 2014 at 18:47
• I was asked to prove that every metric space can be isometrically embedded in a Banach space, so that the image is linearly independent. My first idea was the one I described, since the linear independence is obvious there, and if a good norm could be found, any normed space can be embedded in a Banach space, so that would give the desired result. Kuratowski's embedding seems to do the job much better (though I'm still not sure about the linear independence), but I found it curious to see if it was possible to find the right norm here and what the technique for doing it would be.
– Ormi
Nov 4, 2014 at 19:03
• I'm not asking this question with a "please solve my homework" intention, because I'm going to just keep trying to work out the linear independence for Kuratowski's embedding. I'm just genuinely curious about what can be achieved with my initial idea, so if you could give me some hints for that, or refer me to somewhere where I can read up about it, I'd be grateful. I also thought about defining the norm on the subspace of functions with finite supports, could this be the subspace you mentioned?
– Ormi
Nov 4, 2014 at 19:47
Note that your embedding map $T$ actually takes values in the subspace $\newcommand{\R}{{\mathbb R}}$ $c_{00}(X;\R)$ of finitely supported functions $X\to\R$. If you merely want a norm on this subspace which makes $T$ an embedding, then this is possible via the Arens–Eells construction:
R. Arens, J. Eells, On embedding uniform and topological spaces. Pacific J. Math. 6 (1956) no. 3, 397-403.
(Arens and Eells proved a more general result: if you just want the embedding theorem for metric spaces then it is in Weaver's book Lipschitz spaces and also in some more recent work of e.g. Godefroy and Kalton. Google should provide links to various downloadable papers/preprints.)
The embedding is usually phrased in terms of sending $x\in X$ to $\delta_x \in c_{00}(X;\R)$, which is just another way of describing your map $T$. Of course the problem is defining the norm! One can either define it as an inf over various representations or a sup when paired with another more familiar Banach space. Let me choose the second way.
Start by fixing a basepoint $x_0\in X$. Given $f\in \R^X$ with $f(x_0)=0$ define its Lipschitz norm to be $$\Vert f\Vert_L = \sup_{x,y\in X; x\neq y} \frac{|f(x)-f(y)|}{d(x,y)} \in [0,\infty] .$$ Then, given $c=\sum_{x\in X} c_x \delta_x$ where only finitely many of the $c_x$ are non-zero, define $$\Vert c \Vert_{\bf AE} = \sup\left\{ \sum_{x\in X} c_x f(x) \;\colon\; f\in\R^X, \Vert f\Vert_L\leq 1, f(x_0)=0 \right\}$$.
The completion of $c_{00}(X;{\mathbb R})$ with respect to the norm $\Vert\cdot\Vert_{\bf AE}$ is the Arens–Eells space of $X$ (I'm using the terminology and borrowing the definition from Weaver's book.)
Let's check that $x\mapsto\delta_x$ is an isometry. Let $x,y\in X$ with $x\neq y$. If $f(x_0)=0$ and $\Vert f\Vert_L\leq 1$ then pairing $f$ with $\delta_x-\delta_y$ gives $f(x)-f(y)$, which is bounded in modulus by $d(x,y)$ owing to the Lipschitz condition. So $\Vert \delta_x - \delta_y \Vert_{\bf AE} \leq d(x,y)$. On the other hand, consider the function $$h(z)=d(z,y)- d(x_0,y) \quad(z\in X).$$ Clearly $h(x_0)=0$, and the triangle inequality for $d$ shows us that $\Vert h\Vert_L\leq 1$. Hence $$\Vert \delta_x -\delta_y \Vert_{\bf AE} \geq \vert h(x)-h(y)\vert = d(x,y).$$ Putting these together gives $\Vert \delta_x - \delta_y \Vert_{\bf AE} =d(x,y)$ as required.
For those who like the category-theoretic perspective: the Arens–Eells space can be viewed as a left adjoint to the functor ${\bf U}: {\sf Ban} \to {\sf Met}_0$ where:
• the first category has Banach spaces as objects and bounded linear maps as the morphisms;
• the second category has pointed metric spaces as objects, and basepoint-preserving Lipschitz maps as the morphisms;
• and given a Banach space $E$, ${\bf U}(E)$ is defined to be the underlying metric space of $E$, with $0_E$ as the basepoint.
Then the Arens–Eells embedding can be regarded as the unit of this adjunction.
In more "down-to-earth" language: given a pointed metric space $(X,x_0)$ let ${\bf AE}(X,x_0)$ be the Arens–Eells space as defined above. Then for any Banach space $E$ and any Lipschitz map $f: X \to E$ satisfying $f(x_0)=0$, there is a unique extension of $f$ to a continuous linear map $F: {\bf AE}(X,x_0) \to E$. Thus ${\bf AE}(X,x_0)$ can be viewed as the "free Banach space generated by $(X,x_0)$".
• Thank you for the answer. Unfortunately I can't quite find those books you suggested available anywhere. I guess I'll just familiarise myself with uniform spaces and then will be able to read the original Aren and Eell's paper.
– Ormi
Nov 4, 2014 at 22:20
• The Godefroy-Kalton paper is available at kaltonmemorial.missouri.edu/docs/sm2003c.pdf Nov 5, 2014 at 14:02
• But is it a norm? It seems that $\delta_{x_0}$ has norm 0. Mar 19, 2015 at 21:37
• In my definition I require $f(x_0)=0$ Mar 19, 2015 at 23:25
https://findfilo.com/maths-question-answers/consider-two-curves-c-1-y-2-4-sqrt-y-x-and-c-2-x-2zly

Consider two curves C_(1):y^(2)=4[sqrt(y)]x and C_(2):x^(2)=4[sqrt
Consider two curves C₁: y² = 4[√y]x and C₂ as above, where [·] denotes the greatest integer function. Then the area of the region enclosed by these two curves within the square formed by the lines … is
https://hal-cea.archives-ouvertes.fr/cea-01383761

# Multi-frequency study of the newly confirmed supernova remnant MCSNR J0512−6707 in the Large Magellanic Cloud
Abstract:

Aims. We present a multi-frequency study of the supernova remnant MCSNR J0512−6707 in the Large Magellanic Cloud.

Methods. We used new data from XMM-Newton to characterise the X-ray emission, and data from the Australia Telescope Compact Array, the Magellanic Cloud Emission Line Survey, and Spitzer to gain a picture of the environment into which the remnant is expanding. We performed a morphological study, determined radio polarisation and magnetic field orientation, and performed an X-ray spectral analysis.

Results. We estimated the remnant's size to be 24.9 (±1.5) × 21.9 (±1.5) pc, with the major axis rotated ~29° east of north. Radio polarisation images at 3 cm and 6 cm indicate a higher degree of polarisation in the northwest and southeast, tangentially oriented to the SNR shock front, indicative of an SNR compressing the magnetic field threading the interstellar medium. The X-ray spectrum is unusual as it requires a soft (~0.2 keV) collisional ionisation equilibrium thermal plasma of interstellar medium abundance, in addition to a harder component. Using our fit results and the Sedov dynamical model, we showed that the thermal emission is not consistent with a Sedov remnant. We suggested that the thermal X-rays can be explained by MCSNR J0512−6707 having initially evolved into a wind-blown cavity and now interacting with the surrounding dense shell. The origin of the hard component remains unclear. We could not determine the supernova type from the X-ray spectrum. Indirect evidence for the type is found in the study of the local stellar population and star formation history in the literature, which suggests a core-collapse origin.

Conclusions. MCSNR J0512−6707 likely resulted from the core collapse of a high-mass progenitor which carved a low-density cavity into its surrounding medium, with the soft X-rays resulting from the impact of the blast wave with the surrounding shell. The unusual hard X-ray component requires deeper and higher spatial resolution radio and X-ray observations to confirm its origin.
### Citation
P. J. Kavanagh, M. Sasaki, L. M. Bozzetto, S. D. Points, M. D. Filipović, et al.. Multi-frequency study of the newly confirmed supernova remnant MCSNR J0512−6707 in the Large Magellanic Cloud. Astronomy and Astrophysics - A&A, EDP Sciences, 2015, 583, pp.A121. ⟨10.1051/0004-6361/201526987⟩. ⟨cea-01383761⟩
http://georeference.org/doc/transverse_mercator.htm

# Transverse Mercator
A conformal cylindrical projection: the transverse aspect of the Mercator projection. Also known as Gauss Conformal (ellipsoidal form only), Gauss-Krüger (ellipsoidal form only) and Transverse Cylindrical Orthomorphic. Shown greatly zoomed in, since profound distortion occurs outside the target region.
Limitations
The accuracy of Transverse Mercator projections quickly decreases from the central meridian. Therefore, it is strongly recommended to restrict the longitudinal extent of the projected region to +/- 10 degrees from the central meridian. [The US Army standard allows +/- 24 degrees from the central meridian].
This requirement is met within all State Plane zones that use Transverse Mercator projections.
Scale
True along the central meridian or along two straight lines on the map equidistant from and parallel to the central meridian. Scale is constant along any straight line on the map parallel to the central meridian. These lines are only approximately straight for the projection of the ellipsoid, which is the case in Manifold when ellipsoidal Earth models (the standard choice) are used.
Scale increases with distance from the central meridian, and becomes infinite 90° from the central meridian.
Distortion
Infinitesimally small circles of equal size on the globe appear as circles on the map (indicating conformality) but increase in size away from the central meridian (indicating area distortion).
Usage
Many of the topographic and planimetric map quadrangles throughout the world at scales of 1:24,000 to 1:250,000. Basis for the Universal Transverse Mercator (UTM) grid and projection. Basis for the State Plane Coordinate System in U.S. States having predominantly north-south extent. Recommended for conformal mapping of regions having predominantly north-south extent.
Origin
Presented by Johann Heinrich Lambert (1728-1777) of Alsace in 1772. Formulas for ellipsoidal use developed by Carl Friedrich Gauss of Germany in 1822 and by L. Krüger of Germany, L.P. Lee of New Zealand, and others in the 20th century.
Options
Specifying latitude origin and longitude origin centers the map projection.
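As an aside (not part of the original Manifold documentation), a transverse Mercator projection with a chosen origin can be set up in a few lines with the pyproj library; the origin and scale values below are illustrative:

```python
from pyproj import Proj

# Transverse Mercator centred on an illustrative origin; k is the
# central-meridian scale factor (0.9996 is the UTM convention).
tmerc = Proj(proj="tmerc", lat_0=0, lon_0=-93, k=0.9996, ellps="WGS84")

x, y = tmerc(-92.5, 38.0)             # (lon, lat) degrees -> (easting, northing) metres
lon, lat = tmerc(x, y, inverse=True)  # and back
print(x, y, lon, lat)
```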
https://www.legisquebec.gouv.qc.ca/en/version/cr/q-2,%20r.%2026.1?code=se:14&history=20220704

### Q-2, r. 26.1 - Regulation respecting the operation of industrial establishments
14. A holder of an authorization to operate an industrial establishment shall keep an up-to-date record in which he shall enter any infringement of the contaminant discharge standards applicable to him and established by the Minister under the first paragraph of section 26 of the Act.
The record shall contain, for each infringement,
(1) the exact time at which the holder became aware of the infringement;
(2) the exact location and time at which the infringement occurred;
(3) the causes of the infringement and the circumstances in which it occurred; and
(4) the measures taken or planned by the holder to reduce or eliminate the effects of the infringement and to eliminate and prevent its causes.
A holder of an authorization shall send to the Minister, within 30 days of the end of each calendar month, a copy of the information entered in the record during the previous month.
The information in the record shall be conserved by the holder for at least 2 years from the date on which the information is sent to the Minister.
O.C. 601-93, s. 14; O.C. 871-2020, s. 7.
Previous version (before amendment by O.C. 871-2020):

14. A holder of a depollution attestation shall keep an up-to-date record in which he shall enter any infringement of the contaminant discharge standards applicable to him and established by the Minister under the first paragraph of section 31.15 of the Act.
The record shall contain, for each infringement,
(1) the exact time at which the holder became aware of the infringement;
(2) the exact location and time at which the infringement occurred;
(3) the causes of the infringement and the circumstances in which it occurred; and
(4) the measures taken or planned by the holder to reduce or eliminate the effects of the infringement and to eliminate and prevent its causes.
A holder of a depollution attestation shall send to the Minister, within 30 days of the end of each calendar month, a copy of the information entered in the record during the previous month.
The information in the record shall be conserved by the holder for at least 2 years from the date on which the information is sent to the Minister.
O.C. 601-93, s. 14.
http://rosa.unipr.it/FSDA/ScoreYJpn.html

# ScoreYJpn
Computes the score test for YJ transformation separately for pos and neg observations
## Syntax
• outSC=ScoreYJpn(y,X)example
• outSC=ScoreYJpn(y,X,Name,Value)example
## Description
outSC = ScoreYJpn(y, X) computes the score test for the Yeo-Johnson transformation, separately for positive and negative observations, with all optional arguments set to their defaults.

outSC = ScoreYJpn(y, X, Name, Value) computes the score test with additional options specified by one or more Name, Value pair arguments.
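For reference (an added note; the page assumes the reader knows it), the Yeo-Johnson transformation with parameter $\lambda$ acts on positive and negative observations through different power laws, which is why testing the two signs separately is informative:

```latex
\psi(\lambda,y)=
\begin{cases}
\dfrac{(y+1)^{\lambda}-1}{\lambda}, & y\ge 0,\ \lambda\neq 0,\\[6pt]
\log(y+1), & y\ge 0,\ \lambda=0,\\[6pt]
-\dfrac{(1-y)^{2-\lambda}-1}{2-\lambda}, & y<0,\ \lambda\neq 2,\\[6pt]
-\log(1-y), & y<0,\ \lambda=2.
\end{cases}
```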
## Examples
expand all
### Ex in which positive and negative observations require the same lambda.
rng('default')
rng(1)
n=100;
y=randn(n,1);
% Transform the value to find out if we can recover the true value of
% the transformation parameter
la=0.5;
ytra=normYJ(y,[],la,'inverse',true);
% Start the analysis
X=ones(n,1);
[outSC]=ScoreYJ(ytra,X,'intercept',0);
[outSCpn]=ScoreYJpn(ytra,X,'intercept',0);
la=[-1 -0.5 0 0.5 1]';
disp([la outSCpn.Score(:,1) outSC.Score outSCpn.Score(:,2)])
% Comment: if we consider the 5 most common values of lambda
% the value of the score test when lambda=0.5 is the only one which is not
% significant. Both values of the score test for positive and negative
% observations confirm that this value of the transformation parameter is
% OK for both sides of the distribution.
-1.0000 40.2357 24.1288 15.7149
-0.5000 20.4741 14.7964 9.9619
0 8.9009 6.9774 4.5230
0.5000 1.3042 0.2740 -1.0664
1.0000 -4.8797 -6.2978 -7.9574
### Ex in which positive and negative observations require different lambdas.
rng(1000)
n=100;
y=randn(n,1);
% Transform positive and negative values in different ways
lapos=0;
ytrapos=normYJ(y(y>=0),[],lapos,'inverse',true);
laneg=1;
ytraneg=normYJ(y(y<0),[],laneg,'inverse',true);
ytra=[ytrapos; ytraneg];
% Start the analysis
X=ones(n,1);
[outSC]=ScoreYJ(ytra,X,'intercept',0);
[outSCpn]=ScoreYJpn(ytra,X,'intercept',0);
la=[-1 -0.5 0 0.5 1]';
disp([la outSCpn.Score(:,1) outSC.Score outSCpn.Score(:,2)])
% Comment: if we consider the 5 most common values of lambda
% the value of the score test when lambda=0.5 is the only one which is not
% significant. However when lambda=0.5 the score test for negative
% observations is highly significant. The difference between the test for
% positive and the test for negative is 2.7597+0.7744=3.5341, which is very
% large. This indicates that the two tails need a different value of the
% transformation parameter.
-1.0000 89.5466 39.6867 28.3433
-0.5000 33.4110 24.9236 19.4072
0 10.3643 11.4446 10.8674
0.5000 -0.7744 0.8272 2.7597
1.0000 -9.8327 -9.4050 -6.8708
## Related Examples
expand all
### Extended score with all default options for the wool data.
XX=load('wool.txt');
y=XX(:,end);
X=XX(:,1:end-1);
% Score test using the five most common values of lambda.
% In this case (given that all observations are positive the extended
% score test for positive observations reduces to the standard score test
% while that for negative is equal to NaN.
[outSc]=ScoreYJpn(y,X);
### Extended score test using Darwin data given by Yeo and Johnson.
y=[6.1, -8.4, 1.0, 2.0, 0.7, 2.9, 3.5, 5.1, 1.8, 3.6, 7.0, 3.0, 9.3, 7.5, -6.0]';
n=length(y);
X=ones(n,1);
% Score and extended score test in the grid of lambda 1, 1.1, ..., 2
la=[1:0.1:2];
% Given that there are no explanatory variables the test must be
% called with intercept 0
outpn=ScoreYJpn(y,X,'intercept',0,'la',la);
out=ScoreYJ(y,X,'intercept',0,'la',la);
disp([la' outpn.Score(:,1) out.Score outpn.Score(:,2)])
## Input Arguments
### y — Response variable. Vector.
A vector with n elements that contains the response variable. It can be either a row or a column vector.
Data Types: single| double
### X — Predictor variables. Matrix.
Data matrix of explanatory variables (also called 'regressors') of dimension (n x p-1). Rows of X represent observations, and columns represent variables.
Missing values (NaN's) and infinite values (Inf's) are allowed, since observations (rows) with missing or infinite values will automatically be excluded from the computations.
Data Types: single| double
### Name-Value Pair Arguments
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Example: 'intercept',false , 'la',[0 0.5] , 'nocheck',1
### intercept —Indicator for constant term.true (default) | false.
Indicator for the constant term (intercept) in the fit, specified as the comma-separated pair consisting of 'Intercept' and either true to include or false to remove the constant term from the model.
Example: 'intercept',false
Data Types: boolean
### la —transformation parameter.vector.
It specifies for which values of the transformation parameter it is necessary to compute the score test. Default value of lambda is la=[-1 -0.5 0 0.5 1]; that is the five most common values of lambda
Example: 'la',[0 0.5]
Data Types: double
### nocheck —Check input arguments.scalar.
If nocheck is equal to 1 no check is performed on matrix y and matrix X. Notice that y and X are left unchanged. In other words the additional column of ones for the intercept is not added. As default nocheck=0.
Example: 'nocheck',1
Data Types: double
## Output Arguments
### outSC — Structure containing the following fields:
Score
score test. Matrix.
Matrix of size length(lambda)-by-2 which contains the value of the score test for each value of lambda specified in the optional input parameter la. The first column refers to the test for positive observations while the second column refers to the test for negative observations. If la is not specified, the number of rows of outSc.Score is equal to 5 and it will contain the values of the score test for the 5 most common values of lambda.
## References
Yeo, I.K. and Johnson, R. (2000), A new family of power transformations to improve normality or symmetry, "Biometrika", Vol. 87, pp. 954-959.
Atkinson, A.C. and Riani, M. (2018), Extensions of the score test, Submitted.
https://infoscience.epfl.ch/record/147334
### Abstract
The scaling properties of DNA knots of different complexities were studied by atomic force microscopy. Following two different protocols, DNA knots are adsorbed onto a mica surface in regimes of (i) strong binding, which induces a kinetic trapping of the three-dimensional (3D) configuration, and (ii) weak binding, which permits (partial) relaxation on the surface. In (i) the radius of gyration of the adsorbed DNA knot scales with the 3D Flory exponent ν = 0.60 within error. In (ii), we find ν ≈ 0.66, a value between the 3D and 2D (ν = 3/4) exponents. Evidence is also presented for the localization of knot crossings in 2D under weak adsorption conditions.
https://www.physicsforums.com/threads/linear-algebra-span-linear-independence-proof.640234/

# Linear Algebra: Span, Linear Independence Proof
• #1
## Homework Statement
Suppose v_1,v_2,v_3,...v_n are vectors such that v_1 does not equal the zero vector
and v_2 not in span{v_1}, v_3 not in span{v_1,v_2}, v_n not in span{v_1,v_2,...v_(n-1)}
show that v_1,v_2,v_3,....,V_n are linearly independent.
## Homework Equations
linear independence, span
## The Attempt at a Solution
he gave us a hint, which was to use induction
heres what i have so far
for the base case n=1
v_1 does not equal 0
so for cv_1=0, c must equal 0 making v_1 linearly independent
then assume v_n is linearly independent to show v_(n+1) is linearly independent
since v_n is linearly independent, then v_1,v_2,v_3,v_(n-1) are all linearly independent as well, my books states this as a remark to linear independence so i assume i can use it
and v_(n+1) not in span{v_1,...v_n}
therefore c_1v_1+c_2v_2+....+c_nv_n+c_(n+1)v_(n+1)=0 if either
c_(n+1)v_(n+1)=-c_1v_1-c_2v_2-.....-c_nv_n
or c_(n+1)v_(n+1)=0
the former isnt true since its not in the span of all the vectors before it so then the latter must hold true
this is where i started doubting myself because then i would have to show that v_(n+1) is not zero and im unsure on how to do that, also im a beginner with proofs so im not even sure if im doing this correctly using induction
• #2
Dick
Homework Helper
## Homework Statement
Suppose v_1,v_2,v_3,...v_n are vectors such that v_1 does not equal the zero vector
and v_2 not in span{v_1}, v_3 not in span{v_1,v_2}, v_n not in span{v_1,v_2,...v_(n-1)}
show that v_1,v_2,v_3,....,V_n are linearly independent.
## Homework Equations
linear independence, span
## The Attempt at a Solution
he gave us a hint, which was to use induction
heres what i have so far
for the base case n=1
v_1 does not equal 0
so for cv_1=0, c must equal 0 making v_1 linearly independent
then assume v_n is linearly independent to show v_(n+1) is linearly independent
since v_n is linearly independent, then v_1,v_2,v_3,v_(n-1) are all linearly independent as well, my books states this as a remark to linear independence so i assume i can use it
and v_(n+1) not in span{v_1,...v_n}
therefore c_1v_1+c_2v_2+....+c_nv_n+c_(n+1)v_(n+1)=0 if either
c_(n+1)v_(n+1)=-c_1v_1-c_2v_2-.....-c_nv_n
or c_(n+1)v_(n+1)=0
the former isnt true since its not in the span of all the vectors before it so then the latter must hold true
this is where i started doubting myself because then i would have to show that v_(n+1) is not zero and im unsure on how to do that, also im a beginner with proofs so im not even sure if im doing this correctly using induction
You are really close. You can say v_(n+1) is not the zero vector. The zero vector is in the span of any set of vectors. Try and restate your argument knowing that.
Last edited:
• #3
so can i just say since the zero vector is in the span of any set of vectors and v_(n+1) is not in the span of all the vectors before it then v_(n+1) is not the zero vector??
if thats correct then c_(n+1) must equal zero thus showing that all the vectors are linearly independent
• #4
Dick
Homework Helper
so can i just say since the zero vector is in the span of any set of vectors and v_(n+1) is not in the span of all the vectors before it then v_(n+1) is not the zero vector??
if thats correct then c_(n+1) must equal zero thus showing that all the vectors are linearly independent
Yes, that's pretty much it. If c_(n+1) is nonzero then v_(n+1) is in the span, contradiction. If c_(n+1) is zero then it shows they are linearly independent. Well done. You are better at proofs than you thought.
• #5
cool thanks!
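For readers skimming the thread, here is the completed induction step assembled from the posts above (an added summary, not one of the original posts):

```latex
\textbf{Induction step.} Assume $v_1,\dots,v_n$ are linearly independent and
$v_{n+1}\notin\operatorname{span}\{v_1,\dots,v_n\}$. Suppose
\[
c_1v_1+\dots+c_nv_n+c_{n+1}v_{n+1}=0 .
\]
If $c_{n+1}\neq 0$, then
\[
v_{n+1}=-\frac{1}{c_{n+1}}\,(c_1v_1+\dots+c_nv_n)\in\operatorname{span}\{v_1,\dots,v_n\},
\]
a contradiction. Hence $c_{n+1}=0$, and the induction hypothesis forces
$c_1=\dots=c_n=0$.
```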
https://www.chemeurope.com/en/encyclopedia/Static_light_scattering.html
# Static light scattering
Static light scattering is a technique in physical chemistry that uses the intensity traces recorded at a number of angles to derive information about the radius of gyration $\ R_g$ and molecular mass $\ M_w$ of a polymer or polymer complex, and the second virial coefficient $\ A_2$, which can indicate, for example, micellar formation (1-5).
A number of analyses have been developed to treat the scattering of particles in solution and derive the physical characteristics named above. In a simple static light scattering experiment, the average intensity of the sample, corrected for the scattering of the solvent, yields the Rayleigh ratio $\ R$ as a function of the angle or the wave vector $\ q$ as follows:
$\ R(\theta_{sample}) = R(\theta_{solvent})I_{sample}/I_{solvent}$
yielding the difference in the Rayleigh ratio, $\ \Delta R(\theta)$ between the sample and solvent:
$\ \Delta R(\theta) = R(\theta_{sample})-R(\theta_{solvent})$
In addition, the setup of the laser light scattering is corrected with a liquid of a known refractive index and Rayleigh ratio e.g. toluene, benzene or decalin. This is applied at all angles to correct for the distance of the scattering volume to the detector.
One must note that although data analysis can be performed without a so-called material constant $\ K$ defined below, the inclusion of this constant can lead to the calculation of other physical parameters of the system.
$\ K=4\pi^2 n_0^2 (dn/dc)^2/N_A\lambda^4$
where $\ (dn/dc)$ is the refractive index increment, $\ n_0$ is the refractive index of the solvent, $\ N_A$ is Avogadro's number ($6.023\times 10^{23}$) and $\ \lambda$ is the wavelength of the laser light reaching the detector. This equation is for linearly polarized light such as that from a He-Ne gas laser.
## Data Analyses
### Guinier plot
The scattered intensity can be plotted as a function of the angle to give information on $\ R_g$, which can simply be calculated using the Guinier approximation as follows:

$\ P(\theta) \approx 1 - (R_g^2/3)q^2, \qquad \ln P(\theta) \approx -(R_g^2/3)q^2$

where $\ P(\theta)$ is the form factor, $\ \Delta R(\theta)\propto P(\theta)$, and $\ q = 4\pi n_0 \sin(\theta/2)/\lambda$. Hence a plot of $\ \ln(\Delta R(\theta))$ versus $\ q^2$ will yield a slope $\ -R_g^2/3$. However, this approximation is only true for $\ qR_g<1$. Note that for a Guinier plot, the value of dn/dc and the concentration are not needed.
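A minimal numerical sketch of a Guinier analysis (my illustration; the angles, solvent index, wavelength, and the synthetic data are all assumed values):

```python
import numpy as np

n0, lam = 1.33, 632.8e-9                        # solvent index, He-Ne wavelength (m)
theta = np.deg2rad([30, 50, 70, 90, 110, 130])  # scattering angles
q = 4 * np.pi * n0 * np.sin(theta / 2) / lam    # wave vector, 1/m

# Synthetic "measured" Rayleigh ratios for a particle with Rg = 20 nm
Rg_true = 20e-9
dR = np.exp(-(Rg_true**2 / 3) * q**2)

# Slope of ln(dR) vs q^2 gives -Rg^2/3 (valid where q*Rg < 1)
slope, _ = np.polyfit(q**2, np.log(dR), 1)
print(np.sqrt(-3 * slope))  # ~2e-8 m, recovering Rg
```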
### Kratky plot
The Kratky plot is typically used to analyze the conformation of proteins, but can be used to analyze the random walk model of polymers. A Kratky plot can be made by plotting $\ sin^2(\theta/2)\Delta R(\theta)$ versus $\ sin(\theta/2)$ or $\ q^2\Delta R(\theta)$ versus $\ q$.
### Debye plot
This method is used to derive the molecular mass and 2nd virial coefficient,$\ A_2$, of the polymer or polymer complex system. The difference to the Zimm plot is that the experiments are performed using a single angle. Since only one angle is used (typically 90o), the $\ R_g$ cannot be determined as can be seen from the following equation:
$\ Kc/\Delta R(\theta) = 1/M_w + 2A_2c$
### Zimm plot
For polymers and polymer complexes which are of a monodisperse nature $\ PDI<0.3$ as determined by dynamic light scattering, a Zimm plot is a conventional means of deriving the parameters such as $\ R_g$, molecular mass $\ M_w$ and the second virial coefficient $\ A_2$.
One must note that if the material constant $\ K$ defined above is not implemented, a Zimm plot will only yield $\ R_g$. Hence implementing $\ K$ will yield the following equation:
$\ Kc/\Delta R(\theta)=1/{M_wP(\theta)}+2A_2c = (1/M_w)(1+q^2(R_g^2/3))+2A_2c$
Experiments are performed at several angles and at least 4 concentrations. Performing a Zimm analysis on a single concentration is known as a partial Zimm analysis and is only valid for dilute solutions of strong point scatterers. The partial Zimm analysis, however, does not yield the second virial coefficient, because of the absence of variable concentration in the sample data.
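Since the Zimm relation is linear in the basis {1, q², c}, the three parameters can be recovered by ordinary least squares over a grid of angles and concentrations. A self-contained sketch with synthetic data (all values below are assumed for illustration):

```python
import numpy as np

# Assumed "true" values used to generate synthetic data
Mw, Rg, A2 = 5e5, 30e-9, 2e-4

q = np.linspace(0.5e7, 2.5e7, 8)           # wave vectors, 1/m
c = np.array([0.5, 1.0, 2.0, 4.0]) * 1e-3  # concentrations
Q, C = np.meshgrid(q, c)
KcR = (1 / Mw) * (1 + Q**2 * Rg**2 / 3) + 2 * A2 * C  # Kc/dR(theta)

# Least squares: Kc/dR = a0 + a1*q^2 + a2*c with
# a0 = 1/Mw, a1 = Rg^2/(3*Mw), a2 = 2*A2
A = np.column_stack([np.ones(Q.size), Q.ravel()**2, C.ravel()])
a0, a1, a2 = np.linalg.lstsq(A, KcR.ravel(), rcond=None)[0]

print(1 / a0)                # Mw
print(np.sqrt(3 * a1 / a0))  # Rg
print(a2 / 2)                # A2
```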
## References
1. A. Einstein, Ann. Phys. 33 (1910), 1275
2. C.V. Raman, Indian J. Phys. 2 (1927), 1
3. P.Debye, J. Appl. Phys. 15 (1944), 338
4. B.H. Zimm, J. Chem. Phys 13 (1945), 141
5. B.H. Zimm, J. Chem. Phys 16 (1948), 1093
https://math.stackexchange.com/questions/2558153/if-fx-to-y-is-continuous-x-is-compact-and-fx-y-then-y-is-compact

# If $f:X\to Y$ is continuous, $X$ is compact and $f(X)=Y,$ then $Y$ is compact.
Let $$(X,\tau),(Y,\eta)$$ be topological spaces. If $$f:X\to Y$$ is continuous, $$f(X)=Y$$ and $$X$$ is compact, then $$Y$$ is compact.
We want to show that every open cover of $$Y$$ has a finite subcover. So let $$\{U_\alpha:\alpha\in I\}$$ be an open cover for $$Y$$. Thus $$Y=\bigcup_{\alpha\in I} U_\alpha$$
Now how can I use the hypothesis that $$f$$ is continuous?
I was thinking about this property of continuity: $$\forall U\in\eta,f^{-1}(U)\in\tau$$ but I don't know how to make a relation with the compactness of $$X$$.
As $$X$$ is compact, every open cover of $$X$$ has a finite subcover: if $$U\subseteq\tau$$ and $$X\subseteq\bigcup U$$, then there exist $$U_1,\dots,U_n\in U$$ such that $$X\subseteq\bigcup_i U_i$$.
Could anyone help me please?
• To use compactness of $X$, you need a collection of open sets which cover $X$. Hmm... I wonder where you're going to get one of those? All you have is a collection of open sets which cover $Y$. How can you turn open sets in $Y$ into open sets in $X$? Hmm... – Alex Kruckman Dec 9 '17 at 5:57
Write $X=\displaystyle\bigcup_{\alpha\in I}f^{-1}(U_{\alpha})$. Continuity ensures each $f^{-1}(U_{\alpha})$ is open.
• – Peter Szilas Dec 9 '17 at 7:41
• Can you explain how the hypothesis of surjective is applied? I can see that $f^{-1}(U_{\alpha}),\forall \alpha\in I$ are all in $X$. When it says $f^{-1}(U_{\alpha}),\forall \alpha\in I$ it means that $f^{-1}$ is evaluated in every set of Y? – user486983 Dec 9 '17 at 20:12
• in other words, after the evaluation in $f^{-1}$, Y will be 'empty' since all its elements have been evaluated? – user486983 Dec 9 '17 at 20:14
• In your below comment, you have written $f\left(\displaystyle\bigcup_{i}f^{-1}(U_{\alpha_{i}})\right)=\displaystyle\bigcup_{i}f(f^{-1}(U_{\alpha_{i}}))=\displaystyle\bigcup_{i}U_{\alpha_{i}}$. Surjective: $f(f^{-1}(A))=A$. – user284331 Dec 9 '17 at 20:16
• Sorry I don't quite follow your second question. – user284331 Dec 9 '17 at 20:17
Consider the collection of sets $f^{-1}(U_{\alpha})$ for all $\alpha$.
They are open as $f$ is continuous and cover $X$ since $f$ is surjective.
Since $X$ is compact take a finite subcover $f^{-1}(U_{\alpha_1}), \ldots, f^{-1}(U_{\alpha_n})$.
What can you now deduce about $U_{\alpha_1}, \ldots, U_{\alpha_n}$?
• As $X$ is compact, $X=\bigcup_if^{-1}(U_{\alpha_i} )$. Therefore $Y=f(X)\subset f (\bigcup_{i} f^{-1}(U_{\alpha_i}))$ $=\bigcup_i U_{\alpha_i}$ ? – user486983 Dec 9 '17 at 6:24
• Yes, almost correct, the only issue is that you need to write the index of the union clearly to indicate that is the finite union, if not, it is confusing if you are doing uncountably union or what. – user284331 Dec 9 '17 at 6:45
• @user284331 ok, thanks. – user486983 Dec 9 '17 at 7:31
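Assembled from the answers and the comments, the whole argument fits in one line (an added summary, not part of the original page):

```latex
X=\bigcup_{\alpha\in I} f^{-1}(U_\alpha)
\ \Longrightarrow\
X=\bigcup_{i=1}^{n} f^{-1}(U_{\alpha_i})
\ \Longrightarrow\
Y=f(X)=\bigcup_{i=1}^{n} f\bigl(f^{-1}(U_{\alpha_i})\bigr)=\bigcup_{i=1}^{n} U_{\alpha_i},
```

where the first implication uses compactness of $X$ (the preimages are open by continuity and cover $X$ by surjectivity), and the last equality uses $f(f^{-1}(A))=A$, valid since $f$ is surjective.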
http://www.conservapedia.com/Monty_Hall_problem

# Monty Hall problem
The Monty Hall Problem is a basic example problem in statistic and probability theory based on the premise of the television show Let's Make a Deal, originally hosted by Monty Hall.
## Problem Statement
A contestant on a game show is presented with three doors. Behind one of the doors is a car, and behind the other two doors are goats. The contestant chooses door 1. The host must then open a door to reveal a goat; he opens door 3. The host then gives the contestant a chance to switch his choice to door 2. If the contestant is trying to win the car, is it to his advantage to switch his choice?
## Solution
It may be tempting to say that the contestant neither gains nor loses anything if he switches. Since there are two closed doors, and one of them is the winning door, it may appear that the probability of winning is 1/2 whether the contestant switches or not. Such reasoning is incorrect; the contestant always has a higher probability of winning if he switches.
### Illustration using scenario outcomes
There are three possible scenarios in the problem.
1. The contestant initially chooses the door hiding the car. The host reveals one goat, leaving the other goat behind the remaining door. In this case, switching loses.
2. The contestant initially chooses the door hiding goat 1. The host must reveal goat 2. Switching wins the car.
3. The contestant initially chooses the door hiding goat 2. The host must reveal goat 1. Switching wins the car.
If the contestant switches, two scenarios can lead to wins; the other option loses. Hence, the contestant has a 2/3 probability of success if he switches, but only a 1/3 probability of winning if he does not.
### Solution using Bayes' theorem
The problem can also be solved by using Bayes' theorem to evaluate the posterior probability that the car is behind the initially chosen door, given that the host has opened another door.
Let "Prize x" be the event that the prize is behind door x, and let "Open x" be the event that Monty Hall opens door x. Then before the doors are open, P(Prize 1) = P(Prize 2) = P(Prize 3) = 1/3. P(Open 2 | Prize 1) = 1/2, as if the prize is behind door 1, Monty Hall has two doors he can open, as he must reveal a goat, not the prize behind door 1. P(Open 2) = P(Open 3) = 1/2, as there are two doors Monty Hall can open, both equally likely. Thus, using Bayes' theorem, we get:
$P(\text{Prize 1}\mid \text{Open 2}) = \frac{P(\text{Open 2}\mid \text{Prize 1})\,P(\text{Prize 1})}{P(\text{Open 2})} = \frac{\frac{1}{2}\cdot\frac{1}{3}}{\frac{1}{2}} = \frac{1}{3}$
That is, the probability that the prize is behind door 1, given that Monty opens door 2, is 1/3, so the probability that it is behind door 3 is 2/3. Thus, the contestant should switch. The logic applies equally if Monty opens door 3.
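A quick Monte Carlo simulation confirms the result (an illustrative addition, not part of the original article):

```python
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a goat door that is neither the pick nor the car
        host = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != host)
        wins += (pick == car)
    return wins / trials

print(play(switch=True))   # ~0.667
print(play(switch=False))  # ~0.333
```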
http://www.ma.utexas.edu/mediawiki/index.php?title=Starting_page&diff=cur&oldid=1022 | Starting page
## Latest revision as of 19:09, 23 September 2013
Welcome! This is the Nonlocal Equations Wiki (67 articles and counting)
In this wiki we collect several results about nonlocal elliptic and parabolic equations. If you want to know what a nonlocal equation refers to, a good starting point would be the Intro to nonlocal equations. If you want to find information on a specific topic, you may want to check the list of equations or use the search option on the left.
We also keep a list of open problems and of upcoming events.
Nonlocal equations arise in a number of applications:
* The denoising algorithms in [[nonlocal image processing]] are able to detect patterns in a better way than the PDE based models. A simple model for denoising is the [[nonlocal mean curvature flow]].
* The [[Boltzmann equation]] models the evolution of dilute gases and it is intrinsically an integral equation. In fact, simplified [[kinetic models]] can be used to derive the [[fractional heat equation]] without resorting to stochastic processes.
* In conformal geometry, the [[conformally invariant operators]] encode information about the manifold. They include fractional powers of the Laplacian.
* In oceanography, the temperature on the surface may diffuse through the atmosphere giving rise to the [[surface quasi-geostrophic equation]].
* Models for [[dislocation dynamics]] in crystals.
The wiki has an assumed bias towards regularity results and consequently to equations for which some regularization occurs. But we also include some topics which are tangentially related, or even completely unrelated, to regularity.
https://ora.ox.ac.uk/objects/uuid:1308effc-6fca-4daa-94f8-9a5c4fa44235 | Journal article
A HYPERBOLIC SYSTEM OF CONSERVATION LAWS FOR FLUID FLOWS THROUGH COMPLIANT AXISYMMETRIC VESSELS
Abstract:
We are concerned with the derivation and analysis of one-dimensional hyperbolic systems of conservation laws modelling fluid flows such as the blood flow through compliant axisymmetric vessels. Early models derived are nonconservative and/or nonhomogeneous with measure source terms, which are endowed with infinitely many Riemann solutions for some Riemann data. In this paper, we derive a one-dimensional hyperbolic system that is conservative and homogeneous. Moreover, there exists a unique global R...
Publication status:
Published
Authors
Chen, G-QG More by this author
Journal:
ACTA MATHEMATICA SCIENTIA
Volume:
30
Issue:
2
Pages:
391-427
Publication date:
2010-03-05
DOI:
ISSN:
0252-9602
URN:
uuid:1308effc-6fca-4daa-94f8-9a5c4fa44235
Source identifiers:
203601
Local pid:
pubs:203601
Language:
English
Keywords:
https://de.maplesoft.com/support/help/Maple/view.aspx?path=LinearAlgebra/Modular/IntegerDeterminant | IntegerDeterminant - Maple Help
LinearAlgebra[Modular]
IntegerDeterminant
determinant of an integer matrix using modular methods
Calling Sequence IntegerDeterminant(M)
Parameters
M - Square Matrix with integer entries
Description
• The IntegerDeterminant function computes the determinant of the integer matrix M. This is a programmer-level function: it does not perform argument checking, so argument checking must be handled external to this function.
• Note: The IntegerDeterminant command uses a probabilistic approach that achieves great gains for structured systems. Information on controlling the probabilistic behavior can be found in EnvProbabilistic.
• This function is used by the Determinant function in the LinearAlgebra package when a Matrix is determined to contain only integer entries.
• This command is part of the LinearAlgebra[Modular] package, so it can be used in the form IntegerDeterminant(..) only after executing the command with(LinearAlgebra[Modular]). However, it can always be used in the form LinearAlgebra[Modular][IntegerDeterminant](..).
Examples
A 3x3 matrix
> $\mathrm{with}\left(\mathrm{LinearAlgebra}\left[\mathrm{Modular}\right]\right):$
> $M≔\mathrm{Matrix}\left(\left[\left[2,1,3\right],\left[4,3,1\right],\left[-2,1,-3\right]\right]\right)$
${M}{≔}\left[\begin{array}{ccc}{2}& {1}& {3}\\ {4}& {3}& {1}\\ {-2}& {1}& {-3}\end{array}\right]$ (1)
> $\mathrm{IntegerDeterminant}\left(M\right)$
${20}$ (2)
A 100x100 matrix
> $M≔\mathrm{LinearAlgebra}\left[\mathrm{RandomMatrix}\right]\left(100\right):$
> $\mathrm{tt}≔\mathrm{time}\left(\right):$
> $\mathrm{IntegerDeterminant}\left(M\right)$
${38562295347802366242417909657285032281105091485000162871067163275296273582728190925949289361981964881806516849833008824879568403928373759144147382030798909099402726531205056808283212790472544339698767179236612577117605985054960334148934541347201762137455}$ (3)
> $\mathrm{time}\left(\right)-\mathrm{tt}$
${0.042}$ (4)
https://www.physics.uoguelph.ca/events/2018/11/uncovering-dynamics-spacetime | # Uncovering the Dynamics of Spacetime
MacN 415
## Speaker
William East, Perimeter Institute
## Abstract
With the ground-breaking gravitational wave detections from LIGO/Virgo, we have entered a new era where we can actually observe the action of strongly curved spacetime originally predicted by Einstein. Going hand in hand with this, there has been a renaissance in the theoretical and computational tools we use to understand and interpret the dynamics of gravity and matter in this regime. I will describe some of the rich behavior exhibited by sources of gravitational waves such as the mergers of black holes and neutron stars. I will also discuss some of the open questions, and what these events could teach us, not only about the extremes of gravity, but about the behavior of matter at extreme densities, the solution of astrophysical mysteries, and even the existence of new types of particles.
https://www.lessonplanet.com/teachers/ratio-and-proportion | Ratio and Proportion
This Ratio and Proportion lesson plan also includes:
Middle and high schoolers analyze the formation of ratios as they develop comparisons of two known quantities. These comparisons are used to formulate proportions and solve problems. Learner worksheet and teacher exemplar resources are included, with keys.
http://mathhelpforum.com/calculus/80699-angle-between-vectors.html | # Math Help - angle between the vectors
1. ## angle between the vectors
Please check my work. Thank you.
Find the dot product of the following vectors. Find the angle between the vectors.
v=3i+2j, w=-4i-5j
Dot product v x w=-12-10=-22
cos of angle = -22/sq. root of 13 x sq. root of 41=-0.9529
the angle = 17.64 degrees.
Please tell me where I made a mistake, because when I graph the vectors, the angle looks like it is about 100 degrees.
Thank you very much.
2. Originally Posted by oceanmd
Please check my work. Thank you.
Find the dot product of the following vectors. Find the angle between the vectors.
v=3i+2j, w=-4i-5j
Dot product v x w=-12-10=-22
cos of angle = -22/sq. root of 13 x sq. root of 41=-0.9529
the angle = 17.64 degrees.
Please tell me where I made a mistake, because when I graph the vectors, the angle looks like it is about 100 degrees.
Thank you very much.
a bit larger than 100 degrees, but it is obtuse ...
$\theta = \arccos(-0.9529) = 162.3^{\circ}$
inverse cosine of a negative value yields a quad II angle.
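A quick numerical check (illustrative Python, using the vectors from the question):

```python
import math

v = (3, 2)
w = (-4, -5)
dot = v[0] * w[0] + v[1] * w[1]                      # -22
cos_theta = dot / (math.hypot(*v) * math.hypot(*w))  # -22 / (sqrt(13)*sqrt(41))
print(math.degrees(math.acos(cos_theta)))            # ~162.3 degrees
```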
3. Thank you very much
http://slideplayer.com/slide/3191529/ | # Computational Genomics Lecture #3a
## Presentation on theme: "Computational Genomics Lecture #3a"— Presentation transcript:
Computational Genomics Lecture #3a
Multiple sequence alignment. Background Readings: Chapters 2.5, 2.7 in the text book, Biological Sequence Analysis, Durbin et al., 2001; Introduction to Computational Molecular Biology, Setubal and Meidanis, 1997; Chapter 15 in Gusfield's book; p. 81 in Kanehisa's book. Much of this class has been edited from Nir Friedman's lecture; changes made by Dan Geiger, then Shlomo Moran, and finally Benny Chor.
Ladies and Gentlemen Boys and Girls the holy grail Multiple Sequence Alignment
Multiple Sequence Alignment
S1 = AGGTC, S2 = GTTCG, S3 = TGAAC. The slide shows two possible alignments of these three strings.
Multiple Sequence Alignment
Aligning more than two sequences. Definition: Given strings S1, S2, …,Sk, a multiple (global) alignment maps them to strings S'1, S'2, …,S'k that may contain blanks, where: |S'1|= |S'2|=…= |S'k|, and the removal of spaces from S'i leaves Si.
Multiple alignments. We use a matrix to represent the alignment of k sequences, K=(x1,...,xk). We assume no column consists solely of blanks. The common scoring functions give a score to each column, and set: score(K) = ∑i score(column(i)). (The slide shows an example alignment matrix with rows x1, x2, x3, x4.) For k=10, a scoring function has 2^k − 1 > 1000 entries to specify. The scoring function is symmetric - the order of arguments need not matter: score(I,_,I,V) = score(_,I,I,V).
SUM OF PAIRS
A common scoring function is SP – sum of scores of the projected pairwise alignments: SP-score(K) = ∑_{i<j} score(xi,xj). Note that we need to specify score(-,-), because a column may have several blanks (as long as not all entries are blanks). In order for this score to be written as ∑_i score(column(i)), we set score(-,-) = 0. Why? Because these entries appear in the sum over columns but not in the sum of projected pairwise alignments (lines).
Definition: The sum-of-pairs (SP) value for a multiple global alignment A of k strings is the sum of the values of all projected pairwise alignments induced by A, where the pairwise alignment function score(xi,xj) is additive.
Example. Consider the following alignment:
a c - c d b -
- c - a d b d
a - b c d a d
Using the edit distance, with score(-,-) = 0, the three projected pairwise alignments have scores 3, 4, and 5, so this alignment has an SP value of 3 + 4 + 5 = 12.
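The SP value can be computed mechanically. A minimal Python sketch (assuming the unit-cost edit distance used above, with score(-,-) = 0):

```python
def edit_cost(a, b):
    if a == '-' and b == '-':
        return 0                      # score(-,-) = 0, as set above
    return 0 if a == b else 1         # match 0; mismatch or indel 1

def sp_score(rows, score=edit_cost):
    # rows: equal-length strings, with '-' marking blanks
    k = len(rows)
    return sum(score(col[i], col[j])
               for col in zip(*rows)
               for i in range(k) for j in range(i + 1, k))

print(sp_score(["ac-cdb-", "-c-adbd", "a-bcdad"]))  # 12
```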
Multiple Sequence Alignment
Given k strings of length n, there is a natural generalization of the dynamic programming algorithm that finds an alignment that maximizes SP-score(K) = ∑_{i<j} score(xi,xj). Instead of a 2-dimensional table, we now have a k-dimensional table to fill. For each vector i = (i1,..,ik), compute an optimal multiple alignment for the k prefix sequences x1(1,..,i1),...,xk(1,..,ik). The adjacent entries are those that differ in their index by one or zero. Each entry depends on 2^k − 1 adjacent entries.
The idea via k=2
Recall the notation and the following recurrence for V over the adjacent cells V[i,j], V[i+1,j], V[i,j+1], V[i+1,j+1]; for k=2 the recurrence is V[i+1,j+1] = max{ V[i,j] + score(x_{i+1}, y_{j+1}), V[i,j+1] + score(x_{i+1}, -), V[i+1,j] + score(-, y_{j+1}) }. Note that the new cell index (i+1,j+1) differs from the previous indices by one of the 2^k − 1 = 3 non-zero binary vectors (1,1), (1,0), (0,1).
Multiple Sequence Alignment
Given k strings of length n, there is a generalization of the dynamic programming algorithm that finds an optimal SP alignment. Computational Cost: Instead of a 2-dimensional table we now have a k-dimensional table to fill. Each dimension's size is n+1. Each entry depends on 2^k − 1 adjacent entries. Number of evaluations of the scoring function: O(2^k n^k)
Complexity of the DP approach
Number of cells: n^k. Number of adjacent cells: O(2^k). Computing the SP score for each column is O(k^2). Total run time is O(k^2 2^k n^k), which is totally unacceptable! Maybe one can do better?
But MSA is Intractable. Not much hope for a polynomial algorithm, because the problem has been shown to be NP-complete (the proof is quite tricky and recent; some previous proofs were bogus). Look at Isaac Elias's presentation of the NP-completeness proof. We need a heuristic or approximation to reduce time.
Multiple Sequence Alignment – Approximation Algorithm
Now we will see an O(k^2 n^2) multiple alignment algorithm for the SP-score that approximates the optimal solution's score by a factor of at most 2(1 − 1/k) < 2.
Star Alignments
Rather than summing up all pairwise alignments, select a fixed sequence S1 as a center, and set Star-score(K) = ∑_{j>1} score(S1,Sj). The algorithm to find the optimal alignment: at each step, add another sequence aligned with S1, keeping old gaps and possibly adding new ones (i.e. keeping the old alignment intact).
Multiple Sequence Alignment – Approximation Algorithm
Polynomial time algorithm. Assumption: the function δ is a distance function, i.e. it satisfies the triangle inequality δ(x,z) ≤ δ(x,y) + δ(y,z). Let D(S,T) be the value of the minimum global alignment between S and T.
Multiple Sequence Alignment – Approximation Algorithm (cont.)
Polynomial time algorithm: The input is a set Γ of k strings Si. 1. Find the "center string" S1 that minimizes ∑_{S∈Γ} D(S1,S). 2. Call the remaining strings S2, …,Sk. 3. Add a string to the multiple alignment that initially contains only S1 as follows: Suppose S1, …,Si-1 are already aligned as S'1, …,S'i-1. Add Si by running the dynamic programming algorithm on S'1 and Si to produce S''1 and S'i. Adjust S'2, …,S'i-1 by adding gaps to those columns where gaps were added to get S''1 from S'1. Replace S'1 by S''1.
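A minimal Python sketch of step 1, choosing the center string (here D is taken to be plain unit-cost edit distance; the merging steps are omitted):

```python
def edit_distance(s, t):
    n, m = len(s), len(t)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,              # deletion
                          d[i][j - 1] + 1,              # insertion
                          d[i - 1][j - 1] + (s[i - 1] != t[j - 1]))
    return d[n][m]

def choose_center(strings):
    # S1 minimizes the sum of distances to all the other strings
    return min(strings, key=lambda s: sum(edit_distance(s, t) for t in strings))

print(choose_center(["AGGTC", "GTTCG", "TGAAC"]))
```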
Multiple Sequence Alignment – Approximation Algorithm (cont.)
Time analysis: Choosing S1 means running the dynamic programming algorithm once for every pair, i.e. O(k^2) times, costing O(k^2 n^2). When Si is added to the multiple alignment, the length of S1 is at most i·n, so the time to add all k strings is ∑_i O(i n^2) = O(k^2 n^2).
Multiple Sequence Alignment – Approximation Algorithm (cont.)
Performance analysis: M - the alignment produced by this algorithm. d(i,j) - the distance M induces on the pair Si,Sj. M* - an optimal alignment. For all i, d(1,i) = D(S1,Si) (we performed an optimal alignment between S'1 and Si, and since score(-,-) = 0, the gap columns added later do not change this induced score).
Multiple Sequence Alignment – Approximation Algorithm (cont.)
Performance analysis: by the triangle inequality, d(i,j) ≤ d(i,1) + d(1,j) = D(S1,Si) + D(S1,Sj), so summing over all pairs gives SP(M) ≤ (k−1) ∑_{j>1} D(S1,Sj). By the definition of S1 as the center minimizing the sum of distances, SP(M*) ≥ (k/2) ∑_{j>1} D(S1,Sj). Combining the two bounds, SP(M)/SP(M*) ≤ 2(k−1)/k = 2(1 − 1/k).
Multiple Sequence Alignment – Approximation Algorithm
The algorithm relies heavily on the scoring function being a distance. It produced an alignment whose SP score is at most twice the minimum. What if the scoring function measured similarity? Can we get an efficient algorithm whose score is at least half the maximum? A third of the maximum? … We dunno!
Tree Alignments. Assume that there is a tree T=(V,E) whose leaves are the input sequences. We want to associate a sequence with each internal node. Tree-score(K) = ∑_{(i,j)∈E} score(xi,xj). Finding the optimal assignment of sequences to the internal nodes is NP-hard. We will meet this problem again in the study of phylogenetic trees (it is related to the parsimony problem).
Multiple Sequence Alignment Heuristics
Example - 4 sequences A, B, C, D. A. Perform all 6 pairwise alignments and find the scores; build a "similarity tree" (the slide shows the tree, ordered from distant to similar). B. Multiple alignment following the tree from A: align the most similar pairs, allowing gaps to optimize the alignment; align the next most similar pair; now "align the alignments", introducing gaps if necessary to optimize the alignment of (BD) with (AC).
The tree-based progressive method for multiple sequence alignment, used in practice (Clustal): (a) a tree (dendrogram) obtained by "cluster analysis"; (b) pairwise alignment of sequences' alignments. The slide shows the dendrogram for the sequences DEHUG3, DEPGG3, DEBYG3, DEZYG3, DEBSGF and the resulting aligned blocks. (modified from Speed's ppt presentation, see p. 81 in Kanehisa's book)
Visualization of Alignment
https://danielfilan.com/2021/07/05/simple_example_conditional_orthogonality_ffs.html | # Daniel Filan
## A simple example of conditional orthogonality in finite factored sets
Reader’s note: It looks like the math on my website is all messed up. To read it better, I suggest checking it out on the Alignment Forum.
Recently, MIRI researcher Scott Garrabrant has publicized his work on finite factored sets. It allegedly offers a way to understand agency and causality in a set-up like the causal graphs championed by Judea Pearl. Unfortunately, the definition of conditional orthogonality is very confusing. I’m not aware of any public examples of people demonstrating that they understand it, but I didn’t really understand it until an hour ago, and I’ve heard others say that it went above their heads. So, I’d like to give an example of it here.
In a finite factored set, you have your base set $S$, and a set $B$ of ‘factors’ of your set. In my case, the base set $S$ will be four-dimensional space - I’m sorry, I know that’s one more dimension than the number that well-adjusted people can visualize, but it really would be a much worse example if I were restricted to three dimensions. We’ll think of the points in this space as tuples $(x_1, x_2, x_3, x_4)$ where each $x_i$ is a real number between, say, -2 and 2[^1]. We’ll say that $X_1$ is the ‘factor’, aka partition, that groups points together based on what their value of $x_1$ is, and similarly for $X_2$, $X_3$, and $X_4$, and set $B = \{X_1, X_2, X_3, X_4\}$. I leave it as an exercise for the reader to check whether this is in fact a finite factored set. Also, I’ll talk about the ‘value’ of partitions and factors - technically, I suppose you could say that the ‘value’ of some partition at a point is the set in the partition that contains the point, but I’ll use it to mean that, for example, the ‘value’ of $X_1$ at point $(x_1, x_2, x_3, x_4)$ is $x_1$. If you think of partitions as questions where different points in $S$ give different answers, the ‘value’ of a partition at a point is the answer to the question.
[EDIT: for the rest of the post, you might want to imagine $S$ as points in space-time, where $x_4$ represents the time, and $(x_1, x_2, x_3)$ represent spatial coordinates - for example, inside a room, where you’re measuring from the north-east corner of the floor. In this analogy, we’ll imagine that there’s a flat piece of sheet metal leaning on the floor against two walls, over that corner. We’ll try conditioning on that - so, looking only at points in space-time that are spatially located on that sheet - and see that distance left is no longer orthogonal to distance up, but that both are still orthogonal to time.]
Now, we’ll want to condition on the set $E = \{(x_1, x_2, x_3, x_4) \mid x_1 + x_2 + x_3 = 1\}$. The thing with $E$ is that once you know you’re in $E$, $x_1$ is no longer independent of $x_2$, like it was before, since they’re linked together by the condition that $x_1 + x_2 + x_3 = 1$. However, $x_4$ has nothing to do with that condition. So, what’s going to happen is that conditioned on being in $E$, $X_1$ is orthogonal to $X_4$ but not to $X_2$.
In order to show this, we’ll check the definition of conditional orthogonality, which actually refers to this thing called conditional history. I’ll write out the definition of conditional history formally, and then try to explain it informally: the conditional history of $X$ given $E$, which we’ll write as $h(X \mid E)$, is the smallest set of factors $H \subseteq B$ satisfying the following two conditions:
1. For all $s,t \in E$, if $s \sim_b t$ for all $b \in H$, then $s \sim_X t$.
2. For all $s, t \in E$ and $r \in S$, if $r \sim_b s$ for all $b \in H$ and $r \sim_{b’} t$ for all $b’ \in B \setminus H$, then $r \in E$.
Condition 1 means that, if you think of the partitions as carving up the set $S$, then the partition $X$ doesn’t carve $E$ up more finely than if you carved according to everything in $h(X \mid E)$. Another way to say that is that if you know you’re in $E$, knowing everything in the conditional history of $X$ in $E$ tells you what the ‘value’ of $X$ is, which hopefully makes sense.
Condition 2 says that if you want to know if a point is in $E$, you can separately consider the ‘values’ of the partitions in the conditional history, as well as the other partitions that are in $B$ but not in the conditional history. So it’s saying that there’s no ‘entanglement’ between the partitions in and out of the conditional history regarding $E$. This is still probably confusing, but it will make more sense with examples.
Now, what’s conditional orthogonality? That’s pretty simple once you get conditional histories: $X$ and $Y$ are conditionally orthogonal given $E$ if the conditional history of $X$ given $E$ doesn’t intersect the conditional history of $Y$ given $E$. So it’s saying that once you’re in $E$, the things determining $X$ are different to the things determining $Y$, in the finite factored sets way of looking at things.
Let’s look at some conditional histories in our concrete example: what’s the history of $X_1$ given $E$? Well, it’s got to contain $X_1$, because otherwise that would violate condition 1: you can’t know the value of $X_1$ without being told the value of $X_1$, even once you know you’re in $E$. But that can’t be the whole thing. Consider the point $s = (0.5, 0.4, 0.4, 0.7)$. If you just knew the value of $X_1$ at $s$, that would be compatible with $s$ actually being $(0.5, 0.25, 0.25, 1)$, which is in $E$. And if you just knew the values of $X_2$, $X_3$, and $X_4$, you could imagine that $s$ was actually equal to $(0.2, 0.4, 0.4, 0.7)$, which is also in $E$. So, if you considered the factors in $\{X_1\}$ separately to the other factors, you’d conclude that $s$ could be in $E$ - but it’s actually not! This is exactly the thing that condition 2 is telling us can’t happen. In fact, the conditional history of $X_1$ given $E$ is $\{X_1, X_2, X_3\}$, which I’ll leave for you to check. I’ll also let you check that the conditional history of $X_2$ given $E$ is $\{X_1, X_2, X_3\}$.
Now, what’s the conditional history of $X_4$ given $E$? It has to include $X_4$, because if someone doesn’t tell you $X_4$ you can’t figure it out. In fact, it’s exactly $\{X_4\}$. Let’s check condition 2: it says that if all the factors outside the conditional history are compatible with some point being in $E$, and all the factors inside the conditional history are compatible with some point being in $E$, then it must be in $E$. That checks out here: you need to know the values of all three of $X_1$, $X_2$, and $X_3$ at once to know if something’s in $E$, but you get those together if you jointly consider those factors outside your conditional history, which is $\{X_1, X_2, X_3\}$. So looking at $(0.5, 0.4, 0.4, 0.7)$, if you only look at the values that aren’t told to you by the conditional history, which is to say the first three numbers, you can tell it’s not in $E$ and aren’t tricked. And if you look at $(0.5, 0.25, 0.25, 0.7)$, you look at the factors in $\{X_4\}$ (namely $X_4$), and it checks out, you look at the factors outside $\{X_4\}$ and that also checks out, and the point is really in $E$.
Hopefully this gives you some insight into condition 2 of the definition of conditional history. It’s saying that when we divide factors up to get a history, we can’t put factors that are entangled by the set we’re conditioning on on ‘different sides’ - all the entangled factors have to be in the history, or they all have to be out of the history.
In summary: $h(X_1 \mid E) = h(X_2 \mid E) = \{X_1, X_2, X_3\}$, and $h(X_4 \mid E) = \{X_4\}$. So, is $X_1$ orthogonal to $X_2$ given $E$? No, their conditional histories overlap - in fact, they’re identical! Is $X_1$ orthogonal to $X_4$ given $E$? Yes, they have disjoint conditional histories.
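If you want to check these claims mechanically, here is a brute-force Python sketch. It uses a discretized version of the example (integer coordinates in $\{-2, ..., 2\}$ rather than two decimal places, purely to keep the search small) and tests conditions 1 and 2 directly:

```python
from itertools import combinations, product

VALS = [-2, -1, 0, 1, 2]
B = [0, 1, 2, 3]                       # factor i asks "what is x_(i+1)?"
S = list(product(VALS, repeat=4))
E = {s for s in S if s[0] + s[1] + s[2] == 1}

def is_history(H, X):
    for s in E:
        for t in E:
            # Condition 1: within E, agreeing on all factors in H
            # pins down the value of X.
            if all(s[b] == t[b] for b in H) and s[X] != t[X]:
                return False
            # Condition 2: the point r agreeing with s on H and with t on
            # B \ H must also lie in E (since the factors are just the four
            # coordinates, that point is the mixed tuple below).
            r = tuple(s[b] if b in H else t[b] for b in B)
            if r not in E:
                return False
    return True

def conditional_history(X):
    # searching subsets in order of size finds a smallest valid H first
    for size in range(len(B) + 1):
        for H in combinations(B, size):
            if is_history(set(H), X):
                return set(H)

print(conditional_history(0))  # {0, 1, 2}: h(X_1 | E) = {X_1, X_2, X_3}
print(conditional_history(3))  # {3}:       h(X_4 | E) = {X_4}
```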
Some notes:
• In this case, $X_1$ was already orthogonal to $X_4$ before conditioning. It would be nice to come up with an example where two things that weren’t already orthogonal become so after conditioning. [EDIT: see my next post]
• We didn’t really need the underlying set to be finite for this example to work, suggesting that factored sets don’t really need to be finite for all the machinery Scott discusses.
• We did need the range of each variable to be bounded for this to work nicely. Because all the numbers need to be between -2 and 2, once you’re in $E$, if $x_1 = 2$ then $x_2$ can’t be bigger than 1, otherwise $x_3$ can’t go negative enough to get the numbers to add up to 1. But if they could all be arbitrary real numbers, then even once you were in $E$, knowing $x_1$ wouldn’t tell you anything about $x_2$, but we’d still have that $X_1$ wasn’t orthogonal to $X_2$ given $E$, which would be weird.
[^1]: I know what you’re saying - “That’s not a finite set! Finite factored sets have to be finite!” Well, if you insist, you can think of them as only the numbers between -2 and 2 with two decimal places. That makes the set finite and doesn’t really change anything. (Which suggests that a more expansive concept could be used instead of finite factored sets.)
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-6-section-6-6-logarithmic-and-exponential-equations-6-6-assess-your-understanding-page-465/2 | ## College Algebra (10th Edition)
$\color{blue}{x = \left\{-2, 0\right\}}$
Let $u = x+3$. Replacing $x+3$ with $u$ gives: $(x+3)^2-4(x+3)+3=0 \\u^2-4u+3=0$ Factor the trinomial to obtain: $(u-3)(u-1)=0$ Use the Zero-Product Property (which states that if $xy=0$, then either $x=0$ or $y=0$ or both are zero) by equating each factor to zero to obtain: $u-3=0 \text{ or } u-1=0$ Solve each equation to obtain: $u=3$ or $u=1$ Replace $u$ with $x+3$ to obtain: \begin{array}{ccc} u=3 &\text{or} &u=1 \\ x+3=3 &\text{or} &x+3=1 \\ x=3-3 &\text{or} &x=1-3 \\ x=0 &\text{or} &x=-2 \end{array} Therefore, the solutions to the given equation are: $\color{blue}{x = \left\{-2, 0\right\}}$
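The two roots can be double-checked with SymPy (an illustrative check, separate from the algebraic solution above):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.solve((x + 3)**2 - 4*(x + 3) + 3, x))  # [-2, 0]
```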
https://physics.stackexchange.com/questions/496498/solid-in-liquid-heat-transfer-temperatures-entropy-changes | # Solid in Liquid Heat Transfer (Temperatures/Entropy Changes)
Suppose we have a solid of temperature $$T_s$$ and heat capacity $$C_p$$ submerged into a pool of water that has temperature $$T_w$$.
If $$T_s \gt T_w$$ and the pressure of the isolated pool-solid system is constant, how much heat will the solid lose, how much heat will the pool gain and what will the entropy change be for every part?
Edit I do understand that the format of the question resembles that of a plain exercise. However, aid in a question as such will mostly help me understand what kind of a process this heat transmission is and how it could be described mathematically. That said, I have indeed worked on the question and reached a certain point of progress but would appreciate some help.
• Please show us what you got so far. Also, please understand that the final state and the changes in temperature and entropy of the solid and the water for this irreversible process are independent of the details of the process. – Chet Miller Aug 12 at 23:55
• Here is a link to a cookbook recipe for determining the change in entropy for an irreversible process such as this: physicsforums.com/insights/grandpa-chets-entropy-recipe – Chet Miller Aug 12 at 23:57
• For a start, thank you for your answer. Well, concerning my progress, from the formula δq=Cp*dT (for the solid), I got via integration that Δq=Cp*(Ts'-Ts)<0, where Ts' is the resulting temperature of the solid after its immersion in the lake. Moreover, I know that for the solid dS=δq/T which, via integration, gives the formula ΔS=Cp*ln(Ts'/Ts) – WannaBeScientist Aug 13 at 0:37
• I assume that you are looking for the steady state answer? If not, are you looking for a function of heat transfer vs. time? – David White Aug 13 at 0:45
• Well, @ChetMiller, I did grasp the idea, so thank you for your analysis. It is evident that by use of these formulas and the energy balance equations, I shall be able to determine the final temperatures thus solving the problem. – WannaBeScientist Aug 17 at 12:59
First the general case: When matter is heated or cooled under constant pressure, the amount of heat is $$Q = \Delta H$$ and the entropy change is $$\Delta S = \int \frac{dH}{T}$$ In the special case that there is no phase change and the heat capacity is constant, then $$Q = m\, C_P (T_f - T_i)$$ and $$\Delta S = m\, C_P \ln\frac{T_f}{T_i}$$ where $$T_i$$ and $$T_f$$ are the initial and final temperatures. The last two equations will solve your problem. First write the energy balance to find out what is the final temperature. Once you know the final temperature, calculate the heat and entropy.
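For concreteness, a small numerical sketch (Python; the masses, heat capacities and temperatures below are assumed for illustration, not given in the question):

```python
import math

m_s, c_s, T_s = 2.0, 500.0, 400.0     # solid: kg, J/(kg*K), K
m_w, c_w, T_w = 10.0, 4186.0, 300.0   # water: kg, J/(kg*K), K

# Energy balance: heat lost by the solid equals heat gained by the water
T_f = (m_s * c_s * T_s + m_w * c_w * T_w) / (m_s * c_s + m_w * c_w)

Q_solid = m_s * c_s * (T_f - T_s)     # negative: heat leaves the solid
Q_water = m_w * c_w * (T_f - T_w)     # positive, equal in magnitude

dS_solid = m_s * c_s * math.log(T_f / T_s)   # negative
dS_water = m_w * c_w * math.log(T_f / T_w)   # positive and larger
print(T_f, Q_solid + Q_water, dS_solid + dS_water)  # total entropy change > 0
```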
http://www.pearltrees.com/cleverchris/subfields/id2067529 | # Subfields
Thermodynamics. Annotated color version of the original 1824 Carnot heat engine showing the hot body (boiler), working body (system, steam), and cold body (water), the letters labeled according to the stopping points in Carnot cycle Thermodynamics applies to a wide variety of topics in science and engineering.
Historically, thermodynamics developed out of a desire to increase the efficiency and power output of early steam engines, particularly through the work of French physicist Nicolas Léonard Sadi Carnot (1824) who believed that the efficiency of heat engines was the key that could help France win the Napoleonic Wars.[1] Irish-born British physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics in 1854:[2] Statics. Statics is the branch of mechanics that is concerned with the analysis of loads (force and torque, or "moment") on physical systems in static equilibrium, that is, in a state where the relative positions of subsystems do not vary over time, or where components and structures are at a constant velocity.
When in static equilibrium, the system is either at rest, or its center of mass moves at constant velocity. (Figure: example of a beam in static equilibrium; the sum of force and moment is zero.) Vectors: a scalar is a quantity, such as mass or temperature, which only has a magnitude. A vector may be denoted by a bold-faced character V, an underlined character V, or a character with an arrow over it. Vectors can be added using the parallelogram law or the triangle law. Theory of relativity. The theory of relativity, or simply relativity in physics, usually encompasses two theories by Albert Einstein: special relativity and general relativity.[1] Concepts introduced by the theories of relativity include:
Quantum mechanics. In advanced topics of quantum mechanics, some of these behaviors are macroscopic (see macroscopic quantum phenomena) and emerge at only extreme (i.e., very low or very high) energies or temperatures (such as in the use of superconducting magnets).
For example, the angular momentum of an electron bound to an atom or molecule is quantized. Plasma (physics) Plasma (from Greek πλάσμα, "anything formed"[1]) is one of the four fundamental states of matter (the others being solid, liquid, and gas).
When air or gas is ionized plasma forms with similar conductive properties to that of metals. Plasma is the most abundant form of matter in the Universe, because most stars are in plasma state.[2][3] Artist's rendition of the Earth's plasma fountain, showing oxygen, helium, and hydrogen ions that gush into space from regions near the Earth's poles. The faint yellow area shown above the north pole represents gas lost from Earth into space; the green area is the aurora borealis, where plasma energy pours back into the atmosphere.[6] Plasma is loosely described as an electrically neutral medium of positive and negative particles (i.e. the overall charge of a plasma is roughly zero). Optics.
Optics is the branch of physics which involves the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it.[1] Optics usually describes the behaviour of visible, ultraviolet, and infrared light.
Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.[1] Some phenomena depend on the fact that light has both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics. When considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, photography, and medicine (particularly ophthalmology and optometry).
Mechanics. Classical versus quantum: The major division of the mechanics discipline separates classical mechanics from quantum mechanics.
Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's laws of motion in Principia Mathematica; Quantum Mechanics was discovered in 1925. Both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Quantum mechanics is of a wider scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances.
Mathematical physics. Mathematical Physics refers to development of mathematical methods for application to problems in physics.
The Journal of Mathematical Physics defines the field as: "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories".[1] Scope There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. Kinematics. Fluid dynamics. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems.
Electromagnetism. Electromagnetism, or the electromagnetic force is one of the four fundamental interactions in nature, the other three being the strong interaction, the weak interaction, and gravitation. This force is described by electromagnetic fields, and has innumerable physical instances including the interaction of electrically charged particles and the interaction of uncharged magnetic force fields with electrical conductors.
The word electromagnetism is a compound form of two Greek terms, ἢλεκτρον, ēlektron, "amber", and μαγνήτης, magnetic, from "magnítis líthos" (μαγνήτης λίθος), which means "magnesian stone", a type of iron ore. Dynamics (mechanics) Generally speaking, researchers involved in dynamics study how a physical system might develop or alter over time and study the causes of those changes.
In addition, Newton established the fundamental physical laws which govern dynamics in physics. By studying his system of mechanics, dynamics can be understood. In particular, dynamics is mostly related to Newton's second law of motion. Physical cosmology. Physical cosmology is the study of the largest-scale structures and dynamics of the Universe and is concerned with fundamental questions about its formation, evolution, and ultimate fate.[1] For most of human history, it was a branch of metaphysics and religion.
Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed us to understand those physical laws. Condensed matter physics. The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists identify themselves as condensed matter physicists,[2] and The Division of Condensed Matter Physics (DCMP) is the largest division of the American Physical Society.[3] The field overlaps with chemistry, materials science, and nanotechnology, and relates closely to atomic physics and biophysics.
Theoretical condensed matter physics shares important concepts and techniques with theoretical particle and nuclear physics.[4] References to "condensed" state can be traced to earlier sources. Classical mechanics. Aerodynamics. A vortex is created by the passage of an aircraft wing, revealed by smoke. Acoustics.
https://rd.springer.com/chapter/10.1007/978-1-4614-0391-3_4 | # Continuous Random Variables and Probability Distributions
• Jay L. Devore
• Kenneth N. Berk
Chapter
Part of the Springer Texts in Statistics book series (STS)
## Abstract
As mentioned at the beginning of Chapter 3, the two important types of random variables are discrete and continuous. In this chapter, we study the second general type of random variable that arises in many applied problems. Sections 4.1 and 4.2 present the basic definitions and properties of continuous random variables, their probability distributions, and their moment generating functions. In Section 4.3, we study in detail the normal random variable and distribution, unquestionably the most important and useful in probability and statistics. Sections 4.4 and 4.5 discuss some other continuous distributions that are often used in applied work. In Section 4.6, we introduce a method for assessing whether given sample data is consistent with a specified distribution. Section 4.7 discusses methods for finding the distribution of a transformed random variable.
## Bibliography
1. Bury, Karl, Statistical Distributions in Engineering, Cambridge University Press, Cambridge, England, 1999. A readable and informative survey of distributions and their properties.
2. Johnson, Norman, Samuel Kotz, and N. Balakrishnan, Continuous Univariate Distributions, vols. 1–2, Wiley, New York, 1994. These two volumes together present an exhaustive survey of various continuous distributions.
3. Nelson, Wayne, Applied Life Data Analysis, Wiley, New York, 1982. Gives a comprehensive discussion of distributions and methods that are used in the analysis of lifetime data.
4. Olkin, Ingram, Cyrus Derman, and Leon Gleser, Probability Models and Applications (2nd ed.), Macmillan, New York, 1994. Good coverage of general properties and specific distributions.
## Authors and Affiliations
1. Statistics Department, California Polytechnic State University, San Luis Obispo, USA
2. Department of Mathematics, Illinois State University, Normal, USA
https://www.physicsforums.com/threads/deflection-of-beams-problem.306872/ | # Deflection of beams problem
1. Apr 12, 2009
### Aerstz
1. The problem statement, all variables and given/known data
Deflection of simply-supported beam problem. Please see the attached image of an example problem from a textbook:
http://img14.imageshack.us/img14/619/hearnbeamproblem.png [Broken]
I have absolutely no idea why
A = - (wL^3)/24
and why
0 = (wL^4)/12 - (wL^4)/24 +AL
is used in the determination of A.
I especially do not know why (wL^4)/12 is used in the above equation.
I would have thought that A would represent the left beam support, where I also would have thought that x = 0. But, according to the example in the attached image, x at A = L.
Last edited by a moderator: May 4, 2017
2. Apr 12, 2009
### Aerstz
Another example of a deflection problem:
http://img18.imageshack.us/img18/2868/beerbeamproblem.png [Broken]
I am not sure why C1 = 1/2PL^2, but I have no idea why C2 = -1/3PL^3.
My maths is very weak; I think I just need some kind soul to gently walk me through this!
Last edited by a moderator: May 4, 2017
3. Apr 12, 2009
### Aerstz
And a third example:
http://img133.imageshack.us/img133/9210/be3amproblemthree.png [Broken]
I have no idea how 52.08 came to equal A.
Last edited by a moderator: May 4, 2017
4. Apr 13, 2009
### PhanthomJay
These terms A, B, C1 in the problems above refer to the constants of integration of the differential equation, as determined from the boundary conditions, and do not in any way refer to the support reactions. Boundary conditions are established at the ends of the beams based on the support condition. For example, if there is no deflection at a left end support, then the vertical deflection, y, equals 0 , when x, the horizontal distance from the left end, is 0. The values of the constants of integration are derived by carefully following the given steps in the examples.
5. Apr 13, 2009
### Aerstz
That's the problem; I am unable to follow the steps in the examples. The steps are too big; I need smaller steps to bridge the gaps.
To me, the examples seem to go from A straight to Z in one giant leap. I need to know B,C,D...etc., in between. Currently I am completely blind to what these intermediate steps are.
For example, and as I asked above in the first post: Why does A = - (wL^3)/24? What I mean to ask is, how was the (wL^3)/24 arrived at? I am extremely challenged with this 'simple' mathematics and I really need a kind soul to guide me through it very gently and slowly!
6. Apr 13, 2009
### PhanthomJay
I hear you. Looking at part of the first problem, step by step, inch by inch:
1. $$EI(y) = wLx^3/12 -wx^4/24 + Ax + B$$
Now since at the left end, at x = 0, we know there is no deflection at that point; thus, y = 0 when x =0, so substitute these zero values into Step 1 to obtain
2. $$0 = 0 - 0 + 0 + B$$, which yields
3. $$B = 0$$, thus Eq. 1 becomes
4. $$EI(y) = wLx^3/12 - wx^4/24 + Ax$$
Now since at the right end, at x = L, we also know that y = 0 , substitute X=L and y=0 into Eq. 4 to yield
5. $$0 = wL(L^3)/12 - wL^4/24 + AL$$ or
6. $$0 = w(L^4)/12 - wL^4/24 + AL$$ .
Now since the first term in Eq. 6 above, $$wL^4/12$$, can be rewritten as $$2wL^4/24$$, then
7. $$0 = (2wL^4/24 - wL^4/24) +AL$$, or
8. $$0 = wL^4/24 + AL$$. Now divide both sides of the equation by L, and thus
9. $$0 = wL^3/24 + A$$,
and now solve for A by subtracting $$(wL^3/24)$$ from both sides of the equation to get
10. $$0 -wL^3/24 = (wL^3/24 -wL^3/24) + A$$, or
11. $$-wL^3/24 = 0 + A$$
12. $$A = -wL^3/24$$
7. Apr 13, 2009
### Aerstz
Thank you very much, Jay. I appreciate you taking the time to lay the process out as you did!
It is much clearer now, so hopefully I should be able to get past this quagmire and actually progress with some work!
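For anyone following along, the boundary-condition algebra in post #6 can also be verified symbolically; a minimal SymPy sketch (not part of the original thread):

```python
import sympy as sp

w, L, x, A, B = sp.symbols('w L x A B')

# EI*y from Eq. 1 in post #6
EIy = w*L*x**3/12 - w*x**4/24 + A*x + B

# Boundary conditions: y = 0 at x = 0 and at x = L
sol = sp.solve([EIy.subs(x, 0), EIy.subs(x, L)], [A, B])
print(sol)  # {A: -L**3*w/24, B: 0}
```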
Similar Discussions: Deflection of beams problem
https://www.pollingindicator.com/p/method.html | ### Method
The Irish Polling Indicator combines all national election polls to one estimate of political support for each party. The creator is Tom Louwerse, Assistant Professor in Political Science, Leiden University (the Netherlands). Work on the Irish Polling Indicator started when he was working at the Department of Political Science at Trinity College Dublin.
The approach used by the Irish Polling Indicator is described in detail in this article published in Irish Political Studies (Open Access).
The polls used are published national surveys by Behaviour & Attitudes, Ipsos MRBI, Millward Brown, and Red C Research.
Basic idea
The basic idea of the Irish Polling Indicator is to take all available polling information together to arrive at the best estimate of current support for parties. Polls are great tools for measuring public opinion, but because only a limited sample is surveyed, we need to take into account sampling error. By combining multiple polls, we can reduce this error.
Moreover, with so many polls going around it is difficult to get a random sample of voters to participate in any one public opinion survey. And those that do participate might not have a clear idea who to vote for, something that is often adjusted for in polls. This may lead to structural differences between the results of different polling companies, so-called house effects.
But how do you average polls if one was conducted today, another one week ago and yet another three weeks ago? Just take the average of the three? Weight the more recent ones more heavily perhaps, but by how much exactly? The Polling Indicator assumes that public opinion changes every day, but only by so much. If Labour was on 10% last week and turns out to poll 18% today, we might question whether one of these polls (or even both) is an outlier that just by chance contains many more or fewer Labour voters than there are in the general public. The Polling Indicator assumes that support for a party can go up or down, but that radical changes are quite rare. If one party is generally more volatile, however, it will take this into account.
Minor parties and independents
The Irish Polling Indicator contains a rather large category of 'Others/independents', which lumps together minor parties and independent candidates. It is somewhat complicated to break this down in the context of the Irish Polling Indicator's model, as these groups have not been consistently reported in polls over the entire parliamentary term. As a way around this, support for these groups is analysed on a group-by-group basis. Basically, these analyses take into account all of the things the main model also looks at, but they do not guarantee that the support for these parties adds up exactly to the total for 'Others/independents'. In practice this is not too important. The breakdown includes three parties (AAA-PBP, Renua and Social Democrats) and Independent candidates/Independent Alliance, so there will be other, even smaller parties that are not included in the breakdown. Therefore the total for Others/Independents is likely to be higher than the sum of the four groups in the breakdown.
Model
This part is a little tricky and you probably need some statistical training to fully grasp it. The Irish Polling Indicator uses a Bayesian statistical model that builds on the work of several political scientists. It provides an estimate for each party's support on each day d. The percentage that this party gets in poll i is called $$P_i$$ - this is something we know. What we want to know is this party's support among the whole electorate ($$A_d$$) on each day. So how do we estimate this?
First, we know what happens if we draw many random samples from a population. So if we had a population with 20% support for Fine Gael, and drew a lot of random samples of size 1,000 from this population, most of these samples would yield a percentage for Fine Gael that would be pretty close to 20%. But some would be further away. In fact, we know that the values we might obtain in all of these samples follow a normal distribution with a mean of $$A_d$$ and a standard deviation of $$\sqrt{\frac{A_d (1-A_d)}{N}}$$. Here N stands for the sample size, 1,000 in our example. Since we do not know $$A_d$$, we approximate the standard deviation by using $$P_i$$ instead, so the first part of the model would look like this:
\begin{aligned} P_i & \sim \mathcal{N}(A_d, \sqrt{\frac{P_i (1-P_i)}{N}} ) \\ \end{aligned} The percentage that we find in the poll comes from a normal distribution with a mean of the real party support on the day the poll was held ($$A_d$$) and a standard deviation which mainly depends on the sample size ($$N$$). We don't know $$A_d$$, but are going to estimate it.
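The standard deviation above is easy to compute directly; a minimal Python sketch (the 20%/1,000 figures are just the example from the text):

```python
import math

def poll_standard_error(p, n):
    """Standard error of a party share p in a simple random sample of size n."""
    return math.sqrt(p * (1 - p) / n)

# A party on 20% in a sample of 1,000 respondents:
print(poll_standard_error(0.20, 1000))   # ~0.0126, i.e. about 1.3 percentage points
```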
The actual model is somewhat more complicated because it takes into account two other things. First, the standard deviation in the formula above (also called the standard error in this case) is only given by the simple formula above if we have a random sample. Real-world polls usually have a more complicated strategy to select a sample, which may increase the standard error. By weighting their respondents (e.g. if you have 75% men in the survey, you might want to weight that down to 50%) error might be reduced. Therefore we allow the standard deviation ($$F_i$$) to be a factor $$D$$ smaller or larger than we would have with a simple random sample.
Secondly, there might be structural differences between pollsters which cause a certain polling company to overestimate or underestimate a certain party. So, they sample from a distribution with mean $$M_d$$, which is in fact a combination of the real percentage $$A_d$$ plus their house effect $$H_{b_i}$$. If their house effect is 0, they are polling from the 'correct' distribution and we only have to deal with sampling error. If their house effect is large, they might structurally underestimate or overestimate a party.
This yields the following model (for each party):
\begin{aligned} (1)~~ P_i & \sim \mathcal{N}(M_d, F_iD) \\ (2)~~ M_d & = A_d + H_{b_i} \end{aligned}
The next part of the model relates a party's percentage today ($$A_d$$) to its percentage yesterday ($$A_{d-1}$$). As explained above, we expect that day-to-day change in support is limited. To ensure that party support sums to 100%, these day-to-day changes are modelled in terms of the log-ratio of support (where the first party is fixed at a log-ratio of 0). For each day, the support is allowed to change somewhat up or down:
\begin{aligned} (3)~~ LA_{d} & \sim\mathcal{\mathcal{N}}(LA_{d-1},\tau_{p}) \end{aligned}
We can calculate the vote share for each party based on these log-ratios as follows:
\begin{aligned} (4)~~ A_{d} & =\frac{exp(LA_{d})}{\sum exp(LA_{i})} \end{aligned}
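To see how equations (3) and (4) work together, the random walk on log-ratios can be simulated in a few lines. A minimal Python sketch (the number of parties, the number of days and the value of $$\tau_p$$ are illustrative, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(0)
parties, days, tau = 4, 100, 0.03
LA = np.zeros((days, parties))            # log-ratios; party 0 fixed at 0
for d in range(1, days):
    LA[d] = LA[d - 1]
    LA[d, 1:] += rng.normal(0, tau, parties - 1)   # eq. (3): daily random walk

A = np.exp(LA) / np.exp(LA).sum(axis=1, keepdims=True)  # eq. (4): vote shares
print(A[-1], A[-1].sum())                 # shares on the last day sum to 1
```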
Priors
For the statistical nerds: the Bayesian model has the following priors:
\begin{aligned} (5)~~ \tau_{p} & \sim Uniform(0,0.2) \\ (6)~~ H_{b} & \sim Uniform(-0.2,0.2) \\ (7)~~ D & \sim Uniform(\sqrt{\frac{1}{3}},\sqrt{3}) \\ \end{aligned}
The house effects $$H_{b_i}$$ are constrained to sum to zero over the companies $$b$$ to allow for model identification.
The model is estimated in JAGS 3.4. It is usually run with 6 chains, with 30,000 burn-in iterations and 60,000 iterations (150 thinning interval), leaving 2,400 MCMC draws from the posterior distribution. Although the model is slow-mixing, this seems to be adequate and a good balance between speed and accuracy.
Sources
Fisher, S. D., Ford, R., Jennings, W., Pickup, M., & Wlezien, C. (2011). From polls to votes to seats: Forecasting the 2010 British general election. Electoral Studies, 30(2), 250-257.
Jackman, S. (2005). Pooling the polls over an election campaign. Australian Journal of Political Science, 40(4), 499-517.
Pickup, M. A., & Wlezien, C. (2009). On filtering longitudinal public opinion data: Issues in identification and representation of true change. Electoral Studies, 28(3), 354-367.
Pickup, M., & Johnston, R. (2008). Campaign trial heats as election forecasts: Measurement error and bias in 2004 presidential campaign polls. International Journal of Forecasting, 24(2), 272-284.
Pickup, M. (2011). Methodology. http://pollob.politics.ox.ac.uk/documents/methodology.pdf
https://www.arxiv-vanity.com/papers/1310.4290/ | Extending Common Intervals Searching from Permutations to Sequences
Irena Rusu
L.I.N.A., UMR 6241, Université de Nantes, 2 rue de la Houssiniére,
BP 92208, 44322 Nantes, France
Abstract
Common intervals have been defined as a modelisation of gene clusters in genomes represented either as permutations or as sequences. Whereas optimal algorithms for finding common intervals in permutations exist even for an arbitrary number of permutations, in sequences no optimal algorithm has been proposed yet even for only two sequences. Surprisingly enough, when sequences are reduced to permutations, the existing algorithms perform far from the optimum, showing that their performances are not dependent, as they should be, on the structural complexity of the input sequences.
In this paper, we propose to characterize the structure of a sequence by the number of different dominating orders composing it (called the domination number), and to use a recent algorithm for permutations in order to devise a new algorithm for two sequences. Its running time is in , where are the sizes of the two sequences, are their respective domination numbers, is the alphabet size and is the number of solutions to output. This algorithm performs better as and/or reduce, and when the two sequences are reduced to permutations (i.e. when ) it has the same running time as the best algorithms for permutations. It is also the first algorithm for sequences whose running time involves the parameter size of the solution. As a counterpart, when and are of and respectively, the algorithm is less efficient than other approaches.
## 1 Introduction
One of the main assumptions in comparative genomics is that a set of genes occurring in neighboring locations within several genomes represent functionally related genes [galperin2000s, lathe2000gene, tamames2001evolution]. Such clusters of genes are then characterized by a highly conserved gene content, but a possibly different order of genes within different genomes. Common intervals have been defined to model clusters [UnoYagura], and have been used since to detect clusters of functionally related genes [overbeek1999use, tamames1997conserved], to compute similarity measures between genomes [BergeronSim, AngibaudHow] and to predict protein functions [huynen2000predicting, von2003string].
Depending on the representation of genomes in such applications, allowing or not the presence of duplicated genes, comparative genomics requires for finding common intervals either in sequences or in permutations over a given alphabet. Whereas the most general - and thus useful in practice - case is the one involving sequences, the easiest to solve is the one involving permutations. This is why, in some approaches [AngibaudApprox, angibaud2006pseudo], sequences are reduced to permutations by renumbering the copies of the same gene according to evolutionary based hypothesis. Another way to exploit the performances of algorithms for permutations in dealing with sequences is to see each sequence as a combination of several permutations, and to deal with these permutations rather than with the sequences. This is the approach we use here.
In permutations on elements, finding common intervals may be done in time where is the number of permutations and the number of solutions, using several algorithms proposed in the literature [UnoYagura, BergeronK, heber2011common, IR2013]. In sequences (see Table 1), even when only two sequences and of respective sizes and are considered, the best solutions take quadratic time. In a chronological order, the first algorithm is due to Didier [didier2003common] and performs in time and space. Shortly later, Schmidt and Stoye [schmidt2004quadratic] propose an algorithm which needs space, and note that Didier’s algorithm may benefit from an existing result to achieve running time whereas keeping the linear space. Both these algorithms use to define, starting with a given element of it, growing intervals of with fixed leftpoint and variable rightpoint, that are searched for into . Alternative approaches attempt to avoid multiple searches of the same interval of , due to multiple locations, by efficiently computing all intervals in and all intervals in before comparing them. The best running time reached by such an algorithm is in , obtained by merging the fingerprint trees proposed in [kolpakov2008new], where (respectively ) is the number of maximal locations of the intervals in (respectively ), and is the size of the alphabet. The value (and similarly for ) is in and does not exceed .
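For intuition, the naive quadratic test for the permutation case can be sketched as follows (illustrative Python; this is not one of the optimal algorithms cited above). Renumber one permutation so that the other becomes the identity; a window is then a common interval exactly when its values form a consecutive range:

```python
def common_intervals(p, q):
    """Naive O(n^2) enumeration of the common intervals (of size >= 2)
    of two permutations p and q of the same elements."""
    pos_in_q = {v: k for k, v in enumerate(q)}
    r = [pos_in_q[v] for v in p]           # p rewritten in q's coordinates
    out = []
    for i in range(len(r)):
        lo = hi = r[i]
        for j in range(i + 1, len(r)):
            lo, hi = min(lo, r[j]), max(hi, r[j])
            if hi - lo == j - i:           # values are consecutive integers
                out.append((i, j))
    return out

print(common_intervals([1, 4, 2, 3, 5], [1, 2, 3, 4, 5]))
```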
The running times of all the existing algorithms have at least two main drawbacks: first, they do not involve at all the number of output solutions; second, they insufficiently exploit the particularities of the two sequences and, in the particular case where the sequences are reduced to permutations, need quadratic time instead of the optimal time for two permutations on elements. That means that their performances insufficiently depend both on the inherent complexity of the input sequences, and on the amount of results to output. Unlike the algorithms dealing with permutations, the algorithms for sequences lack of criteria allowing them to decide when the progressive generation of a candidate must be stopped, since it is useless. This is the reason why their running time is independent of the number of output solutions. This is also the reason why when sequences are reduced to permutations the running time is very unsatisfactory.
The most recent optimal algorithm for permutations [IR2013] proposes a general framework for efficiently searching for common intervals and all of their known subclasses in permutations, and has a twofold advantage, not proposed by other algorithms. First, it permits an easy and efficient selection of the common intervals to output based on two types of parameters. Second, assuming one permutation has been renumbered to be the identity permutation, it outputs all common intervals with the same minimum value together and in increasing order of their maximum value. We use here these properties to propose a new algorithm for finding common intervals in two sequences. Our algorithm strongly takes into account the structure of the input sequences, expressed by the number of different dominating orders (which are permutations) composing the sequence ( for permutations). Consequently, it has a complexity depending both on this structure and on the number of output solutions. It runs in optimal time for two permutations on elements, is better than the other algorithms for sequences composed of few dominating orders and, as a counterpart, it performs less well as the number of composing dominating orders grows.
The structure of the paper is as follows. In Section 2 we define the main notions, including that of a dominating order, and give the results allowing us a first simplification of the problem. In Section 3 we propose our approach for finding common intervals in two sequences based on this simplification, for which we describe the general lines. In Sections 4, 5 and 6 we develop each of these general lines and prove correctness and complexity results. Section 7 is the conclusion.
## 2 Preliminaries
Let be a sequence of length over an alphabet . We denote the length of by , the set of elements in by , the element of at position , , by and the subsequence of delimited by positions (included), with , by . An interval of is any set of integers from such that there exist with and . Then is called a location of on . A maximal location of on is any location such that neither nor is a location of .
When is the identity permutation , we denote , which is also . Note that all intervals of are of this form, and that each interval has a unique location on . When is an arbitrary permutation on elements (denoted in this case), we denote by the function which associates with each element of its position in . For a subsequence of , we also say that it is delimited by its elements and located at positions and . These elements are the delimiters of (note the difference between delimiters, which are elements, and their positions).
We now define common intervals of two sequences and of respective sizes and :
###### Definition 1.
[didier2003common, schmidt2004quadratic] A common interval of two sequences and over is a set of integers that is an interval of both and . A -maximal location of is any pair of maximal locations of on (this is ) and respectively on (this is ).
###### Example 1.
Let
The problem we are concerned with is defined below. We assume, without loss of generality, that both sequences contain all the elements of the alphabet, so that .
-Common Intervals Searching
Input: Two sequences T and S of respective lengths n1 and n2 over an alphabet Σ={1,2,…,p}. Find all (T,S)-maximal locations of common intervals of T and S, without redundancy.
To address this problem, assume we add a new element (not in ) at positions 0 and of . Let Succ be the -size array defined for each position with by if and is the smallest with this property (if does not exist, then ). Call the area of the position on the sequence .
With
###### Definition 2.
[didier2003common] The order associated with a position of , , is the sequence of all elements in ordered according to their first occurrence in . We note .
###### Remark 1.
Note that:
may be empty, and this holds iff .
if is not empty, then its first element is .
if is not empty, then contains each element in exactly once, and is thus a permutation on a subset of .
In what follows, we consider that a pre-treatment has been performed on , removing every element which is equal to , , so as to guarantee that no empty order exists. In this way, the maximal locations are slightly modified, but this is not essential.
Let respectively be the positions in of the elements defining , i.e. the position in of their first occurrences in . Now, define to be the ordered sequence of these positions.
With
###### Definition 3.
Given a sequence and an interval of it, a maxmin location of on is any location of which is left maximal and right minimal, that is, such that neither nor is a location of on . A -maxmin location of is any pair of maxmin locations of on (this is ) and respectively on (this is ).
It is easy to see that maxmin locations and maximal locations are in bijection. We make this more precise as follows.
###### Claim 1.
The function associating with each maximal location of an interval in the maxmin location in such that is maximum with the properties and is a bijection. Moreover, if , then may be computed in when and are known.
Proof. It is easy to see that by successively removing from the rightmost element as long as it has a copy on its left, we obtain a unique interval such that is a maxmin location of , and is maximum with this property. The inverse operation builds when is given.
Moreover, if , then . Then, assuming and are known and we want to compute , we have two cases. If , then is the position of the last element in and thus is computed as . If , then is the position in of the element preceding , that is, .
In what follows, due to the preceding Claim, we solve the -Common Interval Searching problem by replacing maximal locations with maxmin locations. Using Claim 1, it is also easy to deduce that:
###### Claim 2.
[didier2003common] The intervals of are the sets with . As a consequence, the common intervals of and are the sets with , which are also intervals of .
With these precisions, Didier's approach [didier2003common] consists in considering each order and, in total time (reducible to according to [schmidt2004quadratic]), verifying whether the intervals with are also intervals of . Our approach avoids considering each order by defining dominating orders which contain other orders, with the aim of focusing the search for common intervals on each dominating order rather than spreading it over each of the orders it dominates.
We introduce now the supplementary notions needed by our algorithm.
###### Definition 4.
Let be two integers such that . We say that the order dominates the order if is a contiguous subsequence of . We also say that is dominated by .
Equivalently, is a contiguous subsequence of and the positions on of their common elements are the same.
###### Definition 5.
Let be such that . Order is dominating if it is not dominated by any other order of . The number of dominating orders of is the domination number of .
The set of orders of is provided with an order, defined as iff . For each dominating order of , its strictly dominated orders are the orders with such that is dominated by but is not dominated by any order preceding according to .
###### Example 4.
The orders of
For each dominating order (which is a permutation), we need to record the suborders which correspond to the strictly dominated orders. Only the left and right endpoints of each suborder are recorded, in order to limit the space and time requirements. Then, let the domination function of a dominating order be the partial function defined as follows.
F_d(s) := f if there is some i such that O_i is strictly dominated by O_d and B_d[s..f] = B_i.
For the other values of , is not defined. Note that , since by definition any dominating order strictly dominates itself. See Figure 2.
###### Example 5.
For
We know that, according to Claim 2, the common intervals of and must be searched among the intervals or, if we focus on one dominating order and its strictly dominated orders identified by , among the intervals for which is defined and . We formalize this search as follows.
###### Definition 6.
Let be a permutation on elements, and be a partial function such that and for all values for which is defined. A location of an interval of is valid with respect to if is defined for and .
###### Claim 3.
The -maxmin locations of common intervals of and are in bijection with the triples such that:
is a dominating order of
the location on of the interval is valid with respect to
is a maxmin location of on .
Moreover, the triple associated with satisfies : is the dominating order that strictly dominates , and .
Proof. See Figure 2. By Claim 2, the common intervals of and are the sets with which are intervals of . We note that the sets are not necessarily distinct, but their locations on , given by , are distinct. Then, the -maxmin locations of common intervals are in bijection with the pairs such that is a maxmin location of the interval on , which are themselves in bijection with the pairs such that the dominating order strictly dominates and is valid with respect to . More precisely, .
###### Corollary 1.
Each -maxmin location of a common interval of and is computable in time if the corresponding triple and the sequence are known.
Looking for the -maxmin locations of the common intervals of and thus reduces to finding the -maxmin locations of common intervals for each dominating order and for , whose locations on are valid with respect to the dominating function of . The central problem to solve now is thus the following one (replace by , by and by ):
-Guided Common Intervals Searching
Input: A permutation P on p elements, a sequence S of length n2 on the same set of p elements, a partial function F:{1,2,…,p}→{1,2,…,p} such that F(1)=p and w≤F(w) for all w such that F(w) is defined. Find all (P,S)-maxmin locations of common intervals of P and S whose locations on P are valid with respect to F, without redundancy.
As before, we assume w.l.o.g. that contains all the elements in , so that . Also, we denote . In this paper, we show (see Section 3, Theorem 1) that -Guided Common Intervals Searching may be solved in time and space, where is its number of solutions for and . This running time gives the running time of our general algorithm. However, an improved running time of for solving -Guided Common Intervals Searching would lead to a algorithm for the case of two sequences, improving the complexity of the existing algorithms.
## 3 The approach
The main steps for finding the maxmin locations of all common intervals in two sequences using the reduction to -Guided Common Intervals Searching are given in Algorithm 1. Recall that for and we respectively denote their sizes, and their dominating numbers. The algorithms for computing each step are provided in the next sections.
To make things clear, we note that the dominating orders (steps 1 and 2) are computed but never stored simultaneously, whereas dominated orders are only recorded as parts of their corresponding dominating orders, using the domination functions. The initial algorithm for computing this information, in step 1 (and similarly in step 2), is too time-consuming to be reused in steps 3 and 4 when dominating orders are needed. Instead, minimal information from steps 1 and 2 is stored, which allows us to recover the dominating orders in steps 3 and 4 with a more efficient algorithm. In such a way, we keep the space requirements in , and we perform steps 3, 4, 5 in global time , which is the best we may hope for.
In order to solve -Guided Common Intervals Searching, our algorithm cuts into dominating orders and then it looks for common intervals in permutations. This is done in steps 2, 4 and 5, as proved in the next theorem.
###### Theorem 1.
Steps 2, 4 and 5 in Algorithm 1 solve -Guided Common Intervals Searching with input , and . Moreover, these steps may be performed in global time and space.
Proof. Claim 3 and Corollary 1 insure that the -maxmin locations of common intervals of and , in this precise order, are in bijection with (and may be easily computed from) the triples such that is a dominating order of , is valid with respect to and is a maxmin location of on . Note that since is a permutation, each location is a maxmin location. Reducing these triples to those for which is valid w.r.t. , as indicated in step 5, we obtain the solutions of -Guided Common Intervals Searching with input , and .
In order to give estimations of the running time and memory space, we refer to results proved in the remaining of this paper. Step 2 takes time and space assuming the orders are not stored (as proved in Section 4, Theorem 3), step 4 needs time and space to successively generate the orders from information provided by step 2 (Section 5, Theorem 4), whereas step 5 takes time and space, where is the number of solutions for -Guided Common Intervals Searching (Section 6, Theorem 6).
With
###### Theorem 2.
Algorithm 1 solves the -Common Intervals Searching problem in time, where is the size of the solution, and space.
Proof. The correctness of the algorithm is insured by Claim 3 and Theorem 1.
We now discuss the running time and memory space, once again referring to results proved in the remaining sections. As proved in Theorem 3 (Section 4), Step 1 (and similarly Step 2) takes -time and space, assuming that the dominating orders are identified by their position on and are not stored (each of them is computed, used to find its dominating function and then discarded). The positions corresponding to dominating orders are stored in decreasing order in a stack . The values of the dominating functions are stored as lists, one for each dominating order , whose elements are the pairs , in decreasing order of the value . This representation needs a global memory space of .
In step 3 the progressive computation of the dominating orders is done in time and space using the sequence and the list of positions of the dominating orders. The algorithm achieving this is presented in Section 5, Theorem 4. For each dominating order of , the orders of are successively computed in global time and space by the same algorithm, and are only temporarily stored. Step 5 is performed for and in time and space, where is the number of output solutions for -Guided Common Intervals Searching (Section 6, Theorem 6).
Then the abovementioned running time of our algorithm easily follows.
To simplify the notations, in the next sections the size of is denoted by and its domination number is denoted . The vector Succ, as well as the vectors Prec and defined similarly later, are assumed to be computed once at the beginning of Algorithm 1.
## 4 Finding the dominating and dominated orders of T
This task is subdivided into two parts. First, the dominating orders are found as well as, for each of them, the set of positions such that strictly dominates . Thus , where is known but is not known yet. In the second part of this section, we compute . Note that in this way we never store any dominated order, but only its position on and on the dominating order strictly dominating it. This is sufficient to retrieve it from when needed.
### 4.1 Find the positions i such that Oi is dominating/dominated
As before, let be the first sequence, with an additional element (new character) at positions 0 and . Recall that we assumed that neighboring elements in are not equal, and that we defined Succ to be the -size array such that, for all with , if and is the smallest with this property (if does not exist, then ).
Given a subsequence of , slicing it into singletons means adding the character at the beginning and the end of , as well as a so-called -separator (denoted ) after each element of which is the letter . And this, for each . Call the resulting sequence on .
###### Example 7.
With
Once is obtained from , successive removals of the separators are performed, and the resulting sequence is still called . Let a slice of be any maximal interval of positions in (recall that ) such that no separator exists in between and with . Note that in this case a -separator exists after and a -separator exists after , because of the maximality of the interval . With as defined above, immediately after has been sliced, every position in forms a slice.
###### Example 8.
With and obtained by slicing into singletons as in the preceding example, let now
Slices are disjoint sets which evolve from singletons to larger and larger disjoint intervals using separator removals. Two operations are needed, defining - as the reader will easily note - a Union-Find structure:
• Remove a -separator, thus merging two neighboring slices into a new slice. This is set union, between sets representing neighboring intervals.
• Find the slice a position belongs to. In the algorithm we propose, this function is denoted by .
In the following, a position is resolved if its order has already been identified, either as a dominating or as a dominated order. Now, by calling Resolve() in Algorithm 2 successively for all (initially non-resolved), we find the dominating orders of and, for each of them, the positions such that is strictly dominated by . Note that the rightmost position of each dominated by is computed by the procedure RightEnd(), given in Section 4.2.
###### Example 9.
With
To prove the correctness of our algorithm, we first need two results.
###### Claim 4.
Order with is dominated by order iff and and .
Proof. Notice that, by definition, the positions in belong to .
“⇒”: Properties and are deduced directly from the definitions of an order and of order domination. If the condition is not true, then belongs to but not to (again by the definition of an order), a contradiction. Moreover, if, by contradiction, there is some , occurring respectively in positions and (choose each of them as small as possible with and ), then and , since only the first occurrence of is recorded in . But then and thus is not dominated by , a contradiction.
“⇐”: Let . Then the first occurrence of the element in is, by definition, at position . Moreover, by hypothesis and since , we deduce that the first occurrence of the element in is at position . Thus . It remains to show that is contiguous inside . This is easy, since any position in , not in but located between two elements of would imply the existence of an element whose first occurrence in belongs to ; this element would then belong to , and its position to , a contradiction.
###### Claim 5.
Let , and assume is dominating. Then is labeled as ”dominated by ” in Resolve() iff is strictly dominated by .
Proof. Note that may get a label during Resolve() iff is not resolved at the beginning of the procedure, in which case steps 2-3 of Resolve() insure that is labeled as ”dominating”. By hypothesis, we assume this label is correct. Now, is labeled as ”dominated by ” iff
(step 5), and
in step 7 we have that is not already resoved, and are in the same slice in the sequence where all the -separators satisfying and have been removed (step 6).
The latter of the two conditions is equivalent to saying that contains only characters equal to , and , that is, only characters whose first occurrence in belongs to . This is equivalent to (i.e. no character in appears before ) and (all characters in have a first occurrence not later than ). But then the three conditions on the right hand of Claim 4 are fulfilled, and this means is dominated by . Given that step 8 is executed only once for a given position , that is, when is labeled as resolved, the domination is strict.
Now, the correctness of our algorithm is given by the following claim.
###### Claim 6.
Assume temporarily that the procedure is empty. Then calling Resolve() successively for correctly identifies the dominating orders and, for each of them, the positions such that strictly dominates . This algorithm takes time and space.
Proof. We prove by induction on that, at the end of the execution of Resolve(, we have for all with :
is labeled as ”dominating” iff and is dominating
is labeled as ”dominated by ” iff and is dominating and is strictly dominated by .
Say that a position is used if is unresolved when Resolve() is called. We consider two cases.
Case . The position is necessarily used (no position is resolved yet), thus is labeled as ”dominating” (step 3) and no other order will have this label during the execution of Resolve(). Now, is really dominating, as there is no , and property is proved. To prove , recalling that and , we apply Claim 5. Note that since in step 7 is already resolved.
Case . Assume by induction the affirmation we want to prove is true before the call of Resolve(). If is not used, that means is already resolved when Resolve() is called, and nothing is done. Properties are already satisfied due to the position such that dominates .
Assume now that is used. Then is labeled ”dominating” and we have to show that is really dominating. If this was not the case, then would be strictly dominated by some with , and by the inductive hypothesis it would have been labeled as so (property for ). But this contradicts the assumption that is unresolved at the beginning of Resolve(). We deduce that holds. To prove property , notice that it is necessarily true for and the corresponding dominated orders, by the inductive hypothesis and since Resolve() does not relabel any labeled order. To finish the proof of
http://mathhelpforum.com/calculus/64827-derivatives-graphing.html | 1. ## Derivatives and Graphing
My computer-illiterate teacher assigned these primitive and horrible online quizzes... it seems that whatever combination I select I can never get the right answers. The site does a horrible job of explaining the concept, never explains why a problem is wrong, and only checks the first wrong question, making it impossible to check the others.
Can anyone help?
Derivatives and Graphing
Derivatives and Graphing
Derivatives and Graphing
2. Well, a point of inflection is where the concavity changes, so for that first link you posted the points of inflection would be found at A, C, E, and G.
Local maxima are the maximum points which are B and F.
f'(x) is increasing at point E. You would find this by graphing the derivative of the function and seeing where the slope increases.
I couldn't get the right answers for the question asking where f(x) decreases.
But the rest after that are pretty easy.
Hopefully this helped; ask if you need more help!
3. Originally Posted by Ineedhelpplz
My computer-illiterate teacher assigned these primitive and horrible online quizzes... it seems that whatever combination I select I can never get the right answers. The site does a horrible job of explaining the concept, never explains why a problem is wrong, and only checks the first wrong question, making it impossible to check the others.
Can anyone help?
Derivatives and Graphing
Derivatives and Graphing
Derivatives and Graphing
http://en.wikipedia.org/wiki/Carbonate_hardness | # Carbonate hardness
Carbonate hardness, or carbonate alkalinity, is a measure of the alkalinity of water caused by the presence of carbonate (CO₃²⁻) and bicarbonate (HCO₃⁻) anions. Carbonate hardness is usually expressed either as parts per million (ppm or mg/L), or in degrees KH (dKH) (from the German "Karbonathärte"). One degree KH is equal to 17.848 mg/L (ppm) CaCO₃, i.e. one degree KH corresponds to the carbonate and bicarbonate ions found in a solution of approximately 17.848 milligrams of calcium carbonate (CaCO₃) per litre of water (17.848 ppm). Both measurements (mg/L or KH) are usually expressed as mg/L CaCO₃ – meaning the concentration of carbonate expressed as if calcium carbonate were the sole source of carbonate ions.
Carbonate and bicarbonate anions contribute to alkalinity due to their basic nature, hence their ability to neutralize acid. Mathematically, the carbonate anion concentration is counted twice due to its ability to neutralize two protons, while bicarbonate is counted once as it can neutralize one proton. Therefore, bicarbonates that are present in the water are converted to an equivalent concentration of carbonates when determining KH. For example:
An aqueous solution containing 120 mg NaHCO3 (baking soda) per litre of water will contain 1.4285 mmol/L of bicarbonate, since the molar mass of baking soda is 84.007 g/mol. This is equivalent in carbonate hardness to a solution containing 0.71423 mmol/L of (calcium) carbonate, or 71.485 mg/L of calcium carbonate (molar mass 100.09 g/mol). Since one degree KH = 17.848 mg/L CaCO3, this solution has a KH of 4.0052 degrees.
$\text{CT (mEq/L)} = [\text{HCO}_3^-] + 2*[\text{CO}_3^{2-}]$
For water with a pH below 8.5, the CO₃²⁻ concentration will be less than 1% of the HCO₃⁻ concentration.
In a solution where only CO2 affects the pH, carbonate hardness can be used to calculate the concentration of dissolved CO2 in the solution with the formula CO2 = 3 × KH × 10^(7−pH), where KH is degrees of carbonate hardness and CO2 is given in ppm.
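Both conversions can be scripted directly from the definitions above; a minimal Python sketch (molar masses rounded as in the worked example):

```python
MG_PER_DKH = 17.848   # mg/L CaCO3 per degree KH

def dkh_from_nahco3(mg_per_l, m_nahco3=84.007, m_caco3=100.09):
    """Degrees KH of a solution of mg_per_l mg/L of NaHCO3 (baking soda)."""
    mmol_hco3 = mg_per_l / m_nahco3          # mmol/L of bicarbonate
    mg_caco3 = (mmol_hco3 / 2) * m_caco3     # equivalent mg/L of CaCO3
    return mg_caco3 / MG_PER_DKH

def co2_ppm(kh, ph):
    """Dissolved CO2 (ppm) when only CO2 affects the pH."""
    return 3 * kh * 10 ** (7 - ph)

print(dkh_from_nahco3(120))   # ~4.005 dKH, as in the example above
print(co2_ppm(4.0, 7.0))      # 12 ppm at 4 dKH and pH 7
```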
The term carbonate hardness is also sometimes used as a synonym for temporary hardness, in which case it refers to that portion of hard water that can be removed by processes such as boiling or lime softening, and then separation of water from the resulting precipitate.[1]
http://mathhelpforum.com/algebra/4197-plz-help.html | 1. ## plz help
1>Is sqrt(x^2)=x an identity (true for all values of x)?
2> For the equation x-sqrt(x)=0 , perform the following:
a) Solve for all values of x that satisfy the equation.
b) Graph the functions and on the same graph (by plotting points if necessary). Show the points of intersection of these two graphs.
c) How does the graph relate to part a?
2. Originally Posted by bobby77
1>Is sqrt(x^2)=x an identity (true for all values of x)?
No, it isn't true if x is less than 0
3. Originally Posted by bobby77
1>Is sqrt(x^2)=x an identity (true for all values of x)?
The square root is by definition a positive number or zero. x can be a negative number. So the answer is NO.
$\displaystyle \sqrt{(-3)^2}\neq (-3)$
Originally Posted by bobby77
2> For the equation x-sqrt(x)=0 , perform the following:
a) Solve for all values of x that satisfy the equation.
factorize the lhs of this equation:
$\displaystyle x-\sqrt{x}=0 \Longrightarrow \sqrt{x}(\sqrt{x}-1)=0$
Thus: $\displaystyle \sqrt{x}=0\ \vee\ \sqrt{x}=1$. Therefore x = 0 or x = 1.
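A quick way to check part a) symbolically (sympy should return both roots):

```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)
print(sp.solve(x - sp.sqrt(x), x))   # [0, 1]
```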
Originally Posted by bobby77
b) Graph the functions and on the same graph (by plotting points if necessary). Show the points of intersection of these two graphs.
c) How does the graph relate to part a?
I am not certain what you mean by functions (plural!). So I made a diagram of g(x)=x and w(x)=sqrt(x). The x-values of the intersections are the solutions of the equation.
Bye
EB
Attached Thumbnails
4. Originally Posted by bobby77
1>Is sqrt(x^2)=x an identity (true for all values of x)?
Let me add that $\displaystyle \sqrt{x^2}=|x|$
Prove: If $\displaystyle x\geq 0$ then $\displaystyle \sqrt{x^2}=x$ because $\displaystyle x^2=x^2$, and $\displaystyle |x|=x$ so that is true.
If $\displaystyle x<0$ then $\displaystyle \sqrt{x^2}=-x$ because $\displaystyle (-x)^2=x^2$ and $\displaystyle -x\geq 0$, and $\displaystyle |x|=-x$ so that is true.
Thus, $\displaystyle \sqrt{x^2}=|x|$
https://ediss.uni-goettingen.de/handle/11858/00-1735-0000-002E-E5A6-5 | # Investigation of the Structure and Dynamics of Multiferroic Systems by Inelastic Neutron Scattering and Complementary Methods
http://xavieranguera.com/phdthesis/node112.html | # Speech/Non-Speech Detection Block
Experiments for the speech/non-speech module were obtained for the SDM case to make it directly comparable with the baseline system results shown in the previous section, although in this case two slightly different development and test sets were used. The development set consisted of the RT02 + RT04s datasets (16 meeting excerpts) and the test set was the RT05s set (with the exception of the NIST meeting with faulty transcriptions). Forced alignments were used to evaluate the DER, MISS and FA errors.
In the development of the proposed hybrid speech/non-speech detector there are three main parameters that need to be set. These are the minimum duration for the speech/non-speech segments in both the energy block and the models block, and the complexity of the models in the models block.
The development set was used to first estimate the minimum duration of the speech and non-speech segments in the energy-based detector. In figure 6.1 one can see the MISS and FA scores for various durations (in # frames). While for a final speech/non-speech system one would choose the value that gives the minimum total error, in this case the goal is to obtain enough non-speech data to train the non-speech models in the second step. It is very important to choose the value with the smaller MISS so that the non-speech model is as pure as possible. This is so because the speech model is usually assigned more Gaussian mixtures in the modeling step, therefore a bigger FA rate does not influence it as much. It can be observed how in the range between duration 1000 and 8000 the MISS rate remains quite flat, which indicates how robust the system is to variations in the data. Even if a new dataset does not reach its minimum MISS rate at exactly the same duration value as the development set, any duration in this flat range will most probably still be a very plausible choice. A duration = 2400 (150ms duration) is chosen with MISS = 0.3% and FA = 9.5% (total 9.7%).
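As a rough illustration of this first stage, the sketch below thresholds frame energies and then enforces a minimum segment duration by merging runs that are too short (a minimal Python sketch; the names, threshold and durations are illustrative and not the actual thesis implementation):

```python
import numpy as np

def energy_speech_mask(frame_energies, threshold, min_frames):
    """Label frames as speech (True) by energy, then enforce a minimum
    run length by merging short runs into their neighbours."""
    mask = np.asarray(frame_energies) > threshold
    out = mask.copy()
    i = 0
    while i < len(mask):
        j = i
        while j < len(mask) and mask[j] == mask[i]:
            j += 1                       # [i, j) is a maximal run
        if j - i < min_frames:
            out[i:j] = not mask[i]       # flip runs shorter than min_frames
        i = j
    return out
```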
The same procedure is followed to select the minimum duration for the speech and non-speech segments decoded using the model-based decoder, using the minimum duration determined by the previous analysis of the energy-based detector. In figure 6.2 one can see the FA and MISS error rates for different minimum segment sizes (the same for speech and non-speech); such a curve is almost identical when using different # mixtures for the speech model, so a complexity of 2 Gaussian mixtures for the speech model and 1 for silence is chosen. In contrast to the energy-based system, this second step does output a final result to be used in the diarization system, therefore it is necessary to find the minimum segment duration that minimizes the total percent error. A minimum error of 5.6% was achieved using a minimum duration of 0.7 seconds. If the parameters in the energy-based detector that minimize the overall speech/non-speech error had been chosen (which is at 8000 frames, 0.5 seconds) instead of the current ones, the obtained scores would have had a minimum error of 6.0% after the cluster-based decoder step.
Table 6.3: Speech/non-speech errors on development and test data
sp/nsp system         RT02+RT04s                 RT05s
                      MISS    FA      total      MISS    FA      total
All-speech system     0.0%    11.4%   11.4%      0.0%    13.2%   13.2%
Pre-trained models    1.9%    3.2%    5.1%       1.9%    4.6%    6.5%
hybrid (1st part)     0.4%    9.7%    10.1%      0.1%    10.4%   10.5%
hybrid system (all)   2.4%    3.2%    5.6%       2.8%    2.1%    4.9%
In table 6.3 results are presented for the development and evaluation sets using the selected parameters, taking into account only the MISS and FA errors from the proposed module. Used as comparison, the "all-speech" system shows the total percentage of data labelled as non-speech in the reference (ground truth) files. After obtaining the forced alignment from the STT system, there existed many non-speech segments with a very small duration due to the strict application of the 0.3s minimum pause duration rule to the forced alignment segmentations. The second row shows the speech/non-speech results using the SRI speech/non-speech system (Stolcke et al., 2005), which was developed using training data coming from various meeting sources, with its parameters optimized using the development data presented here and the forced alignment reference files. If tuned using the hand-annotated reference files provided by NIST for each data set, it obtains a much bigger FA rate, possibly because it is more complicated in hand-annotated data to follow the 0.3s silence rule. The third and fourth rows show the results for the presented algorithm. The third row shows the errors in the intermediate stage of the algorithm, after the energy-based decoding. These are not comparable with the other systems, as the optimization here is done with respect to the MISS error, not the TOTAL error. The fourth row shows the result of the final output from both systems together.
Although the speech/non-speech error rate obtained for the development set is worse than what is obtained using the pre-trained system, it is almost 25% relative better on the evaluation set. This changes when considering the final DER. In order to test the usability of such speech/non-speech output for the speaker diarization of meetings data, the baseline system was used interposing each of the three speech/non-speech modules shown in table 6.3.
Table 6.4: DER using different speech/non-speech systems
sp/nsp system        Development   Evaluation
All-speech           27.50%        25.17%
Pre-trained models   19.24%        15.53%
hybrid system        16.51%        13.97%
It is seen in table 6.4 that the use of any speech/non-speech detection algorithm improves the performance of the speaker diarization system. Both systems perform much better than just using the diarization system alone. This is due to the agglomerative clustering technique, which starts with a large number of speaker clusters and tries to converge to an optimum number of clusters via cluster-pair comparisons. As non-speech data is distributed among all clusters, the more non-speech they contain, the less discriminative the comparison is, leading to more errors.
In both the development and evaluation sets the final DER of the proposed speech/non-speech system outperforms the system using pre-trained models by 14% relative (development) and 10% relative (evaluation). It can be seen how the DER on the development set is much better than that of the pre-trained system, even though the proposed system has a worse speech/non-speech error. This indicates that the proposed system obtains a set of speech/non-speech segments that are more tightly coupled with the diarization system.
http://math.stackexchange.com/questions/192573/continuity-on-open-sets | # continuity on open sets
Let $f:A\rightarrow\Bbb R$ with $A\subset\Bbb R$, and suppose that for every $c \in \Bbb R$ the sets
$E^-=\{x \in A :f(x)< c\}$ and $E^+=\{x\in A:f(x)>c\}$ are open (in $A$). Show that $f:A\rightarrow \Bbb R$ is continuous.
What are you asking? – Cameron Buie Sep 7 '12 at 23:30
## 2 Answers
Let $a,b \in \mathbb{R}$ such that $a < b$.
$E^{+}_a = \{x \in A : f(x) > a\} = f^{-1}((a, \infty))$
and
$E^-_b = \{x \in A : f(x) < b\} = f^{-1}((-\infty, b))$
are open by the assumption. Hence
$f^{-1}((a,b)) = f^{-1}((a, \infty) \cap (-\infty, b)) = f^{-1}((a,\infty)) \cap f^{-1}((-\infty, b))$
is open. Every open subset $U$ of $\mathbb{R}$ is a union of open intervals of the form $(a,b)$, so $f^{-1}(U)$ is open. Since the inverse image of every open set under $f$ is open, $f$ is continuous.
Let $a, b \in \mathbb{R}$ such that $a < b$. Then $f^{-1}((a, b)) = f^{-1}((a,\infty)) \cap f^{-1}((-\infty, b))$ is open, since both sets on the right are open by the assumption. Any open subset $U$ of $\mathbb{R}$ is a union of intervals of the form $(a, b)$. Hence $f^{-1}(U)$ is open. Hence $f$ is continuous.
http://swmath.org/?term=supersymmetric%20gauge%20theory
• # LanHEP
• Referenced in 20 articles [sw00502]
• automatic generation of Feynman rules in field theory Version 3.0. The LanHEP program version ... derivative and strength tensor for gauge fields. Supersymmetric theories can be described using the superpotential...
• # Spheno
• Referenced in 41 articles [sw09544]
• supersymmetric particle spectrum within a high scale theory, such as minimal supergravity, gauge mediated supersymmetry ... calculate decay widths and branching ratios of supersymmetric particles as well as of Higgs bosons...
• # SARAH
• Referenced in 36 articles [sw06472]
• addition, the tadpole equations are calculated, gauge fixing terms can be given and ghost interactions ... integrated out and non-supersymmetric limits of the theory can be chosen. CP and flavor...
• # SUSY LATTICE
• Referenced in 4 articles [sw16830]
• four-dimensional 𝒩=4 supersymmetric Yang-Mills theory with gauge group SU (N). The lattice ... large-scale framework despite the different target theory. Many routines are adapted from an existing ... object oriented code for simulating supersymmetric Yang-Mills theories”, ibid ... basic workflow for non-experts in lattice gauge theory. We discuss the parallel performance...
• # PyR@TE
• Referenced in 8 articles [sw16617]
• group equations for a general gauge field theory have been known for quite some time ... once the user specifies the gauge group and the particle content of the model ... renormalization group equations for several non-supersymmetric extensions of the Standard Model and found some... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8107593655586243, "perplexity": 2415.0833176023875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541308604.91/warc/CC-MAIN-20191215145836-20191215173836-00296.warc.gz"} |
http://www.edurite.com/kbase/graphical-representation-of-motion | #### • Class 11 Physics Demo
Explore Related Concepts
# graphical representation of motion
From Wikipedia
Axis-angle representation
The axis-angle representation of a rotation, also known as the exponential coordinates of a rotation, parameterizes a rotation by two values: a unit vector indicating the direction of a directed axis (straight line), and an angle describing the magnitude of the rotation about the axis. The rotation occurs in the sense prescribed by the right-hand rule.
This representation evolves from Euler's rotation theorem, which implies that any rotation or sequence of rotations of a rigid body in a three-dimensional space is equivalent to a pure rotation about a single fixed axis.
The axis-angle representation is equivalent to the more concise rotation vector, or Euler vector representation. In this case, both the axis and the angle are represented by a non-normalized vector codirectional with the axis whose magnitude is the rotation angle.
Rodrigues' rotation formula can be used to apply a rotation, represented by an axis and an angle, to a vector.
## Uses
The axis-angle representation is convenient when dealing with rigid body dynamics. It is useful to both characterize rotations, and also for converting between different representations of rigid body motion, such as homogeneous transformations and twists.
### Example
Say you are standing on the ground and you pick the direction of gravity to be the negative $z$ direction. Then if you turn to your left, you will travel $\tfrac{\pi}{2}$ radians (or 90 degrees) about the $z$ axis. In axis-angle representation, this would be

$$\langle \mathrm{axis}, \mathrm{angle} \rangle = \left( \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix},\theta \right) = \left( \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},\frac{\pi}{2}\right)$$

This can be represented as a rotation vector with a magnitude of $\tfrac{\pi}{2}$ pointing in the $z$ direction:

$$\begin{bmatrix} 0 \\ 0 \\ \frac{\pi}{2} \end{bmatrix}$$
## Rotating a vector
Rodrigues' rotation formula (named after Olinde Rodrigues) is an efficient algorithm for rotating a vector in space, given a rotation axis and an angle of rotation. In other words, the Rodrigues formula provides an algorithm to compute the exponential map from $so(3)$ to $SO(3)$ without computing the full matrix exponential (the rotation matrix).
If $\mathbf{v}$ is a vector in $\mathbb{R}^3$ and $\boldsymbol{\omega}$ is a unit vector describing an axis of rotation about which we want to rotate $\mathbf{v}$ by an angle $\theta$ (in a right-handed sense), the Rodrigues formula to obtain the rotated vector is:

$$\mathbf{v}_\mathrm{rot} = \mathbf{v} \cos\theta + (\boldsymbol{\omega} \times \mathbf{v})\sin\theta + \boldsymbol{\omega} (\boldsymbol{\omega} \cdot \mathbf{v}) (1 - \cos\theta).$$
This is more efficient than converting ω and θ into a rotation matrix, and using the rotation matrix to compute the rotated vector.
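For readers who want to experiment, here is a minimal NumPy sketch of the formula above; the function name and the sample axis/angle are our own choices, not part of the article:

```python
import numpy as np

def rodrigues_rotate(v, omega, theta):
    """Rotate vector v by angle theta (radians) about the unit axis omega
    using Rodrigues' rotation formula."""
    omega = np.asarray(omega) / np.linalg.norm(omega)  # normalize the axis
    return (v * np.cos(theta)
            + np.cross(omega, v) * np.sin(theta)
            + omega * np.dot(omega, v) * (1.0 - np.cos(theta)))

# Example: rotating the x-axis by pi/2 about the z-axis gives the y-axis.
print(rodrigues_rotate(np.array([1.0, 0.0, 0.0]), [0.0, 0.0, 1.0], np.pi / 2))
```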
## Relationship to other representations
There are many ways to represent a rotation. It is useful to understand how different representations relate to one another, and how to convert between them.
### Exponential map from so(3) to SO(3)
The exponential map is used as a transformation from axis-angle representation of rotations to rotation matrices.
$$\exp\colon so(3) \to SO(3)$$
Essentially, by using a Taylor expansion one can derive a closed-form relationship between these two representations. Given an axis $\omega \in \Bbb{R}^{3}$ of length 1 and an angle $\theta \in \Bbb{R}$, an equivalent rotation matrix is given by the following:

$$R = \exp(\hat{\omega} \theta) = \sum_{k=0}^\infty\frac{(\hat{\omega}\theta)^k}{k!} = I + \hat{\omega} \theta + \frac{1}{2}(\hat{\omega}\theta)^2 + \frac{1}{6}(\hat{\omega}\theta)^3 + \cdots$$

$$R = I + \hat{\omega}\left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\right) + \hat{\omega}^2 \left(\frac{\theta^2}{2!} - \frac{\theta^4}{4!} + \frac{\theta^6}{6!} - \cdots\right)$$

$$R = I + \hat{\omega} \sin(\theta) + \hat{\omega}^2 (1-\cos(\theta))$$
where $R$ is a $3\times 3$ rotation matrix and the hat operator gives the antisymmetric matrix equivalent of the cross product. This can be easily derived from Rodrigues' rotation formula.
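The closed form above translates directly into code; the following sketch (helper names are our own) builds the rotation matrix from an axis-angle pair:

```python
import numpy as np

def hat(omega):
    """Antisymmetric matrix W with W @ v == np.cross(omega, v)."""
    wx, wy, wz = omega
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def axis_angle_to_matrix(omega, theta):
    """R = I + sin(theta) * W + (1 - cos(theta)) * W @ W, with W = hat(omega)."""
    W = hat(np.asarray(omega, dtype=float) / np.linalg.norm(omega))
    return np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)
```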
### Log map from SO(3) to so(3)
To retrieve the axis-angle representation of a rotation matrix, calculate the angle of rotation:

$$\theta = \arccos\left( \frac{\mathrm{trace}(R) - 1}{2} \right)$$

and then use it to find the normalized axis:

$$\omega = \frac{1}{2 \sin(\theta)} \begin{bmatrix} R(3,2)-R(2,3) \\ R(1,3)-R(3,1) \\ R(2,1)-R(1,2) \end{bmatrix}$$
Note also that the matrix logarithm of the rotation matrix $R$ is

$$\log R = \begin{cases} 0 & \text{if } \theta = 0 \\ \dfrac{\theta}{2 \sin(\theta)} \left(R - R^\top\right) & \text{if } \theta \ne 0 \text{ and } \theta \in (-\pi, \pi) \end{cases}$$

except when $R$ has eigenvalues equal to $-1$, where the log is not unique. However, even in the case where $\theta = \pi$, the Frobenius norm of the log is

$$\| \log(R) \|_F = \sqrt{2}\, | \theta |$$
Note that given rotation matrices $A$ and $B$:

$$d_g(A,B) := \| \log(A^\top B)\|_F$$
is the geodesic distance on the 3D manifold of rotation matrices.
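A corresponding sketch of the inverse conversion, ignoring the degenerate cases $\theta = 0$ and $\theta = \pi$ discussed above (again, the function name is our own):

```python
import numpy as np

def matrix_to_axis_angle(R):
    """Recover (axis, angle) from a rotation matrix, assuming 0 < theta < pi."""
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return axis, theta
```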
### Unit Quaternions
To transform from axis-angle coordinates to unit quaternions, use the following expression:

$$Q = \left(\cos\left(\frac{\theta}{2}\right),\ \omega \sin\left(\frac{\theta}{2}\right)\right)$$
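In code this is a one-liner; the sketch below returns the quaternion in the (w, x, y, z) ordering, which is a convention we have assumed rather than one fixed by the text:

```python
import numpy as np

def axis_angle_to_quaternion(omega, theta):
    """Unit quaternion (w, x, y, z) for a rotation by theta about unit axis omega."""
    xyz = np.asarray(omega, dtype=float) * np.sin(theta / 2.0)
    return (np.cos(theta / 2.0), xyz[0], xyz[1], xyz[2])
```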
Perspective (graphical)
Perspective (from Latin perspicere, to see through) in the graphic arts, such as drawing, is an approximate representation, on a flat surface (such as paper), of an image as it is seen by the eye. The two most characteristic features of perspective are that objects are drawn smaller as their distance from the observer increases, and foreshortened along the line of sight relative to dimensions across the line of sight.
Question: I have heard all about the advantages of graphical representation of equations. What are the disadvantages?
Answers: Limited range and limited accuracy. If you graph y = x on graph paper where x runs between -10 and +10, then the value of y cannot be determined if x = 11. Furthermore, the accuracy is limited to the resolution of the graph paper, so you might, for example, read y = 4.02 when the true value is 4. One more disadvantage: graphs are difficult to read without training. And another: graph paper is fragile; some water and your graph is gone. And another: you can't email it.
Question:I have a project where I need a graphic representation, basically a picture or a graph, of something that has to do with the Cuban Missile Crisis. I need to make it myself, so I cannot copy and paste something. I already made a time line of the events that occurred, so what else could I do?
Answers: Don't forget to present the fact that Kennedy left Fidel Castro (a sworn enemy of the USA) alive and well, in charge just 90 miles south of the USA border. For almost 50 years Fidel has done his best to damage the USA. Had it not been for Mikhail Gorbachev, who unwillingly pulverized communism in Russia, Fidel would be knocking at the door of the White House and telling Bush to beat it, because he was taking charge.
https://www.physicsforums.com/threads/particle-physics-higgs-mechanism-spontaneous-symmetry-breaking-gauge-invariance.609143/
# Homework Help: [particle physics] Higgs mechanism: spontaneous symmetry breaking & gauge invariance
1. May 26, 2012
### nonequilibrium
1. The problem statement, all variables and given/known data
Suppose that there is a gauge group with 24 independent symmetries and we find a set of 20 real scalar fields such that the scalar potential has minima that are invariant under only 8 of these symmetries. Using the Brout-Englert-Higgs mechanism, how many physical fields are there that are
- massive spin 1
- massless spin 1
- Goldstone scalars
- Higgs scalars
2. Relevant equations
N.A.
3. The attempt at a solution
I'm not sure, since I have only seen one example worked out: a gauge "group" with 1 symmetry and 2 real scalar fields, where the 1 symmetry was broken; it ended up giving one massive spin 1, zero massless spin 1, zero Goldstone scalars and 1 Higgs scalar. I'm not sure how to generalize this to the general case.
But let's give it a try: if we assume that for each broken symmetry, a gauge boson gets mass, we end up with "16 massive spin 1" (since 24 - 8 symmetries are broken). Hence "8 massless spin 1" remain. If I now presume that each gauge boson getting a mass is accompanied with the eating of one Goldstone scalar (which seems sensible from the perspective of the gauge boson gaining one degree of freedom), 16 Goldstone scalars have been eaten, and presuming that no (physical) Goldstone scalars can remain (?) (i.e. "0 Goldstone scalars"), we conclude that from the 20 real scalar fields, "4 Higgs scalars" survive.
Is the answer and/or some of the reasoning correct? Maybe I'm making it too complicated... (for reference we're using Griffiths' Introduction to Elementary Particles, although note that the question is not in the book itself).
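If it helps, the counting in the attempt above can be summarized in display form (this summary follows the poster's reasoning and is not quoted from any textbook):

$$\begin{aligned} \text{massive spin 1} &= 24 - 8 = 16, \\ \text{massless spin 1} &= 8, \\ \text{Goldstone scalars (all eaten)} &= 16 \ \Rightarrow\ 0 \text{ physical}, \\ \text{Higgs scalars} &= 20 - 16 = 4. \end{aligned}$$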
http://mathoverflow.net/questions/58673/finite-quotients-of-fundamental-groups-in-positive-characteristic?sort=oldest
# finite quotients of fundamental groups in positive characteristic
For affine smooth curves over $k=\bar{k}$ of char. $p,$ Abhyankar's conjecture (proved by Raynaud and Harbater) tells us exactly which finite groups can be realized as quotients of their fundamental groups.
What about complete smooth curves, or more generally higher dimensional varieties? Are there results or conjectural criteria (or necessary conditions) for finite quotients of their $\pi_1?$ (Definitely, not too much was known around 1990; see Serre's Bourbaki article on this.)
In particular, let $G$ be the automorphism group of the supersingular elliptic curve in char. $p=2$ or $3$ (see supersingular elliptic curve in char. 2 or 3 for various descriptions of its structure). Is there (and if yes, how to construct) a projective smooth variety in char. $p$ having $G$ as a quotient of its $\pi_1?$ Certainly there are lots of affine smooth curves with this property (e.g. $\mathbb G_m$), and I wonder if for some of them, the covering is unramified at infinity (so that we win!).
-
I think you can get $G$ as a quotient of the fundamental group of any curve of genus $g>1$, since such groups have only one (topological) relation. – S. Carnahan Mar 16 '11 at 19:59
That's my hope too. But is there any reference for an Abhyankar-type conjectural statement for complete curves? – shenghao Mar 16 '11 at 20:19
In fact, I don't know if we know now that $\pi_1$ of projective smooth varieties (or just curves) in char. $p$ are of finite presentation in general, although in char. 0 it is the case. At least at the time when SGA1 was written this was not known; cf. SGA1, Exp.X, 2.8. – shenghao Mar 16 '11 at 20:51
A naive Abhyankar-type statement would claim that a finite group $G$ is a quotient of $\pi_1(X)$ is $G/p(G)$ is such a quotient in characteristic zero, where $p(G)$ is the characteristic subgroup of $G$ generated by its $p$-Sylow subgroups. Unfortunately, it fails miserably already for $X$ the projective line. – ACL Mar 17 '11 at 9:36
For a supersingular elliptic curve $E$ over an algebraically closed field of characteristic two or three, there exists a smooth curve $C$ of higher genus such that $Aut_0(E)$ is a finite quotient of $\pi_1(C)$.
In this paper, it is explained how to realize groups which have the property that their maximal $p$-Sylow subgroup ($p$ being the characteristic) is normal. The automorphism groups of supersingular elliptic curves satisfy this property.
https://www.physicsforums.com/threads/vertical-spring-elevator-question.230620/
# Vertical spring - elevator question
1. Apr 22, 2008
### Volcano
A mass is attached to a spring supported from the ceiling of an elevator. We pull down on the mass and let it vibrate. If the elevator starts to accelerate upward with a constant acceleration,
1) How does the maximum velocity change?
2) How does the amplitude change?
3) How does the total energy change?
I think the amplitude and maximum velocity do not change, because the acceleration doesn't change the net restoring force but only shifts the equilibrium point down. Am I right?
2. Apr 22, 2008
### Hootenanny
Staff Emeritus
I would agree with your choice with respect to the amplitude. However, in terms of the maximum velocity, it depends on your frame of reference, what are you measuring the velocity relative to.
P.S. We have a dedicated homework forum (https://www.physicsforums.com/forumdisplay.php?f=152) for all your textbook questions.
Last edited by a moderator: Apr 23, 2017
3. Apr 22, 2008
### Crazy Tosser
[ASCII diagram: a mass M sitting on a spring mounted on top of the elevator car, with the elevator moving upward]
4. Apr 22, 2008
### Hootenanny
Staff Emeritus
Your diagram is wrong, the mass is hanging down from the ceiling, inside the elevator, but thanks for your contribution anyway...
5. Apr 22, 2008
### Hootenanny
Staff Emeritus
Edit: It wouldn't actually make any difference to the answer, but it's best not to confuse the matter
6. Apr 22, 2008
### Crazy Tosser
oops D=
[Corrected diagram: the mass M hangs from a spring inside the elevator car, which is moving up]
Last edited: Apr 22, 2008
7. Apr 23, 2008
### Volcano
It is relative to the elevator. But honestly I cannot explain it with equations; my choice is completely instinctive. By the way, it would be nice to be able to attach pictures to posts; LaTeX is hard for figures.
8. Apr 23, 2008
### Hootenanny
Staff Emeritus
Then you are correct, if you're measuring the velocity of the mass with respect to the elevator. Obviously, if the velocity is measured relative to some other 'fixed' point outside the elevator then this will not be the case.
9. Apr 23, 2008
### Volcano
I want to understand the effects of adding force and adding mass while the system vibrates. As you agreed, an additional force does not change the amplitude and maximum velocity. Now I wonder: how does added mass change the amplitude and maximum velocity?

Now there is no elevator. The same spring and mass are attached to the ceiling of a doorway instead of an elevator and are vibrating. While the mass is at the bottom position, an additional mass is suddenly attached to the first one. What happens now? I think, as in the previous problem, the equilibrium point slides down. The amplitude will not change because the net force did not change. But the maximum velocity will decrease, because the period will increase while the distance does not change. Am I right now?
10. Apr 23, 2008
### Hootenanny
Staff Emeritus
I agree.
11. Apr 23, 2008
### Volcano
I mean that the period grows with the mass: if the mass increases, the period does too. Now, if the amplitude is the same as before, then the distance covered in a quarter period is the same too. So: same distance, more time, hence the average velocity must decrease.

As I understood it, you agree that the maximum velocity decreases. But I cannot calculate these. Any suggestions?
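For reference, the standard simple-harmonic-motion relations make this quantitative (added here as a hint; not part of the original exchange):

$$T = 2\pi\sqrt{\frac{m}{k}}, \qquad v_{max} = A\omega = A\sqrt{\frac{k}{m}},$$

so the period grows like $\sqrt{m}$ rather than linearly in $m$, and for a fixed amplitude $A$ the maximum velocity drops by the factor $\sqrt{m_{old}/m_{new}}$ when the mass is increased.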
https://www.physicsforums.com/threads/a-equation-that-im-stuck-on.155866/
# An equation that I'm stuck on
1. Feb 12, 2007
### thomas49th
1. The problem statement, all variables and given/known data
2. Relevant equations
As in the image above
3. The attempt at a solution
I had a go but i rubbed it off the sheet.
Can someone take me through this step by step on how to solve it
Thanks
2. Feb 12, 2007
### HallsofIvy
Staff Emeritus
What is your question? You are told exactly what to do!
You are told that the equation must be of the form
S = at² + bt + c.
You are given t and S values for 6 different values of t.
Putting the given S and t values into the equation above for 3 different values of t (t = 0, 2, 3, as they suggest, will do, but the choice is yours) will give you 3 equations to solve for a, b, and c. (Taking t = 0 will give an especially easy equation!)
3. Feb 12, 2007
### thomas49th
so
t=0: 0 = 0 + 0 + 0 (c will have to be 0 if a and b are)
t=1: 10 =a+b+c
t=2: 30 =4a+2b+c
is that right?
Last edited: Feb 12, 2007
4. Feb 12, 2007
### jing
but where does it say a and b are 0?
t=0: 0 = a·0 + b·0 + c, and hence c = 0
the other equations are correct
5. Feb 12, 2007
### thomas49th
if t = 0
then a · t² = 0
and b · t = 0
so c must be 0 if the equation equals 0
6. Feb 12, 2007
### jing
Correct. Now use that you know c=0 in the other two equations
7. Feb 12, 2007
### thomas49th
Huh? I thought the other 2 equations were right?
t=0: 0 = 0a + 0b + 0
t=1: 10 =a+b+c
t=2: 30 =4a+2b+c
can't I write them like that?
8. Feb 12, 2007
### jing
Yes they are correct but you now know that c = 0
so a+b+c=a+b
and
4a + 2b + c = 4a + 2b
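(The thread stops here; for completeness, one way to finish the algebra with these equations is:

$$a + b = 10, \qquad 4a + 2b = 30 \ \Rightarrow\ 2a = 10 \ \Rightarrow\ a = 5,\ b = 5,$$

so $S = 5t^2 + 5t$, which indeed gives $S = 30$ at $t = 2$.)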
https://www.enotes.com/homework-help/tank-circuit-uses-0-09-uh-inductor-0-4-uf-452356
# A tank circuit uses a 0.09 uH inductor and a 0.4-uF capacitor. The resistance of the inductor is 0.3 ohms. Would the quality of the inductor be 158 and the bandwidth be 5.3?
The quality factor Q of a component is by DEFINITION the ratio between the power stored and the power lost in that particular component at resonance. The higher the quality factor, the higher the amplitude of the oscillations.

Q = Energy Stored/Energy Loss = Power Stored/Power Loss = Ps/Pl
at the resonant frequency of
`F_r = 1/(2*pi*sqrt(L*C)) = 1/(2*pi*sqrt(9*10^(-8)*4*10^(-7))) = 838.82 kHz`
If we write the quality factor as
`Q = F_r/(Delta(F))`
we can find the bandwidth of the circuit as
`Delta(F) = F_r/Q`
For a SERIES circuit (which is the case here, because the inductor resistance is always considered in series with the inductance) the current I through all components is the same, and the stored and lost powers are
`Ps = I^2*X(L) = I^2*omega*L`
`Pl = I^2*R`
which gives
`Q = (omega_r*L)/R = (2*pi*Fr*L)/R = (2*pi*838820*9*10^(-8))/0.3= 1.5811`
The bandwidth of the circuit is
`Delta(F) = F_r/Q =838820/1.5811=530516 Hz = 530.5 KHz`
For the circuit to have a quality factor Q = 158 (as suggested in the question), the values of the components would need to be

L = 0.9 mH (not 0.09 microH)

C = 0.4 microF

but in this case the bandwidth will be `Delta(F) = F_r/Q = 8388/158.1 = 53 Hz`
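These figures are easy to check numerically; a standalone sketch (variable and function names are ours):

```python
import math

def tank_figures(L, C, R):
    """Resonant frequency (Hz), quality factor and bandwidth (Hz)
    of a tank circuit with series coil resistance R."""
    f_r = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # F_r = 1/(2*pi*sqrt(LC))
    Q = 2.0 * math.pi * f_r * L / R                 # Q = omega_r * L / R
    return f_r, Q, f_r / Q                          # bandwidth Delta(F) = F_r / Q

print(tank_figures(0.09e-6, 0.4e-6, 0.3))  # ~(838.8 kHz, 1.58, 530.5 kHz)
print(tank_figures(0.9e-3, 0.4e-6, 0.3))   # ~(8.39 kHz, 158.1, 53 Hz)
```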
https://mathoverflow.net/questions/266365/git-quotients-for-linear-representations-of-sl2-mathbb-c
# GIT quotients for linear representations of $SL(2,\mathbb C)$
Let $V$ be the standard two-dimensional representation of $SL(2,\mathbb C)$ and let ${\rm Sym}^2V$ be its symmetric square. Let $n$ be a positive integer and consider the following two representations 1) $V^{\oplus n}$ and 2) $({\rm Sym}^2V)^{\oplus n}$ of $SL(2,\mathbb C)$.
Question 1) Is there some explicit description of the GIT quotient $V^{\oplus n}// SL(2,\mathbb C)$? In particular, is it true that the ring of invariant polynomials is generated by the $\frac{n(n-1)}{2}$ quadratic polynomials $vol(v_i,v_j)$, $i\ne j$, where $v=(v_1,\cdots ,v_n)\in V^{\oplus n}$ and $vol$ is a volume form preserved by $SL(2,\mathbb C)$?
Question 2). Is there some explicit description of the GIT quotient $({\rm Sym}^2V)^{\oplus n}// SL(2,\mathbb C)$? In particular, is it true that the ring of invariant polynomials is generated by the $\frac{n(n+1)}{2}$ quadratic polynomials $(v_i,v_j)$, where $v=(v_1,\cdots ,v_n)\in ({\rm Sym}^2V)^{\oplus n}$ and $(.,.)$ is a symmetric bilinear form on ${\rm Sym}^2V$ preserved by $SL(2,\mathbb C)$?
• Mumford's description of the geometric quotient of $V^{\oplus n}$ by the standard action of $\text{SL}(V)$ in Chapter 3 of GIT is so very explicit that it does not actually use any of the general theory of "Geometric Invariant Theory"! That is important, because Mumford wanted to construct $\mathcal{M}_g$ as a scheme over $\text{Spec}(\mathbb{Z})$, whereas GIT (at that time) only worked over a fixed field. So Mumford uses his explicit quotient construction of $V^{\oplus n}$ over any base as a step in constructing $\mathcal{M}_g$ as a quasi-projective scheme over $\text{Spec}(\mathbb{Z})$. – Jason Starr Apr 4 '17 at 15:06
## 3 Answers
Both questions are extensively dealt with in Weyl's book "Classical invariant theory" who investigated the invariant of classical groups on multiple copies of their defining representations. Determining a set of generators is called a "First Fundamental Theorem" (FFT) while the relations are given in a "Second Fundamental Theorem".
Question 1: This is best regarded as the action of $Sp(2n)$ on $V=\mathbb C^{2n}$ for $n=1$. Then, indeed, the ring of invariants is generated by all pairings $f_{ij}:=\omega(v_i,v_j)$. These are neatly organized in a $2n\times 2n$ skew-symmetric matrix. The relations are generated by all "principal" Pfaffian minors of size $(2n+2)\times (2n+2)$. For $n=2$ these are quadratic polynomials in 3 variables called the "Plücker relations". The quotient consists of all skew-symmetric matrices of rank $\le 2$ (which is, of course, the affine cone over a Grassmannian).
Question 2: In this case one is dealing with the group $SO(n)$ acting on $\mathbb C^n$ for $n=3$. Here things are a bit more complicated since $SO(n)$ is not strictly a classical group. The better problem is to look at the group $O(n)$ instead. In this case, the invariants are indeed generated by all pairings $p_{ij}=(v_i,v_j)$, which can be organized into an $n\times n$ symmetric matrix. The relations are generated by all $(n+1)\times(n+1)$ principal minors. In your case, the quotient would be the set of symmetric matrices of rank $\le3$. Since you are dealing with $SO(n)$ instead of $O(n)$, things are more complicated. In this case there are additional generating invariants, namely all determinants of the form $\det(v_{i_1},\ldots,v_{i_n})$. For $n=3$ the quotient is the subset of $S^2\mathbb C^n\oplus\wedge^3\mathbb C^n$ given by two sets of relations: the rank conditions and the condition that the square of the determinant can be expressed as the Gram determinant $\det((v_{i_\mu},v_{i_\nu}))$.
• Thank you! I imagine this is contained in your answer, but is it possible to spell out a bit more explicitly what is the GIT quotient a cone over, in the second case? (if it is again a cone over something) – aglearner Apr 4 '17 at 20:20
• It is not a space one encounters in a Linear Algebra course. Varieties like that have been studied. Search for "determinantal varieties". It carries a $GL(n,\mathbb C)$-action which makes it into a spherical variety with probably 3 orbits. It is a singular with rational singularities (in particular normal and Cohen-Macaulay). – Friedrich Knop Apr 5 '17 at 5:41
• Friedrich, many thanks! I have not realized that this is a spherical variety. I am not worried too much that this variety is not from a linear algebra course :). But I am not able to understand the last three lines of your answer. What are "the rank conditions" and the condition that "the square of the determinant can be expressed as a Gram matrix" ? – aglearner Apr 5 '17 at 9:06
• In fact, I understand that there is a morphism from $\rm Sym^2(V)^{\oplus n}//SL(2,\mathbb C)$ to the cone over the Grassmanian $G(3,n)$. For a generic point the fiber is $6$-dimensional. But I don't see how to get an action with one open orbit on $\rm Sym^2(V)^{\oplus n}//SL(2,\mathbb C)$... So I don't understand what is the spherical variety in this case... – aglearner Apr 5 '17 at 11:38
For question 1, the answer is most easily described by thinking about $V^{\oplus n}$ as $Hom(\mathbb C^2, \mathbb C^n)$. From this it follows that the projective GIT quotient $$V^{\oplus n} //_{det} GL_2$$ is isomorphic to the Grassmannian $\mathbb G(2,n)$. From this, it follows that the invariant ring $\mathbb C[V^{\oplus n}]^{SL_2}$ is the homogeneous coordinate ring of $\mathbb G(2,n)$. Your quadratic polynomials are the generators of this coordinate ring (and the relations are given by the Plucker relations).
• I suppose the FFT is hidden in the isomorphism with the Grassmannian. – Abdelmalek Abdesselam Apr 4 '17 at 17:58
The answers are:
Question 1: yes.
Question 2: no.
Explicit linear generators follow from the first fundamental theorem (FFT) for $SL_2$. You can see my two answers to this MO question for an explicit proof of the FFT for $SL_k$. It is best to use a graphical language to represent these generators as in my article "On the volume conjecture for classical spin networks". J. Knot Theory Ramifications 21 (2012), no. 3, 1250022. This type of graphical representation is very old as you can see in this MO answer. Then the fun begins, namely trying to find polynomial rather than linear generators. Essentially this results from the Grassmann-Plücker relation. For $SL_2$ and forms of degree 1 or 2 this is easy to do by hand. For quadratics, the GP relation can be used to break cycles (the only thing produced by the FFT). In fact the article by Kempe in the second MO answer I mentioned does exactly that, with explanatory pictures. The first invariant which is not expressible by the ones you gave is for three quadratics corresponding to a 3-cycle containing each one of them. This is also the Wronskian of the three quadratics.
For quadratics an explicit system of generators which basically adds these Wronskians for each triple of forms is given in Section 256 "The quadratic types" in the book by Grace and Young.
The basic identity from that book needed for breaking cycles of length at least four is in classical symbolic notation: $$2(ab)(bc)(cd)(de)=(bc)(cd)(db)(ae)-(cd)^2(ab)(be)-(bc)^2(ad)(de)+(bd)^2(ac)(ae)$$ where $a=(a_1,a_2)$ etc. and $(ab)$ is the determinant of the matrix with first row $a$ and second row $b$ etc.
Using self-duality of $SL_2$ representations, the LHS can be a interpreted as a product of four $2\times 2$ matrices. This is basically the Amitsur-Levitzki Theorem for $2\times 2$ matrices.
I don't know how explicit you want to be but you can look at the article "Defining Relations for the Algebra of Invariants of 2×2 Matrices" by Drensky for more details. He treats the case of generic matrices while you are interested in matrices coming from symmetric bilinear forms by self-dualization. For generic matrices, traces of words of length 1, 2, 3 form a minimal system of $n+\frac{n(n-1)}{2}+\frac{n(n-1)(n-2)}{6}$ algebra generators. Drensky finds the polynomial relations between these generators. Here, with quadratic binary forms, the words of length 1 disappear.
• Thank you! I guess the article of Drensky indeed addresses the second half of the question. I would be happy if there was some "geometric" description of this GIT quotient akin to the cone over the Grassmannian $G(2,n)$ appearing in part 1) of the question. (But it might be that such a description does not exist...) – aglearner Apr 5 '17 at 3:35
http://mathhelpforum.com/advanced-algebra/190570-matrix-nilpotencies.html
# Math Help - matrix nilpotencies
1. ## matrix nilpotencies
Hi everyone. Do you know if any matrix exists with a nilpotency of 0?
A nilpotent matrix N is a square matrix such that N^k = 0 for some k >= 0... but isn't any square matrix to the zero power the identity matrix... so there is no nilpotent matrix with nilpotency 0??
is my thinking correct?
2. ## Re: matrix nilpotencies
Originally Posted by cp05
....so there is no nilpotent matrix with nilpotency 0?? is my thinking correct?
Yes, you are right.
3. ## Re: matrix nilpotencies
Yes. Nilpotency is only defined for positive exponents k. A^0 is by definition the identity matrix, for any non-zero matrix A; the zero matrix is a special case, just like the number 0^0 is a special case, although most sources (WolframAlpha/Mathematica, for example) define it as I (which, oddly enough, is a conflict when one considers 1x1 matrices).
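As an aside, the index of nilpotency is easy to probe numerically; a small sketch (the helper is ours, not from the thread):

```python
import numpy as np

def nilpotency_index(N):
    """Least k >= 1 with N^k == 0, or None if N is not nilpotent.
    For an n x n nilpotent matrix the index is at most n."""
    n = N.shape[0]
    P = np.eye(n)
    for k in range(1, n + 1):
        P = P @ N
        if not P.any():
            return k
    return None

N = np.array([[0.0, 1.0], [0.0, 0.0]])
print(nilpotency_index(N))  # 2, since N != 0 but N @ N == 0
```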
https://ifwisdomwereteachable.wordpress.com/2013/06/30/the-central-amenability-constant-of-a-finite-group-part-3-of-n/
OK, back to the story of the central amenability constant. I'll take the opportunity to re-tread some of the ground from the first post.
## 1. Review/recap
Given a finite group G, ${{\mathbb C} G}$ denotes the usual complex group algebra: we think of it as the vector space ${{\mathbb C}^G}$ equipped with a suitable multiplication. This has a canonical basis as a vector space, indexed by group elements: we denote the basis vector corresponding to an element x of G by ${\delta_x}$. Thus for any function ${\psi:G\rightarrow{\mathbb C}}$, we have ${\psi = \sum_{x\in G} \psi(x)\delta_x}$.
(Aside: this is not really the correct “natural” way to think of the group algebra if one generalizes from finite groups to infinite groups; one has to be more careful about whether one is thinking “covariantly or contravariantly”. ${{\mathbb C}^G}$ is naturally a contravariant object as G varies, but the group algebra should be covariant as G varies. However, our approach allows us to view characters on G as elements of the group algebra, which is a very convenient elision.)
The centre of ${{\mathbb C} G}$, henceforth denoted by ${{\rm Z\mathbb C} G}$, is commutative and spanned by its minimal idempotents, which are all of the form
$\displaystyle p_\phi = \frac{\phi(e)}{|G|}\phi \equiv \frac{\phi(e)}{|G|}\sum_{x\in G} \phi(x)\delta_x$
for some irreducible character ${\phi:G\rightarrow{\mathbb C}}$. Moreover, ${\phi\mapsto p_\phi}$ is a bijection between the set of irreducible characters and the set of minimal idempotents in ${{\rm Z\mathbb C} G}$.
We define
$\displaystyle {\bf m}_G = \sum_{\phi\in {\rm Irr}(G)} p_\phi \otimes p_\phi \in {\rm Z\mathbb C} G \otimes {\rm Z\mathbb C} G \equiv {\rm Z\mathbb C} (G\times G )$
and, equipping ${{\mathbb C}(G\times G )}$ with the natural ${\ell^1}$-norm, defined by
$\displaystyle \Vert f\Vert = \sum_{(x,y) \in G\times G } |f(x,y)|,$
we define ${{\rm AM}_{\rm Z}(G)}$ to be ${\Vert {\bf m}_G \Vert}$. Explicitly, if we use the convention that the value of a class function ${\psi}$ on any element of a conjugacy class C is denoted by ${\psi(C)}$, we have
$\displaystyle {\rm AM}_{\rm Z}(G) = \sum_{C,D\in{\rm Conj}(G)} |C|\ |D| \left\vert \sum_{\phi\in {\rm Irr}(G)} \frac{1}{|G|^2} \phi(e)^2\phi(C)\phi(D) \right\vert \;,$
the formula stated in the first post of this series.
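This last expression can be evaluated mechanically from a character table. The sketch below is entirely ours (the helper name and the $S_3$ example data, with conjugacy class sizes 1, 3, 2, are our additions) and implements the displayed formula verbatim:

```python
import numpy as np

def am_z(char_table, class_sizes):
    """Central amenability constant AM_Z(G) from the character table.
    char_table[i][j] = value of the i-th irreducible character on the
    j-th conjugacy class, identity class first; class_sizes[j] = |C_j|."""
    chi = np.asarray(char_table, dtype=complex)
    sizes = np.asarray(class_sizes, dtype=float)
    order = sizes.sum()              # |G| is the sum of the class sizes
    deg = chi[:, 0].real             # phi(e), the degree of each character
    inner = np.einsum('p,pc,pd->cd', deg**2, chi, chi) / order**2
    return float(np.sum(np.outer(sizes, sizes) * np.abs(inner)))

s3 = [[1, 1, 1], [1, -1, 1], [2, 0, -1]]  # trivial, sign, 2-dimensional
print(am_z(s3, [1, 3, 2]))                # 2.333... = 7/3 > 1, as S_3 is nonabelian
```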
## 2. Moving onwards
Remark 1
As I am writing these things up, it occurs to me that “philosophically speaking”, perhaps one should regard ${{\bf m}_G}$ as an element of the group algebra ${{\mathbb C}(G\times G^{\rm op})}$, where ${G^{\rm op}}$ denotes the group whose underlying set is that of G but equipped with the reverse multiplication. It is easily checked that a function on ${G\times G}$ is central as an element of ${{\mathbb C}(G\times G )}$ if and only if it is central as an element of the algebra ${{\mathbb C}(G\times G^{\rm op})}$, so we can get away with the definition chosen here. Nevertheless, I have a suspicion that the ${G\times G^{\rm op}}$ picture is somehow the “right” one to adopt, if one wants to put the study of ${{\bf m}_G}$ into a wider algebraic context.
${{\bf m}_G}$ is a non-zero idempotent in a Banach algebra, so it follows from submultiplicativity of the norm that ${{\rm AM}_{\rm Z}(G)=\Vert {\bf m}_G \Vert \geq 1}$. When do we have equality?
Theorem 2 (Azimifard–Samei–Spronk) ${{\rm AM}_{\rm Z}(G)=1}$ if and only if G is abelian.
The proof of necessity (that is, the “only if” direction) will go in the next post. In the remainder of this post, I will give two proofs of sufficiency (that is, the “if” direction).
In the paper of Azimifard–Samei–Spronk (MR 2490229; see also arXiv 0805.3685) where I first learned of ${{\rm AM}_{\rm Z}(G)}$, this direction is glossed over quickly, since it follows from more general facts in the theory of amenable Banach algebras. I will return later, in Section 2.2, to an exposition of how this works for the case in hand. First, let us see how we can approach the problem more directly.
#### 2.1. Proof of sufficiency: direct version
Suppose G is abelian, and let ${n=|G|}$. Then G has exactly n irreducible characters, all of which are linear (i.e. one-dimensional representations, a.k.a. multiplicative functionals). Denoting these characters by ${\phi_1,\dots,\phi_n}$, we have
$\displaystyle {\bf m}_G = \sum_{j=1}^n \frac{1}{n}\phi_j \otimes \frac{1}{n}\phi_j$
so that
$\displaystyle {\rm AM}_{\rm Z}(G) = \sum_{x,y\in G} \left\vert \sum_{j=1}^n \frac{1}{n^2}\phi_j(x)\phi_j(y)\right\vert$
This sum can be evaluated explicitly using some Fourier analysis — or, in the present context, the Schur column orthogonality relations. To make this a bit more transparent, recall that ${\phi(y^{-1})=\overline{\phi(y)}}$ for all characters ${\phi}$ and all y in G. Hence by a change of variables in the previous equation, we get
$\displaystyle {\rm AM}_{\rm Z}(G) = \frac{1}{n^2} \sum_{x,y\in G} \left\vert \sum_{j=1}^n \phi_j(x)\overline{\phi_j(y)} \right\vert$
For a fixed element x in G, the n-tuple ${(\phi_1(x), \dots, \phi_n(x) )}$ is a column in the character table of G. We know by general character theory for finite groups that distinct columns of the character table, viewed as column vectors with complex entries, are orthogonal with respect to the standard inner product. Hence most terms in the expression above vanish, and we are left with
\displaystyle \begin{aligned} {\rm AM}_{\rm Z}(G) & = \frac{1}{n^2} \sum_{x\in G} \left\vert \sum_{j=1}^n \phi_j(x)\overline{\phi_j(x)} \right\vert \\ & = \frac{1}{n^2} \sum_{x\in G} \sum_{j=1}^n \vert\phi_j(x)\vert^2 \end{aligned}
which equals ${1}$, since each ${\phi_j}$ takes values in ${\mathbb T}$. This completes the proof.
#### 2.2. Proof of sufficiency: slick version
The following argument is an expanded version of the one that is outlined, or alluded to, in the paper of Azimifard–Samei–Spronk. It is part of the folklore in Banach algebras — for given values of “folk” — but really the argument goes back to the study of “separable algebras” in the sense of ring theory.
Lemma 3 Let A be an associative, commutative algebra, with identity element $1_A$. Let ${\Delta: A\otimes A \rightarrow A}$ be the linear map defined by ${\Delta(a\otimes b)=ab}$. Then there is at most one element m in ${A\otimes A}$ that simultaneously satisfies ${\Delta(m)=1_A}$ and ${a\cdot m = m\cdot a}$ for all a in A.
Proof: Let us first omit the assumption that A is commutative, and work merely with an associative algebra that has an identity.
Define the following multiplication on ${A\otimes A}$:
$\displaystyle (a\otimes b) \odot (c\otimes d) := ac \otimes db .$
Then ${(A\otimes A, \odot)}$ is an associative algebra — the so-called enveloping algebra of A. If m satisfies the conditions mentioned in the lemma, then
$\displaystyle (a\otimes b) \odot m = a\cdot m \cdot b = (ab)\cdot m \;;$
and so, by taking linear combinations, ${w\odot m = \Delta(w)\cdot m}$ for every w in ${A\otimes A}$. If n is another element of ${A\otimes A}$ satisfying the conditions of the lemma, we therefore have n${\odot}$m=m, and by symmetry, m${\odot}$n=n.
Now we use the assumption that A is commutative. From this assumption, we see that ${(A\otimes A,\odot)}$ is also commutative. Therefore
$\displaystyle m = n\odot m = m\odot n = n$
as required. $\Box$
Now let G be a finite group and let A= ${{\rm Z\mathbb C} G}$. Because A is spanned by its minimal idempotents ${p_\phi}$, and because minimal idempotents in a commutative algebra are mutually orthogonal, ${{\bf m}_G = \sum_\phi p_\phi \otimes p_\phi}$ satisfies the two conditions mentioned in Lemma 3. On the other hand, if G is abelian, consider
$\displaystyle {\bf n}_G := \frac{1}{|G|} \sum_{x\in G} \delta_x \otimes \delta_{x^{-1}} \in {\mathbb C} G \otimes {\mathbb C} G = {\rm Z\mathbb C} G \otimes {\rm Z\mathbb C} G = A \otimes A.$
Clearly $\Delta({\bf n}_G)=1_A$, and a direct calculation shows that ${\delta_g\cdot {\bf n}_G = {\bf n}_G\cdot \delta_g}$ for all g in G, so by linearity ${{\bf n}_G}$ also satisfies both conditions mentioned in Lemma 3. Applying the lemma tells us that ${{\bf m}_G= {\bf n}_G}$, and in particular
$\displaystyle {\rm AM}_{\rm Z}(G) = \Vert {\bf n}_G \Vert = 1$
as required.
https://www.ecrts.org/suggestions-for-authors/
# Suggestions for authors
Below are suggestions for structuring and writing your paper that are likely to be appreciated by the reviewers. We hope that this will help you satisfy the evaluation criteria of the conference. Note that these are not mandatory rules.
Structure of the paper. It is usually a good idea to
• clearly formulate, explain and motivate in the introduction the research problem, together with a short paragraph titled “Contributions of the paper”, in which you briefly summarize the innovative technical contributions of the paper.
• have a “Related work” section that shows why the addressed problem has not been completely solved before, underlines the key differences between the proposed approach and those that have been published previously (including prior work of your own), and makes clear where you have built on existing results.
• spend a paragraph or even a section called “System model” (or something similar) on presenting the system model, describing accurately notation and nomenclature, and discussing the assumptions and limitations of your model.
• present mathematical proofs of correctness if your paper contains theoretical work.
• add an “Experimental evaluation” and/or “Case study” section to provide evidence of scientific advancement if the paper contains new algorithms, system design or methodology, or applications that improve on existing state-of-the art (or even regarding a completely new field).
• add a short paragraph in the conclusions to briefly summarize the main innovative technical contributions of the paper.
Notations. Make sure that any notation used is clearly defined and distinct (do not use symbols that can easily be confused with one another). The best place to define terminology and notation is together with the description of the system model. This gives reviewers a single place to refer back to where they can find any symbols they need to look up again. If you have a large amount of notation in your paper, consider providing a table of notation. Make sure you define all notation and acronyms before they are used.
Figures. Make sure that all of your figures, diagrams and graphs are legible when printed out in black and white. Avoid the temptation to make your figures the size of a postage stamp or thumb nail in order to fit your content into the page limits and make sure all text is legible and not too small. Ensure the different lines on the graphs are clearly distinguishable by using markers that are obviously different and where necessary using different line types (e.g. dashed, dotted).
Experiments. A reader of your paper should be able to reproduce your experiments and obtain the same results. Hence, it is necessary to describe the experimental setup, including details of case study or benchmark data (or where it can be obtained) and how synthetic data (if used) has been generated. If you are reporting statistical data, then make sure you present measures pertaining to the quality of the results obtained, for example confidence intervals, or variance. To aid in the reproducibility of results, consider also making your evaluation code available.
Acknowledgments. These instructions as well as some descriptions of our evaluation criteria originate in a list of FAQs started by Giuseppe Lipari for ECRTS’06 with contributions by many others since. The document was heavily edited in 2019 to separate between suggestions and requirements.
http://math.stackexchange.com/users/42344/ilovemath?tab=summary | This account is temporarily suspended for voting irregularities. The suspension period ends on Feb 25 at 22:12.
ILoveMath
### Questions (269)
- 16: Reflections on math education
- 10: Number theory fun problem
- 8: Must the (continuous) image of a null set be null?
- 7: Formality and mathematics
- 7: Consider $f_n(x) = \sum_{k=0}^{n} {x^k}$. Does $f_n$ converge pointwise on $[0,1]$?
### Reputation (1)
- +5: $R[x]$ has a subring isomorphic to $R$.
- -2: What does $\frac{1}{n}$ converge to?
- +5: Example of a non-hausdorff space
- +5: Loop is contractible iff it extends to a map of disk
- 18: Proving $a^ab^b + a^bb^a \le 1$, given $a + b = 1$
- 18: What's the formula to solve summation of logarithms?
- 8: If $g^2 = e$ for all $g \in G$, then $G$ is abelian
- 8: Finding the derivative of $y^x = e^y$
- 8: how do I prove that $1 > 0$ in an ordered field?
### Tags (113)
- calculus × 139 (score 106)
- trigonometry × 16 (score 26)
- algebra-precalculus × 22 (score 31)
- integration × 25 (score 24)
- sequences-and-series × 18 (score 31)
- number-theory × 8 (score 18)
- logarithms × 5 (score 29)
- summation (score 18)
- real-analysis × 112 (score 27)
- contest-math (score 18)
### Accounts (8)
- Academia: 188 rep
- Philosophy: 113 rep
- Skeptics: 101 rep
- French Language: 101 rep
- Writers: 101 rep
https://tex.meta.stackexchange.com/tags/markdown/hot
# Tag Info
### Double backslashes disappear from code
Update: This was almost certainly my fault. On December 20, 2013 I moved the TeX (and meta) databases to new homes. I'm not sure how the original problem occurred, but TeX and its meta had a very odd ...
### The CommonMark diary
Reading some feedback on meta.stackexchange.com I got the impression that this community feels quite tense about the upcoming CommonMark migration. I feel that some extra levels of transparency shared ...
### Double backslashes disappear from code
Based on \\ corruption, I've created a new query TeX.SX \\ corruption (based on user ID) which I hope will help anyone to find their own posts affected by this bug! P.S. The database is updated ...
### The CommonMark diary
For users who are curious about how the automated edits look like, and who want to review them for potential issues: visit the profile of the Community user (ID -1), and navigate to 'all actions' &...
Accepted
### Are questions using markdown with LaTeX allowed?
Issues using a variety of software - Pandoc included - are on topic here as long as the issue relates to (La)TeX in some way. As mentioned in the TeX - LaTeX Tour page: Ask about... Formats like ...
Accepted
### How to quote a left quote inline?
You want two backticks: to quote inline code that itself contains a backtick, wrap it in double backticks and pad with a space where the backtick sits at an end.
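A minimal sketch of the rule, with a hypothetical fragment `foo (including its leading backtick) as the text to display:

```
Use double backticks with padding: `` `foo ``
```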
Accepted
### Quoting codes in a list leaves a </li>
As you can see on the revision page which shows a live rendering of this question, this is fixed now. It's still visible in your question because the version displayed here is saved on submission; ...
### Is there no way at all to typeset equations using TeX on the TeX SE at all?
This issue has been raised several times, and the central concern remains the same. When people need to show what they are seeing from TeX, we all need to see exactly what they see. Using some ...
Accepted
### Problem writing the sequence \@ in inline <code>...</code>-blocks
The markdown backtick syntax does not only the formatting like the HTML <code> block, but it also escapes special characters, so a \ or a < or > have no special meaning. Inside a <code&...
Accepted
### Something messed with my answers backslashes and newlines
The linked question has now been fixed (along with ~11,000 similarly corrupted postings) see Community effort in fixing the double backslashes issue For a summary of how this got fixed in the end.
### Double backslashes disappear from code
This and the linked question ‘double backslash + newline’ collapses to ‘single backslash’ when I hit ‘edit’ Are two manifestations of the same issue, see Community effort in fixing the double ...
### Double backslashes disappear from code
I've looked into it and I couldn't find anything specific. The thing is, we don't ever modify markdown w/o recording history. So when you read about rebakes and other stuff we do, those are affecting ...
Accepted
### Typo in Markdown Help
Indeed. This seems like a markdown "typo." However, it is a network-wide issue and should therefore be addressed at that level. There is a similar question related to this posted on the main network ...
Accepted
### Failure to show image that already exists in imgur
Here are some of the issues with the markdown provided in the linked question: The use of the HTML <img> tag is incorrect. The correct usage would be <img src="https://i.stack.imgur.com/...
https://www.groundai.com/project/turbulent-rayleigh-benard-convection-in-spherical-shells/ | [
# [
## Abstract
We simulate numerically Boussinesq convection in non-rotating spherical shells for a fluid with a unity Prandtl number and Rayleigh numbers up to $10^9$. In this geometry, curvature and radial variations of the gravitational acceleration yield asymmetric boundary layers. A systematic parameter study for various radius ratios (from $\eta=0.2$ to $\eta=0.95$) and gravity profiles allows us to explore the dependence of the asymmetry on these parameters. We find that the average plume spacing is comparable between the spherical inner and outer bounding surfaces. An estimate of the average plume separation allows us to accurately predict the boundary layer asymmetry for the various spherical shell configurations explored here. The mean temperature and horizontal velocity profiles are in good agreement with classical Prandtl-Blasius laminar boundary layer profiles, provided the boundary layers are analysed in a dynamical frame that fluctuates with the local and instantaneous boundary layer thicknesses. The scaling properties of the Nusselt and Reynolds numbers are investigated by separating the bulk and boundary layer contributions to the thermal and viscous dissipation rates using numerical models with $\eta=0.6$ and a gravity proportional to $1/r^2$. We show that our spherical models are consistent with the predictions of Grossmann & Lohse's (2000) theory and that the $Nu(Ra)$ and $Re(Ra)$ scalings are in good agreement with plane layer results.
Thomas Gastine, Johannes Wicht and Jonathan M. Aurnou (2015)

Keywords: Bénard convection, boundary layers, geophysical and geological flows
## 1 Introduction
Thermal convection is ubiquitous in geophysical and astrophysical fluid dynamics and rules, for example, turbulent flows in the interiors of planets and stars. The so-called Rayleigh-Bénard (hereafter RB) convection is probably the simplest paradigm to study heat transport phenomena in these natural systems. In this configuration, convection is driven in a planar fluid layer cooled from above and heated from below (figure 1a). The fluid is confined between two rigid impenetrable walls maintained at constant temperatures. The key issue in RB convection is to understand the turbulent transport mechanisms of heat and momentum across the layer. In particular, how does the heat transport, characterised by the Nusselt number $Nu$, and the flow amplitude, characterised by the Reynolds number $Re$, depend on the various control parameters of the system, namely the Rayleigh number $Ra$, the Prandtl number $Pr$ and the cartesian aspect ratio $\Gamma$? In general, $\Gamma$ quantifies the fluid layer width over its height in classical planar or cylindrical RB cells. In spherical shells, we rather employ the ratio of the inner to the outer radius, $\eta=r_i/r_o$, to characterise the geometry of the fluid layer.
Laboratory experiments of RB convection are classically performed in rectangular or in cylindrical tanks with planar upper and lower bounding surfaces where the temperature contrast is imposed (see figure 1b). In such a system, the global dynamics are strongly influenced by the flow properties in the thermal and kinematic boundary layers that form in the vicinity of the walls. The characterisation of the structure of these boundary layers is crucial for a better understanding of the transport processes. The marginal stability theory by Malkus (1954) is the earliest boundary layer model and relies on the assumption that the thermal boundary layers adapt their thicknesses to maintain a critical boundary layer Rayleigh number, which implies $Nu \sim Ra^{1/3}$. Assuming that the boundary layers are sheared, Shraiman & Siggia (1990) later derived a theoretical model that yields scalings of the form $Nu \sim Ra^{2/7}$ and $Re \sim Ra^{3/7}$ (see also Siggia, 1994). These asymptotic laws were generally consistent with most of the experimental results obtained in the 1990s up to $Ra \simeq 10^{11}$. Within the typical experimental resolution of one percent, simple power laws of the form $Nu \sim Ra^{\beta_{eff}}$ were found to provide an adequate representation with exponents $\beta_{eff}$ ranging from 0.28 to 0.31, in relatively good agreement with the Shraiman & Siggia model (e.g. Castaing et al., 1989; Chavanne et al., 1997; Niemela et al., 2000). However, later high-precision experiments by Xu et al. (2000) revealed that the dependence of $Nu$ upon $Ra$ cannot be accurately described by such simple power laws. In particular, the local slope of the function $Nu(Ra)$ has been found to increase slowly with $Ra$. The effective exponent $\beta_{eff}=\mathrm{d}\log Nu/\mathrm{d}\log Ra$ roughly ranges from values close to 0.28 at moderate $Ra$ to about 0.33 at the highest accessible $Ra$ (e.g. Funfschilling et al., 2005; Cheng et al., 2015).
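The effective exponent quoted above is simply the local slope of $Nu(Ra)$ in log-log space. A minimal sketch of how it can be estimated from tabulated $(Ra, Nu)$ values follows; the data are hypothetical placeholders, not measurements from any of the cited experiments:

```python
import numpy as np

# Hypothetical (Ra, Nu) data; a real analysis would use experimental or DNS tables.
Ra = np.logspace(7, 11, 9)
Nu = 0.1 * Ra**0.30                      # assumed pure power law for illustration

# beta_eff = d(log Nu) / d(log Ra), via centred differences in log-log space.
beta_eff = np.gradient(np.log(Nu), np.log(Ra))
print(beta_eff)                          # ~0.30 everywhere for this synthetic law
```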
Grossmann & Lohse (2000, 2004) derived a competing theory capable of capturing this complex dynamics (hereafter GL). This scaling theory is built on the assumption of laminar boundary layers of Prandtl-Blasius (PB) type (Prandtl, 1905; Blasius, 1908). According to the GL theory, the flows are classified in four different regimes in the $Ra$-$Pr$ phase space according to the relative contribution of the bulk and boundary layer viscous and thermal dissipation rates. The theory predicts non-power-law behaviours for $Nu$ and $Re$ in good agreement with the dependences $Nu(Ra,Pr)$ and $Re(Ra,Pr)$ observed in recent experiments and numerical simulations of RB convection in planar or cylindrical geometry (see for recent reviews Ahlers et al., 2009; Chillà & Schumacher, 2012).
Benefiting from the interplay between experiments and direct numerical simulations (DNS), turbulent RB convection in planar and cylindrical cells has received a lot of interest in the past two decades. However, the actual geometry of several fundamental astrophysical and geophysical flows is essentially three-dimensional within concentric spherical upper and lower bounding surfaces under the influence of a radial buoyancy force that strongly depends on radius. The direct applicability of the results derived in the planar geometry to spherical shell convection is thus questionable.
As shown in figure 1(c), convection in spherical shells mainly differs from the traditional plane layer configuration because of the introduction of curvature and the absence of side walls. These specific features of thermal convection in spherical shells yield significant dynamical differences with plane layers. For instance, the heat flux conservation through spherical surfaces implies that the temperature gradient is larger at the lower boundary than at the upper one to compensate for the smaller area of the bottom surface. This yields a much larger temperature drop at the inner boundary than at the outer one. In addition, this pronounced asymmetry in the temperature profile is accompanied by a difference between the thicknesses of the inner and the outer thermal boundary layers. Following Malkus’s marginal stability arguments, Jarvis (1993) and Vangelov & Jarvis (1994) hypothesised that the thermal boundary layers in curvilinear geometries adjust their thickness to maintain the same critical boundary layer Rayleigh number at both boundaries. This criterion is however in poor agreement with the results from numerical models (e.g. Deschamps et al., 2010). The exact dependence of the boundary layer asymmetry on the radius ratio and the gravity distribution thus remains an open question in thermal convection in spherical shells (Bercovici et al., 1989; Jarvis et al., 1995; Sotin & Labrosse, 1999; Shahnas et al., 2008; O’Farrell et al., 2013). This open issue sheds some light on the possible dynamical influence of asymmetries between the hot and cold surfaces that originate due to both the boundary curvature and the radial dependence of buoyancy in spherical shells.
Ground-based laboratory experiments involving spherical geometry and a radial buoyancy forcing are limited by the fact that gravity is vertically downwards instead of radially inwards (Scanlan et al., 1970; Feldman & Colonius, 2013). A possible way to circumvent this limitation is to conduct experiments under microgravity to suppress the vertically downward buoyancy force. Such an experiment was realised by Hart et al. (1986) who designed a hemispherical shell that flew on board of the space shuttle Challenger in May 1985. The radial buoyancy force was modelled by imposing an electric field across the shell. The temperature dependence of the fluid's dielectric properties then produced an effective radial gravity that decreases with the fifth power of the radius (i.e. $g \propto 1/r^5$). More recently, a similar experiment named "GeoFlow" was run on the International Space Station, where much longer flight times are possible (Futterer et al., 2010, 2013). This later experiment was designed to mimic the physical conditions in the Earth mantle. It was therefore mainly dedicated to the observation of plume-like structures in a high Prandtl number regime at moderate Rayleigh numbers. Unfortunately, this limitation to relatively small Rayleigh numbers makes the GeoFlow experiment quite restricted regarding asymptotic scaling behaviours in spherical shells.
To compensate the lack of laboratory experiments, three dimensional numerical models of convection in spherical shells have been developed since the 1980s (e.g. Zebib et al., 1980; Bercovici et al., 1989, 1992; Jarvis et al., 1995; Tilgner, 1996; Tilgner & Busse, 1997; King et al., 2010; Choblet, 2012). The vast majority of the numerical models of non-rotating convection in spherical shells has been developed with Earth's mantle in mind. These models therefore assume an infinite Prandtl number and most of them further include a strong dependence of viscosity on temperature to mimic the complex rheology of the mantle. Several recent studies of isoviscous convection with infinite Prandtl number in spherical shells have nevertheless been dedicated to the analysis of the scaling properties of the Nusselt number. For instance, Deschamps et al. (2010) measured convective heat transfer for various radius ratios and reported effective scaling exponents close to 0.3, while Wolstencroft et al. (2009) computed numerical models with Earth's mantle geometry ($\eta \simeq 0.55$) up to high Rayleigh numbers and found a comparable exponent. These studies also checked the possible influence of internal heating and reported quite similar scalings.
Most of the numerical models of convection in spherical shells have thus focused on the very specific dynamical regime of the infinite Prandtl number. The most recent attempt to derive the scaling properties of $Nu$ and $Re$ in non-rotating spherical shells with finite Prandtl numbers is the study of Tilgner (1996). He studied convection in self-gravitating spherical shells (i.e. $g \propto r$) for Prandtl numbers of order unity and Rayleigh numbers spanning roughly the first two decades above onset. This study was thus restricted to low Rayleigh numbers, relatively close to the onset of convection, which prevents the derivation of asymptotic scalings for $Nu$ and $Re$ in spherical shells.
The objectives of the present work are twofold: (i) to study the scaling properties of $Nu$ and $Re$ in spherical shells with finite Prandtl number; (ii) to better characterise the inherent asymmetric boundary layers in thermal convection in spherical shells. We therefore conduct two systematic parameter studies of turbulent RB convection in spherical shells with $Pr=1$ by means of three dimensional DNS. In the first set of models, we vary both the radius ratio (from $\eta=0.2$ to $\eta=0.95$) and the radial gravity profile (considering $g \in \{r/r_o,\, 1,\, (r_o/r)^2,\, (r_o/r)^5\}$) in a moderate parameter regime to study the influence of these properties on the boundary layer asymmetry. We then consider a second set of models with $\eta=0.6$, $g=(r_o/r)^2$ and $Ra$ up to $10^9$. These DNS are used to check the applicability of the GL theory to thermal convection in spherical shells. We therefore numerically test the different basic prerequisites of the GL theory: we first analyse the nature of the boundary layers before deriving the individual scaling properties for the different contributions to the viscous and thermal dissipation rates.
The paper is organised as follows. In § 2, we present the governing equations and the numerical models. We then focus on the asymmetry of the thermal boundary layers in § 3. In § 4, we analyse the nature of the boundary layers and show that the boundary layer profiles are in agreement with the Prandtl-Blasius theory (Prandtl, 1905; Blasius, 1908). In § 5, we investigate the scaling properties of the viscous and thermal dissipation rates before calculating the and scalings in § 6. We conclude with a summary of our findings in § 7.
## 2 Model formulation
### 2.1 Governing hydrodynamical equations
We consider RB convection of a Boussinesq fluid contained in a spherical shell of outer radius $r_o$ and inner radius $r_i$. The boundaries are impermeable, no slip and at constant temperatures $T_i$ and $T_o$. We adopt a dimensionless formulation using the shell gap $d=r_o-r_i$ as the reference lengthscale and the viscous dissipation time $d^2/\nu$ as the reference timescale. Temperature is given in units of $\Delta T=T_i-T_o$, the imposed temperature contrast over the shell. Velocity and pressure are expressed in units of $\nu/d$ and $\rho\nu^2/d^2$, respectively. Gravity is non-dimensionalised using its reference value at the outer boundary, $g_o$. The dimensionless equations for the velocity $\boldsymbol{u}$, the pressure $p$ and the temperature $T$ are given by
$$\boldsymbol{\nabla}\cdot\boldsymbol{u}=0, \qquad (1)$$

$$\frac{\partial \boldsymbol{u}}{\partial t}+\boldsymbol{u}\cdot\boldsymbol{\nabla}\boldsymbol{u}=-\boldsymbol{\nabla}p+\frac{Ra}{Pr}\,g\,T\,\boldsymbol{e}_r+\Delta\boldsymbol{u}, \qquad (2)$$

$$\frac{\partial T}{\partial t}+\boldsymbol{u}\cdot\boldsymbol{\nabla}T=\frac{1}{Pr}\,\Delta T, \qquad (3)$$
where $\boldsymbol{e}_r$ is the unit vector in the radial direction and $g$ is the gravity. Several gravity profiles have been classically considered to model convection in spherical shells. For instance, self-gravitating spherical shells with a constant density correspond to $g \propto r$ (e.g Tilgner, 1996), while RB convection models with infinite Prandtl number usually assume a constant gravity in the perspective of modelling Earth's mantle (e.g. Bercovici et al., 1989). The assumption of a centrally-condensed mass has also been frequently assumed when modelling rotating convection (e.g. Gilman & Glatzmaier, 1981; Jones et al., 2011) and yields $g \propto 1/r^2$. Finally, the artificial central force field of the microgravity experiments takes effectively the form of $g \propto 1/r^5$ (Hart et al., 1986; Feudel et al., 2011; Futterer et al., 2013). To explore the possible impact of these various radial distributions of buoyancy on RB convection in spherical shells, we consider different models with the four following gravity profiles: $g \in \{r/r_o,\, 1,\, (r_o/r)^2,\, (r_o/r)^5\}$. Particular attention will be paid to the cases with $g=(r_o/r)^2$, which is the only radial function compatible with an exact analysis of the dissipation rates (see below, § 2.3).
The dimensionless set of equations (1-3) is governed by the Rayleigh number $Ra$, the Prandtl number $Pr$ and the radius ratio $\eta$ of the spherical shell defined by

$$Ra=\frac{\alpha g_o \Delta T\, d^3}{\nu\kappa},\qquad Pr=\frac{\nu}{\kappa},\qquad \eta=\frac{r_i}{r_o}, \qquad (4)$$

where $\nu$ and $\kappa$ are the viscous and thermal diffusivities and $\alpha$ is the thermal expansivity.
### 2.2 Diagnostic parameters
To quantify the impact of the different control parameters on the transport of heat and momentum, we analyse several diagnostic properties. We adopt the following notations regarding different averaging procedures. Overbars correspond to a time average
$$\overline{f}=\frac{1}{\tau}\int_{t_0}^{t_0+\tau} f\,\mathrm{d}t,$$

where $\tau$ is the time averaging interval. Spatial averaging over the whole volume of the spherical shell is denoted by triangular brackets $\langle\,\cdot\,\rangle$, while $\langle\,\cdot\,\rangle_s$ corresponds to an average over a spherical surface:

$$\langle f\rangle=\frac{1}{V}\int_V f(r,\theta,\phi)\,\mathrm{d}V;\qquad \langle f\rangle_s=\frac{1}{4\pi}\int_0^{\pi}\!\int_0^{2\pi} f(r,\theta,\phi)\,\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi,$$

where $V$ is the volume of the spherical shell, $r$ is the radius, $\theta$ the colatitude and $\phi$ the longitude.
The convective heat transport is characterised by the Nusselt number $Nu$, the ratio of the total heat flux to the heat carried by conduction. In spherical shells with isothermal boundaries, the conductive temperature profile $T_c$ is the solution of

$$\frac{\mathrm{d}}{\mathrm{d}r}\left(r^2\frac{\mathrm{d}T_c}{\mathrm{d}r}\right)=0,\qquad T_c(r_i)=1,\quad T_c(r_o)=0,$$
which yields
$$T_c(r)=\frac{\eta}{(1-\eta)^2}\,\frac{1}{r}-\frac{\eta}{1-\eta}. \qquad (5)$$
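As a quick consistency check, the short sketch below evaluates (5) at both walls in the dimensionless units used throughout, where the shell gap is $d=1$ so that $r_i=\eta/(1-\eta)$ and $r_o=1/(1-\eta)$:

```python
def T_c(r, eta):
    """Conductive temperature profile of Eq. (5), dimensionless units."""
    return eta / (1.0 - eta)**2 / r - eta / (1.0 - eta)

eta = 0.6
r_i, r_o = eta / (1 - eta), 1.0 / (1 - eta)   # shell gap d = r_o - r_i = 1
print(T_c(r_i, eta), T_c(r_o, eta))           # -> 1.0 and 0.0, the imposed values
```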
For the sake of clarity, we adopt in the following the notation $\vartheta$ for the time-averaged and horizontally-averaged radial temperature profile:

$$\vartheta(r)=\overline{\langle T\rangle_s}.$$
The Nusselt number then reads
$$Nu=\frac{\overline{\langle u_r T\rangle_s}-\dfrac{1}{Pr}\dfrac{\mathrm{d}\vartheta}{\mathrm{d}r}}{-\dfrac{1}{Pr}\dfrac{\mathrm{d}T_c}{\mathrm{d}r}}=-\eta\,\frac{\mathrm{d}\vartheta}{\mathrm{d}r}(r=r_i)=-\frac{1}{\eta}\,\frac{\mathrm{d}\vartheta}{\mathrm{d}r}(r=r_o). \qquad (6)$$
The typical rms flow velocity is given by the Reynolds number
$$Re=\overline{\sqrt{\langle \boldsymbol{u}^2\rangle}}=\overline{\sqrt{\langle u_r^2+u_\theta^2+u_\phi^2\rangle}}, \qquad (7)$$
while the radial profile of the time-averaged and horizontally-averaged horizontal velocity is defined by

$$Re_h(r)=\overline{\sqrt{\langle u_\theta^2+u_\phi^2\rangle_s}}. \qquad (8)$$
### 2.3 Exact dissipation relationships in spherical shells
The mean buoyancy power averaged over the whole volume of a spherical shell is expressed by
$$P=\frac{Ra}{Pr}\,\overline{\langle g\,u_r\,T\rangle}=\frac{4\pi}{V}\,\frac{Ra}{Pr}\int_{r_i}^{r_o} g\,r^2\,\overline{\langle u_r T\rangle_s}\,\mathrm{d}r.$$
Using the Nusselt number definition (6) and the conductive temperature profile (5) then leads to
$$P=\frac{4\pi}{V}\,\frac{Ra}{Pr^2}\left(\int_{r_i}^{r_o} g\,r^2\,\frac{\mathrm{d}\vartheta}{\mathrm{d}r}\,\mathrm{d}r-Nu\,\frac{\eta}{(1-\eta)^2}\int_{r_i}^{r_o} g\,\mathrm{d}r\right).$$
The first term in the parentheses becomes identical to the imposed temperature drop for a gravity $g=(r_o/r)^2$:

$$\int_{r_i}^{r_o} g\,r^2\,\frac{\mathrm{d}\vartheta}{\mathrm{d}r}\,\mathrm{d}r=r_o^2\left[\vartheta(r_o)-\vartheta(r_i)\right]=-r_o^2,$$
and thus yields an analytical relation between $P$ and $Nu$. For any other gravity model, we have to consider the actual spherically-symmetric radial temperature profile $\vartheta(r)$. Christensen & Aubert (2006) solve this problem by approximating $\vartheta$ by the diffusive solution (5) and obtain an approximate relation between $P$ and $Nu$. This motivates our particular focus on the cases with $g=(r_o/r)^2$, which allows us to conduct an exact analysis of the dissipation rates and therefore check the applicability of the GL theory to convection in spherical shells.
Noting that $P=\epsilon_U$ in a statistically stationary state, one finally obtains the exact relation for the viscous dissipation rate $\epsilon_U$:

$$\epsilon_U=\overline{\langle(\boldsymbol{\nabla}\times\boldsymbol{u})^2\rangle}=P=\frac{3}{1+\eta+\eta^2}\,\frac{Ra}{Pr^2}\,(Nu-1). \qquad (9)$$
The thermal dissipation rate can be obtained by multiplying the temperature equation (3) by $T$ and integrating it over the whole volume of the spherical shell. This yields

$$\epsilon_T=\overline{\langle(\boldsymbol{\nabla}T)^2\rangle}=\frac{3\eta}{1+\eta+\eta^2}\,Nu. \qquad (10)$$
These two exact relations (9-10) can be used to validate the spatial resolutions of the numerical models with $g=(r_o/r)^2$. To do so, we introduce $\chi_{\epsilon_U}$ and $\chi_{\epsilon_T}$, the ratios of the two sides of Eqs (9-10):

$$\chi_{\epsilon_U}=\frac{(1+\eta+\eta^2)\,Pr^2}{3\,Ra\,(Nu-1)}\,\epsilon_U,\qquad \chi_{\epsilon_T}=\frac{(1+\eta+\eta^2)}{3\,\eta\,Nu}\,\epsilon_T. \qquad (11)$$
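A minimal sketch of this resolution check is given below; the diagnostic values fed in are hypothetical placeholders for the time-averaged outputs of one simulation, chosen so that both ratios come out close to unity:

```python
def chi_eps_U(eps_U, Ra, Nu, eta, Pr=1.0):
    """First ratio of Eq. (11); ~1 signals a well-resolved run with g=(r_o/r)^2."""
    return (1 + eta + eta**2) * Pr**2 / (3 * Ra * (Nu - 1)) * eps_U

def chi_eps_T(eps_T, Nu, eta):
    """Second ratio of Eq. (11)."""
    return (1 + eta + eta**2) / (3 * eta * Nu) * eps_T

# Hypothetical time-averaged diagnostics of a single model:
print(chi_eps_U(eps_U=3.5e9, Ra=1e8, Nu=24.0, eta=0.6))   # ~0.99
print(chi_eps_T(eps_T=22.0, Nu=24.0, eta=0.6))            # ~1.00
```

An underresolved model would instead return ratios departing from unity by several percent, mirroring the overestimated Nusselt numbers discussed in § 2.4.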
### 2.4 Setting up a parameter study
#### Numerical technique
The numerical simulations have been carried out with the magnetohydrodynamics code MagIC (Wicht, 2002). MagIC has been validated via several benchmark tests for convection and dynamo action (Christensen et al., 2001; Jones et al., 2011). To solve the system of equations (1-3), the solenoidal velocity field is decomposed into a poloidal and a toroidal contribution
$$\boldsymbol{u}=\boldsymbol{\nabla}\times(\boldsymbol{\nabla}\times W\,\boldsymbol{e}_r)+\boldsymbol{\nabla}\times Z\,\boldsymbol{e}_r,$$
where $W$ and $Z$ are the poloidal and toroidal potentials. $W$, $Z$, $p$ and $T$ are then expanded in spherical harmonic functions up to degree $\ell_{max}$ in the angular variables $\theta$ and $\phi$ and in Chebyshev polynomials up to degree $N_r$ in the radial direction. The combined equations governing $W$ and $p$ are obtained by taking the radial component and the horizontal part of the divergence of (2). The equation for $Z$ is obtained by taking the radial component of the curl of (2). The equations are time-stepped by advancing the nonlinear terms using an explicit second-order Adams-Bashforth scheme, while the linear terms are time-advanced using an implicit Crank-Nicolson algorithm. At each time step, all the nonlinear products are calculated in the physical space ($r$, $\theta$, $\phi$) and transformed back into the spectral space ($r$, $\ell$, $m$). For more detailed descriptions of the numerical method and the associated spectral transforms, the reader is referred to (Gilman & Glatzmaier, 1981; Tilgner & Busse, 1997; Christensen & Wicht, 2007).
#### Parameter choices
One of the main focuses of this study is to investigate the global scaling properties of RB convection in spherical shell geometries. This is achieved via measurements of the Nusselt and Reynolds numbers. In particular, we aim to test the applicability of the GL theory to spherical shells. As demonstrated before, only the particular choice of a gravity profile of the form $g=(r_o/r)^2$ allows the exactness of the relation (9). Our main set of simulations is thus built assuming $g=(r_o/r)^2$. The radius ratio is kept to $\eta=0.6$ and the Prandtl number to $Pr=1$ to allow future comparisons with the rotating convection models by Gastine & Wicht (2012) and Gastine et al. (2013) who adopted the same configuration. We consider 35 numerical cases spanning the range $10^3 \lesssim Ra \lesssim 10^9$. Table 1 summarises the main diagnostic quantities for this dataset of numerical simulations and shows the resulting ranges of $Nu$ and $Re$.
Another important issue in convection in spherical shells concerns the determination of the average bulk temperature and the possible boundary layer asymmetry between the inner and the outer boundaries (e.g. Jarvis, 1993; Tilgner, 1996). To better understand the influence of curvature and the radial distribution of buoyancy, we thus compute a second set of numerical models. This additional dataset consists of 113 additional simulations with various radius ratios and gravity profiles, spanning the range $0.2 \le \eta \le 0.95$ with the four gravity profiles introduced in § 2.1. To limit the numerical cost of this second dataset, these cases have been run at moderate Rayleigh numbers. Table 2, given in the Appendix, summarises the main diagnostic quantities for this second dataset of numerical simulations.
#### Resolution checks
Attention must be paid to the numerical resolutions of the DNS of RB convection (e.g. Shishkina et al., 2010). Especially, underresolving the fine structure of the turbulent flow leads to an overestimate of the Nusselt number, which then falsifies the possible scaling analysis (Amati et al., 2005). One of the most reliable ways to validate the truncations employed in our numerical models consists of comparing the obtained viscous and thermal dissipation rates with the average Nusselt number (Stevens et al., 2010; Lakkaraju et al., 2012; King et al., 2012). The ratios $\chi_{\epsilon_U}$ and $\chi_{\epsilon_T}$, defined in (11), are found to be very close to unity for all the cases of Table 1, which supports the adequacy of the employed numerical resolutions. To further highlight the possible impact of inadequate spatial resolutions, two underresolved numerical models for the two highest Rayleigh numbers have also been included in Table 1 (lines in italics). Because of the insufficient number of grid points in the boundary layers, the viscous dissipation rates are significantly higher than expected in the statistically stationary state. This leads to Nusselt numbers overestimated by similar percentages.
Table 1 shows that the typical resolutions increase from coarse grids at the lowest Rayleigh numbers to very fine grids at the highest ones. The two highest Rayleigh numbers have been computed assuming a two-fold azimuthal symmetry to ease the numerical computations. A comparison of test runs with or without the two-fold azimuthal symmetry at lower Rayleigh numbers showed no significant statistical differences. This enforced symmetry is thus not considered to be influential. The total computational time for the two datasets of numerical models represents roughly 5 million Intel Ivy Bridge CPU hours.
## 3 Asymmetric boundary layers in spherical shells
### 3.1 Definitions
Several different approaches are traditionally considered to define the thermal boundary layer thickness $\lambda_T$. They either rely on the horizontally-averaged mean radial temperature profile $\vartheta(r)$ or on the temperature fluctuation $\sigma$ defined as

$$\sigma(r)=\sqrt{\overline{\langle T^2\rangle_s}-\vartheta^2}. \qquad (12)$$
Among the possible estimates based on $\vartheta$, the slope method (e.g. Verzicco & Camussi, 1999; Breuer et al., 2004; Liu & Ecke, 2011) defines $\lambda_T$ as the depth where the linear fit to $\vartheta$ near the boundaries intersects the linear fit to the temperature profile at mid-depth. Alternatively, $\sigma$ exhibits sharp local maxima close to the walls. The radial distance separating those peaks from the corresponding nearest boundary can be used to define the thermal boundary layer thicknesses (e.g. Tilgner, 1996; King et al., 2013). Figure 2(a) shows that both definitions of $\lambda_T$ actually yield nearly indistinguishable boundary layer thicknesses. We therefore adopt the slope method to define the thermal boundary layers.
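The slope method reduces to intersecting two straight lines. A minimal sketch on a discretised profile $\vartheta(r)$ follows; the grid, the synthetic profile and the one-sided differences at the wall are simplifying assumptions of this illustration, not details taken from the paper:

```python
def slope_method(r, theta, boundary="inner"):
    """Thermal boundary layer thickness from the slope method:
    intersect the tangent to theta(r) at the wall with the horizontal
    line through the (nearly isothermal) mid-depth temperature."""
    if boundary == "inner":
        T_wall = theta[0]
        grad = (theta[1] - theta[0]) / (r[1] - r[0])
    else:
        T_wall = theta[-1]
        grad = (theta[-1] - theta[-2]) / (r[-1] - r[-2])
    T_bulk = theta[len(r) // 2]              # bulk fit ~ a horizontal line
    return abs((T_bulk - T_wall) / grad)     # distance to the intersection
```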
There are also several ways to define the viscous boundary layers. Figure 2(b) shows the vertical profile of the root-mean-square horizontal velocity $Re_h$. This profile exhibits strong increases close to the boundaries that are accompanied by well-defined peaks. Following Tilgner (1996) and Kerr & Herring (2000), the first way to define the kinematic boundary layer is thus to measure the distance between the walls and these local maxima. This commonly-used definition gives $\lambda_U^i$ ($\lambda_U^o$) for the inner (outer) spherical boundary. Another possible method to estimate the viscous boundary layer follows a similar strategy as the slope method that we adopted for the thermal boundary layers (Breuer et al., 2004). $\lambda_U^i$ ($\lambda_U^o$) is then defined as the distance from the inner (outer) wall where the linear fit to $Re_h$ near the inner (outer) boundary intersects the horizontal line passing through the maximum horizontal velocity.

Figure 2(b) reveals that these two definitions lead to very distinct viscous boundary layer thicknesses. In particular, the definition based on the position of the local maxima of $Re_h$ yields much thicker boundary layers than the tangent intersection method. The discrepancies of these two definitions are further discussed in § 4.
### 3.2 Asymmetric thermal boundary layers and mean bulk temperature
Figure 2 also reveals a pronounced asymmetry in the mean temperature profiles with a much larger temperature drop at the inner boundary than at the outer boundary. As a consequence, the mean temperature of the spherical shell is much below $1/2$, the average of the two imposed boundary temperatures. Determining how the mean temperature depends on the radius ratio has been an ongoing open question in mantle convection studies with infinite Prandtl number (e.g. Bercovici et al., 1989; Jarvis, 1993; Vangelov & Jarvis, 1994; Jarvis et al., 1995; Sotin & Labrosse, 1999; Shahnas et al., 2008; Deschamps et al., 2010; O'Farrell et al., 2013). To analyse this issue in numerical models with $Pr=1$, we have performed a systematic parameter study varying both the radius ratio of the spherical shell and the gravity profile (see Table 2). Figure 3 shows some selected radial profiles of the mean temperature for various radius ratios (panel a) and gravity profiles (panel b) for cases with similar $Nu$. For small values of $\eta$, the large difference between the inner and the outer surfaces leads to a strong asymmetry in the temperature distribution: nearly 90% of the total temperature drop occurs at the inner boundary when $\eta=0.2$. In thinner spherical shells, the mean temperature gradually approaches a more symmetric distribution to finally reach $T_m=1/2$ when $\eta\rightarrow 1$ (no curvature). Figure 3(b) also reveals that a change in the gravity profile has a direct impact on the mean temperature profile. This shows that both the shell geometry and the radial distribution of buoyancy affect the temperature of the fluid bulk in RB convection in spherical shells.
To analytically access the asymmetries in thickness and temperature drop observed in figure 3, we first assume that the heat is purely transported by conduction in the thin thermal boundary layers. The heat flux conservation through spherical surfaces (6) then yields
$$\frac{\Delta T_o}{\lambda_T^o}=\eta^2\,\frac{\Delta T_i}{\lambda_T^i}, \qquad (13)$$
where the thermal boundary layers are assumed to correspond to a linear conduction profile with a temperature drop $\Delta T_i$ ($\Delta T_o$) over a thickness $\lambda_T^i$ ($\lambda_T^o$). As shown in Figs. 2-3, the fluid bulk is isothermal and forms the majority of the fluid by volume. We can thus further assume that the temperature drops occur only in the thin boundary layers, which leads to
$$\Delta T_o+\Delta T_i=1. \qquad (14)$$
Equations (13) and (14) are nevertheless not sufficient to determine the three unknowns $\Delta T_i$, $\Delta T_o$ and $\lambda_T^o/\lambda_T^i$, and an additional physical assumption is required.
A hypothesis frequently used in mantle convection models with infinite Prandtl number in spherical geometry (Jarvis, 1993; Vangelov & Jarvis, 1994) is to further assume that both thermal boundary layers are marginally stable such that the local boundary layer Rayleigh numbers $Ra_\lambda^i$ and $Ra_\lambda^o$ are equal:

$$Ra_\lambda^i=Ra_\lambda^o\quad\rightarrow\quad \frac{\alpha g_i\,\Delta T_i\,{\lambda_T^i}^3}{\nu\kappa}=\frac{\alpha g_o\,\Delta T_o\,{\lambda_T^o}^3}{\nu\kappa}. \qquad (15)$$
This means that both thermal boundary layers adjust their thickness and temperature drop to maintain the same critical boundary layer Rayleigh number (e.g., Malkus, 1954). The temperature drops at both boundaries and the ratio of the thermal boundary layer thicknesses can then be derived using Eqs. (13-14):

$$\Delta T_i=\frac{1}{1+\eta^{3/2}\chi_g^{1/4}},\qquad \Delta T_o\simeq T_m=\frac{\eta^{3/2}\chi_g^{1/4}}{1+\eta^{3/2}\chi_g^{1/4}},\qquad \frac{\lambda_T^o}{\lambda_T^i}=\frac{\chi_g^{1/4}}{\eta^{1/2}}, \qquad (16)$$
where

$$\chi_g=\frac{g(r_i)}{g(r_o)} \qquad (17)$$
is the ratio of the gravitational acceleration between the inner and the outer boundaries. Figure 4(a) reveals that the marginal stability hypothesis is not fulfilled when different radius ratios and gravity profiles are considered. This is particularly obvious for small radius ratios where $Ra_\lambda^i$ is more than 10 times larger than $Ra_\lambda^o$. This discrepancy tends to vanish when $\eta\rightarrow 1$, when curvature and gravity variations become unimportant. As a consequence, there is a significant mismatch between the predicted mean bulk temperature from (16) and the actual values (figure 4b). Deschamps et al. (2010) also reported a similar deviation from (16) in their spherical shell models with infinite Prandtl number. They suggest that a modified stability criterion might help to improve the agreement with the data. This however cannot account for the additional dependence on the gravity profile visible in figure 4. We finally note that the local boundary layer Rayleigh numbers remain below the critical value for the whole database of numerical simulations explored here, which suggests that the thermal boundary layers are stable in all our simulations.
Alternatively Wu & Libchaber (1991) and Zhang et al. (1997) proposed that the thermal boundary layers adapt their thicknesses such that the mean hot and cold temperature fluctuations at mid-depth are equal. Their experiments with Helium indeed revealed that the statistical distribution of the temperature at mid-depth was symmetrical. They further assumed that the thermal fluctuations in the center can be identified with the boundary layer temperature scales $\theta_i$ and $\theta_o$, which characterise the temperature scale of the thermal boundary layers in a different way than the relative temperature drops $\Delta T_i$ and $\Delta T_o$. This second hypothesis yields

$$\theta_i=\theta_o\quad\rightarrow\quad \frac{\nu\kappa}{\alpha g_i\,{\lambda_T^i}^3}=\frac{\nu\kappa}{\alpha g_o\,{\lambda_T^o}^3}, \qquad (18)$$
and the corresponding temperature drops and boundary layer thicknesses ratio

$$\Delta T_i=\frac{1}{1+\eta^2\chi_g^{1/3}},\qquad \Delta T_o=T_m=\frac{\eta^2\chi_g^{1/3}}{1+\eta^2\chi_g^{1/3}},\qquad \frac{\lambda_T^o}{\lambda_T^i}=\chi_g^{1/3}. \qquad (19)$$
Figure 5(a) shows $\theta_i/\theta_o$ for different radius ratios and gravity profiles, while figure 5(b) shows a comparison between the predicted mean bulk temperature and the actual values. Besides a subset of cases which are in relatively good agreement with the predicted scalings, the identity of the boundary layer temperature scales is in general not fulfilled for the other gravity profiles. The actual mean bulk temperature is thus poorly described by (19). We note that previous findings by Ahlers et al. (2006) already reported that Wu & Libchaber's theory does also not hold when the transport properties depend on temperature (i.e. non-Oberbeck-Boussinesq convection).
### 3.3 Conservation of the average plume density in spherical shells
As demonstrated in the previous section, none of the hypotheses classically employed accurately account for the temperature drops and the boundary layer asymmetry observed in spherical shells. We must therefore find a dynamical quantity that could be possibly identified between the two boundary layers.
Figure 6 shows visualisations of the thermal boundary layers for three selected numerical models with different radius ratios and gravity profiles. The isocontours displayed in panels (a-c) reveal the intricate plume structure. Long and thin sheet-like structures form the main network of plumes. During their migration along the spherical surfaces, these sheet-like plumes can collide and convolute with each other to give rise to mushroom-type plumes (see Zhou & Xia, 2010b; Chillà & Schumacher, 2012). During this morphological evolution, mushroom-type plumes acquire a strong radial vorticity component. These mushroom-type plumes are particularly visible at the connection points of the sheet plumes network at the inner thermal boundary layer (red isosurface in figure 6a-c). Figure 6(d-f) shows the corresponding equatorial and radial cuts of the temperature fluctuation $T'$. These panels further highlight the plume asymmetry between the inner and the outer thermal boundary layers. For instance, the case with the smallest radius ratio (top panels) features an outer boundary layer approximately 4.5 times thicker than the inner one. Accordingly, the mushroom-like plumes that depart from the outer boundary layer are significantly thicker than the ones emitted from the inner boundary. This discrepancy tends to vanish in the thin shell case (bottom panels) in which curvature and gravity variations play a less significant role.
Puthenveettil & Arakeri (2005) and Zhou & Xia (2010b) performed statistical analysis of the geometrical properties of thermal plumes in experimental RB convection. By tracking a large number of plumes, their analysis revealed that both the plume separation and the width of the sheet-like plumes follow a log-normal probability density function (PDF).
To further assess how the average plume properties of the inner and outer thermal boundary layers compare with each other in spherical geometry, we adopt a simpler strategy by only focussing on the statistics of the plume density. The plume density per surface unit at a given radius is expressed by
$$\rho_p\sim\frac{N}{4\pi r^2}, \qquad (20)$$
where $N$ is the number of plumes, approximated here by the ratio of the spherical surface area to the mean inter-plume area $\overline{S}$:

$$N\sim\frac{4\pi r^2}{\overline{S}}. \qquad (21)$$
This inter-plume area can be further related to the average plume separation $\overline{\ell}$ via $\overline{S}\sim\overline{\ell}^{\,2}$.
An accurate evaluation of the inter-plume area for each thermal boundary layer however requires to separate the plumes from the background fluid. Most of the criteria employed to determine the location of the plume boundaries are based on thresholds of certain physical quantities (see Shishkina & Wagner, 2008, for a review of the different plume extraction techniques). This encompasses threshold values on the temperature fluctuations (Zhou & Xia, 2002), on the vertical velocity (Ching et al., 2004) or on the thermal dissipation rate (Shishkina & Wagner, 2005). The choice of the threshold value however remains an open problem. Alternatively, Vipin & Puthenveettil (2013) show that the sign of the horizontal divergence of the velocity might provide a simple and threshold-free criterion to separate the plumes from the background fluid
$$\boldsymbol{\nabla}_H\cdot\boldsymbol{u}=\frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}(\sin\theta\,u_\theta)+\frac{1}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi}=-\frac{1}{r^2}\frac{\partial}{\partial r}(r^2 u_r).$$
Fluid regions with $\boldsymbol{\nabla}_H\cdot\boldsymbol{u}<0$ indeed correspond to local regions of positive vertical acceleration, expected inside the plumes, while the fluid regions with $\boldsymbol{\nabla}_H\cdot\boldsymbol{u}>0$ characterise the inter-plume area.
To analyse the statistics of $\overline{S}$, we thus consider here several criteria based either on a threshold value of the temperature fluctuations or on the sign of the horizontal divergence. This means that a given inter-plume area at the inner (outer) thermal boundary layer is either defined as an enclosed region surrounded by hot (cold) sheet-like plumes carrying a temperature perturbation $T'>t$ ($T'<-t$); or by an enclosed region with $\boldsymbol{\nabla}_H\cdot\boldsymbol{u}\ge 0$. To further estimate the possible impact of the chosen threshold value on $\overline{S}$, we vary $t$ between $\sigma/4$ and $\sigma$. This yields
$$S(r)\equiv r^2\oint_{\mathcal{T}}\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi, \qquad (22)$$
where the physical criterion $\mathcal{T}_i$ ($\mathcal{T}_o$) to extract the plume boundaries at the inner (outer) boundary layer is given by

$$\mathcal{T}_i=\left\{T'(r_\lambda^i,\theta,\phi)\le t,\ t\in\left[\sigma(r_\lambda^i),\ \sigma(r_\lambda^i)/2,\ \sigma(r_\lambda^i)/4\right]\right\}\ \text{or}\ \left\{\boldsymbol{\nabla}_H\cdot\boldsymbol{u}\ge 0\right\}, \qquad (23)$$
$$\mathcal{T}_o=\left\{T'(r_\lambda^o,\theta,\phi)\ge -t,\ t\in\left[\sigma(r_\lambda^o),\ \sigma(r_\lambda^o)/2,\ \sigma(r_\lambda^o)/4\right]\right\}\ \text{or}\ \left\{\boldsymbol{\nabla}_H\cdot\boldsymbol{u}\ge 0\right\},$$

where $r_\lambda^i=r_i+\lambda_T^i$ ($r_\lambda^o=r_o-\lambda_T^o$) for the inner (outer) thermal boundary layer.
Figure 7 shows an example of such a characterisation procedure for the inner thermal boundary layer of one of the numerical models. Panel (b) illustrates a plume extraction process when using the sign of $\boldsymbol{\nabla}_H\cdot\boldsymbol{u}$ to determine the location of the plumes: the black areas correspond to the inter-plume spacing while the white areas correspond to the complementary plume network location. The fainter emerging sheet-like plumes are filtered out and only the remaining "skeleton" of the plume network is selected by this extraction process. This choice is however arbitrary and can influence the evaluation of the number of plumes. The insets displayed in panels (c-e) illustrate the sensitivity of the plume extraction process on the criterion employed to detect the plumes. In particular, using the threshold based on the largest temperature fluctuations can lead to the fragmentation of the detected plume lanes into several isolated smaller regions. As a consequence, several neighbouring inter-plume areas can possibly be artificially connected when using this criterion. In contrast, using the sign of the horizontal divergence to estimate the plumes location yields much broader sheet-like plumes. As visible on panel (e), the plume boundaries frequently correspond to local maxima of the thermal dissipation rate (Shishkina & Wagner, 2008).
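Operationally, the extraction amounts to thresholding one spherical surface and labelling its connected patches. The sketch below assumes the fields are already interpolated onto a regular $(\theta,\phi)$ grid and, for simplicity, ignores the periodic wrap in longitude, which the actual analysis would have to handle:

```python
import numpy as np
from scipy import ndimage

def interplume_areas(div_h, weights):
    """Areas of connected inter-plume patches on one spherical surface.

    div_h   : 2-D array of the horizontal divergence on a (theta, phi) grid
    weights : matching array of surface elements r^2 sin(theta) dtheta dphi
    """
    interplume = div_h >= 0.0                    # threshold-free criterion
    labels, n = ndimage.label(interplume)        # connected-component labelling
    return ndimage.sum(weights, labels, index=np.arange(1, n + 1))
```

The resulting list of areas, accumulated over several snapshots, is what the PDFs of figures 8 and 9 are built from.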
For each criterion given in (23), we then calculate the area of each bounded black surface visible in figure 7(b) to construct the statistical distribution of the inter-plume area for both thermal boundary layers. Figure 8 compares the resulting PDFs obtained by combining several snapshots of one numerical model. Besides the strictest temperature criterion, which yields PDFs that are slightly shifted towards smaller inter-plume spacing areas, the statistical distributions are found to be relatively insensitive to the detection criterion (23). We therefore restrict the following comparison to the criterion based on the sign of the horizontal divergence only.
Figure 9 shows the PDFs for the three numerical models of figure 6. For the two cases with the larger radius ratios (panels b-c), the statistical distributions for both thermal boundary layers nearly overlap. This means that the inter-plume area is similar at both spherical shell surfaces. In contrast, for the case with the smallest radius ratio (panel a), the two PDFs are offset relative to each other. However, the peaks of the distributions remain relatively close, meaning that once again the inner and the outer thermal boundary layers share a similar average inter-plume area. Puthenveettil & Arakeri (2005) and Zhou & Xia (2010b) demonstrated that the thermal plume statistics in turbulent RB convection follow a log-normal distribution (see also Shishkina & Wagner, 2008; Puthenveettil et al., 2011). The large number of plumes in the two thinner-shell cases (figure 6b-c) would allow a characterisation of the nature of the statistical distributions. However, this would be much more difficult in the wide-gap case (figure 6a) in which the plume density is significantly weaker. As a consequence, no further attempt has been made to characterise the exact nature of the PDFs visible in figure 9, although the universality of the log-normal statistics reported by Puthenveettil & Arakeri (2005) and Zhou & Xia (2010b) likely indicates that the same statistical distribution should hold here too.
The inter-plume area statistics therefore reveal that the inner and the outer thermal boundary layers exhibit a similar average plume density, independently of the spherical shell geometry and the gravity profile. Assuming $\rho_p^i=\rho_p^o$ would allow us to close the system of equations (13-14) and thus finally estimate $\Delta T_i$, $\Delta T_o$ and $\lambda_T^o/\lambda_T^i$. This however requires us to determine an analytical expression of the average inter-plume area $\overline{S}$ or equivalently of the mean plume separation $\overline{\ell}$ that depends on the boundary layer thickness and the temperature drop.
Using the boundary layer equations for natural convection (Rotem & Claassen, 1969), Puthenveettil et al. (2011) demonstrated that the thermal boundary layer thickness follows
$$\lambda_T^{i,o}(x)\sim\frac{x}{\left(Ra_x^{i,o}\right)^{1/5}}, \qquad (24)$$

where $x$ is the distance along the horizontal direction and $Ra_x^{i,o}$ is a Rayleigh number based on the lengthscale $x$ and on the boundary layer temperature jumps $\Delta T_{i,o}$. As shown on figure 10, evaluating (24) at $x=\overline{\ell}$ (Puthenveettil & Arakeri, 2005; Puthenveettil et al., 2011) then allows to establish the following relation for the average plume spacing:

$$\frac{\lambda_T}{\overline{\ell}}\sim\frac{1}{Ra_{\overline{\ell}}^{1/5}}, \qquad (25)$$
which yields
$$\overline{\ell}_i\sim\sqrt{\frac{\alpha g_i\,\Delta T_i\,{\lambda_T^i}^5}{\nu\kappa}},\qquad \overline{\ell}_o\sim\sqrt{\frac{\alpha g_o\,\Delta T_o\,{\lambda_T^o}^5}{\nu\kappa}}, \qquad (26)$$
for both thermal boundary layers. We note that an equivalent expression for the average plume spacing can be derived from a simple mechanical description of the equilibrium between production and coalescence of plumes in each boundary layer (see Parmentier & Sotin, 2000; King et al., 2013).
Equation (26) is however expected to be only valid at the scaling level. The vertical lines in figure 9 therefore correspond to the estimated average inter-plume area for both thermal boundary layers using (26) and $\overline{S}\sim\overline{\ell}^{\,2}$. The predicted average inter-plume area is in good agreement with the peaks of the statistical distributions for the three cases discussed here. The expression (26) therefore provides a reasonable estimate of the average plume separation (Puthenveettil & Arakeri, 2005; Puthenveettil et al., 2011; Gunasegarane & Puthenveettil, 2014). The comparable observed plume density at both thermal boundary layers thus yields
$$\rho_p^i=\rho_p^o\quad\rightarrow\quad \frac{\alpha g_i\,\Delta T_i\,{\lambda_T^i}^5}{\nu\kappa}=\frac{\alpha g_o\,\Delta T_o\,{\lambda_T^o}^5}{\nu\kappa}. \qquad (27)$$
Using Eqs. (13-14) then allows us to finally estimate the temperature jumps and the ratio of the thermal boundary layer thicknesses in our dimensionless units:
$$\Delta T_i=\frac{1}{1+\eta^{5/3}\chi_g^{1/6}},\qquad \Delta T_o=T_m=\frac{\eta^{5/3}\chi_g^{1/6}}{1+\eta^{5/3}\chi_g^{1/6}},\qquad \frac{\lambda_T^o}{\lambda_T^i}=\frac{\chi_g^{1/6}}{\eta^{1/3}}. \qquad (28)$$
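The closed-form predictions (28) are straightforward to evaluate. Below is a minimal helper computing them for a given shell geometry and gravity contrast; the example call uses $\eta=0.6$ with $g=(r_o/r)^2$, for which $\chi_g=(r_o/r_i)^2=1/\eta^2$:

```python
def bl_asymmetry(eta, chi_g):
    """Temperature drops and boundary layer ratio predicted by Eq. (28)."""
    x = eta**(5.0 / 3.0) * chi_g**(1.0 / 6.0)
    dT_i = 1.0 / (1.0 + x)                              # inner temperature drop
    T_m = x / (1.0 + x)                                 # bulk temperature = dT_o
    lam_ratio = chi_g**(1.0 / 6.0) / eta**(1.0 / 3.0)   # lambda_T^o / lambda_T^i
    return dT_i, T_m, lam_ratio

eta = 0.6
print(bl_asymmetry(eta, 1.0 / eta**2))   # predictions for g = (r_o/r)^2
```

Swapping in the exponents of (16) or (19) reproduces the competing predictions of the marginal-stability and temperature-scale hypotheses for comparison.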
Figure 11 shows the ratio of the average plume separations $\overline{\ell}_o/\overline{\ell}_i$, the ratio of the boundary layer thicknesses $\lambda_T^o/\lambda_T^i$ and the temperature jumps $\Delta T_i$ and $\Delta T_o$. In contrast to the previous criteria, either coming from the marginal stability of the boundary layer (16, figure 4) or from the identity of the temperature fluctuations at mid-shell (19, figure 5), the ratio of the average plume separation now falls much closer to the unity line. Some deviations are nevertheless still visible for the spherical shells with the strongest gravity contrasts (orange circles). The comparable average plume density between both boundary layers allows us to accurately predict the asymmetry of the thermal boundary layers and the corresponding temperature drops for the vast majority of the numerical cases explored here (solid lines in panels b-d).
As we consider a fluid with $Pr=1$, the viscous boundary layers should show a comparable degree of asymmetry to the thermal boundary layers. (28) thus implies
$$\frac{\lambda_U^o}{\lambda_U^i}=\frac{\lambda_T^o}{\lambda_T^i}=\frac{\chi_g^{1/6}}{\eta^{1/3}}. \qquad (29)$$
Figure 12 shows the ratio of the viscous boundary layer thicknesses for the different setups explored in this study. The observed asymmetry between the two spherical shell surfaces is in a good agreement with (29) (solid black lines).
### 3.4 Thermal boundary layer scalings
Using (28) and the definition of the Nusselt number (6), we can derive the following scaling relations for the thermal boundary layer thicknesses:
$$\lambda_T^i=\frac{\eta}{1+\eta^{5/3}\chi_g^{1/6}}\,\frac{1}{Nu},\qquad \lambda_T^o=\frac{\eta^{2/3}\chi_g^{1/6}}{1+\eta^{5/3}\chi_g^{1/6}}\,\frac{1}{Nu}.$$
https://en.wikibooks.org/wiki/A-level_Physics_(Advancing_Physics)/Electric_Potential_Energy | # A-level Physics (Advancing Physics)/Electric Potential Energy
Just as an object at a distance r from a sphere has gravitational potential energy, a charge at a distance r from another charge has electrical potential energy εelec. This is given by the formula:
${\displaystyle \epsilon _{elec}=V_{elec}q}$,
where Velec is the potential difference between the two charges Q and q. In a uniform field, voltage is given by:
${\displaystyle V_{elec}=E_{elec}d}$,
where d is distance, and Eelec is electric field strength. Combining these two formulae, we get:
${\displaystyle \epsilon _{elec}=qE_{elec}d}$
For the field around a point charge, the situation is different. By the same method, we get:
${\displaystyle \epsilon _{elec}={\frac {-kQq}{r}}}$
If a charge loses electric potential energy, it must gain some other sort of energy, such as kinetic energy. You should also note that force is the negative of the rate of change of energy with respect to distance ($F = -\frac{d\epsilon}{dr}$), and that, therefore:

${\displaystyle \epsilon _{elec}=-\int {F\;dr}}$
## The Electronvolt
The electronvolt (eV) is a unit of energy, defined as the kinetic energy gained by a particle carrying one elementary charge (such as an electron, proton or positron) when it is accelerated through a potential difference of 1 V:
1 eV = 1.6 x 10−19 J
For example: If a proton has an energy of 5MeV then in Joules it will be = 5 x 106 x 1.6 x 10−19 = 8 x 10−13 J.
Using eV is an advantage when high energy particles are involved, as in the case of particle accelerators.
## Summary of Electric Fields
You should now know (if you did the electric fields section in the right order) about four attributes of electric fields: force, field strength, potential energy and potential. These can be summarised by the following table:
| Force ${\displaystyle F_{elec}={\frac {-kQq}{r^{2}}}}$ | → integrate with respect to r → | Potential Energy ${\displaystyle \epsilon _{elec}={\frac {-kQq}{r}}}$ |
|---|---|---|
| ↓ per unit charge ↓ | | ↓ per unit charge ↓ |
| Field Strength ${\displaystyle E_{elec}={\frac {-kQ}{r^{2}}}}$ | → integrate with respect to r → | Potential ${\displaystyle V_{elec}={\frac {-kQ}{r}}}$ |
This table is very similar to that for gravitational fields. The only difference is that field strength and potential are per unit charge, instead of per unit mass. This means that field strength is not the same as acceleration. Remember that integrate means 'find the area under the graph' and differentiate (the reverse process) means 'find the gradient of the graph'.
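The arrows in the table can be checked numerically: sampling the potential on a grid and finding the gradient of the graph recovers the field strength, with the sign convention noted above. A small sketch assuming a hypothetical 1 μC point charge:

```python
import numpy as np

k, Q = 8.99e9, 1e-6                 # Coulomb constant; assumed 1 uC charge

r = np.linspace(0.1, 1.0, 1001)
V = -k * Q / r                      # potential, with the sign convention above

E = -np.gradient(V, r)              # field strength from the potential graph
print(E[500], -k * Q / r[500]**2)   # numerical and analytic values agree
```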
## Questions
k = 8.99 x 109 Nm2C−2
1. Convert 5 x 10−13 J to MeV.
2. Convert 0.9 GeV to J.
3. What is the potential energy of an electron at the negatively charged plate of a uniform electric field when the potential difference between the two plates is 100V?
4. What is the potential energy of a 2C charge 2 cm from a 0.5C charge?
5. What is represented by the gradient of a graph of electric potential energy against distance from some charge?
http://perimeterinstitute.ca/fr/video-library/collection/cosmology-gravitation?page=1 | The content of this page is not available in French. We apologise for the inconvenience.
# Cosmology & Gravitation
This series consists of talks in the areas of Cosmology, Gravitation and Particle Physics.
## Seminar Series Events/Videos
Currently there are no upcoming talks in this series.
## Aspects of field theory with higher derivatives
Tuesday, Oct 24, 2017
I will discuss related aspects of field theories with higher-derivative Lagrangians but second-order equations of motion, with a focus on the Lovelock and Horndeski classes that have found use in modifications to general relativity. In the first half I will investigate when restricting to such terms is and is not well-justified from an effective field theory perspective. In the second half I will discuss how non-perturbative effects, like domain walls and quantum tunneling, are modified in the presence of these kinetic terms
## Primordial gravity waves from tidal imprints in large-scale structure
Tuesday, Oct 17, 2017
I will describe a tidal effect whereby the decay of primordial gravity waves leaves a permanent shear in the large-scale structure of the Universe. Future large-scale structure surveys - especially radio surveys of high-redshift hydrogen gas - could measure this shear and its spatial dependence to form a map of the initial gravity-wave field. The three dimensional nature of this probe makes it sensitive to the helicity of the gravity waves, allowing for searches for early-Universe gravitational parity violation.
## Isotropising an anisotropic cyclic cosmology
Tuesday, Oct 10, 2017
Standard models of cosmology use inflation as a mechanism to resolve the isotropy and homogeneity problem of the universe as well as the flatness problem. However, due to various well known problems with the inflationary paradigm, there has been an ongoing search for alternatives. Perhaps the most famous among these is the cyclic universe scenario or scenarios which incorporate bounces. As these scenarios have a contracting phase in the evolution of the universe, it is reasonable to ask whether the problems of homogeneity and isotropy can still be resolved in these scenarios.
## How gravity modifies thermodynamics: Maximal temperature and Poincare recurrence theorem
Tuesday, Sep 26, 2017
Thermodynamics is a closed field of research. The laws of thermodynamics, established in the nineteenth century, are still standing unchallenged. However, they do not include gravity. Inclusion of gravity into the thermodynamical system can significantly modify the expected behavior of the system. We will demonstrate that gravity dynamically induces a maximal temperature that can be reached in a gas of particles. We will also show how gravity can significantly change the Poincare recurrence theorem, and sometimes even prevent the recurrence from happening.
## Dynamical chaos as a tool for characterizing multi-planet systems
Tuesday, Sep 19, 2017
Many of the multi-planet systems discovered around other stars are maximally packed. This implies that simulations with masses or orbital parameters too far from the actual values will destabilize on short timescales; thus, long-term dynamics allows one to constrain the orbital architectures of many closely packed multi-planet systems. I will present a recent such application in the TRAPPIST-1 system, with 7 Earth-sized planets in the longest resonant chain discovered to date. In this case the complicated resonant phase space structure allows for strong constraints.
## How Black Holes Dine above the Eddington "Limit" without Overeating or Excessive Belching
Tuesday, Sep 12, 2017
The study of super-Eddington accretion is essential to our understanding of the growth of super-massive black holes in the early universe, the accretion of tidally disrupted stars, and the nature of ultraluminous X-ray sources. Unfortunately, this mode of accretion is particularly difficult to model because of the multidimensionality of the flow, the importance magnetohydrodynamic turbulence, and the dominant dynamical role played by radiation forces. However, recent increases in computing power and advances in algorithms are facilitating major improvements in our ability to model radiat
## HIRAX: The Hydrogen Intensity and Real-time Analysis eXperiment
Monday, Sep 11, 2017
The 21cm transition of atomic hydrogen is rapidly becoming one of our most powerful tools for probing the evolution of the universe. The Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) is a planned 1,024-element array to be built in South Africa that will study the (possible) evolution of dark energy from z=0.8 to 2.5.
## Uber-Gravity and H0 tension
Tuesday, Aug 29, 2017
Recently, the idea of taking ensemble average over gravity models has been introduced. Based on this idea, we study the ensemble average over (effectively) all the gravity models dubbing the name uber-gravity which is a fixed point in the model space. The uber-gravity has interesting universal properties, independent from the choice of basis: i) it mimics Einstein-Hilbert gravity for high-curvature regime, ii) it predicts stronger gravitational force for an intermediate-curvature regime, iii) surprisingly, for low-curvature regime, i.e.
## Universality classes of inflation as phases of condensed matter: slow-roll, solids, gaugids etc.
Tuesday, Aug 22, 2017
## Baryon Asymmetry and Gravitational Waves from Pseudoscalar Inflation
Thursday, Aug 10, 2017
In models of inflation driven by an axion-like pseudoscalar field, the inflaton, a, may couple to the standard model hypercharge gauge field via a Chern-Simons-type interaction, L ⊃ a F F̃. This coupling results in the explosive production of hypermagnetic fields during inflation, which has two interesting consequences: (1) The primordial hypermagnetic field is maximally helical. It is therefore capable of sourcing the generation of nonzero baryon number around the electroweak phase transition (via the chiral anomaly in the standard model).
https://admin.clutchprep.com/chemistry/practice-problems/98746/write-the-chemical-formula-for-the-anion-present-in-the-aqueous-solution-of-agno | # Problem: Write the chemical formula for the anion present in the aqueous solution of AgNO3.
###### FREE Expert Solution
AgNO3 dissociates in water into a cation and an anion.
silver → most common charge = +1 → Ag+
The remaining part of the formula is the anion: nitrate, NO3−.
https://www.physicsforums.com/threads/density-of-states-confusion.271317/ | # Density of States Confusion
1. Nov 12, 2008
### Vanush
"the density of states (DOS) of a system describes the number of states at each energy level that are available to be occupied. "
But I thought there can't be more than 1 electron in a state? How does DoS have any meaning when dealing with electrons?
2. Nov 12, 2008
### nicksauce
My understanding is as follows:
The density of states, g(E), tells you the number of possible states at each energy. Since these states are degenerate, you can have several electrons at the same energy, each occupying a different state.
The expected number of electrons in a given energy state, f(E), is calculated using Fermi-Dirac statistics.
http://en.wikipedia.org/wiki/Fermi-Dirac_statistics
This can be no more than 1 because of the Pauli exclusion principle.
So then the total number of electrons at a given energy would be f(E)g(E).
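To make f(E)g(E) concrete, here is a minimal NumPy sketch, assuming a textbook 3D free-electron g(E) ∝ √E; the chemical potential, temperature, and units are arbitrary choices, not anything from the thread:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

def fermi_dirac(E, mu, T):
    """Expected occupation f(E) of a single-particle state (0 <= f <= 1)."""
    return 1.0 / (np.exp((E - mu) / (k_B * T)) + 1.0)

def g_free_electron(E):
    """3D free-electron density of states, g(E) proportional to sqrt(E)."""
    return np.sqrt(np.clip(E, 0.0, None))

E = np.linspace(0.0, 2.0, 500)                        # energies in eV
n_of_E = fermi_dirac(E, mu=1.0, T=300.0) * g_free_electron(E)
# n_of_E: expected number of electrons per unit energy, f(E) * g(E)
```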
3. Nov 12, 2008
### weejee
More precisely, g(E) = (# of states between E and E+dE) / (dE)
In a finite system, it is always a series of delta functions.
As the system size gets bigger so that we can assume that it is in the thermodynamic limit, we smooth out the delta functions to get a continuous version of g(E).
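As a sketch of that smoothing (the Gaussian broadening and its width are an arbitrary numerical choice, not anything from the post):

```python
import numpy as np

def smoothed_dos(levels, E, sigma=0.01):
    """Broaden a finite system's delta-function spectrum into a continuous g(E)."""
    E = np.asarray(E)[:, None]            # evaluation energies as a column
    levels = np.asarray(levels)[None, :]  # discrete energy levels as a row
    gauss = np.exp(-((E - levels) ** 2) / (2.0 * sigma ** 2))
    return gauss.sum(axis=1) / (sigma * np.sqrt(2.0 * np.pi))
```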
4. Nov 13, 2008
### Vanush
Why do electron states split into bands in solids if states exist for electrons that have the same energy level?
https://terrytao.wordpress.com/2016/02/27/finite-time-blowup-for-a-supercritical-defocusing-nonlinear-wave-system/ | I’ve just uploaded to the arXiv my paper Finite time blowup for a supercritical defocusing nonlinear wave system, submitted to Analysis and PDE. This paper was inspired by a question asked of me by Sergiu Klainerman recently, regarding whether there were any analogues of my blowup example for Navier-Stokes type equations in the setting of nonlinear wave equations.
Recall that the defocusing nonlinear wave (NLW) equation reads
$\displaystyle \Box u = |u|^{p-1} u \ \ \ \ \ (1)$
where ${u: {\bf R}^{1+d} \rightarrow {\bf R}}$ is the unknown scalar field, ${\Box = -\partial_t^2 + \Delta}$ is the d'Alembertian operator, and ${p>1}$ is an exponent. We can generalise this equation to the defocusing nonlinear wave system
$\displaystyle \Box u = (\nabla F)(u) \ \ \ \ \ (2)$
where ${u: {\bf R}^{1+d} \rightarrow {\bf R}^m}$ is now a system of scalar fields, and ${F: {\bf R}^m \rightarrow {\bf R}}$ is a potential which is homogeneous of degree ${p+1}$ and strictly positive away from the origin; the scalar equation corresponds to the case where ${m=1}$ and ${F(u) = \frac{1}{p+1} |u|^{p+1}}$. We will be interested in smooth solutions ${u}$ to (2). It is only natural to restrict to the smooth category when the potential ${F}$ is also smooth; unfortunately, if one requires ${F}$ to be homogeneous of order ${p+1}$ all the way down to the origin, then ${F}$ cannot be smooth unless it is identically zero or ${p+1}$ is an odd integer. This is too restrictive for us, so we will only require that ${F}$ be homogeneous away from the origin (e.g. outside the unit ball). In any event it is the behaviour of ${F(u)}$ for large ${u}$ which will be decisive in understanding regularity or blowup for the equation (2).
Formally, solutions to the equation (2) enjoy a conserved energy
$\displaystyle E[u] = \int_{{\bf R}^d} \frac{1}{2} \|\partial_t u \|^2 + \frac{1}{2} \| \nabla_x u \|^2 + F(u)\ dx.$
Using this conserved energy, it is possible to establish global regularity for the Cauchy problem (2) in the energy-subcritical case when ${d \leq 2}$, or when ${d \geq 3}$ and ${p < 1+\frac{4}{d-2}}$. This means that for any smooth initial position ${u_0: {\bf R}^d \rightarrow {\bf R}^m}$ and initial velocity ${u_1: {\bf R}^d \rightarrow {\bf R}^m}$, there exists a (unique) smooth global solution ${u: {\bf R}^{1+d} \rightarrow {\bf R}^m}$ to the equation (2) with ${u(0,x) = u_0(x)}$ and ${\partial_t u(0,x) = u_1(x)}$. These classical global regularity results (essentially due to Jörgens) were famously extended to the energy-critical case when ${d \geq 3}$ and ${p = 1 + \frac{4}{d-2}}$ by Grillakis, Struwe, and Shatah-Struwe (though for various technical reasons, the global regularity component of these results was limited to the range ${3 \leq d \leq 7}$). A key tool used in the energy-critical theory is the Morawetz estimate
$\displaystyle \int_0^T \int_{{\bf R}^d} \frac{|u(t,x)|^{p+1}}{|x|}\ dx dt \lesssim E[u]$
which can be proven by manipulating the properties of the stress-energy tensor
$\displaystyle T_{\alpha \beta} = \langle \partial_\alpha u, \partial_\beta u \rangle - \frac{1}{2} \eta_{\alpha \beta} (\langle \partial^\gamma u, \partial_\gamma u \rangle + F(u))$
(with the usual summation conventions involving the Minkowski metric ${\eta_{\alpha \beta} dx^\alpha dx^\beta = -dt^2 + |dx|^2}$) and in particular exploiting the divergence-free nature of this tensor: ${\partial^\beta T_{\alpha \beta} = 0}$. See for instance the text of Shatah-Struwe, or my own PDE book, for more details. The energy-critical regularity results have also been extended to slightly supercritical settings in which the potential grows by a logarithmic factor or so faster than the critical rate; see the results of myself and of Roy.
This leaves the question of global regularity for the energy supercritical case when ${d \geq 3}$ and ${p > 1+\frac{4}{d-2}}$. On the one hand, global smooth solutions are known for small data (if ${F}$ vanishes to sufficiently high order at the origin, see e.g. the work of Lindblad and Sogge), and global weak solutions for large data were constructed long ago by Segal. On the other hand, the solution map, if it exists, is known to be extremely unstable, particularly at high frequencies; see for instance this paper of Lebeau, this paper of Christ, Colliander, and myself, this paper of Brenner and Kumlin, or this paper of Ibrahim, Majdoub, and Masmoudi for various formulations of this instability. In the case of the focusing NLW ${-\partial_{tt} u + \Delta u = - |u|^{p-1} u}$, one can easily create solutions that blow up in finite time by ODE constructions, for instance one can take ${u(t,x) = c (1-t)^{-\frac{2}{p-1}}}$ with ${c = (\frac{2(p+1)}{(p-1)^2})^{\frac{1}{p-1}}}$, which blows up as ${t}$ approaches ${1}$. However the situation in the defocusing supercritical case is less clear. The strongest positive results are of Kenig-Merle and Killip-Visan, which show (under some additional technical hypotheses) that global regularity for such equations holds under the additional assumption that the critical Sobolev norm of the solution stays bounded. Roughly speaking, this shows that “Type II blowup” cannot occur for (2).
Our main result is that finite time blowup can in fact occur, at least for three-dimensional systems where the number ${m}$ of degrees of freedom is sufficiently large:
Theorem 1 Let ${d=3}$, ${p > 5}$, and ${m \geq 76}$. Then there exists a smooth potential ${F: {\bf R}^m \rightarrow {\bf R}}$, positive and homogeneous of degree ${p+1}$ away from the origin, and a solution to (2) with smooth initial data that develops a singularity in finite time.
The rather large lower bound of ${76}$ on ${m}$ here is primarily due to our use of the Nash embedding theorem (which is the first time I have actually had to use this theorem in an application!). It can certainly be lowered, but unfortunately our methods do not seem to be able to bring ${m}$ all the way down to ${1}$, so we do not directly exhibit finite time blowup for the scalar supercritical defocusing NLW. Nevertheless, this result presents a barrier to any attempt to prove global regularity for that equation, in that it must somehow use a property of the scalar equation which is not available for systems. It is likely that the methods can be adapted to higher dimensions than three, but we take advantage of some special structure to the equations in three dimensions (related to the strong Huygens principle) which does not seem to be available in higher dimensions.
The blowup will in fact be of discrete self-similar type in a backwards light cone, thus ${u}$ will obey a relation of the form
$\displaystyle u(e^S t, e^S x) = e^{-\frac{2}{p-1} S} u(t,x)$
for some fixed ${S>0}$ (the exponent ${-\frac{2}{p-1}}$ is mandated by dimensional analysis considerations). It would be natural to consider continuously self-similar solutions (in which the above relation holds for all ${S}$, not just one ${S}$). And rough self-similar solutions have been constructed in the literature by perturbative methods (see this paper of Planchon, or this paper of Ribaud and Youssfi). However, it turns out that continuously self-similar solutions to a defocusing equation have to obey an additional monotonicity formula which causes them to not exist in three spatial dimensions; this argument is given in my paper. So we have to work just with discretely self-similar solutions.
Because of the discrete self-similarity, the finite time blowup solution will be “locally Type II” in the sense that scale-invariant norms inside the backwards light cone stay bounded as one approaches the singularity. But it will not be “globally Type II” in that scale-invariant norms stay bounded outside the light cone as well; indeed energy will leak from the light cone at every scale. This is consistent with the results of Kenig-Merle and Killip-Visan which preclude “globally Type II” blowup solutions to these equations in many cases.
We now sketch the arguments used to prove this theorem. Usually when studying the NLW, we think of the potential ${F}$ (and the initial data ${u_0,u_1}$) as being given in advance, and then try to solve for ${u}$ as an unknown field. However, in this problem we have the freedom to select ${F}$. So we can look at this problem from a “backwards” direction: we first choose the field ${u}$, and then fit the potential ${F}$ (and the initial data) to match that field.
Now, one cannot write down a completely arbitrary field ${u}$ and hope to find a potential ${F}$ obeying (2), as there are some constraints coming from the homogeneity of ${F}$. Namely, from the Euler identity
$\displaystyle \langle u, (\nabla F)(u) \rangle = (p+1) F(u)$
we see that ${F(u)}$ can be recovered from (2) by the formula
$\displaystyle F(u) = \frac{1}{p+1} \langle u, \Box u \rangle \ \ \ \ \ (3)$
so the defocusing nature of ${F}$ imposes a constraint
$\displaystyle \langle u, \Box u \rangle > 0.$
Furthermore, taking a derivative of (3) we obtain another constraining equation
$\displaystyle \langle \partial_\alpha u, \Box u \rangle = \frac{1}{p+1} \partial_\alpha \langle u, \Box u \rangle$
that does not explicitly involve the potential ${F}$. Actually, one can write this equation in the more familiar form
$\displaystyle \partial^\beta T_{\alpha \beta} = 0$
where ${T_{\alpha \beta}}$ is the stress-energy tensor
$\displaystyle T_{\alpha \beta} = \langle \partial_\alpha u, \partial_\beta u \rangle - \frac{1}{2} \eta_{\alpha \beta} (\langle \partial^\gamma u, \partial_\gamma u \rangle + \frac{1}{p+1} \langle u, \Box u \rangle),$
now written in a manner that does not explicitly involve ${F}$.
With this reformulation, this suggests a strategy for locating ${u}$: first one selects a stress-energy tensor ${T_{\alpha \beta}}$ that is divergence-free and obeys suitable positive definiteness and self-similarity properties, and then locates a self-similar map ${u}$ from the backwards light cone to ${{\bf R}^m}$ that has that stress-energy tensor (one also needs the map ${u}$ (or more precisely the direction component ${u/\|u\|}$ of that map) injective up to the discrete self-similarity, in order to define ${F(u)}$ consistently). If the stress-energy tensor was replaced by the simpler “energy tensor”
$\displaystyle E_{\alpha \beta} = \langle \partial_\alpha u, \partial_\beta u \rangle$
then the question of constructing an (injective) map ${u}$ with the specified energy tensor is precisely the embedding problem that was famously solved by Nash (viewing ${E_{\alpha \beta}}$ as a Riemannian metric on the domain of ${u}$, which in this case is a backwards light cone quotiented by a discrete self-similarity to make it compact). It turns out that one can adapt the Nash embedding theorem to also work with the stress-energy tensor as well (as long as one also specifies the mass density ${M = \|u\|^2}$, and as long as a certain positive definiteness property, related to the positive semi-definiteness of Gram matrices, is obeyed). Here is where the dimension ${76}$ shows up:
Proposition 2 Let ${M}$ be a smooth compact Riemannian ${4}$-manifold, and let ${m \geq 76}$. Then ${M}$ smoothly isometrically embeds into the sphere ${S^{m-1}}$.
Proof: The Nash embedding theorem (in the form given in this ICM lecture of Gunther) shows that ${M}$ can be smoothly isometrically embedded into ${{\bf R}^{19}}$, and thus in ${[-R,R]^{19}}$ for some large ${R}$. Using an irrational slope, the interval ${[-R,R]}$ can be smoothly isometrically embedded into the ${2}$-torus ${\frac{1}{\sqrt{38}} (S^1 \times S^1)}$, and so ${[-R,R]^{19}}$ and hence ${M}$ can be smoothly embedded in ${\frac{1}{\sqrt{38}} (S^1)^{38}}$. But from Pythagoras’ theorem, ${\frac{1}{\sqrt{38}} (S^1)^{38}}$ can be identified with a subset of ${S^{m-1}}$ for any ${m \geq 76}$, and the claim follows. $\Box$
One can presumably improve upon the bound ${76}$ by being more efficient with the embeddings (e.g. by modifying the proof of Nash embedding to embed directly into a round sphere), but I did not try to optimise the bound here.
The remaining task is to construct the stress-energy tensor ${T_{\alpha \beta}}$. One can reduce to tensors that are invariant with respect to rotations around the spatial origin, but this still leaves a fair amount of degrees of freedom (it turns out that there are four fields that need to be specified, which are denoted ${M, E_{tt}, E_{tr}, E_{rr}}$ in my paper). However a small miracle occurs in three spatial dimensions, in that the divergence-free condition involves only two of the four degrees of freedom (or three out of four, depending on whether one considers a function that is even or odd in ${r}$ to only be half a degree of freedom). This is easiest to illustrate with the scalar NLW (1). Assuming spherical symmetry, this equation becomes
$\displaystyle - \partial_{tt} u + \partial_{rr} u + \frac{2}{r} \partial_r u = |u|^{p-1} u.$
Making the substitution ${\phi := ru}$, we can eliminate the lower order term ${\frac{2}{r} \partial_r}$ completely to obtain
$\displaystyle - \partial_{tt} \phi + \partial_{rr} \phi= \frac{1}{r^{p-1}} |\phi|^{p-1} \phi.$
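To check this reduction, write ${u = \phi/r}$ and compute

$\displaystyle \partial_{rr} u + \frac{2}{r} \partial_r u = \frac{1}{r} \partial_{rr} \phi \hbox{ and } \partial_{tt} u = \frac{1}{r} \partial_{tt} \phi,$

so that multiplying the spherically symmetric equation by ${r}$ turns the nonlinearity ${|u|^{p-1} u}$ into ${|\phi|^{p-1} \phi / r^{p-1}}$ and gives the displayed equation for ${\phi}$.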
(This can be compared with the situation in higher dimensions, in which an undesirable zeroth order term ${\frac{(d-1)(d-3)}{r^2} \phi}$ shows up.) In particular, if one introduces the null energy density
$\displaystyle e_+ := \frac{1}{2} |\partial_t \phi + \partial_r \phi|^2$
and the potential energy density
$\displaystyle V := \frac{|\phi|^{p+1}}{(p+1) r^{p-1}}$
then one can verify the equation
$\displaystyle (\partial_t - \partial_r) e_+ + (\partial_t + \partial_r) V = - \frac{p-1}{r} V$
which can be viewed as a transport equation for ${e_+}$ with forcing term depending on ${V}$ (or vice versa), and is thus quite easy to solve explicitly by choosing one of these fields and then solving for the other. As it turns out, once one is in the supercritical regime ${p>5}$, one can solve this equation while giving ${e_+}$ and ${V}$ the right homogeneity (they have to be homogeneous of order ${-\frac{4}{p-1}}$, which is greater than ${-1}$ in the supercritical case) and positivity properties, and from this it is possible to prescribe all the other fields one needs to satisfy the conclusions of the main theorem. (It turns out that ${e_+}$ and ${V}$ will be concentrated near the boundary of the light cone, so this is how the solution ${u}$ will concentrate also.)
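As a consistency check, the transport identity asserted above follows by direct computation from the equation for ${\phi}$: one has

$\displaystyle (\partial_t - \partial_r) e_+ = (\partial_t \phi + \partial_r \phi)(\partial_{tt} - \partial_{rr}) \phi = -\frac{|\phi|^{p-1} \phi}{r^{p-1}} (\partial_t \phi + \partial_r \phi)$

and

$\displaystyle (\partial_t + \partial_r) V = \frac{|\phi|^{p-1} \phi}{r^{p-1}} (\partial_t \phi + \partial_r \phi) - \frac{p-1}{r} V,$

and summing the two identities gives the transport equation.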
https://ai.stackexchange.com/tags/function-approximation/hot | # Tag Info
17
There are multiple papers on the topic because there have been multiple attempts to prove that neural networks are universal (i.e. they can approximate any continuous function) from slightly different perspectives and using slightly different assumptions (e.g. assuming that certain activation functions are used). Note that these proofs tell you that neural ...
10
Here's an intuitive description answer: Function approximation can be done with any parameterizable function. Consider the problem of a $Q(s,a)$ space where $s$ is the positive reals, $a$ is $0$ or $1$, and the true Q-function is $Q(s, 0) = s^2$, and $Q(s, 1)= 2s^2$, for all states. If your function approximator is $Q(s, a) = m*s + n*a + b$, there exists no ...
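A quick numerical way to see this (the sampled states and the least-squares setup below are illustrative, not from the answer):

```python
import numpy as np

s = np.linspace(0.1, 5.0, 50)
S = np.concatenate([s, s])
A = np.concatenate([np.zeros_like(s), np.ones_like(s)])
Q_true = np.concatenate([s ** 2, 2 * s ** 2])      # Q(s,0)=s^2, Q(s,1)=2s^2

X = np.column_stack([S, A, np.ones_like(S)])       # model Q(s,a) = m*s + n*a + b
coef, residual, *_ = np.linalg.lstsq(X, Q_true, rcond=None)
print(coef, residual)  # residual stays large: no linear (m, n, b) fits both curves
```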
9
Any supervised learning (SL) problem can be cast as an equivalent reinforcement learning (RL) one. Suppose you have the training dataset $\mathcal{D} = \{ (x_i, y_i) \}_{i=1}^N$, where $x_i$ is an observation and $y_i$ the corresponding label. Then let $x_i$ be a state and let $f(x_i) = \hat{y}_i$, where $f$ is your (current) model, be an action. So, the ...
6
Before anything, the function you have written for the network lacks the bias variables (I'm sure you used bias to get those beautiful images, otherwise your tanh network had to start from zero). Generally I would say it's impossible to have a good approximation of sine with just 3 neurons, but if you want to consider one period of sine, then you can do ...
5
The problem you discuss extends past the machine but to the man behind the machine (or woman). ML can be broken down into 3 components, the model, the data, and the learning procedure. This by the way extends to us as well. The model is our brain, the data is our experience and sensory input, and the learning procedure is there but unknown (for now $<$...
5
Let us suppose we have a network without any functions in between. Each layer consists of a linear function. i.e. layer_output = Weights.layer_input + bias Consider a 2 layer neural network, the outputs from layer one will be: x2 = W1*x1 + b1 Now we pass this output to the second layer, which will be x3 = W2*x2 + b2 Also x2 = W1*x1 + b1 Substituting ...
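A quick NumPy check of this collapse (shapes chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x1 = rng.normal(size=3)

x3 = W2 @ (W1 @ x1 + b1) + b2        # two stacked linear layers
W, b = W2 @ W1, W2 @ b1 + b2         # one equivalent linear layer
assert np.allclose(x3, W @ x1 + b)   # identical: stacking adds no expressive power
```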
5
As far as I'm aware, it is still somewhat of an open problem to get a really clear, formal understanding of exactly why / when we get a lack of convergence -- or, worse, sometimes a danger of divergence. It is typically attributed to the "deadly triad" (see 11.3 of the second edition of Sutton and Barto's book), the combination of: Function approximation, ...
5
Nonlinear relations between input and output can be achieved by using a nonlinear activation function on the value of each neuron, before it's passed on to the neurons in the next layer.
5
Inherently, no. The MLP is just a data structure. It represents a function, but a standard MLP is just representing an input-output mapping, and there's no recursive structure to it. On the other hand, possibly your source is referring to the common algorithms that operate over MLPs, specifically forward propagation for prediction and back propagation for ...
4
One of the important qualifications of the Universal approximation theorem is that the neural network approximation may be computationally infeasible. "A feedforward network with a single layer is sufficient to represent any function, but the layer may be infeasibly large and may fail to learn and generalize correctly." - Ian Goodfellow, DLB I can't ...
4
First, you need to consider what are the "parameters" of this "optimization algorithm" that you want to "optimize". Let's take the most simple case, a SGD without momentum. The update rule for this optimizer is: $$w_{t+1} \leftarrow w_{t} - a \cdot \nabla_{w_{t}} J(w_t) = w_{t} - a \cdot g_t$$ where $w_t$ are the weights at iteration $t$, $J$ is the cost ...
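In code, that plain SGD update is just a couple of lines (a toy sketch; the objective and learning rate here are made up):

```python
def sgd_step(w, grad, lr=0.1):
    """One update of the rule above: w_{t+1} = w_t - a * g_t."""
    return w - lr * grad

w = 5.0
for _ in range(100):
    w = sgd_step(w, grad=2 * w)  # gradient of the toy cost J(w) = w**2
print(w)                          # approaches the minimizer w = 0
```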
3
"Modern" Guarantees for Feed-Forward Neural Networks My answer will complement nbro's above, which gave a very nice overview of universal approximation theorems for different types of commonly used architectures, by focusing on recent developments specifically for feed-forward networks. I'll try an emphasis depth over breadth (sometimes called ...
3
Sure, you can define plenty of things we don't generally need to regard as recursive as so. An MLP is just a series of functions applied to its input. This can be loosely formulated as $$o_n = f(o_{n-1})$$ Where $o_n$ is the output of layer $n$. But this clearly doesn't reveal much, does it?
3
Of course, it's possible to define a problem where there is no relationship between input $x$ and output $y$. In general, if the mutual information between $x$ and $y$ is zero (i.e. $x$ and $y$ are statistically independent) then the best prediction you can do is independent of $x$. The task of machine learning is to learn a distribution $q(y|x)$ that is as ...
3
You can indeed fit a polynomial to your labelled data, which is known as polynomial regression (which can e.g. be done with the function numpy.polyfit). One apparent limitation of polynomial regression is that, in practice, you need to assume that your data follows some specific polynomial of some degree $n$, i.e. you assume that your data has the form of ...
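For example, with NumPy (data and degree invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 30)
y = 3 * x ** 3 - x + 0.1 * rng.normal(size=x.size)  # noisy cubic data

coeffs = np.polyfit(x, y, deg=3)   # least-squares fit, highest degree first
y_hat = np.polyval(coeffs, x)      # evaluate the fitted polynomial
```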
3
You can choose those states, but is the agent aware of the state it is in? From the text, it seems that the agent cannot distinguish between the three states. Its observation function is completely uninformative. This is why a stochastic policy is what is needed. This is common for POMDPs, whereas for regular MDPs we can always find a deterministic policy ...
3
First I will address the issue of Tabular methods. These do not use SGD at all. Although the updates are very similar to an SGD update there is no gradient here and so we are not using SGD. Many Tabular methods are proven to converge, for instance the paper by Chris Watkins titled "Q-Learning" introduces and proves that Q-learning converges. Also ...
3
The notion of a state in reinforcement learning is (more or less) the same as the notion of a context in contextual bandits. The main difference is that, in reinforcement learning, an action $a_t$ in state $s_t$ not only affects the reward $r_t$ that the agent will get but it will also affect the next state $s_{t+1}$ the agent will end up in, while, in ...
3
Conceptually, in general, how is the context being handled in CB, compared to states in RL? In terms of its place in the description of Contextual Bandits and Reinforcement Learning, context in CB is an exact analog for state in RL. The framework for RL is a strict generalisation of CB, and can be made similar or the same in a few separate ways: If the ...
2
To answer this, it's helpful to consider the notion of a neural network architecture – in this context, we can think of the architecture as being the network depth (i.e. number of layers), width (i.e. number of nodes in a layer), and some other structural aspects, such as recurrent layers, convolution layers, pool layers, etc. Theory In terms of the ...
2
There are a variety of possible things that could be wrong, but let me give you some potentially useful information. Neural networks with ReLU activation functions are Turing complete for a computation with on the order of as many steps as the network contains nodes - for a recurrent network (an RNN), that means the same level of Turing completeness as any finite ...
2
I have found some clues in Maei's thesis (2011): “Gradient Temporal-Difference Learning Algorithms.” According to the thesis: GTD2 is a method that minimizes the projected Bellman error (MSPBE). GTD2 is convergent in non-linear function approximation case (and off-policy). GTD2 converges to a TD-fixed point (same point as semi-gradient TD). GTD2 is slower ...
2
There are three problems Limited capacity Neural Network (explained by John) Non-stationary Target Non-stationary distribution Non-stationary Target In tabular Q-learning, when we update a Q-value, other Q-values in the table don't get affected by this. But in neural networks, one update to the weights aiming to alter one Q-value ends up affecting other Q-...
2
Andrej Karpathy's blog has a tutorial on getting a neural network to learn pong with reinforcement learning. His commentary on the current state of the field is interesting. He also provides a whole bunch of links (David Silver's course catches my eye). Here is a working link to the lecture videos. Here are demos of DeepMinds game playing. Get links to the ...
2
To check if a function is linear is easy: if you can train one fully connected layer, without activations, of the right dimensions (for a function $\mathbb{R}^n \rightarrow \mathbb{R}^m$ you need $nm$ weights aka the matrix corresponding to the linear application), with enough data, to 100% accuracy... then it is linear. The estimated function is explicit: ...
2
In my humble opinion, it seems like it is important to have them separated, if having a certain card can influence the result in some way that is not its prime value, instead of not only using the sum. But it depends on the game and its rules. For example: If having 5 cards of hearts in the set of 15 cards makes you win the game, then if you only represent ...
2
By itself, I'm not sure it's possible to know. It's possible the slides were old. Or, the intended purpose was to mention how as sigmoid ranges from 0 to 1. Mostly, it looks like it was intended to bring up gradient descent. But it could also be an entry point to the discussion of other methods such as ReLU. Either that or perhaps some sort of norming ...
2
We usually optimize with respect to something. For example, you can train a neural network to locate cats in an image. This operation of locating cats in an image can be thought of as a function: given an image, a neural network can be trained to return the position of the cat in the image. In this sense, we can optimize a neural network with respect to this ...
2
It is not so much the problem of using Reinforcement Learning to train the neural networks, it is the assumptions made about the data given to standard Neural Networks. They are not capable of handling strongly correlated data which is one of the motivations for introducing Recurrent Neural Networks, as they can handle this correlated data well.
2
First of all, neural networks are not (just) defined by the fact that they are typically trained with gradient descent and back-propagation. In fact, there are other ways of training neural networks, such as evolutionary algorithms and the Hebb's rule (e.g. Hopfield networks are typically associated with this Hebbian learning rule). The first difference ...
Only top voted, non community-wiki answers of a minimum length are eligible
http://groupprops.subwiki.org/wiki/Corollary_of_Timmesfeld's_replacement_theorem_for_abelian_subgroups | # Corollary of Timmesfeld's replacement theorem for abelian subgroups
This article defines a subgroup property: a property that can be evaluated to true/false given a group and a subgroup thereof, invariant under subgroup equivalence.
Suppose $P$ is a group of prime power order. Let $\mathcal{A}(P)$ denote the set of abelian subgroups of maximum order in $P$. If $A \in \mathcal{A}(P)$, and $B$ is an $A$-invariant abelian subgroup of $P$, then $BC_A(B)$ is an abelian subgroup of maximum order.
https://jeopardylabs.com/play/algebra-jeopardy6 | Systems
Special Systems
Linear Inequalities
Exponents and polynomials
Potpourri
### 100
The solution of each system y=5x-10 y=3x+8 (Solve by substitution or elimination)
What is (9,35)?
### 100
A system that has exactly one solution. The graph of this system consists of 2 intersecting lines.
What is an independent system?
### 100
The ordered pair (7,3) is or is not a solution to the inequality y
What is a solution?
### 100
The solution of this problem when simplified 8 to the zero power
What is 1?
### 100
Scientific notation is a method of writing numbers that are very large or very small. The second part of scientific notation is written to this power.
What is 10?
### 200
The solution to each system 2x+y=2 -4x+4y=12
What is (-1,2)?
### 200
When graphing 2 linear equations with the same slope but different y-intercepts, the lines are intersecting, coincident, or parallel...
What is parallel?
### 200
When the inequality is written y< or y>, the points on the boundary line are not solutions of the inequality. The line on the graph is (solid, dashed, there is no boundary line)
What is dashed?
### 200
Simplify a^-7b^2
What is b^2/a^7?
### 200
The coefficient of the first term of a polynomial in standard form.
What is the leading coefficient?
### 300
The solution to each system 3x+2y=6 -x+y=-2
What is (2,0)?
### 300
Look at the slope and y-intercept of the following equations: y=2x-2 y=x+1 Are these two lines parallel, intersecting or coincident (the same line)
What is intersecting?
### 300
When the inequality is written as y is less than or equal to, the points (above, below) the line are solutions of the inequality.
What is below the boundary line?
### 300
When dividing two numbers that have exponents in them, you (add, subtract, multiply) the exponents.
What is subtract?
### 300
The degree of the term of the polynomial with the greatest degree.
What is the degree of a polynomial?
### 400
The solution to each system 3x-y=-2 -2x+y=3
What is (1,5)?
### 400
The slopes and y-intercepts of coincident lines are... (same, different, neither of these answers)
What is the same?
### 400
When two linear inequalities are graphed, the coordinates in the (unshaded area, overlapping shaded area) are solutions to the systems.
What is overlapping shaded area?
### 400
The simplification of this expression of polynomials 4x^3 + 8x^2 + 2x + 3x^3 + x^2 + 4x =
What is 7x^3 + 9x^2 + 6x?
### 400
The product of any number and a whole number.
What is a multiple?
### 500
A method used to solve systems of equations by solving an equation for one variable and substituting the resulting expression into the other equation.
What is substitution?
### 500
The number of solutions to the following systems: y=-2x+4 2x+y=4
What is an infinite amount of solutions?
### 500
If a boundary line is solid, the points on the boundary lines (are or are not) solutions.
What is are solutions?
### 500
When writing a polynomial equation in standard form, the exponents of the variables are written in (ascending,descending) order.
What is descending?
### 500
A number that is multiplied by another number to get a product.
What is a factor?
http://support.sas.com/documentation/cdl/en/statug/67523/HTML/default/statug_genmod_details20.htm | # The GENMOD Procedure
### F Statistics
Suppose that $D_0$ is the deviance resulting from fitting a generalized linear model and that $D_1$ is the deviance from fitting a submodel. Then, under appropriate regularity conditions, the asymptotic distribution of $(D_1 - D_0)/\phi$ is chi-square with r degrees of freedom, where r is the difference in the number of parameters between the two models and $\phi$ is the dispersion parameter. If $\phi$ is unknown, and $\hat{\phi}$ is an estimate of $\phi$ based on the deviance or Pearson's chi-square divided by degrees of freedom, then, under regularity conditions, $(n-p)\hat{\phi}/\phi$ has an asymptotic chi-square distribution with $n-p$ degrees of freedom. Here, n is the number of observations and p is the number of parameters in the model that is used to estimate $\phi$. Thus, the asymptotic distribution of

$$F = \frac{(D_1 - D_0)/r}{\hat{\phi}}$$

is the F distribution with r and $n-p$ degrees of freedom, assuming that $(D_1 - D_0)/\phi$ and $(n-p)\hat{\phi}/\phi$ are approximately independent.
This F statistic is computed for the Type 1 analysis, Type 3 analysis, and hypothesis tests specified in CONTRAST statements when the dispersion parameter is estimated by either the deviance or Pearson's chi-square divided by degrees of freedom, as specified by the DSCALE or PSCALE option in the MODEL statement. In the case of a Type 1 analysis, model 0 is the higher-order model obtained by including one additional effect in model 1. For a Type 3 analysis and hypothesis tests, model 0 is the full specified model and model 1 is the submodel obtained from constraining the Type III contrast or the user-specified contrast to be 0.
http://ia.cr/cryptodb/data/author.php?authorkey=4148 | ## CryptoDB
### Xiutao Feng
#### Publications
Year
Venue
Title
2017
TOSC
Many block ciphers use permutations defined over the finite field F_{2^{2k}} with low differential uniformity, high nonlinearity, and high algebraic degree to provide confusion. Due to the lack of knowledge about the existence of almost perfect nonlinear (APN) permutations over F_{2^{2k}}, which have the lowest possible differential uniformity, when k > 3, constructions of differentially 4-uniform permutations are usually considered. However, it is also very difficult to construct such permutations together with high nonlinearity; there are very few known families of such functions, which can have the best known nonlinearity and a high algebraic degree. At Crypto'16, Perrin et al. introduced a structure named butterfly, which leads to permutations over F_{2^{2k}} with differential uniformity at most 4 and very high algebraic degree when k is odd. It is posed as an open problem in Perrin et al.'s paper and solved by Canteaut et al. that the nonlinearity is equal to 2^{2k-1} - 2^k. In this paper, we extend Perrin et al.'s work and study the functions constructed from butterflies with exponent e = 2^i + 1. It turns out that these functions over F_{2^{2k}} with odd k have differential uniformity at most 4 and algebraic degree k + 1. Moreover, we prove that for any integer i and odd k such that gcd(i, k) = 1, the nonlinearity equality holds, which also gives another solution to the open problem proposed by Perrin et al. This greatly expands the list of differentially 4-uniform permutations with good nonlinearity and hence provides more candidates for the design of block ciphers.
2011
FSE
2010
ASIACRYPT
#### Coauthors
Dengguo Feng (1)
Shihui Fu (1)
Jun Liu (1)
Chuankun Wu (2)
Baofeng Wu (1)
Chunfang Zhou (1)
Zhaocun Zhou (1)
https://www.jiskha.com/questions/1429344/the-g-value-for-formation-of-gaseous-water-at-298-k-and-1-atm-is-278-kj-mol-what-is-the | # AP Chemistry
The ΔG value for formation of gaseous water at 298 K and 1 atm is -278 kJ/mol. What is the nature of the spontaneity of formation of gaseous water at these conditions?
1. dG is negative. The reaction is spontaneous.
dG = -; rxn spontaneous
dG = 0; rxn about 50/50
dG = +; rxn not spontaneous in the direction shown but is spontaneous for the reverse rxn.
## Similar Questions
1. ### Chemistry
Write a balanced equation for the combustion of gaseous ethylene (C2H4), an important natural plant hormone, in which it combines with gaseous oxygen to form gaseous carbon dioxide and gaseous water
2. ### Chemistry
At 298 K, the Henry's law constant for oxygen is 0.00130 M/atm. Air is 21.0% oxygen. At 298 K, what is the solubility of oxygen in water exposed to air at 1.00 atm? At 298 K, what is the solubility of oxygen in water exposed to
3. ### Chemistry
Write a balanced equation for the combustion of gaseous propane (C3H8), a minority component of natural gas, in which it combines with gaseous oxygen to form gaseous carbon dioxide and gaseous water. I answered C3H8 + 5O2 = 3CO2 +
4. ### Chemistry
For a gaseous reaction, standard conditions are 298 K and a partial pressure of 1 atm for all species. For the reaction 2NO(g) + O2(g) --> 2NO2(g) the standard change in Gibbs free energy is ΔG° = -69.0 kJ/mol. What is ΔG for
1. ### chemistry
The standard enthalpy of combustion of C2H6O(l) is -1,367 kJ mol-1 at 298 K. What is the standard enthalpy of formation of C2H6O(l) at 298 K? Give your answer in kJ mol-1, rounded to the nearest kilojoule. Do not include units as
2. ### Chemistry
The ΔH°f of gaseous dimethyl ether (CH3OCH3) is –185.4 kJ/mol; the vapour pressure is 1.00 atm at –23.7 °C and 0.526 atm at –37.8 °C. a) Calculate ΔH°vap of dimethyl ether. b) Calculate ΔH°f of liquid dimethyl ether Any
3. ### Chemistry 1202
3CH4(g)->C3H8(g)+2H2(g) Calculate change in G at 298 K if the reaction mixture consists of 41 atm of CH4, 0.010 atm of C3H8, and 2.3×10−2 atm of H2.
4. ### Chemistry
Calculate the Volume occupied by 1.5 moles of an ideal gas at 25 degrees Celsius and a pressure of 0.80 atm. (R= 0.08206 L atm/(mol*K). I've tried using the ideal gas law: PV=nRT but i can't seem to get where I am getting lost.
1. ### A.P Chemistry
A rigid 5.00 L cylinder contains 24.5g of N2(g) and 28.0g of O2(g). (a). Calculate the total pressure, in atm, of the gas mixture in the cylinder at 298 K (b) The temperature of the gas mixture in the cylinder is decreased to 280
2. ### Chemistry 111
Gaseous ethane will react with gaseous oxygen to produce gaseous carbon dioxide and gaseous water . Suppose 17. g of ethane is mixed with 107. g of oxygen. Calculate the minimum mass of ethane that could be left over by the
3. ### Chemistry
Write a balanced equation for the incomplete combustion of gaseous pentane (C5H12) which combines gaseous oxygen to form carbon dioxide, gaseous water and carbon monoxide.
4. ### Chemistry
Fish breathe the dissolved air in water through their gills. Assuming the partial pressures of oxygen and nitrogen in the air to be .20 atm and .80 atm respectively, calculate the mole fractions of oxygen and nitrogen in the
http://cpr-quantph.blogspot.com/2013/06/13063991-jonathan-welch-et-al.html | ## Efficient Quantum Circuits for Diagonal Unitaries Without Ancillas [PDF]
Jonathan Welch, Daniel Greenbaum, Sarah Mostame, Alán Aspuru-Guzik
The accurate evaluation of diagonal unitary operators is often the most resource-intensive element of quantum algorithms such as real-space quantum simulation and Grover search. Efficient circuits have been demonstrated in some cases but generally require ancilla registers, which can dominate the qubit resources. In this paper, we point out a correspondence between Walsh functions and a basis for diagonal operators that gives a simple way to construct efficient circuits for diagonal unitaries without ancillas. This correspondence reduces the problem of constructing the minimal-depth circuit within a given error tolerance, for an arbitrary diagonal unitary $e^{if(\hat{x})}$ in the $|x\rangle$ basis, to that of finding the minimal-length Walsh-series approximation to the function $f(x)$. We apply this approach to the quantum simulation of the classical Eckart barrier problem of quantum chemistry, demonstrating that high-fidelity quantum simulations can be achieved with few qubits and low depth.
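The paper's circuits are built from such Walsh-series expansions; purely as an illustration of the underlying transform (the example function f and all names below are this sketch's own, not the paper's), the coefficients can be computed with a fast Walsh–Hadamard transform:

```python
import numpy as np

def walsh_coefficients(f_vals):
    """Coefficients a_j with f(x) = sum_j a_j * (-1)**popcount(j & x),
    computed by an in-place fast Walsh-Hadamard transform."""
    a = np.asarray(f_vals, dtype=float).copy()
    n = int(np.log2(a.size))
    for bit in range(n):
        step = 1 << bit
        for i in range(0, a.size, 2 * step):
            for j in range(i, i + step):
                a[j], a[j + step] = a[j] + a[j + step], a[j] - a[j + step]
    return a / a.size

n = 3
x = np.arange(2 ** n)
coeffs = walsh_coefficients((x / 2 ** n) ** 2)  # truncating small a_j shortens the circuit
```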
View original: http://arxiv.org/abs/1306.3991
http://math.stackexchange.com/questions/289560/how-does-one-prove-a-b-c-%e2%8a%86-a-c-b-c | How does one prove (A - B) - C ⊆ (A - C) - (B - C)
When proving this I'm not sure how to 'take out' the C on the RHS of the equation.
The LHS is
(x ∈ A) ∧ !(x ∈ B) ∧ !(x ∈ C)
The RHS is
(x ∈ A) ∧ !(x ∈ C) ∧ !(x ∈ B) ∧ !(x ∈ C)
How does how prove LHS is a subset of RHS?
Take an element in the LHS and prove it must be in the RHS. – Patrick Li Jan 29 '13 at 6:34
Just show that each $x\in(A\setminus B)\setminus C$ belongs to $(A\setminus C)\setminus(B\setminus C)$.
Suppose that $x\in(A\setminus B)\setminus C$; then $x\in A\setminus B$, and $x\notin C$. Since $x\in A\setminus B$, we know further that $x\in A$ and $x\notin B$.
Now put the pieces back together. First, $x\in A$ and $x\notin C$, so $x\in A\setminus C$. Moreover, $x\notin B$, so certainly $x\notin B\setminus C$, since $B\setminus C$ is a subset of $B$. But that means that $x\in A\setminus C$ and $x\notin B\setminus C$, which is exactly what’s required to say that $x\in(A\setminus C)\setminus(B\setminus C)$.
Since $x$ was an arbitrary element of $(A\setminus B)\setminus C$, this shows that every element of $(A\setminus B)\setminus C$ belongs to $(A\setminus C)\setminus(B\setminus C)$ and hence that $(A\setminus B)\setminus C\subseteq(A\setminus C)\setminus(B\setminus C)$.
(I call this approach element-chasing. It’s one of the most straightforward ways to prove that one set is a subset of another.)
Note that according to set theory theorems, we have $A-B=A\cap B'$ where $B'$ is the complement of $B$ with respect to our universal set $U$. So we have then:
$$D=(A-B)-C=(A\cap B')\cap C'$$ so if $x\in D$ then $x\in A\cap B'$ and $x\in C'$, hence $x\in A, x\in B', x\in C'$, so $$x\in A,x\in C'\longrightarrow x\in(A-C)\\x\in B'\longrightarrow x\in(B'\cup C)\longrightarrow x\in(B\cap C')'$$ therefore $x\in(A-C)$ and $x\in(B\cap C')'$, which leads us to $$x\in(A-C)\cap (B\cap C')'.$$ This is what you are looking for.
Nicely done, Babak! +1 – amWhy Jan 29 '13 at 15:57
@amWhy: :-) ... – Babak S. Jan 29 '13 at 19:23
Note that $(A\setminus C)\setminus (B\setminus C)$ is obtained by removing from $A\setminus C$ the part that is in $B\setminus C$. So we are removing a set that is a subset of $B$. It follows that $(A\setminus C)\setminus (B\setminus C)\subseteq (A\setminus C)\setminus B$.
But $(A\setminus C)\setminus B= (A\setminus B)\setminus C$, since each is obtained by removing from $A$ the part of $A$ that is in $B$ or $C$.
The LHS is as you have written
$$x \in A \land x \notin B \land x \notin C \tag{1}.$$
However, your RHS is wrong, it should be $$x \in A \land x \notin C \land \neg (x \in B \land x \notin C)$$ which is equivalent to $$x \in A \land x \notin C \land (x \notin B \lor x \in C).$$ Still, if you look closely, then you can see that we have $x \notin C$ there, so $(x \notin B \lor x \in C)$ simplifies to $x \notin B$. Finally we have
$$x \in A \land x \notin C \land x \notin B \tag{2}$$
which is the same as (1) because commutativity of conjunction. Note that we in fact proved equality
$$(A-B)-C = (A-C)-(B-C).$$
I hope this helps ;-)
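Not a substitute for the proofs above, but a brute-force sanity check of the equality over a small universe (my own snippet) agrees:

```python
from itertools import product

U = range(6)  # any small finite universe works for a sanity check
subsets = [frozenset(e for e, keep in zip(U, bits) if keep)
           for bits in product([0, 1], repeat=len(U))]

assert all((A - B) - C == (A - C) - (B - C)
           for A in subsets for B in subsets for C in subsets)
```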
http://www.chemgapedia.de/vsengine/tra/vsc/en/ch/2/tra/pericyclische_reaktionen.tra/Vlu/vsc/en/ch/2/vlu/pericyclische_reaktionen/peri_aroma.vlu/Page/vsc/en/ch/2/oc/reaktionen/formale_systematik/pericyclische_reaktionen/aromatizitaet/beispiele.vscml.html | # Pericyclic Reactions: Aromaticity of Transition States
## Aromaticity: Examples
Fig. 1: Reaction equation
Fig. 2: Orbital model
A total of 6 p orbitals from butadiene and ethylene take part in the reaction. Two σ bonds and one π bond are formed from three π bonds during the reaction. The mechanism is formally described by three arrows indicating the flow of electrons with each arrow representing two electrons. Since the number of sign inversions in the transition state is even (in this case zero), the system is a Hückel system. The cyclic transition state with 4n+2 electrons (6 electrons in this case) is aromatic and, therefore, allowed.
Fig. 3: Reaction equation
Fig. 4: Orbital model
The basis set consists of two p orbitals, one $sp^3$ and one s orbital. Four electrons take part in the reaction, indicated by two arrows that show the flow of electrons. The number of sign inversions in the transition state is even (zero in this case), so this is a Hückel system; with 4 electrons (a 4n count), the cyclic transition state is Hückel-antiaromatic, i.e., the reaction is disallowed.
Fig. 5: Reaction equation
Fig. 6: Orbital model
A methyl group is being transferred in this example. The participating p orbital from the methyl group can be shown to indicate conjugation (red line) passing through the origin of the p orbital. This does not count as sign inversion. Therefore, the number of sign inversions is odd (one in this case), and we are dealing with a Möbius system. The system is aromatic because four electrons are involved and the reaction is allowed.
Migration of the methyl group proceeds with inversion, similar to the $S_N2$ reaction. This can be observed experimentally only when the migrating group is chiral, i.e., carries four different substituents. The following simple rule involving stereochemistry can be set up:
Inversion takes place if conjugation at a reaction center passes through the origin of the orbital.
https://bodheeprep.com/cat-quant-questions-solutions/4

# CAT Quant Questions with Video Solutions
Note: These Quant questions have been selected from 1000+ CAT Quant Practice Problems with video solutions of Bodhee Prep’s Online CAT Quant Course
Question 16:
Let $N = 1! \times 2! \times 3! \times ..... \times 99! \times 100!$, and if $\frac{N}{{p!}}$ is a perfect square for some positive integer $p \le 100$, then find the value of p.
Topic: factorials
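A brute-force check is easy in Python (my addition; it simply confirms which p works):

```python
from math import factorial, isqrt

N = 1
for k in range(1, 101):
    N *= factorial(k)

for p in range(1, 101):
    q, r = divmod(N, factorial(p))
    if r == 0 and isqrt(q) ** 2 == q:
        print(p)   # prints 50
```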
Question 17:
If $x^2 + y^2 = 1$, find the maximum value of $x^2 + 4xy - y^2$
Topic: maxima minima
[1] $1$
[2] $\sqrt 2$
[3] $\sqrt 5$
[4] $4$
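A quick numerical check (my addition, not the course's solution): with $x = \cos t$, $y = \sin t$ the target becomes $\cos 2t + 2\sin 2t$, whose maximum matches option [3]:

```python
import math

best = max(math.cos(2 * t) + 2 * math.sin(2 * t)
           for t in (2 * math.pi * i / 100000 for i in range(100000)))
print(best, math.sqrt(5))   # both about 2.236
```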
Question 18:
The compound interest on a certain amount for two years is Rs. 291.2 and the simple interest on the same amount is Rs. 280. If the rate of interest is the same in both cases, find the principal amount.
Topic: sici
[1] 1200
[2] 1400
[3] 1700
[4] 1750
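A short verification sketch (mine): over two years SI $= 2Pr$ and CI $-$ SI $= Pr^2$, so $r$ and then $P$ follow directly:

```python
si, ci = 280.0, 291.2
r = (ci - si) / (si / 2)   # Pr^2 divided by Pr gives r
P = (si / 2) / r
print(r, P)                # 0.08 1750.0 -> option [4]
```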
Question 19:
In the diagram given below, the circle and the square have the same center O and equal areas. The circle has radius 1 and intersects one side of the square at P and Q. What is the length of PQ?
Topic: circles
[1] 1
[2] 3/2
[3] $\sqrt {4 - \pi }$
[4] $\sqrt {\pi - 1}$
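A verification sketch (my addition): equal areas give side $s = \sqrt{\pi}$, the side lies at distance $s/2$ from O, and the chord of the unit circle at that distance reproduces option [3]:

```python
import math

s = math.sqrt(math.pi)              # square side, since area = pi
d = s / 2                           # distance from O to the side
pq = 2 * math.sqrt(1 - d * d)       # chord of the unit circle at distance d
print(pq, math.sqrt(4 - math.pi))   # both about 0.9265
```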
Question 20:
What is the remainder when $x^{276}+12$ is divided by $x^2+x+1$, given that the remainder is a positive integer?
Topic: remainders
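Since $x^3 \equiv 1 \pmod{x^2+x+1}$ and $276 = 3 \cdot 92$, the remainder is $1 + 12 = 13$; a symbolic check (my addition, assuming sympy is installed):

```python
from sympy import symbols, rem

x = symbols('x')
print(rem(x**276 + 12, x**2 + x + 1, x))   # 13
```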
### 8 thoughts on “CAT Quant Questions with Video Solutions”
1. MAHESH AGGARWAL says:
Do you have an exclusive package of video solutions for the last 10-15 years of CAT exams? I am interested in just that. I am helping a girl appearing for the CAT 2019 exam.
We have already included all the good questions from CAT and other MBA entrance exams in our course.
All these questions come with video explanations.
• Pavani says:
If f is 3
F(3)
6+3+2 is 11
2. Rajaraman says:
Can you tell me the name of the theorem that you mentioned in the first question?
3. Abinash says:
Sir, for question no. 23: we can do it as $x+y=2-z$.
Cubing both sides: $x^3+y^3+z^3=8-(2-z)(6z+3xy)$.
As given, $x^3+y^3+z^3=8$, so $(2-z)(6z+3xy)=0$, giving $z=2$ (considering an integer value for easy output). Now putting the z value in every given equation:
$x+y=0$
$x^2+y^2=2$
$x^3+y^3=0$
From the above three equations we find that if one of x or y is positive then the other will be negative, but both will be of the same magnitude, i.e. $\pm 1$ ... thus $x^4+y^4+z^4=18$.
4. Siddharth says:
Set 1 Question 5.
I want to know why the below logic would be wrong.
Distance is constant. If the Speed increases by 10km/hr, the time decreases by 4 hours.
So to decrease time by 2 hours, Speed can be increased by 5km/hr.
20 + 5= 25 kmph.
I understand something might be wrong with this logic but could someone help pinpoint that?
• shaswat says:
The assumption that "to decrease time by 2 hours, speed can be increased by 5 km/hr" is wrong. It would be right if you knew the initial time and considered the journey from the start, but since the journey is already under way, you can't do that.
https://www.physicsforums.com/threads/chemistry-exam-qn.123325/

# Chemistry exam qn
1. Jun 9, 2006
### Ukitake Jyuushirou
Could someone just tell me roughly how to work out this set of questions, please? I have been thinking and doing a lot of working, but none of my answers is remotely close... :(
1) The reaction below releases 56.6 kJ of heat at 298 K for each mole of NO2 formed at a constant pressure of 1 atm. What is the standard enthalpy of formation of NO2, given that the standard enthalpy of formation of NO is 90.4 kJ/mol?
2NO + O2 ---> 2NO2
2) A 200 g piece of copper at 100 °C is dropped into 1000 g of water at 25 °C. What is the final temperature of the system?
The specific heat of water is 4.18 J g⁻¹ K⁻¹ and that of copper is 0.400 J g⁻¹ K⁻¹.
3) If the equilibrium constant for A + B <===> C is 0.123, what is the equilibrium constant for 2C <===> 2A + 2B?
2. Jun 9, 2006
### Hootenanny
Staff Emeritus
Question Two
HINT: Energy lost by copper is equal to the energy gained by the water. Try setting up simultaneous equations.
Question Three
HINT: You have increased the concentration of all the reactants equally.
3. Jun 10, 2006
### Saketh
For Question One, use the fact that $$\Delta H = \Sigma (\Delta H_{products})-\Sigma (\Delta H_{reactants})$$. You know that since oxygen is a pure element, its heat of formation is zero. You know $$\Delta H$$, and you know $$\Sigma (\Delta H_{reactants})$$. You have to find $$\Sigma (\Delta H_{products})$$.
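Numerically (a sketch of mine, not from the thread), per mole of NO2:

```python
dH_rxn = -56.6          # kJ released per mole of NO2 formed
dHf_NO = 90.4           # kJ/mol, given
dHf_O2 = 0.0            # pure element
# NO + 1/2 O2 -> NO2:  dH_rxn = dHf(NO2) - dHf(NO) - 0.5*dHf(O2)
dHf_NO2 = dH_rxn + dHf_NO + 0.5 * dHf_O2
print(dHf_NO2)          # 33.8 kJ/mol
```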
For Question Two, you know that $$q = mc(T_{f} - T_{i})$$. Find the heat that the copper is holding. Now that you know $$q$$, you also know that it is all transferred to the water, so write another q-equation, but this time you are solving for T of the water. Find the equilibrium temperature of the water and copper system - that is your $$T_{f}$$ for the copper. You know the $$T_{i}$$ for both the copper and the water, so all you do now is plug and chug.
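For Question Two, the heat balance can also be solved directly for the final temperature (again my sketch, not the thread's):

```python
m_cu, c_cu, T_cu = 200.0, 0.400, 100.0   # g, J/(g*K), deg C
m_w,  c_w,  T_w  = 1000.0, 4.18, 25.0
# heat lost by copper = heat gained by water:
# m_cu*c_cu*(T_cu - Tf) = m_w*c_w*(Tf - T_w)
Tf = (m_cu * c_cu * T_cu + m_w * c_w * T_w) / (m_cu * c_cu + m_w * c_w)
print(Tf)                                # about 26.4 deg C
```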
For Question Three, write the $$K_{eq}$$ equation for A + B <===> C, then write it for 2C <===> 2A + 2B. Remember that when you flip the reactants and the products, you have to take the reciprocal of $$K_{eq}$$, and that when you multiply the coefficients all by a number $$N$$, you have to raise all of the terms in the $$K_{eq}$$ equation to that power $$N$$. So, for example: The equilibrium constant for a reaction A + B + C <===> D + E + F is $$\frac{[D][E][F]}{[A][B][C]}$$. For 3D + 3E + 3F <===> 3A + 3B + 3C, it is $$\frac{[A]^{3}[B]^{3}[C]^{3}}{[D]^{3}[E]^{3}[F]^{3}}$$.
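Applied here (my one-liner), the reversed, doubled reaction has constant $(1/K)^2$:

```python
K = 0.123
print((1 / K) ** 2)   # about 66.1
```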
Last edited: Jun 10, 2006
https://ghc.haskell.org/trac/ghc/wiki/TypeFunctions/Ambiguity?version=2

Version 2 (modified by simonpj, 6 years ago) (diff)
# Ambiguity
The question of ambiguity in Haskell is a tricky one. This wiki page is a summary of thoughts and definitions, in the hope of gaining clarity. I'm using a wiki because it's easy to edit, and many people can contribute, even though you can't typeset nice rules.
[Started Jan 2010.] Please edit to improve.
## Terminology
A type system is usually specified by
• A specification, in the form of some declarative typing rules. These rules often involve "guessing types". Here is a typical example, for variables:
(f : forall a1,..,an. C => tau) \in G
theta = [t1/a1, ..., tn/an] -- Substitution, guessing ti
Q |= theta( C )
------------------------- (VAR)
Q, G |- f :: theta(tau)
The preconditions say that f is in the environment G with a suitable polymorphic type. We "guess" types t1..tn, and use them to instantiate f's polymorphic type variables a1..an, via a substitution theta. Under this substitution f's instantiated constraints theta(C) must be deducible (using |=) from the ambient constraints Q.
The point is that we "guess" the ai.
• An inference algorithm, often also presented using similar-looking rules, but in a form that can be read as an algorithm with no "guessing". Typically
• The "guessing" is replaced by generating fresh unification variables.
## Coherence
Suppose we have (I conflate classes Read and Show into one class Text for brevity):
class Text a where
  show :: a -> String
  read :: String -> a

x :: String
x = show (read "3")
The trouble is that there is a constraint (Text t), where t is a type variable that is otherwise unconstrained. Moreover, the type that we choose for t affects the semantics of the program. For example, if we chose t = Int then we might get x = "3", but if we choose t = Float we might get x = "3.7". This is bad: we want our type system to be coherent in the sense that every well-typed program has but a single value.
In practice, the Haskell Report, and every Haskell implementation, rejects such a program saying something like
Cannot deduce (Text t) from ()
In algorithmic terms this is very natural: we indeed have a constraint (Text t) for some unification variable t, and no way to solve it, except by searching for possible instantiations of t. So we simply refrain from trying such a search.
But in terms of the type system specification it is harder. Usually a
Problem 1: how can w
https://www.physicsforums.com/threads/how-do-i-know-if-this-field-has-a-mass-term.185174/

# How do I know if this field has a mass term?
1. Sep 17, 2007
### Lecticia
1. Special Relativity
2. The problem statement, all variables and given/known data
Consider this Lagrangian:
$$L = \frac{1}{2} (\partial_{\mu} \Psi)(\partial^{\mu} \Psi) + \exp\left(-(a \Psi)^2\right)$$
Does this field have a mass term?
2. Relevant equations
3. The attempt at a solution
2. Sep 17, 2007
### Staff: Mentor
Is this what one intended to write, or is this given in some text?
$$L= \frac{1}{2} (\partial_{\mu} \Psi)(\partial^{\mu} \Psi) + e^{-{(a \Psi)}^2}$$
3. Sep 17, 2007
### Lecticia
Yes, exactly this Lagrangian, where $\Psi$ is a scalar field.
Last edited: Sep 17, 2007
4. Sep 17, 2007
### nrqed
This is a nonlinear field theory so, strictly speaking, there is no clear meaning for a mass term.
But I am guessing that they want you to treat the parameter "a" as small and to do a Taylor expansion of the exponential. If you do that, you will generate a mass term.
That's my guess.
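Expanding the exponential makes this concrete (my addition, assuming sympy is available): the quadratic term $-a^2\Psi^2$ matches a canonical mass term $-\tfrac{1}{2}m^2\Psi^2$ with $m^2 = 2a^2$.

```python
from sympy import symbols, exp, series

a, psi = symbols('a Psi')
print(series(exp(-(a * psi)**2), psi, 0, 5))
# 1 - a**2*Psi**2 + a**4*Psi**4/2 + O(Psi**5)
```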
5. Sep 17, 2007
### dextercioby
Well, just one question: you have the Lagrangian, but what are the field equations?
6. Sep 17, 2007
### Lecticia
Do you mean the equations of motion?
7. Sep 17, 2007
### Lecticia
https://www.jobilize.com/physics2/course/12-1-the-biot-savart-law-sources-of-magnetic-fields-by-openstax

# 12.1 The biot-savart law
Page 1 / 4
By the end of this section, you will be able to:
• Explain how to derive a magnetic field from an arbitrary current in a line segment
• Calculate magnetic field from the Biot-Savart law in specific geometries, such as a current in a line and a current in a circular arc
We have seen that mass produces a gravitational field and also interacts with that field. Charge produces an electric field and also interacts with that field. Since moving charge (that is, current) interacts with a magnetic field, we might expect that it also creates that field—and it does.
The equation used to calculate the magnetic field produced by a current is known as the Biot-Savart law. It is an empirical law named in honor of two scientists who investigated the interaction between a straight, current-carrying wire and a permanent magnet. This law enables us to calculate the magnitude and direction of the magnetic field produced by a current in a wire. The Biot-Savart law states that at any point P (see the figure), the magnetic field $d\vec{B}$ due to an element $d\vec{l}$ of a current-carrying wire is given by

$$d\vec{B} = \frac{\mu_0}{4\pi}\,\frac{I\,d\vec{l}\times\hat{r}}{r^2}.$$
The constant $\mu_0$ is known as the permeability of free space and is exactly

$$\mu_0 = 4\pi \times 10^{-7}\ \mathrm{T \cdot m/A}$$
in the SI system. The infinitesimal wire segment $d\vec{l}$ is in the same direction as the current I (assumed positive), r is the distance from $d\vec{l}$ to P, and $\hat{r}$ is a unit vector that points from $d\vec{l}$ to P, as shown in the figure.
The direction of $d\vec{B}$ is determined by applying the right-hand rule to the vector product $d\vec{l} \times \hat{r}$. The magnitude of $d\vec{B}$ is

$$dB = \frac{\mu_0}{4\pi}\,\frac{I\,dl\,\sin\theta}{r^2}$$

where $\theta$ is the angle between $d\vec{l}$ and $\hat{r}$. Notice that if $\theta = 0$, then $d\vec{B} = \vec{0}$. The field produced by a current element $I\,d\vec{l}$ has no component parallel to $d\vec{l}$.
The magnetic field due to a finite length of current-carrying wire is found by integrating the expression above along the wire, giving us the usual form of the Biot-Savart law.
## Biot-savart law
The magnetic field $\vec{B}$ due to an element $d\vec{l}$ of a current-carrying wire is given by

$$\vec{B} = \frac{\mu_0}{4\pi} \int_{\text{wire}} \frac{I\,d\vec{l} \times \hat{r}}{r^2}.$$
Since this is a vector integral, contributions from different current elements may not point in the same direction. Consequently, the integral is often difficult to evaluate, even for fairly simple geometries. The following strategy may be helpful.
## Problem-solving strategy: solving biot-savart problems
To solve Biot-Savart law problems, the following steps are helpful:
1. Identify that the Biot-Savart law is the chosen method to solve the given problem. If there is symmetry in the problem comparing $\vec{B}$ and $d\vec{l}$, Ampère's law may be the preferred method to solve the question.
2. Draw the current element length $d\vec{l}$ and the unit vector $\hat{r}$, noting that $d\vec{l}$ points in the direction of the current and $\hat{r}$ points from the current element toward the point where the field is desired.
3. Calculate the cross product $d\vec{l} \times \hat{r}$. The resultant vector gives the direction of the magnetic field according to the Biot-Savart law.
4. Use the Biot-Savart law and substitute all given quantities into the expression to solve for the magnetic field. Note that all variables that remain constant over the entire length of the wire may be factored out of the integration.
5. Use the right-hand rule to verify the direction of the magnetic field produced from the current or to write down the direction of the magnetic field if only the magnitude was solved for in the previous part.
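As an illustration of this strategy (a numerical sketch of my own, not part of the OpenStax text, assuming numpy is available), the Biot-Savart integral for a circular loop can be summed numerically and compared with the known center field $B = \mu_0 I / 2R$:

```python
import numpy as np

MU0 = 4e-7 * np.pi                    # permeability of free space (T*m/A)

def loop_field_center(I=1.0, R=0.1, n=20000):
    """Sum Biot-Savart contributions of a circular loop in the xy-plane,
    evaluated at the loop's center (the origin)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dphi = 2.0 * np.pi / n
    pos = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(n)], axis=1)
    dl = np.stack([-np.sin(phi), np.cos(phi), np.zeros(n)], axis=1) * R * dphi
    r = -pos                          # vector from each element to the origin
    rmag = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 / (4 * np.pi) * I * np.cross(dl, r / rmag) / rmag**2
    return dB.sum(axis=0)

print(loop_field_center()[2], MU0 * 1.0 / (2 * 0.1))   # both ~ 6.283e-06 T
```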
Source: OpenStax, University physics volume 2. OpenStax CNX. Oct 06, 2016. Download for free at http://cnx.org/content/col12074/1.3
http://mathhelpforum.com/algebra/226533-simple-complex-numbers-question-can-show-me-solution-2.html

# Math Help - this is simple complex numbers question...can show me the solution?
1. ## Re: this is simple complex numbers question...can show me the solution?
Originally Posted by romsek
there's nothing in the original problem that stated p and q were real. We just assumed that.
That's really beyond the pale. It's patently obvious. If you don't assume it you can't solve the problem. But the possibility was (unknowingly) incorporated in post #3, i.e., it has already been addressed.
Note: The definitions defining a complex number a+bi only apply if a and b are real.
EDIT: OK, post #3: "Gather the real and imaginary parts together in the form a+bi = 0 and equate a and b to 0."
That should have been the end of the thread. Explaining elementary arithmetic?
http://mathhelpforum.com/discrete-math/19457-cross-product-multiple-sets.html

# Thread: The cross product and multiple sets?
1. ## The cross product and multiple sets?
Hi all,
Just a quick question...
I have a question in my Computer Science homework that goes as follows:
(S x S) x S where S= {3,4}
Now does that mean that I end up, after doing the first product, doing it normally across each of the four sets to make a total of 16 ordered pairs?
Or do I have it so that I have several sets of three elements each?
Thanks so much!!!
2. Originally Posted by srstakey
Hi all,
Just a quick question...
I have a question in my Computer Science homework that goes as follows:
(S x S) x S where S= {3,4}
Now does that mean that I end up, after doing the first product, doing it normally across each of the four sets to make a total of 16 ordered pairs?
Or do I have it so that I have several sets of three elements each?
Thanks so much!!!
S x S = 0,
so
( S x S ) x S = 0 x S = 0
RonL
3. Originally Posted by srstakey
I have a question in my Computer Science homework that goes as follows: (S x S) x S where S= {3,4}
I think that “cross product” here refers to Cartesian Cross Products of sets.
If that is correct, then (SxS)xS would be a set of eight triples.
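A quick check with Python's itertools (my addition) confirms the count of eight:

```python
from itertools import product

S = {3, 4}
SxSxS = list(product(product(S, S), S))
print(len(SxSxS))   # 8
print(SxSxS[0])     # one of the triples, e.g. ((3, 3), 3)
```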
4. Originally Posted by Plato
I think that “cross product” here refers to Cartesian Cross Products of sets.
If that is correct, then (SxS)xS would be a set of eight triples.
Yes-thank you Plato!
(I never thought I would be thanking Plato himself)
http://mathhelpforum.com/trigonometry/105785-another-trigonometric-inequality.html

# Math Help - another trigonometric inequality
1. ## another trigonometric inequality
Find the set of values which satisfy the inequality $\sin x < \sqrt{3}\cos x$
for $0 \leq x \leq 360$
Ok, I solve it first:
$\sin x = \sqrt{3}\cos x$
$\tan x = \sqrt{3}$
$x = 60, 240$
Then it has 4 subintervals, i.e. 0, 60, 240, 360.
I tested each and got this solution:
0 <= 60 and 240 < x <= 360
Am I correct?
2. Originally Posted by thereddevils
Find the set of values which satisfy the inequality $\sin x < \sqrt{3}\cos x$
for $0 \leq x \leq 360$
Ok, I solve it first:
$\sin x = \sqrt{3}\cos x$
$\tan x = \sqrt{3}$
$x = 60, 240$
Then it has 4 subintervals, i.e. 0, 60, 240, 360.
I tested each and got this solution:
0 <= x < 60 and 240 < x <= 360
Am I correct?
Yes (but a small correction of some typos is in red).
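A numeric scan (my addition) confirms the corrected answer, i.e. the inequality holds on $[0, 60)$ and $(240, 360]$:

```python
import math

ok = [x for x in range(361)
      if math.sin(math.radians(x)) < math.sqrt(3) * math.cos(math.radians(x))]
print(ok[:3], ok[-3:])   # [0, 1, 2] [358, 359, 360]; 60..240 are excluded
```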
https://cadabra.science/qa/105/cadabra-2-suppressing-output-font-size?show=138

# Cadabra 2: suppressing output, font size
Two basic questions regarding Cadabra 2:
1. In Cadabra 1, if the command is ended with :, then the corresponding output is suppressed. How to do the same in Cadabra 2?
2. Is it possible to change font size at this stage? I'm using Ubuntu.
Do not end it with anything. So
ex:= A+B:
to enter expressions, and then
substitute(ex, $A = C$)
to act with an algorithm without showing output. The logic being that the latter is simply a Python statement, and statements by default do not show output.
Font size can now be changed using the 'Font size' entry in the 'View' menu.
Thanks a lot! Just a small problem - the file in usr/share/applications/Cadabra2.desktop is not getting the icon properly...
That file should have been removed by 'sudo make install', and replaced with cadabra2-gtk.desktop. It didn't?
cadabra2-gtk.desktop does appear in this location: /usr/local/share/applications. But somehow it's not taking the png file and the icon is not appearing. I reinstalled several times.
There is something fundamentally broken in the way freedesktop.org's rules work for icons. Sigh... Have you tried logging out and logging back in?
Yeah, I did it many times - no change. BTW, in my system, all the desktop files are in /usr/share/applications (instead of /usr/local...). Cadabra 1 is also there. I didn't notice where it was for Cadabra 2 before this update, but the icon was working then...
http://math.stackexchange.com/questions/377121/implicit-differentiation-if-sin-y-2-sin-x-show-fracdydx2-1-3-sec

# Implicit differentiation. If $\sin y=2\sin x$, show $(\frac{dy}{dx})^2=1 + 3\sec^2y$
I'm self teaching and stuck on the last question of the exercises on implicit differentiation. It says given that $\sin y=2\sin x$, show $(\frac{dy}{dx})^2=1 + 3\sec^2y$
My workings follow. I differentiate both sides w.r.t $x$, square and rearrange:
$$\cos y \frac{dy}{dx} = 2\cos x \Rightarrow \cos^2y(\frac{dy}{dx})^2 = 4\cos^2x$$
$$\Rightarrow (\frac{dy}{dx})^2 = \frac{4\cos^2x}{\cos^2y}$$
I'm now trying to rearrange the RHS to look like $1 + 3\sec^2y$ but failing. By employing the identity $\cos^2 x + \sin^2 x = 1$ and looking at the original equation, I can get to $\cos^2x = 1 - \left(\dfrac{\sin y}{2}\right)^2$ and end up with
$$(\frac{dy}{dx})^2 = \frac{4(1 - \frac{1}{4}\sin^2y)}{\cos^2y} = \frac{4 - \sin^2y}{\cos^2y}$$
Can someone please put me on the right path? I wonder if I should differentiate both sides of $\cos y \frac{dy}{dx} = 2\cos x$ as that also leads to an equation involving $(\frac{dy}{dx})^2$.
You need to put your final denominator in terms of $\cos y$ - then see how it looks. – Mark Bennet Apr 30 '13 at 11:03
Hint: use $4-\sin^2 y = 3+\cos^2 y$.
Thanks to your hint, I get the right answer now. $\dfrac{3 + \cos^2y}{\cos^2y} = 1 + 3\sec^2y$ – PeteUK Apr 30 '13 at 11:09
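A quick finite-difference check of the identity (my addition), using the local branch $y = \arcsin(2\sin x)$:

```python
import math

x = 0.2
y = math.asin(2 * math.sin(x))
h = 1e-6
dydx = (math.asin(2 * math.sin(x + h))
        - math.asin(2 * math.sin(x - h))) / (2 * h)
print(dydx**2, 1 + 3 / math.cos(y)**2)   # both ~ 4.5625
```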
http://mathhelpforum.com/statistics/26957-probability.html

Math Help - probability
1. probability
There are 5 different colors of balls: white, black, blue, red, green. We randomly pick 6 balls. Each ball has a probability of 0.2 of getting each of the 5 colors. What is the probability that, from the 6 balls picked, there are white balls and black balls?
2. Originally Posted by allrighty
There are 5 different colors of balls: white, black, blue, red, green. We randomly pick 6 balls. Each ball has a probability of 0.2 of getting each of the 5 colors. What is the probability that, from the 6 balls picked, there are white balls and black balls?
find the probabilities for each case and add them up
by (all possible) cases, i mean:
probability of choosing 1 white, 0 black, 1 blue, 1 red, 3 green
probability of choosing 1 white, 0 black, 0 blue, 2 red, 3 green
probability of choosing 1 white, 0 black, 2 blue, 0 red, 3 green
.
.
.
.
or we could do:
1 - probability of choosing no white and no black ball
so all you have to worry about are the number of ways you can choose 6 balls among the three remaining colors. there is a formula for that sort of thing. can't remember right now, i'll have to look it up. but when you do get it, just multiply the answer by 0.2 and that will give you the probability of choosing no white and no black ball
3. Originally Posted by Jhevon
so all you have to worry about are the number of ways you can choose 6 balls among the three remaining colors
Is that right? A white AND a black are required.
I have another method but it's messy.
You could have for example 2W, 1B and 3 others 6!/(3!2!) ways each with probability 0.2^3 0.6^3
or 5W, 1B 6!/5! ways with probability 0.2^6
Having said all that, I'm a bit rubbish at probability so I'll wait and see what people say.
4. Originally Posted by a tutor
Is that right? A white AND a black are required.
yes, i believe so. what i said was i want to find the probability of having 0 white AND 0 black. and then take 1 minus that probability to find the probability of at least 1 white or 1 black. i think what i described does the trick... but then again, i'm a noob when it comes to probability as well
5. Original question said..
Originally Posted by allrighty
What is the probability that, from the 6 balls picked, there are white balls and black balls?
and you said..
Originally Posted by Jhevon
to find the probability of at least 1 white or 1 black.
6. Originally Posted by a tutor
Original question said..
and you said..
ah yes. my bad. i didn't see the "and." i was under the impression if we have either or we were good...
7. Hello, allrighty!
I think I've solved it . . .
There are 5 different colors of balls: white, black, blue, red, green.
We randomly pick 6 balls.
Each ball has a probability of 0.2 of getting each of the 5 colors.
What is the probability that, from the 6 balls picked, there are white balls and black balls?
The opposite of "some White and some Black" is "no White or no Black".

To get no White, we must pick six balls from the other four colors.
Then: $P(\text{0 White}) = (0.8)^6$

To get no Black, we must pick six balls from the other four colors.
Then: $P(\text{0 Black}) = (0.8)^6$

To get no White and no Black, we pick six balls from the other 3 colors.
Then: $P(\text{0 White} \wedge \text{0 Black}) = (0.6)^6$

Hence: $P(\text{0 White} \vee \text{0 Black}) = P(\text{0 White}) + P(\text{0 Black}) - P(\text{0 White} \wedge \text{0 Black})$
$= (0.8)^6 + (0.8)^6 - (0.6)^6 = 0.477632$

Therefore: $P(\text{some White and some Black}) = 1 - 0.477632 = \boxed{0.522368}$
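Exhaustive enumeration over all $5^6$ equally likely color sequences (my addition) reproduces Soroban's number exactly:

```python
from itertools import product

colors = range(5)          # 0 = white, 1 = black, 2-4 = blue, red, green
draws = list(product(colors, repeat=6))
p = sum(1 for d in draws if 0 in d and 1 in d) / len(draws)
print(p)                   # 0.522368
```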
8. A neat solution Soroban.
I got the same answer rather more clumsily.
I wrote a quick easy program to do it the way I mentioned above.
http://www.science.gov/topicpages/c/carlo+photon+transport.html

#### Sample records for carlo photon transport
1. The 3-D Monte Carlo neutron and photon transport code MCMG and its algorithms
SciTech Connect
Deng, L.; Hu, Z.; Li, G.; Li, S.; Liu, Z.
2012-07-01
The 3-D Monte Carlo neutron and photon transport parallel code MCMG has been developed. A new collision mechanism based on material rather than nuclide is added in the code. Geometry cells and surfaces can be dynamically extended. A combination of multigroup and continuous cross-section transport is developed. The multigroup scattering expansion extends to P5, and upscattering is considered. Various multigroup libraries can easily be equipped in the code. Results agreeing with the experiments and the MCNP code are obtained for a series of models. MCMG is a factor of 2-4 faster than the MCNP code. (authors)
2. Macro-step Monte Carlo Methods and their Applications in Proton Radiotherapy and Optical Photon Transport
Jacqmin, Dustin J.
Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and proved to be just as accurate and more efficient. This work has the potential to accelerate light modeling for both photodynamic therapy and near-infrared spectroscopic imaging.
3. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by FDS Team, a fast MC method for electron-photon coupled transport is presented, with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed is increased with only a slight reduction in calculation accuracy; second, a variety of MC acceleration methods are applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System ARTS as an MC dose verification module.
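To make the ingredients of such analogue photon Monte Carlo concrete, here is a minimal illustrative sketch (mine, unrelated to SuperMC's actual implementation): exponentially sampled free paths, absorption versus isotropic rescattering, and a depth tally of absorption events in a homogeneous slab.

```python
import math
import random

def photon_slab_mc(mu_t=0.2, absorb_frac=0.3, slab=30.0, n=100_000, bins=30):
    """Analogue MC of photons entering a homogeneous slab along +z: sample an
    exponential free path, then either absorb or rescatter isotropically."""
    tally = [0] * bins
    for _ in range(n):
        z, c = 0.0, 1.0                        # depth and direction cosine
        while True:
            z += -math.log(random.random()) / mu_t * c
            if not 0.0 <= z < slab:
                break                          # photon escaped the slab
            if random.random() < absorb_frac:  # absorbed: score and terminate
                tally[int(z / slab * bins)] += 1
                break
            c = random.uniform(-1.0, 1.0)      # isotropic rescatter
    return tally

print(photon_slab_mc()[:5])                    # absorption counts near surface
```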
4. Monte Carlo photon transport on vector and parallel supercomputers: Final report
SciTech Connect
Martin, W.R.; Nowak, P.F.
1987-09-30
The vectorized Monte Carlo photon transport code VPHOT has been developed for the Cray-1, Cray X-MP, and Cray-2 computers. The effort in the current project was devoted to multitasking the VPHOT code and implementing it on the Cray X-MP and Cray-2 parallel-vector supercomputers, examining the robustness of the vectorized algorithm under changes in the physics of the test problems, and evaluating the efficiency of alternative algorithms, such as the ''stack-driven'' algorithm of Bobrowicz, for possible incorporation into VPHOT. These tasks are discussed in this paper. 4 refs.
5. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code
SciTech Connect
Cullen, D.E.
1997-11-22
TART97 is a coupled neutron-photon, 3 dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
6. COMET-PE as an Alternative to Monte Carlo for Photon and Electron Transport
2014-06-01
Monte Carlo methods are a central component of radiotherapy treatment planning, shielding design, detector modeling, and other applications. Long calculation times, however, can limit the usefulness of these purely stochastic methods. The coarse mesh method for photon and electron transport (COMET-PE) provides an attractive alternative. By combining stochastic pre-computation with a deterministic solver, COMET-PE achieves accuracy comparable to Monte Carlo methods in only a fraction of the time. The method's implementation has been extended to 3D, and in this work, it is validated by comparison to DOSXYZnrc using a photon radiotherapy benchmark. The comparison demonstrates excellent agreement; of the voxels that received more than 10% of the maximum dose, over 97.3% pass a 2% / 2mm acceptance test and over 99.7% pass a 3% / 3mm test. Furthermore, the method is over an order of magnitude faster than DOSXYZnrc and is able to take advantage of both distributed-memory and shared-memory parallel architectures for increased performance.
7. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
SciTech Connect
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
8. penORNL: a parallel monte carlo photon and electron transport package using PENELOPE
SciTech Connect
Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
9. Space applications of the MITS electron-photon Monte Carlo transport code system
SciTech Connect
Kensek, R.P.; Lorence, L.J.; Halbleib, J.A.; Morel, J.E.
1996-07-01
The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.
10. Multiple processor version of a Monte Carlo code for photon transport in turbid media
Colasanti, Alberto; Guida, Giovanni; Kisslinger, Annamaria; Liuzzi, Raffaele; Quarto, Maria; Riccio, Patrizia; Roberti, Giuseppe; Villani, Fulvia
2000-10-01
Although Monte Carlo (MC) simulations represent an accurate and flexible tool to study photon transport in strongly scattering media with complex geometrical topologies, they are very often infeasible because of their very high computation times. Parallel computing, in principle very suitable for the MC approach because it consists of the repeated application of the same calculations to unrelated and superposing events, offers a possible approach to overcome this problem. An MC multiple-processor code for optical and IR photon transport was developed and run on the parallel processor computer CRAY-T3E (128 DEC Alpha EV5 nodes, 600 Mflops) at CINECA (Bologna, Italy). The comparison between single-processor and multiple-processor runs for the same tissue models shows that parallelization reduces the computation time by a factor of about N, where N is the number of processors used. This means a computation time reduction by a factor ranging from about 10^2 (as in our case, where 128 processors are available) up to about 10^3 (with the most powerful parallel computers with 1024 processors). This reduction could make feasible MC simulations till now impracticable. The scaling of the execution time of the parallel code, as a function of the values of the main input parameters, is also evaluated.
11. Application of parallel computing to a Monte Carlo code for photon transport in turbid media
Colasanti, Alberto; Guida, Giovanni; Kisslinger, Annamaria; Liuzzi, Raffaele; Quarto, Maria; Riccio, Patrizia; Roberti, Giuseppe; Villani, Fulvia
1998-12-01
Monte Carlo (MC) simulations of photon transport in turbid media suffer a severe limitation represented by very high execution times in all practical cases. This problem could be approached with the technique of parallel computing, which, in principle, is very suitable for MC simulations because they consist of the repeated application of the same calculations to unrelated and superposing events. For the first time in the field of optical and IR photon transport, we developed an MC parallel code running on the parallel processor computer CRAY-T3E (128 DEC Alpha EV5 nodes, 600 Mflops) at CINECA (Bologna, Italy). The comparison of several single-processor runs (on Alpha AXP DEC 2100) and N-processor runs (on Cray T3E) for the same tissue models shows that the computation time is reduced by a factor of about 5*N, where N is the number of processors used. This means a computation time reduction by a factor ranging from about 10^2 (as in our case) up to about 5*10^3 (with the most powerful parallel computers) that could make feasible MC simulations till now impracticable.
12. A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code
Energy Science and Technology Software Center (ESTSC)
1998-06-12
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial-geometry, time-dependent Monte Carlo transport code. It can run on any modern computer and is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this one is. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD, which contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
13. A method for photon beam Monte Carlo multileaf collimator particle transport
Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe
2002-09-01
Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
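A minimal sketch of the simplified transport the abstract describes: attenuation thickness is accumulated over simple geometric regions, and at most one Compton scatter is sampled from the total. The region thicknesses, attenuation coefficient, and Compton branching fraction below are illustrative placeholders, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(0)
MU = 0.9  # illustrative attenuation coefficient [1/cm] for tungsten at MV energies

def mlc_thickness(regions):
    """Total attenuating thickness [cm] a ray accumulates, summed over the
    simplified geometric regions it traverses."""
    return sum(regions)

def transmit(regions, p_compton=0.9):
    """Return (weight, scattered) for one photon crossing the MLC.
    Only attenuation and at most one Compton scatter are modeled,
    mirroring the approximations described in the abstract."""
    t = mlc_thickness(regions)
    if rng.random() < np.exp(-MU * t):
        return 1.0, False        # photon passes unscattered
    if rng.random() < p_compton:
        return 1.0, True         # first-Compton-scatter photon, to be redirected
    return 0.0, False            # absorbed via other interactions

w, scat = transmit(regions=[0.2, 0.05, 0.35])  # leaf tip, groove, leaf body (made up)
```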
14. Parallel Monte Carlo Electron and Photon Transport Simulation Code (PMCEPT code)
Kum, Oyeon
2004-11-01
Simulations for cancer radiation treatment planning customized to each patient are very useful for both patient and doctor, since they can be used to find the most effective treatment with the least possible dose to the patient. Such a system, the so-called "Doctor by Information Technology", would be useful for providing high-quality medical services everywhere. However, the large amount of computing time required by the well-known general-purpose Monte Carlo (MC) codes has prevented their use for routine dose-distribution calculations in customized radiation treatment planning. The optimal solution for providing an "accurate" dose distribution within an "acceptable" time limit is to develop a parallel simulation algorithm on a Beowulf PC cluster, because it is the most accurate, efficient, and economical. I developed a parallel MC electron and photon transport simulation code based on the standard MPI message-passing interface. This algorithm solves the main difficulty of parallel MC simulation (overlapping random-number series on different processors) by using multiple random-number seeds. The parallel results agreed well with the serial ones, and the parallel efficiency approached 100%, as expected.
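The multiple-seed remedy for overlapping random-number series can be sketched with mpi4py and NumPy's SeedSequence; both libraries are assumptions here (the original PMCEPT code predates them), but the structure is the same: one root seed spawns a statistically independent stream per rank.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Spawn one child seed per rank from a common root seed, so the
# per-rank random-number streams never overlap.
child = np.random.SeedSequence(20040101).spawn(size)[rank]
rng = np.random.default_rng(child)

local_dose = rng.random(1000).sum()          # stand-in for a per-rank tally
total_dose = comm.reduce(local_dose, op=MPI.SUM, root=0)
if rank == 0:
    print("combined tally:", total_dose)
```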
15. ITS Version 3.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes
SciTech Connect
Halbleib, J.A.; Kensek, R.P.; Valdez, G.D.; Mehlhorn, T.A.; Seltzer, S.M.; Berger, M.J.
1993-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. It combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Flexibility of construction permits tailoring of the codes to specific applications and extension of code capabilities to more complex applications through simple update procedures.
16. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit
SciTech Connect
2009-11-15
Purpose: Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for their acceleration: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
17. ITS Version 5.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes
SciTech Connect
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2004-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
18. Development of parallel monte carlo electron and photon transport (PMCEPT) code III: Applications to medical radiation physics
Kum, Oyeon; Han, Youngyih; Jeong, Hae Sun
2012-05-01
Minimizing the differences between dose distributions calculated at the treatment planning stage and those delivered to the patient is an essential requirement for successful radiotherapy. Accurate calculation of dose distributions in the treatment planning process is important and can be achieved only by using a Monte Carlo calculation of particle transport. In this paper, we perform a further validation of our previously developed parallel Monte Carlo electron and photon transport (PMCEPT) code [Kum and Lee, J. Korean Phys. Soc. 47, 716 (2005) and Kim and Kum, J. Korean Phys. Soc. 49, 1640 (2006)] for applications to clinical radiation problems. A linear accelerator, Siemens' Primus 6 MV, was modeled and commissioned. A thorough validation includes both small fields, closely related to intensity-modulated radiation treatment (IMRT), and large fields. Two-dimensional comparisons with film measurements were also performed. The PMCEPT results, in general, agreed with the measured data to within a maximum error of about 2%. Considering the experimental errors, the PMCEPT results can provide the gold standard of dose distributions for radiotherapy. The computing time was also much shorter than that needed for measurements, although it is still a bottleneck for direct application to the daily routine treatment-planning procedure.
19. Monte Carlo electron-photon transport using GPUs as an accelerator: Results for a water-aluminum-water phantom
SciTech Connect
Su, L.; Du, X.; Liu, T.; Xu, X. G.
2013-07-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high-performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energies considered range from a few hundred keV to 30 MeV. Møller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuous Slowing Down Approximation (CSDA). Photon transport was handled using the delta-tracking method. The photoelectric effect, Compton scattering, and pair production were modeled. Voxelized geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6×10^6 electron histories were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a CPU with a single core used. (authors)
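Delta tracking (Woodcock tracking), which ARCHER uses for photons, samples flight lengths with a majorant cross section and rejects virtual collisions, so no ray-voxel intersections are needed. A 1D sketch with made-up coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)

def delta_track(mu_of, mu_max, x=0.0, direction=1.0):
    """Woodcock (delta) tracking in 1D: sample flight lengths with the
    majorant coefficient mu_max, then accept real collisions with
    probability mu(x)/mu_max; rejected collisions are virtual and the
    photon keeps flying.  mu_of maps position -> local coefficient."""
    while True:
        x += direction * rng.exponential(1.0 / mu_max)
        if not (0.0 <= x <= 10.0):      # illustrative slab [0, 10] cm
            return None                 # escaped the slab
        if rng.random() < mu_of(x) / mu_max:
            return x                    # real collision site

# Heterogeneous medium: water / aluminum / water, coefficients illustrative.
mu = lambda x: 0.2 if 4.0 <= x <= 6.0 else 0.07
site = delta_track(mu, mu_max=0.2)
```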
20. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code
SciTech Connect
Morgan C. White
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for the characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for the simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for a more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to calculate radiation dose due to the neutron environment around an MEA is shown. An uncertainty of a factor of three in the MEA calculations is shown to be due to uncertainties in the geometry modeling. It is believed that the methodology is sound and that good agreement between simulation and experiment has been demonstrated.
1. A Monte Carlo study of high-energy photon transport in matter: application for multiple scattering investigation in Compton spectroscopy
PubMed Central
Brancewicz, Marek; Itou, Masayoshi; Sakurai, Yoshiharu
2016-01-01
The first results of multiple scattering simulations of polarized high-energy X-rays for Compton experiments using a new Monte Carlo program, MUSCAT, are presented. The program is developed to follow the restrictions of real experimental geometries. The new simulation algorithm uses not only well known photon splitting and interaction forcing methods but it is also upgraded with the new propagation separation method and highly vectorized. In this paper, a detailed description of the new simulation algorithm is given. The code is verified by comparison with the previous experimental and simulation results by the ESRF group and new restricted geometry experiments carried out at SPring-8. PMID:26698070
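Of the variance-reduction methods named, interaction forcing has a particularly compact form: sample a truncated-exponential flight length so an interaction is guaranteed inside the region of interest, and carry the true interaction probability in the statistical weight. A sketch with arbitrary parameters (not MUSCAT's internals):

```python
import numpy as np

rng = np.random.default_rng(2)

def forced_interaction(mu, L, weight):
    """Interaction forcing: make the photon interact somewhere inside a
    path of length L by sampling a truncated exponential, and carry the
    true interaction probability in the statistical weight."""
    p_int = 1.0 - np.exp(-mu * L)          # probability of interacting in [0, L]
    xi = rng.random()
    s = -np.log(1.0 - xi * p_int) / mu     # truncated-exponential flight length
    return s, weight * p_int               # interaction site and adjusted weight

s, w = forced_interaction(mu=0.05, L=2.0, weight=1.0)
```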
2. A Monte Carlo study of high-energy photon transport in matter: application for multiple scattering investigation in Compton spectroscopy.
PubMed
Brancewicz, Marek; Itou, Masayoshi; Sakurai, Yoshiharu
2016-01-01
The first results of multiple scattering simulations of polarized high-energy X-rays for Compton experiments using a new Monte Carlo program, MUSCAT, are presented. The program is developed to follow the restrictions of real experimental geometries. The new simulation algorithm uses not only well known photon splitting and interaction forcing methods but it is also upgraded with the new propagation separation method and highly vectorized. In this paper, a detailed description of the new simulation algorithm is given. The code is verified by comparison with the previous experimental and simulation results by the ESRF group and new restricted geometry experiments carried out at SPring-8. PMID:26698070
3. The MC21 Monte Carlo Transport Code
SciTech Connect
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
4. Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code
SciTech Connect
Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.
2013-07-01
With the number of cells exceeding one million, the tallies exceeding one hundred million, and the particle histories exceeding ten billion, the simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. Moreover, the basic memory of the model has exceeded the limit of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully fulfilled the simulation of the full-core pin-by-pin model through domain decomposition and nested parallel computation. The k_eff and flux of each cell are obtained. (authors)
5. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors.
PubMed
Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali
2014-01-01
Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
6. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors
PubMed Central
Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali
2014-01-01
Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
7. Monte Carlo simulations incorporating Mie calculations of light transport in tissue phantoms: Examination of photon sampling volumes for endoscopically compatible fiber optic probes
SciTech Connect
Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.
1996-04-01
Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two, nearly-adjacent optical fibers and take into account the numerical aperture of the fibers as well as reflectance and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.
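The paper feeds full Mie calculations into the simulation; tissue-optics Monte Carlo codes commonly compress the phase function into an asymmetry parameter g and sample the Henyey-Greenstein form instead. The sketch below shows that standard sampling step as an illustrative stand-in, not the authors' Mie tables:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_hg_cos_theta(g, n):
    """Sample scattering-angle cosines from the Henyey-Greenstein phase
    function with asymmetry parameter g (a common stand-in for the full
    Mie phase function in tissue-optics Monte Carlo)."""
    xi = rng.random(n)
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0                      # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

cos_t = sample_hg_cos_theta(g=0.9, n=100_000)      # forward-peaked, tissue-like
print("mean cosine, should approach g:", cos_t.mean())
```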
8. ITS Version 5.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes with CAD geometry
SciTech Connect
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2005-09-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
9. Integrated Tiger Series of electron/photon Monte Carlo transport codes: a user's guide for use on IBM mainframes
SciTech Connect
Kirk, B.L.
1985-12-01
The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler.
10. RCPO1 - A Monte Carlo program for solving neutron and photon transport problems in three dimensional geometry with detailed energy description and depletion capability
SciTech Connect
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
11. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
SciTech Connect
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
12. Monte Carlo Simulation of Transport
Kuhl, Nelson M.
1996-11-01
This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. Numerical experiments validate a mathematical model of Paul R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The Monte Carlo method is used to solve a test particle drift kinetic equation. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian, as suggested by the central limit theorem. The detailed structure of the collision operator and the role of conservation of momentum are investigated. Exponential decay of expected values allows the computation of the confinement times of both ions and electrons. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. Comparison with experimental data and derivation of scaling laws are presented.
13. Monte Carlo simulation of transport
SciTech Connect
Kuhl, N.M.
1996-11-01
This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. Numerical experiments validate a mathematical model of Paul R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The Monte Carlo method is used to solve a test particle drift kinetic equation. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian, as suggested by the central limit theorem. The detailed structure of the collision operator and the role of conservation of momentum are investigated. Exponential decay of expected values allows the computation of the confinement times of both ions and electrons. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. Comparison with experimental data and derivation of scaling laws are presented. 13 refs., 6 figs.
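The central-limit argument both copies of this abstract invoke (a collision operator driving the velocity distribution toward a Maxwellian) can be illustrated with repeated small random kicks; the map below is a toy Ornstein-Uhlenbeck-style iteration, not the actual drift-kinetic collision operator:

```python
import numpy as np

rng = np.random.default_rng(4)

# Many small, independent collisional kicks drive any initial velocity
# distribution toward a Gaussian (Maxwellian component), per the CLT.
v = rng.uniform(-1.0, 1.0, size=100_000)     # deliberately non-Maxwellian start
for _ in range(500):                         # repeated small random kicks
    v = 0.99 * v + 0.01 * rng.normal(size=v.size)

# Excess kurtosis near 0 signals a Gaussian distribution.
kurt = ((v - v.mean()) ** 4).mean() / v.var() ** 2 - 3.0
print("excess kurtosis:", kurt)
```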
14. Photon transport in binary photonic lattices
Rodríguez-Lara, B. M.; Moya-Cessa, H.
2013-03-01
We present a review of the mathematical methods that are used to theoretically study classical propagation and quantum transport in arrays of coupled photonic waveguides. We focus on analyzing two types of binary photonic lattices: those where either self-energies or couplings alternate. For didactic reasons, we split the analysis into classical propagation and quantum transport, but all methods can be implemented, mutatis mutandis, in a given case. On the classical side, we use coupled mode theory and present an operator approach to the Floquet-Bloch theory in order to study the propagation of a classical electromagnetic field in two particular infinite binary lattices. On the quantum side, we study the transport of photons in equivalent finite and infinite binary lattices by coupled mode theory and linear algebra methods involving orthogonal polynomials. Curiously, the dynamics of finite size binary lattices can be expressed as the roots and functions of Fibonacci polynomials.
15. Automated Monte Carlo biasing for photon-generated electrons near surfaces.
SciTech Connect
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
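The weight-window mechanics themselves are standard: particles heavier than the window are split and lighter ones play Russian roulette. A sketch of that check follows; in the report the window bounds come from adjoint-flux Monte Carlo calculations, whereas the numbers here are placeholders:

```python
import numpy as np

rng = np.random.default_rng(5)

def apply_weight_window(weight, w_low, w_high, w_survive):
    """Weight-window check: split particles that are too heavy, play
    Russian roulette with those that are too light.  Returns a list of
    (possibly zero) surviving weights."""
    if weight > w_high:                      # split into m lighter copies
        m = int(np.ceil(weight / w_survive))
        return [weight / m] * m
    if weight < w_low:                       # roulette: survive or die
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []
    return [weight]

print(apply_weight_window(4.0, w_low=0.5, w_high=2.0, w_survive=1.0))
```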
16. Improved geometry representations for Monte Carlo radiation transport.
SciTech Connect
Martin, Matthew Ryan
2004-08-01
ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.
17. Recent advances in the Mercury Monte Carlo particle transport code
SciTech Connect
Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.
2013-07-01
We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)
18. Photon dose calculation incorporating explicit electron transport.
PubMed
Yu, C X; Mackie, T R; Wong, J W
1995-07-01
Significant advances have been made in recent years to improve photon dose calculation. However, accurate prediction of dose perturbation effects near the interfaces of different media, where charged particle equilibrium is not established, remains unsolved. Furthermore, changes in atomic number, which affect the multiple Coulomb scattering of the secondary electrons, are not accounted for by current photon dose calculation algorithms. As local interface effects are mainly due to the perturbation of secondary electrons, a photon-electron cascade model is proposed which incorporates explicit electron transport in the calculation of the primary photon dose component in heterogeneous media. The primary photon beam is treated as the source of many electron pencil beams. The latter are transported using the Fermi-Eyges theory. The scattered photon dose contribution is calculated with the dose spread array [T.R. Mackie, J.W. Scrimger, and J.J. Battista, Med. Phys. 12, 188-196 (1985)] approach. Comparisons of the calculation with Monte Carlo simulation and TLD measurements show good agreement for positions near the polystyrene-aluminum interfaces. PMID:7565390
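The Fermi-Eyges step used for the electron pencil beams gives the lateral RMS spread as a moment integral of the scattering power. A numerical sketch of that textbook relation, with a constant scattering power standing in for a real medium:

```python
import numpy as np

def fermi_eyges_sigma_x(T, z, n=1000):
    """Lateral RMS spread of an electron pencil beam at depth z under
    Fermi-Eyges theory: sigma_x(z)^2 equals the integral from 0 to z of
    (z - u)^2 * T(u) du, where T(u) is the linear scattering power
    [rad^2/cm] at depth u."""
    u = np.linspace(0.0, z, n)
    return np.sqrt(np.trapz((z - u) ** 2 * T(u), u))

# Constant scattering power as a toy homogeneous medium (value illustrative).
sigma = fermi_eyges_sigma_x(lambda u: np.full_like(u, 0.02), z=3.0)
print(f"pencil-beam sigma_x at 3 cm depth: {sigma:.3f} cm")
```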
19. Implict Monte Carlo Radiation Transport Simulations of Four Test Problems
SciTech Connect
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
20. Evaluation of bremsstrahlung contribution to photon transport in coupled photon-electron problems
Fernández, Jorge E.; Scot, Viviana; Di Giulio, Eugenio; Salvat, Francesc
2015-11-01
The most accurate description of the radiation field in x-ray spectrometry requires the modeling of coupled photon-electron transport. Compton scattering and the photoelectric effect actually produce electrons as secondary particles, which contribute to the photon field through conversion mechanisms like bremsstrahlung (which produces a continuous photon energy spectrum) and inner-shell impact ionization (ISII) (which gives characteristic lines). The solution of the coupled problem is time-consuming because the electrons interact continuously and therefore the number of electron collisions to be considered is always very high. This complex problem is frequently simplified by neglecting the contributions of the secondary electrons. Recent works (Fernández et al., 2013; Fernández et al., 2014) have shown the possibility of including a separately computed coupled photon-electron contribution like ISII in a photon calculation, improving on such a crude approximation while preserving the speed of the pure photon transport model. By means of a similar approach and the Monte Carlo code PENELOPE (coupled photon-electron Monte Carlo), the bremsstrahlung contribution is characterized in this work. The angular distribution of the photons due to bremsstrahlung can safely be considered isotropic, with the point of emission located at the same place as the photon collision. A new photon kernel describing the bremsstrahlung contribution is introduced: it can be included in photon transport codes (deterministic or Monte Carlo) with minimal effort. A data library describing the energy dependence of the bremsstrahlung emission has been generated for all elements Z=1-92 in the energy range 1-150 keV. The bremsstrahlung energy distribution for an arbitrary energy is obtained by interpolating in the database. A comparison between a PENELOPE direct simulation and the interpolated distribution using the database shows almost perfect agreement. The use of the database increases the calculation speed by several orders of magnitude.
1. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT
SciTech Connect
Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik
2012-08-20
Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
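The implicit scheme being generalized rests on the Fleck factor, which converts part of each time step's absorption-reemission cycle into effective scattering and thereby permits the large time steps noted above. A sketch of the standard Fleck-Cummings expression in cgs units; all parameter values below are illustrative assumptions:

```python
def fleck_factor(sigma_p, T, c_v, dt, alpha=1.0):
    """Standard Fleck-Cummings factor of Implicit Monte Carlo: governs
    the fraction of absorption-reemission within a time step treated as
    effective scattering.
    sigma_p: Planck-mean opacity [1/cm]; T: temperature [K];
    c_v: volumetric heat capacity [erg cm^-3 K^-1]; dt: time step [s]."""
    a = 7.5657e-15          # radiation constant [erg cm^-3 K^-4]
    c = 2.9979e10           # speed of light [cm/s]
    beta = 4.0 * a * T**3 / c_v
    return 1.0 / (1.0 + alpha * beta * c * dt * sigma_p)

# Illustrative placeholder values, not from the paper.
print(fleck_factor(sigma_p=1.0, T=1.0e6, c_v=1.0e8, dt=1.0e-10))
```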
2. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
SciTech Connect
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
1989-01-01
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
3. Applications of the Monte Carlo radiation transport toolkit at LLNL
Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine
1999-09-01
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e., money) can be realized. In addition, it is possible to separate out and investigate computationally effects that cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented, along with a few examples of applications and future directions.
4. SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)
SciTech Connect
West, J.T.; Murphy, J.
1988-01-01
SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.
5. Parallel and Portable Monte Carlo Particle Transport
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous-energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
6. Monte Carlo method for photon heating using temperature-dependent optical properties.
PubMed
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to errors in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature varies greatly, such as in the case of laser thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system, allowing temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat-transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations using constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between variable-property and constant-property methods for the representative system of laser-heated silicon can exceed 100 K. This simulation returns more accurate results for optical irradiation absorption in a material that undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. PMID:25488656
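The feedback loop described (deposit energy with the current local coefficients, update temperatures, re-select coefficients) can be sketched compactly. The linear temperature dependence and heat-capacity scaling below are invented placeholders, not the study's data:

```python
import numpy as np

def mu_a(T):
    """Illustrative temperature-dependent absorption coefficient [1/cm];
    a real code would interpolate measured data, as the abstract notes."""
    return 1.0 + 0.002 * (T - 300.0)

def heat_with_feedback(n_steps=50, dz=0.05, n_cells=40, T0=250.0):
    """Feedback loop from the abstract: deposit optical energy with the
    *current* local coefficients, update temperatures, then re-select
    coefficients for the next time step."""
    T = np.full(n_cells, T0)
    for _ in range(n_steps):
        I, dep = 1.0, np.zeros(n_cells)
        for i in range(n_cells):             # Beer-Lambert sweep, cell by cell
            absorbed = I * (1.0 - np.exp(-mu_a(T[i]) * dz))
            dep[i], I = absorbed, I - absorbed
        T += 50.0 * dep                      # crude heat-capacity scaling
    return T

print(heat_with_feedback()[:5])
```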
7. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study
PubMed Central
Zhang, Ying; Feng, Yuanming; Ming, Xin
2016-01-01
A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using gradient search algorithm was then employed to optimize photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in patient anatomy. Finally, 3D dose distributions in six patients of different tumor sites were simulated with Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve a more personalized care for individual patient with dosimetric gains. PMID:26977413
8. Monte Carlo simulation for the transport beamline
SciTech Connect
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
9. Monte Carlo simulation for the transport beamline
Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.
2013-07-01
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
10. Vertical Photon Transport in Cloud Remote Sensing Problems
NASA Technical Reports Server (NTRS)
Platnick, S.
1999-01-01
Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.
11. Scalable Domain Decomposed Monte Carlo Particle Transport
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain-decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: domain decomposition of constructive solid geometry, which enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node; load balancing, which keeps the workload per processor as even as possible so the calculation runs efficiently; and a global particle find, which, when particles are on the wrong processor, globally resolves their locations to the correct processor based on particle coordinate and background domain, as sketched below. Further algorithms handle visualizing constructive solid geometry, sourcing particles, deciding when particle-streaming communication is complete, and spatial redecomposition. These are some of the most important parallel algorithms required for domain-decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
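The global particle find reduces to routing each particle to the rank whose subdomain contains its coordinates. A serial sketch of that routing logic, with plain Python standing in for the MPI exchange and an assumed 1D slab decomposition:

```python
def owner_of(x, n_domains, x_max=100.0):
    """Map a particle coordinate to the rank owning that spatial slab."""
    return min(int(x / (x_max / n_domains)), n_domains - 1)

def global_particle_find(per_rank_particles, n_domains):
    """Route every particle to the rank that owns its coordinate;
    at scale this is an MPI all-to-all exchange rather than lists."""
    routed = [[] for _ in range(n_domains)]
    for rank, particles in enumerate(per_rank_particles):
        for x in particles:
            routed[owner_of(x, n_domains)].append(x)  # may stay on `rank`
    return routed

print(global_particle_find([[5.0, 80.0], [30.0, 99.0]], n_domains=4))
```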
12. Calculation of radiation therapy dose using all particle Monte Carlo transport
DOEpatents
Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.
1999-01-01
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
13. Calculation of radiation therapy dose using all particle Monte Carlo transport
DOEpatents
Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.
1999-02-09
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
14. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
SciTech Connect
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
15. Approximation for Horizontal Photon Transport in Cloud Remote Sensing Problems
NASA Technical Reports Server (NTRS)
Platnick, Steven
1999-01-01
The effect of horizontal photon transport within real-world clouds can be of consequence to remote sensing problems based on plane-parallel cloud models. An analytic approximation for the root-mean-square horizontal displacement of reflected and transmitted photons relative to the incident cloud-top location is derived from random walk theory. The resulting formula is a function of the average number of photon scatterings, and particle asymmetry parameter and single scattering albedo. In turn, the average number of scatterings can be determined from efficient adding/doubling radiative transfer procedures. The approximation is applied to liquid water clouds for typical remote sensing solar spectral bands, involving both conservative and non-conservative scattering. Results compare well with Monte Carlo calculations. Though the emphasis is on horizontal photon transport in terrestrial clouds, the derived approximation is applicable to any multiple scattering plane-parallel radiative transfer problem. The complete horizontal transport probability distribution can be described with an analytic distribution specified by the root-mean-square and average displacement values. However, it is shown empirically that the average displacement can be reasonably inferred from the root-mean-square value. An estimate for the horizontal transport distribution can then be made from the root-mean-square photon displacement alone.
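The square-root-of-scatterings scaling at the heart of the derived formula is easy to check with a toy random walk. The sketch assumes isotropic 2D steps of fixed length; the paper's full expression additionally folds in the asymmetry parameter and single-scattering albedo:

```python
import numpy as np

rng = np.random.default_rng(6)

# RMS horizontal displacement after N isotropic scatterings of mean free
# path `mfp` should match sqrt(N) * mfp.
N, mfp, n_photons = 50, 1.0, 200_000
phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_photons, N))
x = (mfp * np.cos(phi)).sum(axis=1)          # horizontal components
y = (mfp * np.sin(phi)).sum(axis=1)
rms_mc = np.sqrt((x**2 + y**2).mean())
print(f"MC rms = {rms_mc:.2f}, analytic sqrt(N)*mfp = {np.sqrt(N) * mfp:.2f}")
```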
16. Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
17. Fiber transport of spatially entangled photons
Löffler, W.; Eliel, E. R.; Woerdman, J. P.; Euser, T. G.; Scharrer, M.; Russell, P.
2012-03-01
High-dimensional entangled photon pairs are interesting for quantum information and cryptography: compared to the well-known 2D polarization case, the stronger non-local quantum correlations could improve noise resistance or security, and the larger amount of information per photon increases the available bandwidth. One implementation is to use entanglement in the spatial degree of freedom of twin photons created by spontaneous parametric down-conversion, which is equivalent to orbital angular momentum entanglement; this has been proven to be an excellent model system. The use of optical fiber technology for distribution of such photons has only very recently been practically demonstrated and is of fundamental and applied interest. It poses a big challenge compared to the established time- and frequency-domain methods: for spatially entangled photons, fiber transport requires the use of multimode fibers, and mode coupling and intermodal dispersion therein must be minimized so as not to destroy the spatial quantum correlations. We demonstrate that these shortcomings of conventional multimode fibers can be overcome by using a hollow-core photonic crystal fiber, which follows the paradigm of mimicking free-space transport as closely as possible, and we are able to confirm entanglement of the fiber-transported photons. Fiber transport of spatially entangled photons is largely unexplored yet; we therefore discuss the main complications, the interplay of intermodal dispersion and mode mixing, the influence of external stress and core deformations, and consider the pros and cons of various fiber types.
18. The all particle method: Coupled neutron, photon, electron, charged particle Monte Carlo calculations
SciTech Connect
Cullen, D.E.; Perkins, S.T.; Plechaty, E.F.; Rathkopf, J.A.
1988-06-01
At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of: neutrons, photons, electrons and light charged particles as well as the coupling between all species of particles, e.g., photon-induced electron emission. Since this code is being designed to handle all particles this approach is called the "All Particle Method". The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models "hard wired" into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to be used to directly control the execution of the program. In addition this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig.
19. A multiple-source photon beam model and its commissioning process for VMC++ Monte Carlo code
Tillikainen, L.; Siljamäki, S.
2008-02-01
The use of Monte Carlo methods in photon beam treatment planning is becoming feasible due to advances in hardware and algorithms. However, a major challenge is the modeling of the radiation produced by individual linear accelerators. Monte Carlo simulation through the accelerator head or a parameterized source model may be used for this purpose. In this work, the latter approach was chosen due to its larger flexibility and the smaller amount of required information about the accelerator composition. The source model used includes sub-sources for primary photons emerging from the target, extra-focal photons, and electron contamination. The free model parameters were derived by minimizing an objective function measuring deviations between pencil-beam-kernel based dose calculations and measurements. The output of the source model was then used as input for the VMC++ code, which was used to transport the particles through the accessory modules and the patient. To verify the procedure, VMC++ calculations were compared to measurements for open, wedged, and irregular MLC-shaped fields for 6 MV and 15 MV beams. The observed discrepancies were mostly within 2% or 2 mm. This work demonstrates that the developed procedure could, in the future, be used to commission the VMC++ algorithm for clinical use in a hospital.
20. Photon spectra calculation for an Elekta linac beam using experimental scatter measurements and Monte Carlo techniques.
PubMed
Juste, B; Miro, R; Campayo, J M; Diez, S; Verdu, G
2008-01-01
The present work is centered on reconstructing, by means of a scatter analysis method, the primary beam photon spectrum of a linear accelerator. This technique is based on irradiating the isocenter of a rectangular block made of methacrylate placed at 100 cm distance from the surface and measuring scattered particles around the plastic at several specific positions with different scatter angles. The MCNP5 Monte Carlo code has been used to simulate the particle transport of mono-energetic beams and to register the scatter measurements after interaction with the attenuator. Measured ionization values allow calculating the spectrum as the sum of mono-energetic individual energy bins using the Schiff bremsstrahlung model. The measurements have been made on an Elekta Precise linac using a 6 MeV photon beam. Relative depth and profile dose curves calculated in a water phantom using the reconstructed spectrum agree with experimentally measured dose data to within 3%. PMID:19163410
1. Comparison of Monte Carlo simulations of photon/electron dosimetry in microscale applications.
PubMed
Joneja, O P; Negreanu, C; Stepanek, J; Chawla, R
2003-06-01
It is important to establish reliable calculational tools to plan and analyse representative microdosimetry experiments in the context of microbeam radiation therapy development. In this paper, an attempt has been made to investigate the suitability of the MCNP4C Monte Carlo code to adequately model photon/electron transport over micron distances. The case of a single cylindrical microbeam of 25-micron diameter incident on a water phantom has been simulated in detail with both MCNP4C and the code PSI-GEANT, for different incident photon energies, to get absorbed dose distributions at various depths, with and without electron transport being considered. In addition, dose distributions calculated for a single microbeam with a photon spectrum representative of the European Synchrotron Radiation Facility (ESRF) have been compared. Finally, a large number of cylindrical microbeams (a total of 2601 beams, placed on a 200-micron square pitch, covering an area of 1 cm2) incident on a water phantom have been considered to study cumulative radial dose distributions at different depths. From these distributions, ratios of peak (within the microbeam) to valley (mid-point along the diagonal connecting two microbeams) dose values have been determined. The various comparisons with PSI-GEANT results have shown that MCNP4C, with its high flexibility in terms of its numerous source and geometry description options, variance reduction methods, detailed error analysis, statistical checks and different tally types, can be a valuable tool for the analysis of microbeam experiments. PMID:12956187
2. Discrete Diffusion Monte Carlo for Electron Thermal Transport
Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory
2014-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo-Discrete Diffusion Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.
3. Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
Dirgayussa, I. Gde Eka; Yani, Sitti; Rhani, M. Fahdillah; Haryanto, Freddy
2015-09-01
Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model of a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The commissioning of this linac head model was divided into two stages: designing the head model using BEAMnrc and characterizing it using BEAMDP, followed by analyzing the differences between simulation and measurement data using DOSXYZnrc. In the first step, to reduce simulation time, the virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, with a source FWHM (full width at half maximum) of 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. Percent depth doses (PDDs) and beam profiles at a depth of 10 cm, calculated with DOSXYZnrc in a water phantom, were compared with measurements; the commissioning was considered complete once the dose difference between measured and calculated relative depth-dose data along the central axis and dose profiles at a 10 cm depth was ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was also studied. The virtual model was in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 and can be used for reliable patient dose calculations; in this commissioning process, the dose-difference criteria for PDDs and dose profiles were achieved at an incident electron energy of 6.4 MeV.
4. Review of Monte Carlo modeling of light transport in tissues.
PubMed
Zhu, Caigang; Liu, Quan
2013-05-01
A general survey is provided on the capability of Monte Carlo (MC) modeling in tissue optics, with special attention paid to recent progress in the development of methods for speeding up MC simulations. The principles of MC modeling for the simulation of light transport in tissues, including the general procedure of tracking an individual photon packet, common light-tissue interactions that can be simulated, frequently used tissue models, common contact/noncontact illumination and detection setups, and the treatment of time-resolved and frequency-domain optical measurements, are briefly described to help interested readers achieve a quick start. Following that, a variety of methods for speeding up MC simulations, including scaling methods, perturbation methods, hybrid methods, variance reduction techniques, parallel computation, and special methods for fluorescence simulations, are discussed together with their respective advantages and disadvantages. Then the applications of MC methods in tissue optics, laser Doppler flowmetry, photodynamic therapy, optical coherence tomography, and diffuse optical tomography are briefly surveyed. Finally, potential directions for the future development of the MC method in tissue optics are discussed. PMID:23698318
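As a concrete illustration of the photon-packet tracking procedure the review describes, here is a bare-bones Python sketch in the style of MCML for a homogeneous slab; the optical parameters are hypothetical, and refractive-index mismatch, spatial tallies, and time-resolved detection are all omitted.

```python
# Bare-bones photon-packet loop: exponential free flights, implicit capture,
# Henyey-Greenstein scattering, and Russian roulette in a homogeneous slab.
import numpy as np

rng = np.random.default_rng(2)
MU_A, MU_S, G, THICK = 0.1, 100.0, 0.9, 0.1   # 1/cm, 1/cm, anisotropy, cm
MU_T = MU_A + MU_S
N = 2000

def scatter(u, g):
    """Rotate unit direction u by a Henyey-Greenstein polar angle."""
    xi = rng.random()
    ct = (1 + g*g - ((1 - g*g) / (1 - g + 2*g*xi))**2) / (2*g)
    st, phi = np.sqrt(1 - ct*ct), 2*np.pi*rng.random()
    ux, uy, uz = u
    if abs(uz) > 0.99999:                     # nearly vertical: simple case
        return np.array([st*np.cos(phi), st*np.sin(phi), ct*np.sign(uz)])
    den = np.sqrt(1 - uz*uz)
    return np.array([
        st*(ux*uz*np.cos(phi) - uy*np.sin(phi))/den + ux*ct,
        st*(uy*uz*np.cos(phi) + ux*np.sin(phi))/den + uy*ct,
        -st*np.cos(phi)*den + uz*ct])

absorbed = reflected = transmitted = 0.0
for _ in range(N):
    z, u, w = 0.0, np.array([0.0, 0.0, 1.0]), 1.0   # pencil beam at surface
    while w > 0.0:
        z += u[2] * rng.exponential(1.0 / MU_T)     # free flight
        if z < 0.0:                                 # escaped upward
            reflected += w; break
        if z > THICK:                               # escaped downward
            transmitted += w; break
        absorbed += w * MU_A / MU_T                 # implicit capture
        w *= MU_S / MU_T
        u = scatter(u, G)
        if w < 1e-4:                                # Russian roulette
            w = w * 10.0 if rng.random() < 0.1 else 0.0
print(f"absorbed={absorbed/N:.3f} reflected={reflected/N:.3f} "
      f"transmitted={transmitted/N:.3f}")
```

The three tallies sum to approximately one per launched packet, which is a quick sanity check on the weighting scheme.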
5. A generic algorithm for Monte Carlo simulation of proton transport
Salvat, Francesc
2013-12-01
A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.
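The tabulated-DCS sampling step can be illustrated compactly. The Python sketch below uses plain inverse-transform lookup on a fixed fine grid with a made-up screened-Rutherford-like cross section; the adaptive interpolation-error control of the actual algorithm is not reproduced, and the screening parameter is purely illustrative.

```python
# Inverse-transform sampling of a polar scattering angle from a tabulated
# differential cross section (per unit polar angle).
import numpy as np

rng = np.random.default_rng(3)

theta = np.linspace(1e-3, np.pi, 2000)          # polar angle grid [rad]
eta = 1e-3                                      # hypothetical screening param
dcs = np.sin(theta) / (1.0 - np.cos(theta) + 2.0 * eta)**2   # dP/dtheta

# Build the cumulative distribution by trapezoidal integration.
cdf = np.concatenate([[0.0],
                      np.cumsum(0.5 * (dcs[1:] + dcs[:-1]) * np.diff(theta))])
cdf /= cdf[-1]

def sample_theta(n):
    """Invert the tabulated CDF by linear interpolation."""
    return np.interp(rng.random(n), cdf, theta)

print("mean polar angle [rad]:", sample_theta(100000).mean())
```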
6. Efficient photon treatment planning by the use of Swiss Monte Carlo Plan
Fix, M. K.; Manser, P.; Frei, D.; Volken, W.; Mini, R.; Born, E. J.
2007-06-01
Currently photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) usually can only be performed using a cumbersome multi-step procedure where many user interactions are needed. Automation is needed for usage in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new GUI-based photon MC environment has been developed resulting in a very flexible framework, namely the Swiss Monte Carlo Plan (SMCP). Appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment the MC particle transport has been divided into different parts: source, beam modifiers, and patient. The source part includes: phase-space source, source models, and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of Dicom streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence no files are used as an interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, three patient cases are shown. Thereby, comparisons are shown between MC-calculated dose distributions and those calculated by a pencil beam algorithm or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows a widespread use for all kinds of investigations from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.
7. A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport
SciTech Connect
Bal, Guillaume; Davis, Anthony B.; Langmore, Ian
2011-08-20
Highlights: → We introduce a variance reduction scheme for Monte Carlo (MC) transport. → The primary application is atmospheric remote sensing. → The technique first solves the adjoint problem using a deterministic solver. → Next, the adjoint solution is used as an importance function for the MC solver. → The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
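The core idea, biasing the sampling with a cheap deterministic approximation of the adjoint and correcting with weights, can be shown on a toy problem. In the hedged Python sketch below, the "importance" is an analytic stand-in (an exponential tilt toward near-normal directions) rather than a real adjoint solve, and the transport problem is reduced to single-flight transmission through an optical depth; nothing here reproduces the paper's atmospheric configuration.

```python
# Toy importance-biased estimate of mean transmission through optical
# depth TAU for an isotropic source, versus the analog estimate.
import numpy as np

rng = np.random.default_rng(4)
TAU, N = 5.0, 100000

# Analog: mu ~ Uniform(-1, 1); only upward-going particles can transmit.
mu = rng.uniform(-1.0, 1.0, N)
analog = np.where(mu > 0, np.exp(-TAU / np.clip(mu, 1e-12, None)), 0.0)

# Biased: sample mu from q(mu) = a*exp(a*mu)/(e^a - 1) on (0, 1), a stand-in
# "adjoint" favoring near-normal directions; weight = p(mu)/q(mu).
a = TAU
xi = rng.random(N)
mu_b = np.log(1.0 + xi * np.expm1(a)) / a
weight = np.expm1(a) * np.exp(-a * mu_b) / (2.0 * a)
biased = weight * np.exp(-TAU / mu_b)

for name, s in (("analog", analog), ("importance-biased", biased)):
    print(f"{name:18s} mean={s.mean():.3e} "
          f"rel.std.err={s.std() / s.mean() / np.sqrt(N):.2%}")
```

Both estimators agree in the mean, but the biased one concentrates samples where the score is nonzero, so its relative standard error is far smaller for the same N.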
8. Shift: A Massively Parallel Monte Carlo Radiation Transport Package
SciTech Connect
Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P
2015-01-01
This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.
9. Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, probabilistic and deterministic. While probabilistic methods are typically used in pulse-height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: an algorithm that computes the collided components of the scalar flux by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse-height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
10. Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method
Moralles, M.; Guimarães, C. C.; Okuno, E.
2005-06-01
Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy deposition depth profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating a Philips MG-450 X-ray tube with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm^-1.
11. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.
PubMed
Pratx, Guillem; Xing, Lei
2011-12-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
12. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-12-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.
13. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
PubMed Central
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
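Since this abstract (and its duplicate listings above) describes a clean map/reduce decomposition, a toy Python mimic is easy to write: each "map" task simulates an independent batch of photon histories and the "reduce" step sums partial tallies. The 1D slab problem and all coefficients below are stand-ins; a real deployment would hand map_task to Hadoop or a multiprocessing pool.

```python
# Toy MapReduce-style Monte Carlo: independent mappers, a summing reducer.
# Independence of the batches is what makes the scheme fault-tolerant:
# a failed map task is simply re-run.
import random
from functools import reduce

MU_A, MU_S, L = 1.0, 9.0, 1.0     # absorption, scattering [1/cm]; slab [cm]
MU_T = MU_A + MU_S

def map_task(args):
    seed, n_photons = args
    rng = random.Random(seed)     # per-task seed: mappers run independently
    absorbed = 0
    for _ in range(n_photons):
        x, d = 0.0, 1.0           # photon enters the slab moving inward
        while True:
            x += d * rng.expovariate(MU_T)     # free flight to collision
            if x < 0.0 or x > L:               # escaped the slab
                break
            if rng.random() < MU_A / MU_T:     # analog absorption
                absorbed += 1
                break
            d = rng.choice((-1.0, 1.0))        # isotropic 1D re-direction
    return absorbed

# 'Map' phase (multiprocessing.Pool.map or a real cluster would go here),
# then the 'reduce' phase combines the partial tallies.
tallies = map(map_task, [(seed, 1000) for seed in range(32)])
print("absorbed fraction:", reduce(lambda s, t: s + t, tallies) / 32000)
```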
14. Monte Carlo Assessments of Absorbed Doses to the Hands of Radiopharmaceutical Workers Due to Photon Emitters
SciTech Connect
Ilas, Dan; Eckerman, Keith F; Karagiannis, Harriet
2009-01-01
This paper describes the characterization of radiation doses to the hands of nuclear medicine technicians resulting from the handling of radiopharmaceuticals. Radiation monitoring using ring dosimeters indicates that finger dosimeters that are used to show compliance with applicable regulations may overestimate or underestimate radiation doses to the skin depending on the nature of the particular procedure and the radionuclide being handled. To better understand the parameters governing the absorbed dose distributions, a detailed model of the hands was created and used in Monte Carlo simulations of selected nuclear medicine procedures. Simulations of realistic configurations typical for workers handling radiopharmaceuticals were performed for a range of energies of the source photons. The lack of charged-particle equilibrium necessitated full photon-electron coupled transport calculations. The results show that the dose to different regions of the fingers can differ substantially from dosimeter readings when dosimeters are located at the base of the finger. We tried to identify consistent patterns that relate the actual dose to the dosimeter readings. These patterns depend on the specific work conditions and can be used to better assess the absorbed dose to different regions of the exposed skin.
15. Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method
Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin
2015-07-01
The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons were calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female that was created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model can represent most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate differences in internal dose with respect to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was simulated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that SAFs from the Rad-HUMAN have similar trends but are larger than those from the other two models. The differences were due to the racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000), the National Natural Science Foundation of China (910266004, 11305205, 11305203) and the National Special Program for ITER (2014GB112001)
16. Performance analysis of the Monte Carlo code MCNP4A for photon-based radiotherapy applications
SciTech Connect
DeMarco, J.J.; Solberg, T.D.; Wallace, R.E.; Smathers, J.B.
1995-12-31
The Los Alamos code MCNP4A (Monte Carlo N-Particle, version 4A) is currently used to simulate a variety of problems ranging from nuclear reactor analysis to boron neutron capture therapy. This study is designed to evaluate MCNP4A as the dose calculation system for photon-based radiotherapy applications. A graphical user interface (MCNPRT, MCNP Radiation Therapy) has been developed which automatically sets up the geometry and photon source requirements for three-dimensional simulations using Computed Tomography (CT) data. Preliminary results suggest the code is capable of calculating satisfactory dose distributions in a variety of simulated homogeneous and heterogeneous phantoms. The major drawback for this dosimetry system is the amount of time to obtain a statistically significant answer. MCNPRT allows the user to analyze the performance of MCNP4A as a function of material, geometry resolution, and MCNP4A photon and electron physics parameters. A typical simulation geometry consists of a 10 MV photon point source incident on a 15 × 15 × 15 cm³ phantom composed of water voxels ranging in size from 10 × 10 × 10 mm³ to 2 × 2 × 2 mm³. As the voxel size is decreased, a larger percentage of time is spent tracking photons through the voxelized geometry as opposed to the secondary electrons. A PRPR Patch file is under development that will optimize photon transport within the simulation phantom specifically for radiotherapy applications. MCNP4A also supports parallel processing capabilities via the Parallel Virtual Machine (PVM) message passing system. A dedicated network of five SUN SPARC2 processors produced a wall-clock speedup of 4.4 based on a simulation phantom containing 5 × 5 × 5 mm³ water voxels. The code was also tested on the 80-node IBM RS/6000 cluster at the Maui High Performance Computing Center (MHPCC). A non-dedicated system of 75 processors produces a wall-clock speedup of 29 relative to one SUN SPARC2 computer.
17. Monte Carlo Simulation of Light Transport in Tissue, Beta Version
Energy Science and Technology Software Center (ESTSC)
2003-12-09
Understanding light-tissue interaction is fundamental in the field of Biomedical Optics. It has important implications for both therapeutic and diagnostic technologies. In this program, light transport in scattering tissue is modeled by absorption and scattering events as each photon travels through the tissue. The path of each photon is determined statistically by calculating the probabilities of scattering and absorption. Other measured quantities are total reflected light, total transmitted light, and total heat absorbed.
18. Monte Carlo simulation and experimental measurement of a nonspectroscopic radiation portal monitor for photon detection efficiencies of internally deposited radionuclides
Carey, Matthew Glen
Particle transport of radionuclide photons using the Monte Carlo N-Particle computer code can be used to determine a portal monitor's photon detection efficiency, in units of counts per photon, for internally deposited radionuclides. Good agreement has been found with experimental results for radionuclides that emit higher energy photons, such as Cs-137 and Co-60. Detection efficiency for radionuclides that emit lower energy photons, such as Am-241, greatly depend on the effective discriminator energy level of the portal monitor as well as any attenuating material between the source and detectors. This evaluation uses a chi-square approach to determine the best fit discriminator level of a non-spectroscopic portal monitor when the effective discriminator level, in units of energy, is not known. Internal detection efficiencies were evaluated experimentally using an anthropomorphic phantom with NIST traceable sources at various internal locations, and by simulation using MCNP5. The results of this research find that MCNP5 can be an effective tool for simulation of photon detection efficiencies, given a known discriminator level, for internally and externally deposited radionuclides. In addition, MCNP5 can be used for bounding personnel doses from either internally or externally deposited mixtures of radionuclides.
19. Efficient, Automated Monte Carlo Methods for Radiation Transport
PubMed Central
Kong, Rong; Ambrose, Martin; Spanier, Jerome
2012-01-01
Monte Carlo simulations provide an indispensible model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872
20. Efficient, automated Monte Carlo methods for radiation transport
SciTech Connect
Kong Rong; Ambrose, Martin; Spanier, Jerome
2008-11-20
Monte Carlo simulations provide an indispensible model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.
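The staged-learning flavor of this approach can be demonstrated on a one-dimensional integral. The Python sketch below is not the authors' coupled forward/adjoint algorithm: each stage re-fits a truncated-exponential importance density from the previous stage's weighted samples, and because the integrand here is itself exponential, the learned importance converges toward the zero-variance choice and the error collapses across stages.

```python
# Staged adaptive importance sampling for I = integral_0^1 exp(-10x) dx.
# Stage k samples from q(x) = a*exp(-a*x)/(1-exp(-a)) on (0,1); the rate a
# is re-fitted from stage k's weighted samples and reused in stage k+1.
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: np.exp(-10.0 * x)
exact = (1.0 - np.exp(-10.0)) / 10.0

a = 1e-6                                   # stage 0: essentially uniform
for stage in range(5):
    xi = rng.random(2000)
    x = -np.log(1.0 - xi * (1.0 - np.exp(-a))) / a        # inverse CDF of q
    w = f(x) * (1.0 - np.exp(-a)) / (a * np.exp(-a * x))  # f/q
    print(f"stage {stage}: estimate={w.mean():.6f} "
          f"(exact {exact:.6f}), std.err={w.std()/np.sqrt(w.size):.2e}")
    a = 1.0 / np.average(x, weights=w)     # crude 'learning' update
```

After the first update the fitted rate sits near 10, where f/q is nearly constant, so the standard error drops by orders of magnitude, the same qualitative behavior as the geometric convergence reported above.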
1. Monte Carlo simulation of photon densities inside the dermis in LLLT (low level laser therapy)
Parvin, Parviz; Eftekharnoori, Somayeh; Dehghanpour, Hamid Reza
2009-09-01
In this work, the photon distribution of a He-Ne laser within dermis tissue is studied. The dermis, as a highly scattering medium, was irradiated by a low-power laser. The photon densities as well as the corresponding isothermal contours were obtained by two different numerical methods, i.e., Lambert-Beer and Welch. The results were subsequently compared to those of a Monte Carlo simulation.
2. Accurate and efficient Monte Carlo solutions to the radiative transport equation in the spatial frequency domain
PubMed Central
2012-01-01
We present an approach to solving the radiative transport equation (RTE) for layered media in the spatial frequency domain (SFD) using Monte Carlo (MC) simulations. This is done by obtaining a complex photon weight from analysis of the Fourier transform of the RTE. We also develop a modified shortcut method that enables a single MC simulation to efficiently provide RTE solutions in the SFD for any number of spatial frequencies. We provide comparisons between the modified shortcut method and conventional discrete transform methods for SFD reflectance. Further results for oblique illumination illustrate the potential diagnostic utility of the SFD phase-shifts for analysis of layered media. PMID:21685989
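The complex-weight tally at the heart of the shortcut method is compact enough to sketch. In the Python fragment below, the lateral exit positions and weights are drawn from a made-up stand-in distribution rather than a real spatial Monte Carlo; the point is that a single set of histories yields the SFD reflectance at any number of spatial frequencies.

```python
# One set of photon histories (lateral exit offsets x with exit weights w)
# gives the spatial-frequency-domain reflectance at any frequency fx by
# accumulating the complex weight w * exp(-i*2*pi*fx*x).
import numpy as np

rng = np.random.default_rng(7)
n = 200000
x = rng.laplace(0.0, 0.3, n)      # hypothetical lateral exit offsets [cm]
w = np.full(n, 0.6)               # hypothetical exit weights

for fx in (0.0, 0.1, 0.2, 0.5, 1.0):              # spatial frequencies [1/cm]
    R = np.mean(w * np.exp(-2j * np.pi * fx * x))
    print(f"fx={fx:4.1f} /cm  |R|={abs(R):.4f}  phase={np.angle(R):+.3f} rad")
```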
3. A three-dimensional Monte Carlo calculation of the photon initiated showers and Kiel result
NASA Technical Reports Server (NTRS)
1985-01-01
The Kiel experimental results indicate the existence of ultra-high-energy gamma-rays coming from Cyg X-3. However, the result also indicates that the number of muons included in the photon-initiated showers is the same as the number included in the proton-initiated showers. According to our Monte Carlo calculation, as shown in the accompanying graph, the number of muons included in photon-initiated showers should be less than 1/15 of that in proton-initiated showers. The previous simulation was made under a one-dimensional approximation; this time the result of a three-dimensional calculation is reported.
4. Monte Carlo generator photon jets used for luminosity at e+e- colliders
Fedotovich, G. V.; Kuraev, E. A.; Sibidanov, A. L.
2010-06-01
A Monte Carlo generator, Photon Jets (MCGPJ), for simulating Bhabha scattering as well as the production of two charged muons and two-photon events is discussed. The theoretical precision of the cross sections with radiative corrections (RC) is estimated to be smaller than 0.2%. The next-to-leading-order (NLO) radiative corrections proportional to α are treated exactly, whereas all logarithmically enhanced contributions, related to photon jets emitted in the collinear region, are taken into account within the framework of the Structure Function approach. Numerous tests of the MCGPJ as well as a detailed comparison with other MC generators are presented.
5. Monte Carlo Modeling of Photon Interrogation Methods for Characterization of Special Nuclear Material
SciTech Connect
Pozzi, Sara A; Downar, Thomas J; Padovani, Enrico; Clarke, Shaun D
2006-01-01
This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission
6. Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
SciTech Connect
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-02-15
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC-shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm² fields, differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
7. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
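A standard ingredient of such lattice models is kinetic Monte Carlo hopping with Miller-Abrahams rates. The Python sketch below reduces the morphology to a single energetically disordered 1D chain with illustrative parameters; it is a generic textbook-style stand-in, not the coarse-grained model described in the abstract.

```python
# Kinetic Monte Carlo hopping transport on a disordered 1D chain with
# Miller-Abrahams rates; a drift velocity and a crude mobility estimate
# are extracted. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(8)
N_SITES, kT, F, a = 200, 0.025, 0.01, 1.0   # sites, eV, eV/site, lattice
E = rng.normal(0.0, 0.1, N_SITES)           # Gaussian site-energy disorder

def rate(i, j, step):
    """Miller-Abrahams rate (attempt frequency nu0 = 1)."""
    dE = E[j % N_SITES] - E[i % N_SITES] - F * step   # field tilts landscape
    return np.exp(-dE / kT) if dE > 0 else 1.0

pos, t = 0, 0.0
for _ in range(200000):
    r_right = rate(pos, pos + 1, +a)
    r_left = rate(pos, pos - 1, -a)
    r_tot = r_right + r_left
    t += rng.exponential(1.0 / r_tot)                 # Gillespie waiting time
    pos += 1 if rng.random() < r_right / r_tot else -1
print(f"drift velocity = {pos * a / t:.3e},  mobility ~ v/F = "
      f"{pos * a / (t * F):.3e}  (arbitrary units)")
```

In a heterogeneous-morphology study, the disorder strength would differ between ordered and disordered regions, which is exactly the kind of variation such simulations are built to probe.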
8. Neutron streaming Monte Carlo radiation transport code MORSE-CG
SciTech Connect
Halley, A.M.; Miller, W.H.
1986-11-01
Calculations have been performed using the Monte Carlo code, MORSE-CG, to determine the neutron streaming through various straight and stepped gaps between radiation shield sectors in the conceptual tokamak fusion power plant design STARFIRE. This design calls for "pie-shaped" radiation shields with gaps between segments. It is apparent that some type of offset, or stepped gap, configuration will be necessary to reduce neutron streaming through these gaps. To evaluate this streaming problem, a MORSE-to-MORSE coupling technique was used, consisting of two separate transport calculations, which together defined the entire transport problem. The results define the effectiveness of various gap configurations to eliminate radiation streaming.
9. Comparing gold nano-particle enhanced radiotherapy with protons, megavoltage photons and kilovoltage photons: a Monte Carlo simulation
Lin, Yuting; McMahon, Stephen J.; Scarpelli, Matthew; Paganetti, Harald; Schuemann, Jan
2014-12-01
Gold nanoparticles (GNPs) have shown potential to be used as a radiosensitizer for radiation therapy. Despite extensive research activity to study GNP radiosensitization using photon beams, only a few studies have been carried out using proton beams. In this work Monte Carlo simulations were used to assess the dose enhancement of GNPs for proton therapy. The enhancement effect was compared between a clinical proton spectrum, a clinical 6 MV photon spectrum, and a kilovoltage photon source similar to those used in many radiobiology lab settings. We showed that the mechanism by which GNPs can lead to dose enhancements in radiation therapy differs when comparing photon and proton radiation. The GNP dose enhancement using protons can be up to 14 and is independent of proton energy, while the dose enhancement is highly dependent on the photon energy used. For the same amount of energy absorbed in the GNP, interactions with protons, kVp photons and MV photons produce similar doses within several nanometers of the GNP surface, and differences are below 15% for the first 10 nm. However, secondary electrons produced by kilovoltage photons have the longest range in water as compared to protons and MV photons, e.g. they cause a dose enhancement 20 times higher than the one caused by protons 10 μm away from the GNP surface. We conclude that GNPs have the potential to enhance radiation therapy depending on the type of radiation source. Proton therapy can be enhanced significantly only if the GNPs are in close proximity to the biological target.
10. Monte Carlo radiation transport: A revolution in science
SciTech Connect
Hendricks, J.
1993-04-01
When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.
11. SIMIND Monte Carlo simulation of a single photon emission CT
PubMed Central
Bahreyni Toossi, M. T.; Islamian, J. Pirayesh; Momennezhad, M.; Ljungberg, M.; Naseri, S. H.
2010-01-01
In this study, we simulated a Siemens E.CAM SPECT system using the SIMIND Monte Carlo program to acquire its experimental characterization in terms of energy resolution, sensitivity, spatial resolution and imaging of phantoms using 99mTc. The experimental and simulation data for SPECT imaging were acquired from a point source and a Jaszczak phantom. Verification of the simulation was done by comparing the two sets of images and related data obtained from the actual and simulated systems. Image quality was assessed by comparing image contrast and resolution. Simulated and measured energy spectra (with or without a collimator) and spatial resolution from point sources in air were compared. The resulting energy spectra present similar peaks for the 99mTc gamma energy at 140 keV. The FWHM was calculated to be 14.01 keV for the simulation and 13.80 keV for the experimental data, corresponding to energy resolutions of 10.01% and 9.86%, compared to the specified 9.9% for both systems. Sensitivities of the real and virtual gamma cameras were calculated to be 85.11 and 85.39 cps/MBq, respectively. The energy spectra of the simulated and real gamma cameras were matched. Images obtained from the Jaszczak phantom, experimentally and by simulation, showed similarity in contrast and resolution. SIMIND Monte Carlo could successfully simulate the Siemens E.CAM gamma camera. The results validate the use of the simulated system for further investigation, including modification, planning, and developing a SPECT system to improve the quality of images. PMID:20177569
12. Modeling photon transport in transabdominal fetal oximetry
Jacques, Steven L.; Ramanujam, Nirmala; Vishnoi, Gargi; Choe, Regine; Chance, Britton
2000-07-01
The possibility of optical oximetry of the blood in the fetal brain, measured across the maternal abdomen just prior to birth, is under investigation. Such measurements could detect fetal distress prior to birth and aid in the clinical decision regarding Cesarean section. This paper uses a perturbation method to model photon transport through an 8-cm-diameter fetal brain located at a constant 2.5 cm below a curved maternal abdominal surface with an air/tissue boundary. In the simulation, a near-infrared light source delivers light to the abdomen and a detector is positioned up to 10 cm from the source along the arc of the abdominal surface. The light transport [fluence rate in W/cm² per W incident power] collected at the 10 cm position is Tm = 2.2 × 10^-6 cm^-2 if the fetal brain has the same optical properties as the mother, and Tf = 1.0 × 10^-6 cm^-2 for an optically perturbing fetal brain with typical brain optical properties. The perturbation P = (Tf - Tm)/Tm is -53% due to the fetal brain. The model illustrates the challenge and feasibility of transabdominal oximetry of the fetal brain.
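A quick arithmetic check of the quoted perturbation, using the rounded transport values given above (the -53% in the abstract presumably comes from unrounded numbers):

```python
# Consistency check of the perturbation P = (Tf - Tm)/Tm from the rounded
# values quoted in the abstract.
Tm, Tf = 2.2e-6, 1.0e-6          # fluence rate per W incident power [1/cm^2]
P = (Tf - Tm) / Tm
print(f"P = {P:+.1%}")           # -> -54.5%, close to the quoted -53%
```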
13. Monte Carlo calculation of dose rate conversion factors for external exposure to photon emitters in soil.
PubMed
Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J
2000-03-01
The dose rate conversion factors D(CF) (absorbed dose rate in air per unit activity per unit soil mass, nGy h(-1) per Bq kg(-1)) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly MCNP, accurately calculate the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered), the D(CF) values calculated by the three codes are in very good agreement with each other. The comparison between these results and results deduced previously by other authors indicates good agreement (less than 15% difference) for photon energies above 1,500 keV. In contrast, the agreement is not as good (differences of 20-30%) for low-energy photons. PMID:10688452
14. Neutron and photon transport in seagoing cargo containers
SciTech Connect
Pruet, J.; Descalle, M.-A.; Hall, J.; Pohl, B.; Prussin, S.G.
2005-05-01
Factors affecting sensing of small quantities of fissionable material in large seagoing cargo containers by neutron interrogation and detection of β-delayed photons are explored. The propagation of variable-energy neutrons in cargos, subsequent fission of hidden nuclear material and production of the β-delayed photons, and the propagation of these photons to an external detector are considered explicitly. Detailed results of Monte Carlo simulations of these stages in representative cargos are presented. Analytical models are developed both as a basis for a quantitative understanding of the interrogation process and as a tool to allow ready extrapolation of our results to cases not specifically considered here.
15. Monte Carlo simulation of secondary radiation exposure from high-energy photon therapy using an anthropomorphic phantom.
PubMed
Frankl, Matthias; Macián-Juan, Rafael
2016-03-01
The development of intensity-modulated radiotherapy treatments delivering large amounts of monitor units (MUs) recently raised concern about higher risks for secondary malignancies. In this study, optimised combinations of several variance reduction techniques (VRTs) were implemented in order to achieve high precision in Monte Carlo (MC) radiation transport simulations and in the calculation of in- and out-of-field photon and neutron dose-equivalent distributions in an anthropomorphic phantom using MCNPX, v.2.7. The computer model included a Varian Clinac 2100C treatment head and a high-resolution head phantom. By means of the applied VRTs, a relative uncertainty for the photon dose-equivalent distribution of <1% in-field and 15% on average over the rest of the phantom was obtained. The neutron dose equivalent, caused by photonuclear reactions in the linear accelerator components at photon energies above approximately 8 MeV, was calculated. The relative uncertainty, calculated for each voxel, was kept below 5% on average over all voxels of the phantom. Thus, a very detailed neutron dose distribution was obtained. The achieved precision now allows a far better estimation of both photon and especially neutron doses out-of-field, where neutrons can become the predominant component of secondary radiation. PMID:26311702
16. A Monte Carlo method for calculating the energy response of plastic scintillators to polarized photons below 100 keV
Mizuno, T.; Kanai, Y.; Kataoka, J.; Kiss, M.; Kurita, K.; Pearce, M.; Tajima, H.; Takahashi, H.; Tanaka, T.; Ueno, M.; Umeki, Y.; Yoshida, H.; Arimoto, M.; Axelsson, M.; Marini Bettolo, C.; Bogaert, G.; Chen, P.; Craig, W.; Fukazawa, Y.; Gunji, S.; Kamae, T.; Katsuta, J.; Kawai, N.; Kishimoto, S.; Klamra, W.; Larsson, S.; Madejski, G.; Ng, J. S. T.; Ryde, F.; Rydström, S.; Takahashi, T.; Thurston, T. S.; Varner, G.
2009-03-01
The energy response of plastic scintillators (Eljen Technology EJ-204) to polarized soft gamma-ray photons below 100 keV has been studied, primarily for the balloon-borne polarimeter, PoGOLite. The response calculation includes quenching effects due to low-energy recoil electrons and the position dependence of the light collection efficiency in a 20 cm long scintillator rod. The broadening of the pulse-height spectrum, presumably caused by light transportation processes inside the scintillator, as well as the generation and multiplication of photoelectrons in the photomultiplier tube, were studied experimentally and have also been taken into account. A Monte Carlo simulation based on the Geant4 toolkit was used to model photon interactions in the scintillators. When using the polarized Compton/Rayleigh scattering processes previously corrected by the authors, scintillator spectra and angular distributions of scattered polarized photons could clearly be reproduced, in agreement with the results obtained at a synchrotron beam test conducted at the KEK Photon Factory. Our simulation successfully reproduces the modulation factor, defined as the ratio of the amplitude to the mean of the distribution of the azimuthal scattering angles, within 5% (relative). Although primarily developed for the PoGOLite mission, the method presented here is also relevant for other missions aiming to measure polarization from astronomical objects using plastic scintillator scatterers.
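Extracting the modulation factor from binned azimuthal angles is a small fitting exercise. The Python sketch below (requires NumPy and SciPy) fits C(phi) = A(1 + M cos 2(phi - phi0)) and recovers M, the ratio of the amplitude to the mean; the toy rejection-sampled azimuths stand in for simulated or measured scattering angles.

```python
# Toy extraction of the modulation factor from an azimuthal distribution.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)
true_M, n = 0.3, 200000

# Rejection-sample azimuths from the density 1 + true_M*cos(2*phi).
cand = rng.uniform(0, 2 * np.pi, 4 * n)
keep = rng.random(4 * n) * (1 + true_M) < 1 + true_M * np.cos(2 * cand)
phi = cand[keep][:n]

counts, edges = np.histogram(phi, bins=36, range=(0, 2 * np.pi))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(phi, A, M, phi0):
    return A * (1 + M * np.cos(2 * (phi - phi0)))

(A, M, phi0), _ = curve_fit(model, centers, counts,
                            p0=(counts.mean(), 0.1, 0.0))
print(f"fitted modulation factor M = {M:.3f} (true value {true_M})")
```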
17. Low-energy photons in high-energy photon fields--Monte Carlo generated spectra and a new descriptive parameter.
PubMed
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-09-01
The varying low-energy contribution to the photon spectra at points within and around radiotherapy photon fields is associated with variations in the responses of non-water-equivalent dosimeters and in the water-to-material dose conversion factors for tissues such as the red bone marrow. In addition, the presence of low-energy photons in the photon spectrum enhances the RBE in general, and in particular for the induction of second malignancies. The present study discusses the general rules valid for the low-energy spectral component of radiotherapeutic photon beams at points within and in the periphery of the treatment field, taking as an example the Siemens Primus linear accelerator at 6 MV and 15 MV. The photon spectra at these points and their typical variations due to the target system, attenuation, single and multiple Compton scattering, are described by the Monte Carlo method, using the code BEAMnrc/EGSnrc. A survey of the role of low-energy photons in the spectra within and around radiotherapy fields is presented. In addition to the spectra, some data compression has proven useful to support the overview of the behaviour of the low-energy component. A characteristic indicator of the presence of low-energy photons is the dose fraction attributable to photons with energies not exceeding 200 keV, termed P(D)(200 keV). Its values are calculated for different depths and lateral positions within a water phantom. For a pencil beam of 6 or 15 MV primary photons in water, the radial distribution of P(D)(200 keV) is bell-shaped, with a wide-ranging exponential tail with a half-value of 6 to 7 cm. The P(D)(200 keV) value obtained on the central axis of a photon field shows an approximately proportional increase with field size. Out-of-field P(D)(200 keV) values are up to an order of magnitude higher than on the central axis for the same irradiation depth. The 2D pattern of P(D)(200 keV) for a radiotherapy field visualizes the regions, e.g. at the field margin, where changes of detector responses and dose conversion factors, as well as increases of the RBE, have to be anticipated. The parameter P(D)(200 keV) can also be used as guidance supporting the selection of a calibration geometry suitable for radiation dosimeters to be used in small radiation fields. PMID:21530198
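The indicator itself is just a spectral dose fraction, which the following hedged Python sketch makes explicit; the spectrum and mass energy-absorption coefficients below are made-up placeholders, whereas in the study both come from Monte Carlo tallies and tabulated data.

```python
# Sketch of P_D(200 keV): the fraction of dose contributed by photons with
# energies not exceeding 200 keV, using kerma-style weighting
# phi(E) * E * (mu_en/rho)(E) of a fluence spectrum.
import numpy as np

E = np.logspace(-2, np.log10(6.0), 300)      # photon energy [MeV]
phi = E * np.exp(-E / 1.5)                   # made-up 6 MV-like fluence
mu_en = 0.03 + 0.15 * np.exp(-E / 0.05)      # made-up mu_en/rho [cm^2/g]

dose_per_bin = phi * E * mu_en               # relative dose per energy bin
pd200 = dose_per_bin[E <= 0.2].sum() / dose_per_bin.sum()
print(f"P_D(200 keV) = {pd200:.1%}")
```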
18. Current status of the PSG Monte Carlo neutron transport code
SciTech Connect
Leppänen, J.
2006-07-01
PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. The paper also presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)
19. Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo
SciTech Connect
Booth, T.E.
1998-06-22
It is well known that a Monte Carlo estimate can be obtained with zero variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek to obtain an ever more exact importance function. This paper describes a method that has obtained ever more exact importance functions which empirically produce an error that drops exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is imposed by using the known form of the solution, usually referred to as the Case eigenfunction solution.
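The zero-variance property invoked above is easy to demonstrate in the simplest setting: if samples are drawn from a density proportional to the integrand itself, every sample carries the same weight. A minimal importance-sampling sketch (our own toy integral, unrelated to the transport-constrained scheme of the paper):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # plain Monte Carlo estimate of I = integral of exp(-x) over (0, 1)
    x = rng.uniform(0.0, 1.0, n)
    plain = np.exp(-x)

    # sample from the exact importance density p(x) = exp(-x)/(1 - exp(-1));
    # the weights f/p are then constant, so the estimator has zero variance
    norm = 1.0 - np.exp(-1.0)
    xs = -np.log(1.0 - norm * rng.uniform(0.0, 1.0, n))  # inverse-CDF sampling
    weights = np.exp(-xs) / (np.exp(-xs) / norm)

    print(plain.mean(), plain.std(ddof=1))      # noisy estimate of I
    print(weights.mean(), weights.std(ddof=1))  # exactly 1 - exp(-1), zero spread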
20. A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation
SciTech Connect
Wu, Y.; Modest, M.F.; Haworth, D.C.
2007-05-01
A high-order photon Monte Carlo method is developed to solve the radiative transfer equation. The statistical and discretization errors of the computed radiative heat flux and radiation source term are isolated and quantified. Up to sixth-order spatial accuracy is demonstrated for the radiative heat flux, and up to fourth-order accuracy for the radiation source term. This demonstrates the compatibility of the method with high-fidelity direct numerical simulation (DNS) for chemically reacting flows. The method is applied to address radiative heat transfer in a one-dimensional laminar premixed flame and a statistically one-dimensional turbulent premixed flame. Modifications of the flame structure with radiation are noted in both cases, and the effects of turbulence/radiation interactions on the local reaction zone structure are revealed for the turbulent flame. Computational issues in using a photon Monte Carlo method for DNS of turbulent reacting flows are discussed.
1. A Monte Carlo study on neutron and electron contamination of an unflattened 18-MV photon beam.
PubMed
Mesbahi, Asghar
2009-01-01
Recent studies on flattening filter (FF) free beams have shown an increased dose rate and less out-of-field dose for unflattened photon beams. On the other hand, changes in contamination electron and neutron spectra produced through photon (E>10 MV) interactions with linac components have not been completely studied for FF-free beams. The objective of this study was to investigate the effect of removing the FF on contamination electron and neutron spectra for an 18-MV photon beam using the Monte Carlo (MC) method. The 18-MV photon beam of an Elekta SL-25 linac was simulated using the MCNPX MC code. The photon, electron and neutron spectra at a distance of 100 cm from the target and on the central axis of the beam were scored for 10 × 10 and 30 × 30 cm² fields. Our results showed an increase in contamination electron fluence (normalized to photon fluence) of up to 1.6 times for the FF-free beam, which causes a higher skin dose for patients. A neutron fluence reduction of 54% was observed for unflattened beams. Our study confirmed previous measurement results, which showed a neutron dose reduction for unflattened beams. This feature can lead to a lower neutron dose for patients treated with unflattened high-energy photon beams. PMID:18760613
2. Optimization of Monte Carlo transport simulations in stochastic media
SciTech Connect
Liang, C.; Ji, W.
2012-07-01
This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
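The Random Sequential Addition step described above can be sketched in a few lines: candidate centres are drawn uniformly and rejected if they overlap an already placed sphere, with a uniform cell grid limiting the overlap test to neighbouring cells (a simple stand-in for the paper's fast nearest-neighbor search; all names and parameters are ours):

    import numpy as np

    def rsa_pack(n_spheres, radius, box=1.0, max_tries=200_000, seed=0):
        # Random Sequential Addition of equal spheres in a cubic box
        rng = np.random.default_rng(seed)
        n_cells = max(1, int(box / (2.0 * radius)))   # cells >= one diameter wide
        width = box / n_cells
        grid = {}                                     # cell index -> centres
        centers = []
        for _ in range(max_tries):
            if len(centers) == n_spheres:
                break
            c = rng.uniform(radius, box - radius, 3)
            i, j, k = (c // width).astype(int)
            clash = any(
                np.sum((c - o) ** 2) < (2.0 * radius) ** 2
                for di in (-1, 0, 1) for dj in (-1, 0, 1) for dk in (-1, 0, 1)
                for o in grid.get((i + di, j + dj, k + dk), ())
            )
            if not clash:
                centers.append(c)
                grid.setdefault((i, j, k), []).append(c)
        return np.array(centers)

    print(len(rsa_pack(500, 0.02)))   # 500 non-overlapping spheres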
3. A Monte Carlo simulation of ion transport at finite temperatures
Ristivojevic, Zoran; Petrović, Zoran Lj
2012-06-01
We have developed a Monte Carlo simulation for ion transport in hot background gases, which is an alternative way of solving the corresponding Boltzmann equation that determines the distribution function of ions. We consider the limit of low ion densities, when the distribution function of the background gas remains unchanged by collisions with ions. Special attention has been paid to properly treating the thermal motion of the host gas particles and their influence on ions, which is very important at low electric fields, when the mean ion energy is comparable to the thermal energy of the host gas. We found the conditional probability distribution of gas velocities for gas particles that collide with an ion of a given velocity. Also, we have derived exact analytical formulae for piecewise calculation of the collision frequency integrals. We address the cases when the background gas is monocomponent and when it is a mixture of different gases. The techniques described here are required for Monte Carlo simulations of ion transport and for hybrid models of non-equilibrium plasmas. The range of energies where it is necessary to apply the technique has been defined. The results we obtained are in excellent agreement with existing ones obtained by complementary methods. Having verified our algorithm, we were able to produce calculations for Ar+ ions in Ar and propose them as a new benchmark for thermal effects. The developed method is widely applicable for solving the Boltzmann equation that appears in many different contexts in physics.
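The key sampling step discussed above, choosing the velocity of the gas particle that actually collides with an ion, can be sketched by rejection sampling: Maxwellian candidates are accepted with probability proportional to the ion-neutral relative speed (a standard approach; the paper's exact analytical formulae are not reproduced here, and the argon-like numbers are placeholders):

    import numpy as np

    def sample_collision_partner(v_ion, temperature, gas_mass, rng):
        # accept a Maxwellian candidate with probability ~ |v_ion - v_gas|
        k_b = 1.380649e-23
        s = np.sqrt(k_b * temperature / gas_mass)               # thermal speed per axis
        g_max = np.linalg.norm(v_ion) + 6.0 * np.sqrt(3.0) * s  # safe upper bound
        while True:
            v_gas = rng.normal(0.0, s, 3)
            if rng.uniform() < np.linalg.norm(v_ion - v_gas) / g_max:
                return v_gas

    rng = np.random.default_rng(3)
    v_ion = np.array([500.0, 0.0, 0.0])                         # m/s, placeholder
    print(sample_collision_partner(v_ion, 300.0, 6.63e-26, rng))  # argon-like mass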
4. Characterization of a novel micro-irradiator using Monte Carlo radiation transport simulations
Rodriguez, Manuel; Jeraj, Robert
2008-06-01
Small animals are highly valuable resources for radiobiology research. While rodents have been widely used for decades, zebrafish embryos have recently become a very popular research model. However, unlike rodents, zebrafish embryos lack appropriate irradiation tools and methodologies. Therefore, the main purpose of this work is to use Monte Carlo radiation transport simulations to characterize dosimetric parameters, determine dosimetric sensitivity and help with the design of a new micro-irradiator capable of delivering irradiation fields as small as 1.0 mm in diameter. The system is based on a miniature x-ray source enclosed in a brass collimator of 3 cm diameter and 3 cm length. A pinhole of 1.0 mm diameter along the central axis of the collimator is used to produce a narrow photon beam. The MCNP5 Monte Carlo code is used to study the beam energy spectrum, percentage depth dose curves, penumbra and effective field size, dose rate and radiation levels at 50 cm from the source. The results obtained from Monte Carlo simulations show that the beam produced by the miniature x-ray source and the collimator system is adequate to totally or partially irradiate zebrafish embryos, cell cultures and other small specimens used in radiobiology research.
5. A multiple source model for 6 MV photon beam dose calculations using Monte Carlo.
PubMed
Fix, M K; Stampanoni, M; Manser, P; Born, E J; Mini, R; Rüegsegger, P
2001-05-01
A multiple source model (MSM) for the 6 MV beam of a Varian Clinac 2300 C/D was developed by simulating radiation transport through the accelerator head for a set of square fields using the GEANT Monte Carlo (MC) code. The corresponding phase space (PS) data enabled the characterization of 12 sources representing the main components of the beam defining system. By parametrizing the source characteristics and by evaluating the dependence of the parameters on field size, it was possible to extend the validity of the model to arbitrary rectangular fields which include the central 3 x 3 cm2 field without additional precalculated PS data. Finally, a sampling procedure was developed in order to reproduce the PS data. To validate the MSM, the fluence, energy fluence and mean energy distributions determined from the original and the reproduced PS data were compared and showed very good agreement. In addition, the MC calculated primary energy spectrum was verified by an energy spectrum derived from transmission measurements. Comparisons of MC calculated depth dose curves and profiles, using original and PS data reproduced by the MSM, agree within 1% and 1 mm. Deviations from measured dose distributions are within 1.5% and 1 mm. However, the real beam leads to some larger deviations outside the geometrical beam area for large fields. Calculated output factors in 10 cm water depth agree within 1.5% with experimentally determined data. In conclusion, the MSM produces accurate PS data for MC photon dose calculations for the rectangular fields specified. PMID:11384062
6. A deterministic computational model for the two dimensional electron and photon transport
Badavi, Francis F.; Nealy, John E.
2014-12-01
A deterministic (non-statistical) two-dimensional (2D) computational model describing the transport of electrons and photons typical of the space radiation environment in various shield media is described. The 2D formalism is cast into a code which is an extension of a previously developed one-dimensional (1D) deterministic electron and photon transport code. The goal of both the 1D and 2D codes is to satisfy engineering design applications (i.e. rapid analysis) while maintaining an accurate physics-based representation of electron and photon transport in the space environment. Both the 1D and 2D transport codes have utilized established theoretical representations to describe the relevant collisional and radiative interactions and transport processes. In the 2D version, the shield material specifications are made more general, requiring only the pertinent cross sections. In the 2D model, the computational field is specified in terms of a distance of traverse z along an axial direction as well as a variable distribution of deflection (i.e. polar) angles θ, where −π/2 ≤ θ ≤ π/2. In the 2D transport formalism, a combined mean-free-path and average-trajectory approach is used. For candidate shielding materials, using the trapped-electron radiation environments at low Earth orbit (LEO), geosynchronous orbit (GEO) and the Jupiter moon Europa, verification of the 2D formalism against the 1D code and an existing Monte Carlo code is presented.
7. Dissipationless electron transport in photon-dressed nanostructures.
PubMed
Kibis, O V
2011-09-01
It is shown that the electron coupling to photons in field-dressed nanostructures can result in a ground electron-photon state with a nonzero electric current. Since the current is associated with the ground state, it flows without Joule heating of the nanostructure and is nondissipative. Such dissipationless electron transport can be realized in strongly coupled electron-photon systems with broken time-reversal symmetry, in particular in quantum rings and chiral nanostructures dressed by circularly polarized photons. PMID:21981519
8. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
9. Electron transport through a quantum dot assisted by cavity photons
Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar
2013-11-01
We investigate transient transport of electrons through a single quantum dot controlled by a plunger gate. The dot is embedded in a finite wire with length Lx assumed to lie along the x-direction with a parabolic confinement in the y-direction. The quantum wire, originally with hard-wall confinement at its ends, ±Lx/2, is weakly coupled at t = 0 to left and right leads acting as external electron reservoirs. The central system, the dot and the finite wire, is strongly coupled to a single cavity photon mode. A non-Markovian density-matrix formalism is employed to take into account the full electron-photon interaction in the transient regime. In the absence of a photon cavity, a resonant current peak can be found by tuning the plunger-gate voltage to lift a many-body state of the system into the source-drain bias window. In the presence of an x-polarized photon field, additional side peaks can be found due to photon-assisted transport. By appropriately tuning the plunger-gate voltage, the electrons in the left lead are allowed to undergo coherent inelastic scattering to a two-photon state above the bias window if initially one photon was present in the cavity. However, this photon-assisted feature is suppressed in the case of a y-polarized photon field due to the anisotropy of our system caused by its geometry.
10. Electron transport through a quantum dot assisted by cavity photons.
PubMed
Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar
2013-11-20
We investigate transient transport of electrons through a single quantum dot controlled by a plunger gate. The dot is embedded in a finite wire with length Lx assumed to lie along the x-direction with a parabolic confinement in the y-direction. The quantum wire, originally with hard-wall confinement at its ends, ±Lx/2, is weakly coupled at t=0 to left and right leads acting as external electron reservoirs. The central system, the dot and the finite wire, is strongly coupled to a single cavity photon mode. A non-Markovian density-matrix formalism is employed to take into account the full electron-photon interaction in the transient regime. In the absence of a photon cavity, a resonant current peak can be found by tuning the plunger-gate voltage to lift a many-body state of the system into the source-drain bias window. In the presence of an x-polarized photon field, additional side peaks can be found due to photon-assisted transport. By appropriately tuning the plunger-gate voltage, the electrons in the left lead are allowed to undergo coherent inelastic scattering to a two-photon state above the bias window if initially one photon was present in the cavity. However, this photon-assisted feature is suppressed in the case of a y-polarized photon field due to the anisotropy of our system caused by its geometry. PMID:24132041
11. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
SciTech Connect
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
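One of the acceleration techniques named above, replacing linear searches with binary versions, is easy to illustrate for the common task of locating a particle's energy bin in an ascending grid (a generic sketch, not ITS source code; the grid values are placeholders):

    import bisect

    energy_grid = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0]   # MeV, ascending (placeholder)

    def find_bin_linear(e):
        # original-style O(n) scan
        i = 0
        while i < len(energy_grid) - 1 and energy_grid[i + 1] < e:
            i += 1
        return i

    def find_bin_binary(e):
        # O(log n) replacement with identical results
        return max(0, bisect.bisect_right(energy_grid, e) - 1)

    assert all(find_bin_linear(e) == find_bin_binary(e)
               for e in (0.05, 0.3, 2.0, 9.9))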
12. Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation
PubMed Central
Ding, Chizhu; Shi, Shuning; Chen, Jianjun; Wei, Wei; Tan, Zuojun
2015-01-01
The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be separately considered with different parameters to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity were discussed. Accurate MC simulations may result in better insight into light propagation in stone fruit and aid in achieving the optimal fruit quality inspection without extensive experimental measurements. PMID:26469695
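The core of any such light-propagation MC is a photon random walk: sample a free path from the attenuation coefficient, then either absorb or scatter. A deliberately minimal analog sketch for a homogeneous slab (isotropic scattering and made-up optical coefficients; real fruit-tissue simulations use measured μa and μs and anisotropic phase functions):

    import numpy as np

    def slab_transmission(n=20_000, mu_a=0.5, mu_s=10.0, thickness=1.0, seed=0):
        # analog photon random walk through a homogeneous slab
        rng = np.random.default_rng(seed)
        mu_t = mu_a + mu_s
        transmitted = 0
        for _ in range(n):
            z, uz = 0.0, 1.0                   # launch at surface, heading inward
            while True:
                z += uz * (-np.log(rng.uniform()) / mu_t)   # sampled free path
                if z >= thickness:
                    transmitted += 1
                    break
                if z < 0.0:                    # diffusely reflected back out
                    break
                if rng.uniform() < mu_a / mu_t:  # absorbed at the collision
                    break
                uz = rng.uniform(-1.0, 1.0)    # isotropic scattering (direction cosine)
        return transmitted / n

    print(slab_transmission())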
13. Acceleration of a Monte Carlo radiation transport code
SciTech Connect
Hochstedler, R.D.; Smith, L.M.
1996-03-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.
Bochud, François O.; Laedermann, Jean-Pascal; Sima, Octavian
2015-06-01
In radionuclide metrology, Monte Carlo (MC) simulation is widely used to compute parameters associated with primary measurements or calibration factors. Although MC methods are used to estimate uncertainties, the uncertainty associated with radiation transport in MC calculations is usually difficult to estimate. Counting statistics is the most obvious component of MC uncertainty and has to be checked carefully, particularly when variance reduction is used. However, in most cases fluctuations associated with counting statistics can be reduced using sufficient computing power. Cross-section data have intrinsic uncertainties that induce correlations when apparently independent codes are compared. Their effect on the uncertainty of the estimated parameter is difficult to determine and varies widely from case to case. Finally, the most significant uncertainty component for radionuclide applications is usually that associated with the detector geometry. Recent 2D and 3D x-ray imaging tools may be utilized, but comparison with experimental data as well as adjustments of parameters are usually inevitable.
15. A self-consistent electric field for Monte Carlo transport
SciTech Connect
Garabedian, P.R.
1987-01-01
The BETA transport code implements a Monte Carlo method to calculate ion and electron confinement times τ_i and τ_e for stellarator equilibria defined by the BETA equilibrium code. The magnetic field strength is represented by a double Fourier series in the poloidal and toroidal angles ψ and φ with coefficients depending on the toroidal flux s. A linearized drift kinetic equation determining the distribution functions of the ions and electrons is solved by a method of split time, using an Adams ordinary differential equation algorithm to trace orbits and a random walk to model the Fokker-Planck collision operator. Confinement times are estimated from the exponential decay of expected values of the solution. Expected values of trigonometric functions of ψ and φ serve to specify the Fourier coefficients of an average over velocity space of the distribution functions.
16. Electron transport in magnetrons by a posteriori Monte Carlo simulations
Costin, C.; Minea, T. M.; Popa, G.
2014-02-01
Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization operated in argon at 0.67 Pa and for very short pulses (a few μs) using the a posteriori Monte Carlo simulation technique. For this type of discharge electron transport is characterized by strong temporal and spatial dependence. Both the drift velocity and the diffusion coefficient depend on the releasing position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases from 2 to 4 times when the cathode voltage is doubled, in the first 1.5 μs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.
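The transport coefficients discussed above can be extracted from an ensemble of simulated trajectories via the time derivatives of the mean and variance of position. A generic sketch with a synthetic advection-diffusion ensemble (the numbers are arbitrary, not HiPIMS data):

    import numpy as np

    def transport_coefficients(x, t):
        # drift velocity and diffusion coefficient from trajectories x[particle, time]
        w = np.polyfit(t, x.mean(axis=0), 1)[0]        # d<x>/dt
        D = 0.5 * np.polyfit(t, x.var(axis=0), 1)[0]   # (1/2) d var(x)/dt
        return w, D

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 200)
    dt = t[1] - t[0]
    # random walk with drift velocity 3.0 and diffusion coefficient 0.5
    steps = 3.0 * dt + np.sqrt(2.0 * 0.5 * dt) * rng.standard_normal((5000, t.size))
    print(transport_coefficients(np.cumsum(steps, axis=1), t))   # ~ (3.0, 0.5)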
17. Monte Carlo study of photon fields from a flattening filter-free clinical accelerator
SciTech Connect
Vassiliev, Oleg N.; Titt, Uwe; Kry, Stephen F.; Poenisch, Falk; Gillin, Michael T.; Mohan, Radhe
2006-04-15
In conventional clinical linear accelerators, the flattening filter scatters and absorbs a large fraction of primary photons. Increasing the beam-on time, which also increases the out-of-field exposure to patients, compensates for the reduction in photon fluence. In recent years, intensity modulated radiation therapy has been introduced, yielding better dose distributions than conventional three-dimensional conformal therapy. The drawback of this method is the further increase in beam-on time. An accelerator with the flattening filter removed, which would increase photon fluence greatly, could deliver considerably higher dose rates. The objective of the present study is to investigate the dosimetric properties of 6 and 18 MV photon beams from an accelerator without a flattening filter. The dosimetric data were generated using the Monte Carlo programs BEAMnrc and DOSXYZnrc. The accelerator model was based on the Varian Clinac 2100 design. We compared depth doses, dose rates, lateral profiles, doses outside collimation, and total and collimator scatter factors for an accelerator with and without a flattening filter. The study showed that removing the filter increased the dose rate on the central axis by a factor of 2.31 (6 MV) and 5.45 (18 MV) at a given target current. Because the flattening filter is a major source of head-scatter photons, its removal from the beam line could reduce the out-of-field dose.
18. Controlling single-photon transport with three-level quantum dots in photonic crystals
Yan, Cong-Hua; Jia, Wen-Zhi; Wei, Lian-Fu
2014-03-01
We investigate how to control single-photon transport along a photonic crystal waveguide with the recently experimentally demonstrated artificial atoms [i.e., Λ-type quantum dots (QDs)] [S. G. Carter et al., Nat. Photon. 7, 329 (2013), 10.1038/nphoton.2013.41] in an all-optical way. Adopting full quantum theory in real space, we analytically calculate the transport coefficients of single photons scattered by a Λ-type QD embedded in single- and two-mode photonic crystal cavities (PCCs), respectively. Our numerical results clearly show that the photonic transmission properties can be exactly manipulated by adjusting the coupling strengths of the waveguide-cavity and QD-cavity interactions. Specifically, for the PCC with two degenerate orthogonal polarization modes coupled to a Λ-type QD with two degenerate ground states, we find that the photonic transmission spectra show three Rabi-splitting dips, and the present system could serve as a single-photon polarization beam splitter. The feasibility of our proposal with current photonic crystal techniques is also discussed.
19. Identifying key surface parameters for optical photon transport in GEANT4/GATE simulations.
PubMed
Nilsson, Jenny; Cuplov, Vesna; Isaksson, Mats
2015-09-01
For a scintillator used for spectrometry, the generation, transport and detection of optical photons have a great impact on the energy spectrum resolution. A complete Monte Carlo model of a scintillator includes coupled ionizing-particle and optical-photon transport, which can be simulated with the GEANT4 code. The GEANT4 surface parameters control the physics processes an optical photon undergoes when reaching the surface of a volume. In this work the impact of each surface parameter on the optical transport was studied by looking at the optical spectrum: the number of detected optical photons per ionizing source particle from a large plastic scintillator, i.e. the output signal. All simulations were performed using GATE v6.2 (GEANT4 Application for Tomographic Emission). The surface parameter finish (polished, ground, front-painted or back-painted) showed the greatest impact on the optical spectrum, whereas the surface parameter σ(α), which controls the surface roughness, had a relatively small impact. It was also shown how the surface parameters reflectivity and reflectivity type (specular spike, specular lobe, Lambertian and backscatter) changed the optical spectrum depending on the probability for reflection and the combination of reflectivity types. A change in the optical spectrum will ultimately have an impact on a simulated energy spectrum. By studying the optical spectra presented in this work, a GEANT4 user can predict the shift in an optical spectrum caused by the alteration of a specific surface parameter. PMID:26046519
20. Single Photon Transport through an Atomic Chain Coupled to a One-dimensional Photonic Waveguide
Liao, Zeyang; Zeng, Xiaodong; Zubairy, M. Suhail
2015-03-01
We study the dynamics of a single-photon pulse travelling through a linear atomic chain coupled to a one-dimensional (1D) single-mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real-time evolution of the photon transport and the atomic excitations. Our result is consistent with previous calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity can be tuned by changing the ratio of the coupling strength to the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single-photon pulse with finite bandwidth can even approach 100%. The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Much interesting physics can occur in this system, such as photonic bandgap effects, quantum entanglement generation, Fano-type interference, superradiant effects and nonlinear frequency conversion. In engineering terms, this system may be used as a single-photon frequency filter, for single-photon modulation and for photon storage.
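For a frequency-domain cross-check of such waveguide-QED results, the single-photon reflectivity of an atomic chain can be assembled from transfer matrices. A sketch under one common convention for a lossless two-level emitter (t = Δ/(Δ + iΓ), r = t − 1); conventions and normalizations vary between papers, so this is illustrative only and not the paper's time-dependent theory:

    import numpy as np

    def scatterer_m(t, r):
        # transfer matrix of a symmetric, reciprocal scatterer
        return np.array([[t - r * r / t, r / t],
                         [-r / t, 1.0 / t]])

    def chain_reflectivity(delta, gamma, n_atoms, kd):
        # n_atoms identical emitters separated by a propagation phase kd
        t = delta / (delta + 1j * gamma)
        m_atom = scatterer_m(t, t - 1.0)
        m_free = np.diag([np.exp(1j * kd), np.exp(-1j * kd)])
        m = m_atom.copy()
        for _ in range(n_atoms - 1):
            m = m_atom @ m_free @ m
        return abs(-m[1, 0] / m[1, 1]) ** 2

    print(chain_reflectivity(0.5, 1.0, 1, 0.0))          # single atom: 0.8
    print(chain_reflectivity(0.5, 1.0, 10, np.pi / 2))   # collectively modified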
1. Parallelization of a Monte Carlo particle transport simulation code
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators were also integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a low response time.
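The essential ingredient of such a parallelization, statistically independent random-number streams per worker, can be sketched with NumPy's SeedSequence spawning and a process pool (a generic illustration, not MC4 code):

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def count_hits(seed, n):
        # each worker gets its own independent generator
        rng = np.random.default_rng(seed)
        xy = rng.uniform(-1.0, 1.0, (n, 2))
        return int(np.count_nonzero(np.sum(xy ** 2, axis=1) <= 1.0))

    if __name__ == "__main__":
        n_workers, n = 4, 1_000_000
        seeds = np.random.SeedSequence(42).spawn(n_workers)   # independent streams
        with ProcessPoolExecutor(n_workers) as pool:
            hits = sum(pool.map(count_hits, seeds, [n] * n_workers))
        print(4.0 * hits / (n_workers * n))                   # ~ pi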
2. Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations
Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering, while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for the description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer, which accounts for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. A gray medium (frequency-independent phonons) is often assumed in the numerical solution of the BTE using conventional methods such as finite volume and discrete ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique, which is more convenient and efficient when a non-gray medium (frequency-dependent phonons) is considered. In the MC simulation, phonons are displaced inside the computational domain under the various boundary conditions and scattering effects. In this work, under the relaxation time approximation, thermal transport in the nanocomposites is computed using both gray-medium and non-gray-medium approaches. The non-gray-medium simulations take into consideration the dispersion and polarization effects of phonon transport. The effects of volume fraction, size, shape and distribution of the nanowire fillers on heat flow and hence thermal conductivity are studied. In addition, the computational performances of the gray- and non-gray-medium approaches are compared.
3. Simple beam models for Monte Carlo photon beam dose calculations in radiotherapy.
PubMed
Fix, M K; Keller, H; Regsegger, P; Born, E J
2000-12-01
Monte Carlo (GEANT) generated 6 and 15 MV phase space (PS) data were used to define several simple photon beam models. For creating the PS data, the energy of the starting electrons hitting the target was tuned to obtain correct depth dose data compared to measurements. The modeling process used the full PS information within the geometrical boundaries of the beam, including all scattered radiation of the accelerator head. Scattered radiation outside the boundaries was neglected. Photons and electrons were assumed to be radiated from point sources. Four different models were investigated, which involved different ways to determine the energies and locations of beam particles in the output plane. Depth dose curves, profiles, and relative output factors were calculated with these models for six field sizes from 5 × 5 to 40 × 40 cm² and compared to measurements. Model 1 uses a photon energy spectrum independent of location in the PS plane and a constant photon fluence in this plane. Model 2 takes into account the spatial particle fluence distribution in the PS plane. A constant fluence is used again in model 3, but the photon energy spectrum depends upon the off-axis position. Model 4, finally, uses both the spatial particle fluence distribution and off-axis-dependent photon energy spectra in the PS plane. Depth dose curves and profiles for field sizes up to 10 × 10 cm² were not model sensitive. Good agreement between measured and calculated depth dose curves and profiles for all field sizes was reached with model 4. However, increasing deviations were found with increasing field size for models 1-3. Large deviations resulted for the profiles of models 2 and 3. This is due to the fact that these models overestimate or underestimate the energy fluence at large off-axis distances. Relative output factors consistent with measurements resulted only for model 4. PMID:11190957
4. Optimizing light transport in scintillation crystals for time-of-flight PET: an experimental and optical Monte Carlo simulation study
PubMed Central
Berg, Eric; Roncali, Emilie; Cherry, Simon R.
2015-01-01
Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution. PMID:26114040
5. Optimizing light transport in scintillation crystals for time-of-flight PET: an experimental and optical Monte Carlo simulation study.
PubMed
Berg, Eric; Roncali, Emilie; Cherry, Simon R
2015-06-01
Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution. PMID:26114040
6. Robust light transport in non-Hermitian photonic lattices.
PubMed
Longhi, Stefano; Gatti, Davide; Della Valle, Giuseppe
2015-01-01
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932
7. Robust light transport in non-Hermitian photonic lattices
PubMed Central
Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della
2015-01-01
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932
8. Robust light transport in non-Hermitian photonic lattices
Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della
2015-08-01
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure.
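The imaginary gauge field invoked in these records is captured by the Hatano-Nelson tight-binding model: rightward hopping amplified by exp(+h), leftward hopping damped by exp(−h). A minimal sketch showing the complex (delocalized) spectrum under periodic boundary conditions (parameter values are arbitrary, and this is a generic model, not the papers' CROW design):

    import numpy as np

    def hatano_nelson(n_sites, j=1.0, h=0.2, periodic=True):
        # non-Hermitian lattice with asymmetric hoppings j*exp(+h), j*exp(-h)
        ham = np.zeros((n_sites, n_sites), dtype=complex)
        for i in range(n_sites - 1):
            ham[i + 1, i] = j * np.exp(+h)   # rightward, amplified
            ham[i, i + 1] = j * np.exp(-h)   # leftward, damped
        if periodic:
            ham[0, n_sites - 1] = j * np.exp(+h)
            ham[n_sites - 1, 0] = j * np.exp(-h)
        return ham

    ev = np.linalg.eigvals(hatano_nelson(100))
    print(np.abs(ev.imag).max())   # > 0: complex spectrum, asymmetric transport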
9. A Fano cavity test for Monte Carlo proton transport algorithms
SciTech Connect
Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo
2014-01-15
Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified by performing self-consistency tests, i.e., the so-called “Fano cavity test.” The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross sections are uniform. Such tests have not been performed yet for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E0 and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E0 and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE0/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ in size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes. For PENH, the difference is attributed to the random-hinge method, which introduces an artificial energy straggling if the step size is not small enough. Conclusions: Using conservative user-defined simulation parameters, both PENH and Geant4 pass the Fano cavity test for proton transport. Our methodology is applicable to any kind of charged particle, provided that the considered MC code is able to track the charged particle considered.
10. Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.
PubMed
Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B
2014-11-01
The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the linear accelerator (linac) Siemens Primus. The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, the BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm), in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses are within 3% or 3 mm. PMID:24947967
11. Monte Carlo impurity transport modeling in the DIII-D transport
SciTech Connect
Evans, T.E.; Finkenthal, D.F.
1998-04-01
A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer, and a mechanism for enhanced carbon accumulation on the high-field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics-based model for physical and chemical sputtering have yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvements in the physics models and atomic data are discussed.
12. Status of the MORSE multigroup Monte Carlo radiation transport code
SciTech Connect
Emmett, M.B.
1993-06-01
There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.
13. Analysis of EBR-II neutron and photon physics by multidimensional transport-theory techniques
SciTech Connect
Jacqmin, R.P.; Finck, P.J.; Palmiotti, G.
1994-03-01
This paper contains a review of the challenges specific to the EBR-II core physics, a description of the methods and techniques which have been developed for addressing these challenges, and the results of some validation studies relative to power-distribution calculations. Numerical tests have shown that the VARIANT nodal code yields eigenvalue and power predictions as accurate as finite difference and discrete ordinates transport codes, at a small fraction of the cost. Comparisons with continuous-energy Monte Carlo results have proven that the errors introduced by the use of the diffusion-theory approximation in the collapsing procedure to obtain broad-group cross sections, kerma factors, and photon-production matrices, have a small impact on the EBR-II neutron/photon power distribution.
14. The difference of scoring dose to water or tissues in Monte Carlo dose calculations for low energy brachytherapy photon sources
SciTech Connect
Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank
2011-03-15
Purpose: The goal of this work is to compare D_m,m (radiation transported in medium; dose scored in medium) and D_w,m (radiation transported in medium; dose scored in water) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low-energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Methods: Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low-energy brachytherapy sources: ¹²⁵I, ¹⁰³Pd, and ¹³¹Cs seeds, as well as an EBS operating at 50 kV. Ratios of D_w,m over D_m,m are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using ¹⁰³Pd) and prostate (using ¹²⁵I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D90 values are compared for D_w,m and D_m,m. Results: (1) Differences (D_w,m/D_m,m − 1) of −3% to 70% are observed for the investigated tissues. For a given tissue, D_w,m/D_m,m is similar for all sources within 4% and does not vary more than 2% with distance due to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors by up to 38%. (2) The ratio of D90(w,m) over D90(m,m) for clinical implants matches D_w,m/D_m,m at 1 cm from the single point sources. Conclusions: Given the small variation with distance, using conversion factors based on the emitted photon spectrum (or its mean energy) of a given source introduces minimal error. The large differences observed between scoring schemes underline the need for guidelines on the choice of media for dose reporting. Providing such guidelines is beyond the scope of this work.
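Under large cavity theory, the conversion factor quantified above is the photon-spectrum-weighted ratio of mass energy-absorption coefficients of water to medium. A minimal sketch (the coefficient and fluence values below are placeholders, not tabulated data):

    import numpy as np

    def dose_conversion_factor(energy, fluence, mu_en_water, mu_en_medium):
        # large cavity theory: D_w,m / D_m,m as an energy-fluence-weighted
        # ratio of (mu_en/rho) values on a common energy grid
        weights = fluence * energy
        return np.sum(weights * mu_en_water) / np.sum(weights * mu_en_medium)

    E = np.array([0.02, 0.03, 0.04])      # MeV, placeholder grid
    phi = np.array([0.2, 0.5, 0.3])       # relative fluence, placeholder
    mu_w = np.array([0.54, 0.15, 0.07])   # placeholder (mu_en/rho), water
    mu_m = np.array([0.60, 0.17, 0.08])   # placeholder (mu_en/rho), medium
    print(dose_conversion_factor(E, phi, mu_w, mu_m))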
15. Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.
PubMed
Demol, Benjamin; Viard, Romain; Reynaert, Nick
2015-01-01
The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of the mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of the mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans show no visual difference between the two schemes. Relative differences of the dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed, and a distribution centered around zero with a standard deviation below 2% (3σ) was established. On the other hand, once the hydrogen content is slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using only the hydrogen content of tissues, a conclusion that might impact MRI-based dose calculation, but can also help in selecting the optimal tissue substitutes when calibrating MVCT devices. PMID:26699320
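The tissue scheme tested above (keep the hydrogen weight fraction, lump everything else into oxygen) amounts to a one-line transformation. A sketch with approximate ICRU-like muscle fractions (values rounded, for illustration only):

    def hydrogen_oxygen_equivalent(composition):
        # keep the H weight fraction, replace all other elements by O
        w_h = composition.get("H", 0.0)
        return {"H": w_h, "O": 1.0 - w_h}

    muscle = {"H": 0.102, "C": 0.143, "N": 0.034, "O": 0.710, "rest": 0.011}
    print(hydrogen_oxygen_equivalent(muscle))   # {'H': 0.102, 'O': 0.898}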
16. Effect of transverse magnetic fields on dose distribution and RBE of photon beams: comparing PENELOPE and EGS4 Monte Carlo codes
Nettelbeck, H.; Takacs, G. J.; Rosenfeld, A. B.
2008-09-01
The application of a strong transverse magnetic field to a volume undergoing irradiation by a photon beam can produce localized regions of dose enhancement and dose reduction. This study uses the PENELOPE Monte Carlo code to investigate the effect of a slice of uniform transverse magnetic field on a photon beam, using different magnetic field strengths and photon beam energies. The maximum and minimum dose yields obtained in the regions of dose enhancement and dose reduction are compared to those obtained with the EGS4 Monte Carlo code in a study by Li et al (2001), who investigated the effect of a slice of uniform transverse magnetic field (1 to 20 Tesla) applied to high-energy photon beams. PENELOPE simulations yielded maximum dose enhancements and dose reductions of as much as 111% and 77%, respectively, with most results within 6% of the EGS4 result. Further PENELOPE simulations were performed with the Sheikh-Bagheri and Rogers (2002) input spectra for 6, 10 and 15 MV photon beams, yielding results within 4% of those obtained with the Mohan et al (1985) spectra. Small discrepancies between a few of the EGS4 and PENELOPE results prompted an investigation into the influence of the PENELOPE elastic scattering parameters C1 and C2 and the low-energy electron and photon transport cut-offs. Repeating the simulations with smaller scoring bins improved the resolution of the regions of dose enhancement and dose reduction, especially near the magnetic field boundaries, where the dose deposition can abruptly increase or decrease. This study also investigates the effect of a magnetic field on the low-energy electron spectrum, which may correspond to a change in the radiobiological effectiveness (RBE). Simulations show that the increase in dose is achieved predominantly through the lower-energy electron population.
17. Simulating photon scattering effects in structurally detailed ventricular models using a Monte Carlo approach
PubMed Central
Bishop, Martin J.; Plank, Gernot
2014-01-01
Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon packets as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from them, at times having a distinct humped morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with virtual-electrode regions of strong de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach along with highly anatomically-detailed models to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity. PMID:25309442
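For readers unfamiliar with the stochastic approach this entry relies on, a minimal weighted photon-packet Monte Carlo with Henyey-Greenstein scattering in a homogeneous, index-matched, semi-infinite medium is sketched below. The optical properties are illustrative, not the cardiac-tissue values used in the paper.

```python
import math, random

MU_A, MU_S, G = 0.5, 100.0, 0.9   # absorption, scattering (1/cm), anisotropy
MU_T = MU_A + MU_S

def hg_cos_theta(g):
    """Sample the Henyey-Greenstein polar scattering angle cosine."""
    if abs(g) < 1e-6:
        return 2.0 * random.random() - 1.0
    f = (1.0 - g * g) / (1.0 - g + 2.0 * g * random.random())
    return (1.0 + g * g - f * f) / (2.0 * g)

def scatter(ux, uy, uz):
    """Rotate the direction vector by a sampled polar/azimuthal angle pair."""
    ct = hg_cos_theta(G)
    st = math.sqrt(max(0.0, 1.0 - ct * ct))
    phi = 2.0 * math.pi * random.random()
    cp, sp = math.cos(phi), math.sin(phi)
    if abs(uz) > 0.99999:                      # nearly vertical: simple case
        return st * cp, st * sp, math.copysign(ct, uz)
    d = math.sqrt(1.0 - uz * uz)
    return (st * (ux * uz * cp - uy * sp) / d + ux * ct,
            st * (uy * uz * cp + ux * sp) / d + uy * ct,
            -st * cp * d + uz * ct)

def run(n_packets=20000):
    reflected = 0.0
    for _ in range(n_packets):
        x = y = z = 0.0
        ux, uy, uz, w = 0.0, 0.0, 1.0, 1.0     # pencil beam entering surface
        while True:
            s = -math.log(1.0 - random.random()) / MU_T   # free path
            x, y, z = x + s * ux, y + s * uy, z + s * uz
            if z < 0.0:                        # escaped through surface
                reflected += w
                break
            w *= MU_S / MU_T                   # deposit absorbed fraction
            if w < 1e-4:                       # Russian roulette on low weight
                if random.random() > 0.1:
                    break
                w *= 10.0
            ux, uy, uz = scatter(ux, uy, uz)
    return reflected / n_packets

print("diffuse reflectance ~", run())
```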
18. Monte Carlo-based revised values of dose rate constants at discrete photon energies
PubMed Central
Selvam, T. Palani; Shrivastava, Vandana; Chourasiya, Ghanashyam; Babu, D. Appala Raju
2014-01-01
Absorbed dose rate to water at 0.2 cm and 1 cm due to a point isotropic photon source as a function of photon energy is calculated using the EDKnrc user-code of the EGSnrc Monte Carlo system. This code system utilized the widely used XCOM photon cross-section dataset for the calculation of absorbed dose to water. Using the above dose rates, dose rate constants are calculated. The air-kerma strength Sk needed for deriving the dose rate constant is based on the mass-energy absorption coefficient compilations of Hubbell and Seltzer published in the year 1995. A comparison of absorbed dose rates in water at the above distances to the published values reflects the differences in the photon cross-section datasets in the low-energy region (the difference is up to 2% in dose rate values at 1 cm in the energy range 30-50 keV and up to 4% at 0.2 cm at 30 keV). A maximum difference of about 8% is observed in the dose rate value at 0.2 cm at 1.75 MeV when compared to the published value. Sk calculations based on the compilation of Hubbell and Seltzer show a difference of up to 2.5% in the low-energy region (20-50 keV) when compared to the published values. The deviations observed in the values of dose rate and Sk affect the values of dose rate constants up to 3%. PMID:24600166
19. A Monte Carlo simulation for predicting photon return from sodium laser guide star
Feng, Lu; Kibblewhite, Edward; Jin, Kai; Xue, Suijian; Shen, Zhixia; Bo, Yong; Zuo, Junwei; Wei, Kai
2015-10-01
The sodium laser guide star is an ideal source for astronomical adaptive optics systems correcting wave-front aberration caused by atmospheric turbulence. However, even a compact, high-quality sodium laser with power above 20 W, costly and difficult to manufacture, does not guarantee a sufficiently bright laser guide star, owing to the physics of sodium atoms in the atmosphere. It would therefore be helpful if a prediction tool could estimate the photon-generating performance of arbitrary laser output formats before an actual laser is designed. Based on rate equations, we developed Monte Carlo simulation software that can predict sodium laser guide star photon return for arbitrary laser formats. In this paper, we describe the model underlying our simulation and its implementation, and present comparisons with field test data.
20. Evaluation of Electron Contamination in Cancer Treatment with Megavoltage Photon Beams: Monte Carlo Study
PubMed Central
Seif, F.; Bayatiani, M. R.
2015-01-01
Background: Megavoltage beams used in radiotherapy are contaminated with secondary electrons. Different parts of the linac head and the air above the patient act as sources of this contamination, which can increase damage to skin and subcutaneous tissue during radiotherapy. Monte Carlo simulation is an accurate method for dose calculation in medical dosimetry and has an important role in the optimization of linac head materials. The aim of this study was to calculate the electron contamination of a Varian linac. Materials and Method: The 6 MV photon beam of a Varian (2100 C/D) linac was simulated with the Monte Carlo code MCNPX, based on the manufacturer's instructions. Validation was done by comparing the calculated depth doses and profiles with dosimetry measurements in a water phantom (error less than 2%). Percentage depth doses (PDDs), profiles and the contamination-electron energy spectrum were calculated for different therapeutic field sizes (5 × 5 to 40 × 40 cm²) for both linacs. Results: The electron contamination dose was observed to rise with increasing field size. The contribution of the secondary contamination electrons to the surface dose ranged from 6% for a 5 × 5 cm² field to 27% for 40 × 40 cm². Conclusion: The effect of electron contamination on patient surface dose cannot be ignored, so knowledge of the electron contamination is important in clinical dosimetry. It must be calculated for each machine and considered in treatment planning systems. PMID:25973409
1. Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV
SciTech Connect
Miller, S.G.
1988-08-01
Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend the codes' ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.
2. Detailed calculation of inner-shell impact ionization to use in photon transport codes
Fernandez, Jorge E.; Scot, Viviana; Verardi, Luca; Salvat, Francesc
2014-02-01
Secondary electrons can modify the intensity of the XRF characteristic lines by means of a mechanism known as inner-shell impact ionization (ISII). The ad-hoc code KERNEL (which calls the PENELOPE package) has been used to characterize the electron correction in terms of angular, spatial and energy distributions. It is demonstrated that the angular distribution of the characteristic photons due to ISII can be safely considered as isotropic, and that the source of photons from electron interactions is well represented as a point source. The energy dependence of the correction is described using an analytical model in the energy range 1-150 keV, for all the emission lines (K, L and M) of the elements with atomic numbers Z=11-92. A new photon kernel comprising the ISII correction is introduced, suitable for adoption in photon transport codes (deterministic or Monte Carlo) with minimal effort. The impact of the correction is discussed for the most intense K (Kα1,Kα2,Kβ1) and L (Lα1,Lα2) lines.
3. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a user's manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
4. FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation
SciTech Connect
Hackel, B M; Nielsen Jr., D E; Procassini, R J
2009-02-25
The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.
5. Detector-selection technique for Monte Carlo transport in azimuthally symmetric geometries
SciTech Connect
Hoffman, T.J.; Tang, J.S.; Parks, C.V.
1982-01-01
Many radiation transport problems contain geometric symmetries which are not exploited in obtaining their Monte Carlo solutions. An important class of problems is that in which the geometry is symmetric about an axis. These problems arise in the analyses of a reactor core or shield, spent fuel shipping casks, tanks containing radioactive solutions, radiation transport in the atmosphere (air-over-ground problems), etc. Although amenable to deterministic solution, such problems can often be solved more efficiently and accurately with the Monte Carlo method. For this class of problems, a technique is described in this paper which significantly reduces the variance of the Monte Carlo-calculated effect of interest at point detectors.
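The baseline that such detector-selection techniques improve upon is the standard next-event (point-detector) estimator. A minimal sketch for isotropic scattering in a homogeneous medium follows, with illustrative parameter values; the azimuthal-symmetry exploitation described in the entry goes beyond this basic estimator.

```python
import math

def point_detector_contribution(w, collision, detector, sigma_t):
    """Next-event estimate of the uncollided flux at a point detector from
    one isotropic scattering event: w * p(Omega) * exp(-tau) / R^2, with
    p = 1/(4*pi) per steradian and tau the optical distance (homogeneous
    medium assumed)."""
    dx = [d - c for c, d in zip(collision, detector)]
    r = math.sqrt(sum(v * v for v in dx))
    tau = sigma_t * r
    return w * math.exp(-tau) / (4.0 * math.pi * r * r)

# One unit-weight collision 2 cm from the detector, sigma_t = 0.3 /cm:
print(point_detector_contribution(1.0, (0.0, 0.0, 0.0), (2.0, 0.0, 0.0), 0.3))
```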
6. Utilization of a Photon Transport Code to Investigate Radiation Therapy Treatment Planning Quantities and Techniques.
Palta, Jatinder Raj
A versatile computer program, MORSE, based on neutron and photon transport theory, has been utilized to investigate radiation therapy treatment planning quantities and techniques. A multi-energy group representation of the transport equation provides a concise approach to applying Monte Carlo numerical techniques to multiple radiation therapy treatment planning problems. A general three-dimensional geometry is used to simulate radiation therapy treatment planning problems in configurations of an actual clinical setting. Central axis total and scattered dose distributions for homogeneous and inhomogeneous water phantoms are calculated, and correction factors for lung and bone inhomogeneities are also evaluated. Results show that Monte Carlo calculations based on multi-energy group transport theory predict depth dose distributions that are in good agreement with available experimental data. Improved correction factors based on the concepts of lung-air-ratio and bone-air-ratio are proposed in lieu of the presently used correction factors that are based on the tissue-air-ratio power law method for inhomogeneity corrections. Central axis depth dose distributions for a bremsstrahlung spectrum from a linear accelerator are also calculated to exhibit the versatility of the computer program in handling multiple radiation therapy problems. A novel approach is undertaken to study the dosimetric properties of brachytherapy sources. Dose rate constants for various radionuclides are calculated from the numerically generated dose rate versus source energy curves. Dose rates can also be generated for any point brachytherapy source with any arbitrary energy spectrum at various radial distances from this family of curves.
7. Determination of peripheral underdosage at the lung-tumor interface using Monte Carlo radiation transport calculations
SciTech Connect
Taylor, Michael; Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick
2012-04-01
Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations is undertaken using two Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1 mm³ regions at proximal, distal, and lateral aspects of a spherical tumor, determined for a 6-MV and a 15-MV photon beam. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of 'generic' tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.
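The DRF defined above is simple arithmetic; a sketch with illustrative dose values (not the paper's data) follows.

```python
def dose_reduction_factor(peripheral_doses, central_dose):
    """DRF as defined above: mean of the doses at the six cardinal
    peripheral points divided by the dose at the tumor center."""
    assert len(peripheral_doses) == 6
    return sum(peripheral_doses) / len(peripheral_doses) / central_dose

# Illustrative numbers only: peripheral doses averaging 97% of the
# central dose give DRF = 0.97, matching the 6 MV mean quoted above.
print(dose_reduction_factor([0.99, 0.98, 0.97, 0.97, 0.96, 0.95], 1.0))
```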
8. SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research
Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.
2014-03-01
Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. In our experience, new users quickly learn to use SHIELD-HIT12A and set up new geometries. Unlike previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found on http://www.shieldhit.org.
9. Comparative analysis of discrete and continuous absorption weighting estimators used in Monte Carlo simulations of radiative transport in turbid media
PubMed Central
Hayakawa, Carole K.; Spanier, Jerome; Venugopalan, Vasan
2014-01-01
We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights. PMID:24562029
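The two estimators compared in this entry differ only in where the absorption factor is applied. The sketch below (1D slab, isotropic scattering, illustrative coefficients) scores slab transmittance both ways, so the matching means and differing variances can be observed; it is a toy model, not the paper's radiative transport setup.

```python
import math, random

MU_A, MU_S, L = 0.3, 0.7, 2.0          # 1/cm; slab thickness in cm
MU_T = MU_S + MU_A

def transmit_discrete():
    """Discrete absorption weighting: fly with mu_t, multiply the weight
    by the single-scatter albedo mu_s/mu_t at each collision."""
    z, mu, w = 0.0, 1.0, 1.0
    while True:
        s = -math.log(1.0 - random.random()) / MU_T
        z += s * mu
        if z < 0.0 or z > L:
            return w if z > L else 0.0
        w *= MU_S / MU_T                  # absorption handled at collisions
        mu = 2.0 * random.random() - 1.0  # isotropic re-emission

def transmit_continuous():
    """Continuous absorption weighting: fly between scatterings with mu_s,
    attenuating the weight by exp(-mu_a * pathlength) along the way."""
    z, mu, w = 0.0, 1.0, 1.0
    while True:
        s = -math.log(1.0 - random.random()) / MU_S
        z_new = z + s * mu
        if z_new < 0.0 or z_new > L:      # exits: attenuate the partial path
            bound = L if mu > 0 else 0.0
            w *= math.exp(-MU_A * abs((bound - z) / mu))
            return w if z_new > L else 0.0
        w *= math.exp(-MU_A * s)
        z, mu = z_new, 2.0 * random.random() - 1.0

for f in (transmit_discrete, transmit_continuous):
    n = 50000
    scores = [f() for _ in range(n)]
    mean = sum(scores) / n
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)
    print(f.__name__, "T ~", round(mean, 4), "var ~", round(var, 5))
```

Both estimators are unbiased, so the two transmittance estimates agree within statistics while their variances differ, which is exactly the trade-off the entry quantifies.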
10. 3D Monte Carlo model of optical transport in laser-irradiated cutaneous vascular malformations
Majaron, Boris; Milanič, Matija; Jia, Wangcun; Nelson, J. S.
2010-11-01
We have developed a three-dimensional Monte Carlo (MC) model of optical transport in skin and applied it to analysis of port wine stain treatment with sequential laser irradiation and intermittent cryogen spray cooling. Our MC model extends the approaches of the popular multi-layer model by Wang et al. [1] to three dimensions, thus allowing treatment of skin inclusions with more complex geometries and arbitrary irradiation patterns. To overcome the obvious drawbacks of either "escape" or "mirror" boundary conditions at the lateral boundaries of the finely discretized volume of interest (VOI), photons exiting the VOI are propagated in laterally infinite tissue layers with appropriate optical properties, until they lose all their energy, escape into the air, or return to the VOI, but the energy deposition outside of the VOI is not computed and recorded. After discussing the selection of tissue parameters, we apply the model to analysis of blood photocoagulation and collateral thermal damage in treatment of port wine stain (PWS) lesions with sequential laser irradiation and intermittent cryogen spray cooling.
11. LDRD project 151362 : low energy electron-photon transport.
SciTech Connect
Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James
2013-09-01
At sufficiently high energies, the wavelengths of electrons and photons are short enough to only interact with one atom at a time, leading to the popular "independent-atom approximation". We attempted to incorporate atomic structure in the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.
12. Monte Carlo calculations of correction factors for plastic phantoms in clinical photon and electron beam dosimetry
SciTech Connect
Araki, Fujio; Hanyu, Yuji; Fukuoka, Miyoko; Matsumoto, Kenji; Okumura, Masahiko; Oguchi, Hiroshi
2009-07-15
The purpose of this study is to calculate correction factors for plastic water (PW) and plastic water diagnostic-therapy (PWDT) phantoms in clinical photon and electron beam dosimetry using the EGSnrc Monte Carlo code system. A water-to-plastic ionization conversion factor k_pl for PW and PWDT was computed for several commonly used Farmer-type ionization chambers with different wall materials in the range of 4-18 MV photon beams. For electron beams, a depth-scaling factor c_pl and a chamber-dependent fluence correction factor h_pl for both phantoms were also calculated in combination with NACP-02 and Roos plane-parallel ionization chambers in the range of 4-18 MeV. The h_pl values for the plane-parallel chambers were evaluated from the electron fluence correction factor φ_pl^w and wall correction factors P_wall,w and P_wall,pl for a combination of water or plastic materials. The calculated k_pl and h_pl values were verified by comparison with the measured values. A set of k_pl values computed for the Farmer-type chambers was equal to unity within 0.5% for PW and PWDT in photon beams. The k_pl values also agreed within their combined uncertainty with the measured data. For electron beams, the c_pl values computed for PW and PWDT were from 0.998 to 1.000 and from 0.992 to 0.997, respectively, in the range of 4-18 MeV. The φ_pl^w values for PW and PWDT were from 0.998 to 1.001 and from 1.004 to 1.001, respectively, at a reference depth in the range of 4-18 MeV. The difference in P_wall between water and plastic materials for the plane-parallel chambers was 0.8% at a maximum. Finally, h_pl values evaluated for plastic materials were equal to unity within 0.6% for NACP-02 and Roos chambers. The h_pl values also agreed within their combined uncertainty with the measured data. The absorbed dose to water from ionization chamber measurements in PW and PWDT plastic materials corresponds to that in water within 1%. Both phantoms can thus be used as a substitute for water for photon and electron dosimetry.
13. Few-photon transport in many-body photonic systems: A scattering approach
Lee, Changhyoup; Noh, Changsuk; Schetakis, Nikolaos; Angelakis, Dimitris G.
2015-12-01
We study the quantum transport of multiphoton Fock states in one-dimensional Bose-Hubbard lattices implemented in QED cavity arrays (QCAs). We propose an optical scheme to probe the underlying many-body states of the system by analyzing the properties of the transmitted light using scattering theory. To this end, we employ the Lippmann-Schwinger formalism within which an analytical form of the scattering matrix can be found. The latter is evaluated explicitly for the two-particle, two-site case which we use to study the resonance properties of two-photon scattering, as well as the scattering probabilities and the second-order intensity correlations of the transmitted light. The results indicate that the underlying structure of the many-body states of the model in question can be directly inferred from the physical properties of the transported photons in its QCA realization. We find that a fully resonant two-photon scattering scenario allows a faithful characterization of the underlying many-body states, unlike in the coherent driving scenario usually employed in quantum master-equation treatments. The effects of losses in the cavities, as well as the incoming photons' pulse shapes and initial correlations, are studied and analyzed. Our method is general and can be applied to probe the structure of any many-body bosonic model amenable to a QCA implementation, including the Jaynes-Cummings-Hubbard model, the extended Bose-Hubbard model, as well as a whole range of spin models.
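The two-particle, two-site Bose-Hubbard problem mentioned above is small enough to diagonalize directly; a numpy sketch with illustrative J and U values follows. It shows only the underlying many-body spectrum whose resonances the transmitted photons probe, not the Lippmann-Schwinger scattering calculation itself.

```python
import numpy as np

# Two bosons on two sites, occupation basis {|2,0>, |1,1>, |0,2>}:
# H = -J (b1^dag b2 + h.c.) + (U/2) * sum_i n_i (n_i - 1)
J, U = 1.0, 4.0                      # illustrative hopping and interaction
r2J = np.sqrt(2.0) * J
H = np.array([[U,    -r2J, 0.0 ],
              [-r2J,  0.0, -r2J],
              [0.0,  -r2J,  U  ]])

evals, evecs = np.linalg.eigh(H)
print("two-photon eigenenergies:", np.round(evals, 3))  # ~[-0.828, 4.0, 4.828]
# The antisymmetric state sits exactly at U; for U >> J the two upper
# states are doublon-like, setting the two-photon resonances that a fully
# resonant scattering experiment would pick out.
```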
14. Monte Carlo simulation of small electron fields collimated by the integrated photon MLC
Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus
2011-02-01
In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf-collimators (MLCs) were used. No additional secondary or tertiary add-ons like applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed which fits a Gaussian spectrum to the most prominent features of depth-dose curves. The comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of initial electrons was fitted to lateral dose profiles beyond the range of electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on dose calculation was studied. The air density for simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that the air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields. A higher value for this parameter eliminated discrepancies in too broad dose profiles and an increased dose along the central axis. The beam model was validated with measurements, whereby an agreement mostly within 3%/3 mm was found.
16. CAD based Monte Carlo method: Algorithms for geometric evaluation in support of Monte Carlo radiation transport calculation
Wang, Mengkuo
In particle transport computations, the Monte Carlo simulation method is a widely used algorithm. There are several Monte Carlo codes available that perform particle transport simulations. However, the geometry packages and geometric modeling capability of Monte Carlo codes are limited as they cannot handle complicated geometries made up of complex surfaces. Previous research exists that takes advantage of the modeling capabilities of CAD software. The two major approaches are the converter approach and the CAD engine based approach. By carefully analyzing the strategies and algorithms of these two approaches, the CAD engine based approach has been identified as the more promising approach. Though currently the performance of this approach is not satisfactory, there is room for improvement. The development and implementation of an improved CAD based approach is the focus of this thesis. Algorithms to accelerate the CAD engine based approach are studied. The major acceleration algorithm is the oriented bounding box algorithm, which is used in computer graphics. The difference in application between computer graphics and particle transport has been considered and the algorithm has been modified for particle transport. The major work of this thesis has been the development of the MCNPX/CGM code and the testing, benchmarking and implementation of the acceleration algorithms. MCNPX is a Monte Carlo code and CGM is a CAD geometry engine. A facet representation of the geometry provided the least slowdown of the Monte Carlo code. The CAD model generates the facet representation. The oriented bounding box algorithm was the fastest acceleration technique adopted for this work. The slowdown of MCNPX/CGM relative to MCNPX was reduced to a factor of 3 when the facet model is used. MCNPX/CGM has been successfully validated against test problems in medical physics and a fusion energy device. MCNPX/CGM gives exactly the same results as the standard MCNPX when an MCNPX geometry model is available. For the case of the complicated fusion device, the stellarator, the MCNPX/CGM results closely match a one-dimensional model calculation performed by the ARIES team.
17. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY
SciTech Connect
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
18. Monte Carlo linear accelerator simulation of megavoltage photon beams: Independent determination of initial beam parameters
SciTech Connect
Almberg, Sigrun Saur; Frengen, Jomar; Kylling, Arve; Lindmo, Tore
2012-01-15
19. Monte Carlo modelling of positron transport in real world applications
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
20. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics.
PubMed
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. An emerging P2P network utilizing computers with different types of CUDA-capable graphics processing units (GPUs) is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing simulated diffuse reflectance and fluence rate distributions for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in the range of 4 to 35 s, was achieved using single-precision computing; double-precision floating-point arithmetic provides higher accuracy. PMID:23085901
2. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
SciTech Connect
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
3. MC-PEPTITA: A Monte Carlo model for Photon, Electron and Positron Tracking In Terrestrial Atmosphere: Application for a terrestrial gamma ray flash
Sarria, D.; Blelly, P.-L.; Forme, F.
2015-05-01
Terrestrial gamma ray flashes are natural bursts of X and gamma rays, correlated to thunderstorms, that are likely to be produced at an altitude of about 10 to 20 km. After the emission, the flux of gamma rays is filtered and altered by the atmosphere and a small part of it may be detected by a satellite on low Earth orbit (RHESSI or Fermi, for example). Thus, only a residual part of the initial burst can be measured and most of the flux is made of scattered primary photons and of secondary emitted electrons, positrons, and photons. Trying to get information on the initial flux from the measurement is a very complex inverse problem, which can only be tackled by the use of a numerical model solving the transport of these high-energy particles. For this purpose, we developed a numerical Monte Carlo model which solves the transport in the atmosphere of both relativistic electrons/positrons and X/gamma rays. It makes it possible to track the photons, electrons, and positrons in the whole Earth environment (considering the atmosphere and the magnetic field) to get information on what affects the transport of the particles from the source region to the altitude of the satellite. We first present the MC-PEPTITA model, and then we validate it by comparison with a benchmark GEANT4 simulation with similar settings. Then, we show the results of a simulation close to Fermi event number 091214 in order to discuss some important properties of the photons and electrons/positrons that are reaching satellite altitude.
4. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation
SciTech Connect
Yoriyaz, Helio; Moralles, Mauricio; Tarso Dalledone Siqueira, Paulo de; Costa Guimaraes, Carla da; Belonsi Cintra, Felipe; Santos, Adimir dos
2009-11-15
Purpose: Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimate in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes due to the variety of resources and potentials they offer to carry out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain an object of study. Accurate dose estimates depend on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. Methods: For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Results: Considerable discrepancies have been found in some cases not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Conclusion: Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.
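As a sanity check on what an absorbed fraction is, a toy absorption-only Monte Carlo for a point photon source at the center of a sphere can be compared against the analytic result 1 - exp(-μR); real codes such as MCNP and GEANT4 of course include scattering, secondary particles and detailed cross sections. The attenuation coefficient below is illustrative.

```python
import math, random

def absorbed_fraction(mu, radius, n=100000):
    """Toy absorbed-fraction MC: point isotropic photon source at the
    center of a sphere, straight-line absorption-only transport."""
    absorbed = sum(1 for _ in range(n)
                   if -math.log(1.0 - random.random()) / mu < radius)
    return absorbed / n

mu, r = 0.03, 10.0   # illustrative: 1/cm and cm
print("MC      :", absorbed_fraction(mu, r))
print("analytic:", 1.0 - math.exp(-mu * r))   # ~0.259
```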
5. A GAMOS plug-in for GEANT4 based Monte Carlo simulation of radiation-induced light transport in biological media.
PubMed
Glaser, Adam K; Kanick, Stephen C; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W
2013-05-01
We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high energy particle and subsequent optical photon transport, a dynamic coupled process that is not well-described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the plug-in modified package make it also extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the necessary files to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software. PMID:23667790
7. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 1: Summary report
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate minimum weight shield configurations meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
8. A fully coupled Monte Carlo/discrete ordinates solution to the neutron transport equation. Final report
SciTech Connect
Filippone, W.L.; Baker, R.S.
1990-12-31
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by themselves. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
9. Investigation of a probe design for facilitating the uses of the standard photon diffusion equation at short source-detector separations: Monte Carlo simulations
Tseng, Sheng-Hao; Hayakawa, Carole; Spanier, Jerome; Durkin, Anthony J.
2009-09-01
We design a special diffusing probe to investigate the optical properties of human skin in vivo. The special geometry of the probe enables a modified two-layer (MTL) diffusion model to precisely describe the photon transport even when the source-detector separation is shorter than 3 mean free paths. We provide a frequency domain comparison between the Monte Carlo model and the diffusion model in both the MTL geometry and the conventional semi-infinite geometry. We show that, using the Monte Carlo model as a benchmark method, the MTL diffusion theory performs better than the diffusion theory in the semi-infinite geometry. In addition, we carry out Monte Carlo simulations with the goal of investigating the dependence of the interrogation depth of this probe on several parameters including source-detector separation, sample optical properties, and properties of the diffusing high-scattering layer. From the simulations, we find that the optical properties of samples modulate the interrogation volume greatly, and the source-detector separation and the thickness of the diffusing layer are the two dominant probe parameters that impact the interrogation volume. Our simulation results provide design guidelines for a MTL geometry probe.
10. Hypersensitive Transport in Photonic Crystals with Accidental Spatial Degeneracies.
PubMed
Makri, Eleana; Smith, Kyle; Chabanov, Andrey; Vitebskiy, Ilya; Kottos, Tsampikos
2016-01-01
A localized mode in a photonic layered structure can develop nodal points (nodal planes), where the oscillating electric field is negligible. Placing a thin metallic layer at such a nodal point results in the phenomenon of induced transmission. Here we demonstrate that if the nodal point is not a point of symmetry, then even a tiny alteration of the permittivity in the vicinity of the metallic layer drastically suppresses the localized mode along with the resonant transmission. This renders the layered structure highly reflective within a broad frequency range. Applications of this hypersensitive transport for optical and microwave limiting and switching are discussed. PMID:26903232
12. Monte Carlo study of photon beams from medical linear accelerators: Optimization, benchmark and spectra
Sheikh-Bagheri, Daryoush
1999-12-01
BEAM is a general purpose EGS4 user code for simulating radiotherapy sources (Rogers et al. Med. Phys. 22, 503-524, 1995). The BEAM code is optimized by first minimizing unnecessary electron transport (a factor of 3 improvement in efficiency). The uniform bremsstrahlung splitting (UBS) technique is assessed and found to make simulations 4 times more efficient. The Russian Roulette technique used in conjunction with UBS is substantially modified to make simulations an additional 2 times more efficient. Finally, a novel and robust technique, called selective bremsstrahlung splitting (SBS), is developed and shown to improve the efficiency of photon beam simulations by an additional factor of 3-4, depending on the end-point considered. The optimized BEAM code is benchmarked by comparing calculated and measured ionization distributions in water from the 10 and 20 MV photon beams of the NRCC linac. Unlike previous calculations, the incident electron energy is known independently to 1%, the entire extra-focal radiation is simulated and electron contamination is accounted for. Both beams use clinical jaws, whose dimensions are accurately measured, and which are set for a 10 × 10 cm² field at 110 cm. At both energies, the calculated and the measured values of ionization on the central-axis in the buildup region agree within 1% of maximum dose. The agreement is well within statistics elsewhere on the central-axis. Ionization profiles match within 1% of maximum dose, except at the geometrical edges of the field, where the disagreement is up to 5% of dose maximum. Causes for this discrepancy are discussed. The benchmarked BEAM code is then used to simulate beams from the major commercial medical linear accelerators. The off-axis factors are matched within statistical uncertainties, for most of the beams at the 1σ level and for all at the 2σ level. The calculated and measured depth-dose data agree within 1% (local dose), at about 1% (1σ level) statistics, at all depths past the depth of maximum dose for almost all beams. The calculated photon spectra and average energy distributions are compared to those published by Mohan et al. and decomposed into direct and scattered photon components.
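The splitting techniques assessed above share one invariant: total statistical weight is conserved. A schematic sketch of uniform splitting with the complementary Russian roulette follows; the sampler name is hypothetical, this is not the BEAM implementation, and selective splitting additionally varies the splitting number with emission direction.

```python
import random

def split_bremsstrahlung(parent_weight, n_split, sample_photon):
    """Uniform bremsstrahlung splitting: emit n_split photons, each
    carrying 1/n_split of the parent weight. `sample_photon` is a
    user-supplied (hypothetical) sampler of one photon's energy/direction."""
    w = parent_weight / n_split
    return [(w, sample_photon()) for _ in range(n_split)]

def russian_roulette(weight, survival_prob):
    """Complementary roulette keeps the secondary-particle population and
    total expected weight under control after splitting."""
    if random.random() < survival_prob:
        return weight / survival_prob    # survivor carries boosted weight
    return None                          # particle terminated

photons = split_bremsstrahlung(1.0, 20, lambda: "E, Omega sampled here")
print(len(photons), photons[0][0])       # 20 photons, each of weight 0.05
```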
13. Verification measurements and clinical evaluation of the iPlan RT Monte Carlo dose algorithm for 6 MV photon energy
Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.
2010-08-01
This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material to mimic head and neck and thorax, and in an Alderson anthropomorphic phantom. Dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities, the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head-and-neck and 15 lung patients, a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical for head and neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in MC calculations. We are planning to implement MC calculations for head and neck and lung cancer patients.
14. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes
Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo
15. Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.
PubMed
Czarnecki, D; Zink, K
2013-04-21
The application of small photon fields in modern radiotherapy requires the determination of total scatter factors S_cp or field factors Ω^(f_clin,f_msr)_(Q_clin,Q_msr) with high precision. Both quantities require knowledge of the field-size-dependent and detector-dependent correction factor k^(f_clin,f_msr)_(Q_clin,Q_msr). The aim of this study is the determination of the correction factor k^(f_clin,f_msr)_(Q_clin,Q_msr) for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water-to-air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k^(f_clin,f_msr)_(Q_clin,Q_msr); this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k^(f_clin,f_msr)_(Q_clin,Q_msr) is of the order of 1.2 at a field size of 1 × 1 cm² for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates no more than 2% from unity in field sizes between 10 × 10 and 1 × 1 cm², but below this field size there is a steep decrease of k^(f_clin,f_msr)_(Q_clin,Q_msr) below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm², i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small field dosimetry, since its correction factor k^(f_clin,f_msr)_(Q_clin,Q_msr) is closest to unity in small fields and mainly independent of the electron beam spot size. PMID:23514734
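Once the Monte Carlo doses to water and to the detector are available for both fields, the correction factor follows from a ratio of ratios. A sketch with illustrative numbers (chosen to reproduce the order-of-1.2 value quoted above, not taken from the paper):

```python
def small_field_correction(d_w_clin, d_det_clin, d_w_msr, d_det_msr):
    """k^(f_clin,f_msr)_(Q_clin,Q_msr) from Monte Carlo doses to water and
    to the detector in the clinical and machine-specific reference fields."""
    return (d_w_clin / d_det_clin) / (d_w_msr / d_det_msr)

# Illustrative: a detector that under-responds by ~17% in a 1 x 1 cm2
# field relative to the 10 x 10 cm2 reference needs k ~ 1.2.
print(small_field_correction(1.00, 0.83, 1.00, 0.996))
```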
16. Effect of statistical fluctuation in Monte Carlo based photon beam dose calculation on gamma index evaluation
Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2013-03-01
The γ-index test has been commonly adopted to quantify the degree of agreement between a reference dose distribution and an evaluation dose distribution. Monte Carlo (MC) simulation has been widely used for radiotherapy dose calculation for both clinical and research purposes. The goal of this work is to investigate both theoretically and experimentally the impact of the MC statistical fluctuation on the γ-index test when the fluctuation exists in the reference, the evaluation, or both dose distributions. To first-order approximation, we theoretically demonstrated in a simplified model that the statistical fluctuation tends to overestimate γ-index values when it exists in the reference dose distribution and underestimate γ-index values when it exists in the evaluation dose distribution, provided the original γ-index is relatively large compared with the statistical fluctuation. Our numerical experiments using realistic clinical photon radiation therapy cases have shown that (1) when performing a γ-index test between an MC reference dose and a non-MC evaluation dose, the average γ-index is overestimated and the gamma passing rate decreases with the increase of the statistical noise level in the reference dose; (2) when performing a γ-index test between a non-MC reference dose and an MC evaluation dose, the average γ-index is underestimated when they are within the clinically relevant range and the gamma passing rate increases with the increase of the statistical noise level in the evaluation dose; (3) when performing a γ-index test between an MC reference dose and an MC evaluation dose, the gamma passing rate is overestimated due to the statistical noise in the evaluation dose and underestimated due to the statistical noise in the reference dose. We conclude that the γ-index test should be used with caution when comparing dose distributions computed with MC simulation.
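A minimal discrete 1D global γ-index implementation makes the definition used in this entry concrete; the profiles and the 3%/3 mm criteria below are illustrative.

```python
import math

def gamma_index(ref, eva, dx, dta_mm, dd_frac, d_norm):
    """1D global gamma of an evaluation profile against a reference:
    gamma(i) = min_j sqrt((dist/DTA)^2 + (dose diff / dose criterion)^2).
    ref/eva: dose samples on a common grid with spacing dx (mm);
    dta_mm: distance-to-agreement criterion; dd_frac: dose-difference
    criterion as a fraction of the normalization dose d_norm."""
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(eva):
            dist = (i - j) * dx
            ddiff = de - dr
            g2 = (dist / dta_mm) ** 2 + (ddiff / (dd_frac * d_norm)) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

ref = [0.2, 0.5, 1.0, 0.5, 0.2]
eva = [0.21, 0.52, 0.98, 0.49, 0.20]
g = gamma_index(ref, eva, dx=1.0, dta_mm=3.0, dd_frac=0.03, d_norm=1.0)
print("pass rate:", sum(x <= 1.0 for x in g) / len(g))
```

Statistical noise added to either `ref` or `eva` shifts these γ values in opposite directions, which is the asymmetry the study quantifies.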
17. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
SciTech Connect
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is a powerful and cost-effective aid to computer-aided design for radiation transport code users in the nuclear field, in particular in core design and radiation analysis. (authors)
18. Update On the Status of the FLUKA Monte Carlo Transport Code*
NASA Technical Reports Server (NTRS)
Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.
2006-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. From the point of view of the physics, improvements have been made with the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of the online capability to evolve radioactive products and get subsequent dose rates, and the upgrading of the treatment of EM interactions with the elimination of the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions, the electromagnetic dissociation of heavy ions has been added along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64 bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool. On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation dose for beam target areas, and dose calculations for radiation therapy, as well as being adapted for use in the simulation of events in the ALICE detector at the LHC.
19. Photon energy-modulated radiotherapy: Monte Carlo simulation and treatment planning study
SciTech Connect
Park, Jong Min; Kim, Jung-in; Heon Choi, Chang; Chie, Eui Kyu; Kim, Il Han; Ye, Sung-Joon
2012-03-15
Purpose: To demonstrate the feasibility of photon energy-modulated radiotherapy during beam-on time. Methods: A cylindrical device made of aluminum was conceptually proposed as an energy modulator. The frame of the device was connected with 20 tubes through which mercury could be injected or drained to adjust the thickness of mercury along the beam axis. In Monte Carlo (MC) simulations, a flattening filter of a 6 or 10 MV linac was replaced with the device. The thickness of mercury inside the device varied from 0 to 40 mm at field sizes of 5 × 5 cm² (FS5), 10 × 10 cm² (FS10), and 20 × 20 cm² (FS20). At least 5 billion histories were followed for each simulation to create phase space files at 100 cm source-to-surface distance (SSD). In-water beam data were acquired by additional MC simulations using the above phase space files. A treatment planning system (TPS) was commissioned to generate a virtual machine using the MC-generated beam data. Intensity modulated radiation therapy (IMRT) plans for six clinical cases were generated using conventional 6 MV, 6 MV flattening filter free, and energy-modulated photon beams of the virtual machine. Results: As the thickness of mercury increased, percentage depth doses (PDDs) of modulated 6 and 10 MV beams beyond the depth of dose maximum increased continuously. The PDD increase at depths of 10 and 20 cm for modulated 6 MV was 4.8% and 5.2% at FS5, 3.9% and 5.0% at FS10, and 3.2%-4.9% at FS20 as the thickness of mercury increased from 0 to 20 mm. The corresponding increases for modulated 10 MV were 4.5% and 5.0% at FS5, 3.8% and 4.7% at FS10, and 4.1% and 4.8% at FS20 as the thickness of mercury increased from 0 to 25 mm. The outputs of modulated 6 MV with 20 mm mercury and of modulated 10 MV with 25 mm mercury were reduced to 30% and 56% of those of the conventional linac, respectively. The energy-modulated IMRT plans had lower integral doses than the 6 MV IMRT or 6 MV flattening filter free plans for tumors located in the periphery, while maintaining similar target coverage, homogeneity, and conformity. Conclusions: The MC study of the designed energy modulator demonstrated the feasibility of energy-modulated photon beams available during beam-on time. The planning study showed an advantage of energy- and intensity-modulated radiotherapy in terms of integral dose without sacrificing plan quality.
20. Time series analysis of Monte Carlo neutron transport calculations
Nease, Brian Robert
A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
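A sketch of the final estimation step described above, assuming (as the abstract states) that the POP-projected fission-source coefficient behaves as an AR(1) process whose lag-1 autocorrelation equals the ratio of the desired mode eigenvalue to the fundamental one; the synthetic series and names are hypothetical, not the dissertation's code:

```python
import numpy as np

def lag1_autocorr(series):
    """Lag-1 autocorrelation, i.e. the AR(1) coefficient estimate."""
    a = np.asarray(series, dtype=float)
    a = a - a.mean()
    return float(np.dot(a[:-1], a[1:]) / np.dot(a, a))

# Synthetic AR(1) series standing in for a POP coefficient with k1/k0 = 0.8.
rng = np.random.default_rng(1)
rho_true, n = 0.8, 50_000
a = np.zeros(n)
for t in range(1, n):
    a[t] = rho_true * a[t - 1] + rng.standard_normal()

k0 = 1.0                    # fundamental eigenvalue reported by the MC code
rho = lag1_autocorr(a)      # estimates k1 / k0
print(f"k1 estimate: {rho * k0:.3f} (true {rho_true * k0:.3f})")
```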
1. Modeling bioluminescent photon transport in tissue based on Radiosity-diffusion model
Sun, Li; Wang, Pu; Tian, Jie; Zhang, Bo; Han, Dong; Yang, Xin
2010-03-01
Bioluminescence tomography (BLT) is one of the most important non-invasive optical molecular imaging modalities. The model for bioluminescent photon propagation plays a significant role in bioluminescence tomography studies. Due to its high computational efficiency, the diffusion approximation (DA) is generally applied in bioluminescence tomography. But the diffusion equation is valid only in highly scattering and weakly absorbing regions and fails in non-scattering or low-scattering tissues, such as a cyst in the breast, the cerebrospinal fluid (CSF) layer of the brain and the synovial fluid layer in the joints. A hybrid radiosity-diffusion model is proposed in this paper for dealing with the non-scattering regions within diffusing domains. This hybrid method incorporates a priori information on the geometry of non-scattering regions, which can be acquired by magnetic resonance imaging (MRI) or x-ray computed tomography (CT). The model is then implemented using a finite element method (FEM) to ensure high computational efficiency. Finally, we demonstrate that the method is comparable with the Monte Carlo (MC) method, which is regarded as a 'gold standard' for photon transport simulation.
2. Delocalization of electrons by cavity photons in transport through a quantum dot molecule
Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar
2014-11-01
We present results on cavity-photon-assisted electron transport through two lateral quantum dots embedded in a finite quantum wire. The double quantum dot system is weakly connected to two leads and strongly coupled to a single quantized photon cavity mode with initially two linearly polarized photons in the cavity. Including the full electron-photon interaction, the transient current controlled by a plunger gate in the central system is studied by using a quantum master equation. Without a photon cavity, two resonant current peaks are observed in the range selected for the plunger gate voltage: the ground-state peak, and the peak corresponding to the first-excited state. The current in the ground state is higher than in the first-excited state due to their different symmetry. In a photon cavity with the photon field polarized along or perpendicular to the transport direction, two extra side peaks are found, namely, a photon replica of the ground state and a photon replica of the first-excited state. The side peaks are caused by photon-assisted electron transport, with multiphoton absorption processes for up to three photons during an electron tunneling process. The inter-dot tunneling in the ground state can be controlled by the photon cavity in the case of the photon field polarized along the transport direction. The electron charge is delocalized from the dots by the photon cavity. Furthermore, the current in the photon-induced side peaks can be strongly enhanced by increasing the electron-photon coupling strength for the case of photons polarized along the transport direction.
3. Light transport and lasing in complex photonic structures
Liew, Seng Fatt
Complex photonic structures refer to composite optical materials with dielectric constant varying on length scales comparable to optical wavelengths. Light propagation in such heterogeneous composites is greatly different from that in homogeneous media due to scattering of light in all directions. Interference of these scattered light waves gives rise to many fascinating phenomena, and this has been a fast-growing research area, both for its fundamental physics and for its practical applications. In this thesis, we have investigated the optical properties of photonic structures with different degrees of order, ranging from periodic to random. The first part of this thesis consists of numerical studies of the photonic band gap (PBG) effect in structures from 1D to 3D. From these studies, we have observed that the PBG effect in a 1D photonic crystal is robust against uncorrelated disorder due to preservation of long-range positional order. However, in higher dimensions, short-range positional order alone is sufficient to form PBGs in 2D and 3D photonic amorphous structures (PASs). We have identified several parameters, including dielectric filling fraction and degree of order, that can be tuned to create a broad isotropic PBG. The largest PBG is produced by the dielectric networks due to local uniformity in their dielectric constant distribution. In addition, we also show that deterministic aperiodic structures (DASs) such as the golden-angle spiral and topological defect structures can support a wide PBG, and their optical resonances contain unexpected features compared to those in photonic crystals. Another growing research field based on complex photonic structures is the study of structural color in animals and plants. Previous studies have shown that non-iridescent color can be generated from PASs via single or double scattering. For a better understanding of the coloration mechanisms, we have measured the wavelength-dependent scattering length of biomimetic samples. Our theoretical modeling and analysis explains why single scattering of light is dominant over multiple scattering in similar biological structures and is responsible for color generation. In collaboration with evolutionary biologists, we examine how closely related species and populations of butterflies have evolved their structural color. We have used artificial selection on a lab model butterfly to evolve violet color from an ultra-violet brown color. The same coloration mechanism is found in other blue/violet species that have evolved their color in nature, which implies the same evolutionary path for their nanostructure. While the absorption of light is ubiquitous in nature and in applications, the question remains how absorption modifies the transmission in random media. Therefore, we numerically study the effects of optical absorption on the highest-transmission states in a two-dimensional disordered waveguide. Our results show that strong absorption turns the highest-transmission channel in random media from diffusive to ballistic-like transport. Finally, we have demonstrated lasing mode selection in a nearly circular semiconductor microdisk laser by shaping the spatial profile of the pump beam. Despite strong mode overlap, selective pumping suppresses the competing lasing modes by either increasing their thresholds or reducing their power slopes. As a result, we can switch both the lasing frequency and the output direction. This powerful technique can have potential application as an on-chip tunable light source.
4. Dosimetric impact of monoenergetic photon beams in the small-animal irradiation with inhomogeneities: A Monte Carlo evaluation
Chow, James C. L.
2013-05-01
This study investigated the variations of the dose and dose distribution in small-animal irradiation due to the photon beam energy and the presence of inhomogeneity. Based on the same mouse computed tomography image set, three Monte Carlo phantoms, namely inhomogeneous, homogeneous and bone-tissue phantoms, were used in this study. These phantoms were generated by overriding the relative electron density of no voxel (inhomogeneous), all voxels (homogeneous) and the bone voxels (bone-tissue) to one. 360° photon arcs with beam energies of 50-1250 keV were used in the mouse irradiations. Doses in the above phantoms were calculated using the EGSnrc-based DOSXYZnrc code through the DOSCTP. It was found that the dose conformity increased with the photon beam energy from the keV to the MeV range. For the inhomogeneous mouse phantom, increasing the photon beam energy from 50 keV to 1250 keV increased the dose deposited at the isocenter about 21-fold. For the bone dose enhancement, the mean dose was 1.4 times higher when the bone inhomogeneity was not neglected using the 50 keV photon beams in the mouse irradiation. Bone dose enhancement affecting the mean dose in the mouse irradiation can be found for photon beams in the energy range of 50-200 keV, and the dose enhancement decreases with an increase of the beam energy. Moreover, the MeV photon beam has a higher dose at the isocenter and a better dose conformity compared to the keV beam.
5. Selection of voxel size and photon number in voxel-based Monte Carlo method: criteria and applications.
PubMed
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-09-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost will be higher when small voxels are used to improve smoothness of tissue interface and a large number of photons are used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme of local grid refinement technique to reduce computational cost of a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for the tissue with high absorption and complex geometry, and coarse grids are used for the other part. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Result shows that local grid refinement technique photon ray splitting scheme can accelerate the computation by 7.6 times (reduce time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions. PMID:26417866
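The photon-number criterion quoted above reduces to a one-liner; a sketch (the voxel-size rule itself depends on tissue geometry and optical properties and is not reproduced here; the helper name is hypothetical):

```python
def min_photon_number(n_voxels, factor=5):
    """Photon-number floor: at least `factor` times the total voxel number,
    per the criterion stated in the abstract."""
    return factor * n_voxels

# For the grid-refinement scheme, the criterion is applied to the coarse grid.
print(min_photon_number(100 * 100 * 100))  # 1e6 voxels -> at least 5e6 photons
```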
6. Monte Carlo simulation on pre-clinical irradiation: A heterogeneous phantom study on monoenergetic kilovoltage photon beams
Chow, James C. L.
2012-10-01
This study investigated radiation dose variations in pre-clinical irradiation due to the photon beam energy and the presence of tissue heterogeneity. Based on the same mouse computed tomography image dataset, three phantoms, namely heterogeneous, homogeneous and bone homogeneous, were used. These phantoms were generated by overriding the relative electron density of no voxel (heterogeneous), all voxels (homogeneous) and the bone voxels (bone homogeneous) to one. 360° photon arcs with beam energies of 50-1250 keV were used in the mouse irradiations. Doses in the above phantoms were calculated using the EGSnrc-based DOSXYZnrc code through the DOSCTP. Monte Carlo simulations were carried out in parallel using multiple nodes in a high-performance computing cluster. It was found that the dose conformity increased with the photon beam energy from the keV to the MeV range. For the heterogeneous mouse phantom, increasing the photon beam energy from 50 keV to 1250 keV increased the dose deposited at the isocenter sevenfold. For the bone dose enhancement, the mean dose was 2.7 times higher when the bone heterogeneity was not neglected using the 50 keV photon beams in the mouse irradiation. Bone dose enhancement affecting the mean dose was found for photon beams in the energy range of 50-200 keV, and the dose enhancement decreased with an increase of the beam energy. Moreover, the MeV photon beam had a higher dose at the isocenter and a better dose conformity compared to the keV beam.
7. Selection of voxel size and photon number in voxel-based Monte Carlo method: criteria and applications
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-09-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost will be higher when small voxels are used to improve smoothness of tissue interface and a large number of photons are used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme of local grid refinement technique to reduce computational cost of a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for the tissue with high absorption and complex geometry, and coarse grids are used for the other part. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Result shows that local grid refinement technique photon ray splitting scheme can accelerate the computation by 7.6 times (reduce time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions.
8. Coupling Deterministic and Monte Carlo Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios
SciTech Connect
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
9. High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media 2. Transport results
USGS Publications Warehouse
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported on. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported on as well.
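A sketch of the moment estimates described above for a single realization: centroid and centered second moment of a particle cloud, then a late-time longitudinal macrodispersivity from the variance growth rate. The convention A_L = (dσ²/dt)/(2v) is one common choice and is assumed here, not taken from the paper:

```python
import numpy as np

def cloud_moments(x, weights=None):
    """Centroid and centered second spatial moment of a tracer cloud."""
    m1 = np.average(x, weights=weights)
    m2 = np.average((x - m1) ** 2, weights=weights)
    return m1, m2

def longitudinal_macrodispersivity(times, centroids, variances):
    """Late-time A_L = (d variance / dt) / (2 v), with v from the centroid drift."""
    v = np.polyfit(times, centroids, 1)[0]        # mean cloud velocity
    dvar_dt = np.polyfit(times, variances, 1)[0]  # variance growth rate
    return dvar_dt / (2.0 * v)
```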
10. Lorentz force correction to the Boltzmann radiation transport equation and its implications for Monte Carlo algorithms
Bouchard, Hugo; Bielajew, Alex
2015-07-01
To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano's and Lewis' approaches are stated in this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. Lewis' theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms.
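One plausible way to write the generalized equation the abstract describes, sketched in our own notation (f the phase-space density, with the Lorentz force q(E + v×B) entering through a momentum-gradient operator alongside the usual streaming and collision terms); the paper's exact form may differ:

```latex
% Sketch of a field-modified linear Boltzmann transport equation (our notation).
\mathbf{v}\cdot\nabla_{\mathbf r} f
  + q\,(\mathbf E + \mathbf v \times \mathbf B)\cdot\nabla_{\mathbf p} f
  + v\,\Sigma_t\, f
  = \int v'\,\Sigma_s(\mathbf p' \to \mathbf p)\, f(\mathbf r, \mathbf p')\, d^3p' + S
```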
11. Lorentz force correction to the Boltzmann radiation transport equation and its implications for Monte Carlo algorithms.
PubMed
Bouchard, Hugo; Bielajew, Alex
2015-07-01
To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano's and Lewis' approaches are stated in this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. Lewis' theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms. PMID:26061045
12. The Monte Carlo approach to transport modeling in deca-nanometer MOSFETs
Sangiorgi, Enrico; Palestri, Pierpaolo; Esseni, David; Fiegna, Claudio; Selmi, Luca
2008-09-01
In this paper, we review recent developments of the Monte Carlo approach to the simulation of semi-classical carrier transport in nano-MOSFETs, with particular focus on the inclusion of quantum-mechanical effects in the simulation (using either the multi-subband approach or quantum corrections to the electrostatic potential) and on the numerical stability issues related to the coupling of the transport with the Poisson equation. Selected applications are presented, including the analysis of quasi-ballistic transport, the determination of the RF characteristics of deca-nanometric MOSFETs, and the study of non-conventional device structures and channel materials.
13. Correlated few-photon transport in one-dimensional waveguides: Linear and nonlinear dispersions
SciTech Connect
Roy, Dibyendu
2011-04-15
We address correlated few-photon transport in one-dimensional waveguides coupled to a two-level system (TLS), such as an atom or a quantum dot. We derive exactly the single-photon and two-photon current (transmission) for linear and nonlinear (tight-binding sinusoidal) energy-momentum dispersion relations of photons in the waveguides and compare the results for the different dispersions. A large enhancement of the two-photon current for the sinusoidal dispersion has been seen at a certain transition energy of the TLS away from the single-photon resonances.
14. Radiation dose measurements and Monte Carlo calculations for neutron and photon reactions in a human head phantom for accelerator-based boron neutron capture therapy
Kim, Don-Soo
15. Data decomposition of Monte Carlo particle transport simulations via tally servers
SciTech Connect
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
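A schematic of the tracker/server decomposition described above, written with mpi4py to show the communication pattern only; the rank split, tally mapping, and message format are hypothetical, and this is not OpenMC's implementation:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_servers = max(1, size // 4)                 # e.g. one server per three trackers
is_server = rank < n_servers

if is_server:
    # Tally server: accumulate (tally_id, score) pairs until all trackers finish.
    tallies = {}
    finished = 0
    while finished < size - n_servers:
        msg = comm.recv(source=MPI.ANY_SOURCE)
        if msg is None:
            finished += 1                     # one tracker signalled completion
        else:
            tally_id, score = msg
            tallies[tally_id] = tallies.get(tally_id, 0.0) + score
else:
    # Tracking processor: simulate histories, ship scores to the owning server.
    for history in range(1000):
        tally_id, score = history % 10000, 1.0        # placeholder scoring
        comm.send((tally_id, score), dest=tally_id % n_servers)
    for server in range(n_servers):
        comm.send(None, dest=server)          # signal completion to each server
```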
16. Electron transport in radiotherapy using local-to-global Monte Carlo
SciTech Connect
Svatos, M.M.; Chandler, W.P.; Siantar, C.L.H.; Rathkopf, J.A.; Ballinger, C.T.; Neuenschwander, H.; Mackie, T.R.; Reckwerdt, P.J.
1994-09-01
Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the steps to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and by speeding up the calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given.
17. Dosimetric variation due to the photon beam energy in the small-animal irradiation: A Monte Carlo study
SciTech Connect
Chow, James C. L.; Leung, Michael K. K.; Lindsay, Patricia E.; Jaffray, David A.
2010-10-15
Purpose: The impact of photon beam energy and tissue heterogeneities on dose distributions and dosimetric characteristics such as point dose, mean dose, and maximum dose was investigated in the context of small-animal irradiation using Monte Carlo simulations based on the EGSnrc code. Methods: Three Monte Carlo mouse phantoms, namely, heterogeneous, homogeneous, and bone homogeneous were generated based on the same mouse computed tomography image set. These phantoms were generated by overriding the tissue type of none of the voxels (heterogeneous), all voxels (homogeneous), and only the bone voxels (bone homogeneous) to that of soft tissue. Phase space files of the 100 and 225 kVp photon beams based on a small-animal irradiator (XRad225Cx, Precision X-Ray Inc., North Branford, CT) were generated using BEAMnrc. A 360 deg. photon arc was simulated and three-dimensional (3D) dose calculations were carried out using the DOSXYZnrc code through DOSCTP in the above three phantoms. For comparison, the 3D dose distributions, dose profiles, mean, maximum, and point doses at different locations such as the isocenter, lung, rib, and spine were determined in the three phantoms. Results: The dose gradient resulting from the 225 kVp arc was found to be steeper than for the 100 kVp arc. The mean dose was found to be 1.29 and 1.14 times higher for the heterogeneous phantom when compared to the mean dose in the homogeneous phantom using the 100 and 225 kVp photon arcs, respectively. The bone doses (rib and spine) in the heterogeneous mouse phantom were about five (100 kVp) and three (225 kVp) times higher when compared to the homogeneous phantom. However, the lung dose did not vary significantly between the heterogeneous, homogeneous, and bone homogeneous phantom for the 225 kVp compared to the 100 kVp photon beams. Conclusions: A significant bone dose enhancement was found when the 100 and 225 kVp photon beams were used in small-animal irradiation. This dosimetric effect, due to the presence of the bone heterogeneity, was more significant than that due to the lung heterogeneity. Hence, for kV photon energies of the range used in small-animal irradiation, the increase of the mean and bone dose due to the photoelectric effect could be a dosimetric concern.
18. Backscatter towards the monitor ion chamber in high-energy photon and electron beams: charge integration versus Monte Carlo simulation
Verhaegen, F.; Symonds-Tayler, R.; Liu, H. H.; Nahum, A. E.
2000-11-01
In some linear accelerators, the charge collected by the monitor ion chamber is partly caused by backscattered particles from accelerator components downstream from the chamber. This influences the output of the accelerator and also has to be taken into account when output factors are derived from Monte Carlo simulations. In this work, the contribution of backscattered particles to the monitor ion chamber response of a Varian 2100C linac was determined for photon beams (6, 10 MV) and for electron beams (6, 12, 20 MeV). The experimental procedure consisted of charge integration from the target in a photon beam or from the monitor ion chamber in electron beams. The Monte Carlo code EGS4/BEAM was used to study the contribution of backscattered particles to the dose deposited in the monitor ion chamber. Both measurements and simulations showed a linear increase in backscatter fraction with decreasing field size for photon and electron beams. For 6 MV and 10 MV photon beams, a 2-3% increase in backscatter was obtained for a 0.5 × 0.5 cm2 field compared to a 40 × 40 cm2 field. The results for the 6 MV beam were slightly higher than for the 10 MV beam. For electron beams (6, 12, 20 MeV), an increase of similar magnitude was obtained from measurements and simulations for 6 MeV electrons. For higher energy electron beams a smaller increase in backscatter fraction was found. The problem is of less importance for electron beams since large variations of field size for a single electron energy usually do not occur.
19. Backscatter towards the monitor ion chamber in high-energy photon and electron beams: charge integration versus Monte Carlo simulation.
PubMed
Verhaegen, F; Symonds-Tayler, R; Liu, H H; Nahum, A E
2000-11-01
In some linear accelerators, the charge collected by the monitor ion chamber is partly caused by backscattered particles from accelerator components downstream from the chamber. This influences the output of the accelerator and also has to be taken into account when output factors are derived from Monte Carlo simulations. In this work, the contribution of backscattered particles to the monitor ion chamber response of a Varian 2100C linac was determined for photon beams (6, 10 MV) and for electron beams (6, 12, 20 MeV). The experimental procedure consisted of charge integration from the target in a photon beam or from the monitor ion chamber in electron beams. The Monte Carlo code EGS4/BEAM was used to study the contribution of backscattered particles to the dose deposited in the monitor ion chamber. Both measurements and simulations showed a linear increase in backscatter fraction with decreasing field size for photon and electron beams. For 6 MV and 10 MV photon beams, a 2-3% increase in backscatter was obtained for a 0.5 x 0.5 cm2 field compared to a 40 x 40 cm2 field. The results for the 6 MV beam were slightly higher than for the 10 MV beam. For electron beams (6, 12, 20 MeV), an increase of similar magnitude was obtained from measurements and simulations for 6 MeV electrons. For higher energy electron beams a smaller increase in backscatter fraction was found. The problem is of less importance for electron beams since large variations of field size for a single electron energy usually do not occur. PMID:11098896
20. Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space
DOEpatents
Schach Von Wittenau, Alexis E.
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient: overall sampling efficiency is 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
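The sampling mechanism such a correlated-histogram representation implies can be sketched as a two-stage table lookup: draw the energy bin from its marginal distribution, then draw the trajectory bin from the distribution conditioned on that energy. The numbers below are hypothetical placeholders, not the patent's data:

```python
import numpy as np

rng = np.random.default_rng(2)

p_energy = np.array([0.5, 0.3, 0.2])          # marginal over energy bins
p_angle_given_e = np.array([                  # angle distribution per energy bin
    [0.7, 0.2, 0.1],
    [0.4, 0.4, 0.2],
    [0.2, 0.3, 0.5],
])

def sample_photon():
    """Draw (energy bin, angle bin) preserving the energy-angle correlation."""
    e = rng.choice(p_energy.size, p=p_energy)
    a = rng.choice(p_angle_given_e.shape[1], p=p_angle_given_e[e])
    return e, a

print([sample_photon() for _ in range(5)])
```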
1. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.
PubMed
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing computational time of MC simulation and obtaining simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing. PMID:26249663
2. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code
SciTech Connect
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
3. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code
DOE PAGESBeta
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
4. Minimizing the cost of splitting in Monte Carlo radiation transport simulation
SciTech Connect
Juzaitis, R.J.
1980-10-01
A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_N (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ_s²·τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
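For reference, a sketch of the weight-preserving splitting/Russian-roulette move whose parameters the deterministic cost analysis above optimizes; `nu` is the importance ratio across the surface, and the helper is hypothetical:

```python
import random

def cross_importance_surface(weight, nu):
    """Split (nu > 1) or roulette (nu < 1) a particle crossing a surface.

    Returns a list of surviving particle weights whose expected total equals
    the incoming weight, so tallies remain unbiased.
    """
    if nu >= 1.0:
        n = int(nu)
        if random.random() < nu - n:          # sample the fractional split
            n += 1
        return [weight / nu] * n
    if random.random() < nu:                  # Russian roulette survival
        return [weight / nu]
    return []                                 # particle killed
```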
5. Capabilities, Implementation, and Benchmarking of Shift, a Massively Parallel Monte Carlo Radiation Transport Code
DOE PAGESBeta
Pandya, Tara M; Johnson, Seth R; Evans, Thomas M; Davidson, Gregory G; Hamilton, Steven P; Godfrey, Andrew T
2016-01-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
6. A simplified spherical harmonic method for coupled electron-photon transport calculations
SciTech Connect
Josef, J.A.
1996-12-01
In this thesis we have developed a simplified spherical harmonic method (SP_N method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP_N method has never before been applied to charged-particle transport. We have performed a first-time Fourier analysis of the source iteration scheme and the P_1 diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP_N equations. Our theoretical analyses indicate that the source iteration and P_1 DSA schemes are as effective for the 2-D SP_N equations as for the 1-D S_N equations. Previous analyses have indicated that the P_1 DSA scheme is unstable (with sufficiently forward-peaked scattering and sufficiently small absorption) for the 2-D S_N equations, yet is very effective for the 1-D S_N equations. In addition, we have applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP_N equations as for the 1-D S_N equations. It has previously been shown for 1-D S_N calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. We have investigated the applicability of the SP_N approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems. In the space shielding study, the SP_N method produced solutions that are accurate within 10% of the benchmark Monte Carlo solutions, and often orders of magnitude faster than Monte Carlo. We have successfully modeled quasi-void problems and have obtained excellent agreement with Monte Carlo. We have observed that the SP_N method appears to be too diffusive an approximation for beam problems. This result, however, is in agreement with theoretical expectations.
7. Modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program
SciTech Connect
Moskowitz, B.S.
2000-02-01
This paper describes the modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program. This effort represents a complete 'white sheet of paper' rewrite of the code. In this paper, the motivation driving this project, the design objectives for the new version of the program, and the design choices and their consequences will be discussed. The design itself will also be described, including the important subsystems as well as the key classes within those subsystems.
8. MONTE CARLO PARTICLE TRANSPORT IN MEDIA WITH EXPONENTIALLY VARYING TIME-DEPENDENT CROSS-SECTIONS
SciTech Connect
F. BROWN; W. MARTIN
2001-02-01
A probability density function (PDF) and random sampling procedure for the distance to collision were derived for the case of exponentially varying cross-sections. Numerical testing indicates that both are correct. This new sampling procedure has direct application in a new method for Monte Carlo radiation transport, and may be generally useful for analyzing physical problems where the material cross-sections change very rapidly in an exponential manner.
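A sketch of a direct-inversion sampler consistent with the procedure described, assuming a total cross-section Σ(s) = Σ0·exp(λs) along the flight path (the report's exact PDF and notation may differ):

```python
import math
import random

def distance_to_collision(sigma0, lam):
    """Sample the flight distance for sigma(s) = sigma0 * exp(lam * s).

    Inverts the optical depth tau(s) = sigma0 * (exp(lam*s) - 1) / lam.
    Returns math.inf when lam < 0 leaves a finite total optical depth and
    the particle escapes without colliding.
    """
    xi = 1.0 - random.random()                    # uniform in (0, 1]
    if lam == 0.0:
        return -math.log(xi) / sigma0             # constant cross-section limit
    arg = 1.0 - lam * math.log(xi) / sigma0
    if arg <= 0.0:                                # only possible for lam < 0
        return math.inf
    return math.log(arg) / lam
```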
9. Cavity-photon-switched coherent transient transport in a double quantum waveguide
SciTech Connect
Abdullah, Nzar Rauf Gudmundsson, Vidar; Tang, Chi-Shung; Manolescu, Andrei
2014-12-21
We study cavity-photon-switched coherent electron transport in a symmetric double quantum waveguide. The waveguide system is weakly connected to two electron reservoirs, but strongly coupled to a single quantized photon cavity mode. A coupling window is placed between the waveguides to allow electron interference or inter-waveguide transport. The transient electron transport in the system is investigated using a quantum master equation. We present a cavity-photon-tunable semiconductor quantum waveguide implementation of an inverter quantum gate, in which the output of the waveguide system may be selected via an appropriate photon number or “photon frequency” of the cavity. In addition, the importance of the photon polarization in the cavity, that is, either parallel or perpendicular to the direction of electron propagation in the waveguide system, is demonstrated.
10. Magnetic confinement of electron and photon radiotherapy dose: A Monte Carlo simulation with a nonuniform longitudinal magnetic field
SciTech Connect
Chen Yu; Bielajew, Alex F.; Litzenberg, Dale W.; Moran, Jean M.; Becchetti, Frederick D.
2005-12-15
It recently has been shown experimentally that the focusing provided by a longitudinal nonuniform high magnetic field can significantly improve electron beam dose profiles. This could permit precise targeting of tumors near critical areas and minimize the radiation dose to surrounding healthy tissue. The experimental results together with Monte Carlo simulations suggest that the magnetic confinement of electron radiotherapy beams may provide an alternative to proton or heavy ion radiation therapy in some cases. In the present work, the external magnetic field capability of the Monte Carlo code PENELOPE was utilized by providing a subroutine that modeled the actual field produced by the solenoid magnet used in the experimental studies. The magnetic field in our simulation covered the region from the vacuum exit window to the phantom including surrounding air. In a longitudinal nonuniform magnetic field, it is observed that the electron dose can be focused in both the transverse and longitudinal directions. The measured dose profiles of the electron beam are generally reproduced in the Monte Carlo simulations to within a few percent in the region of interest provided that the geometry and the energy of the incident electron beam are accurately known. Comparisons for the photon beam dose profiles with and without the magnetic field are also made. The experimental results are qualitatively reproduced in the simulation. Our simulation shows that the excessive dose at the beam entrance is due to the magnetic field trapping and focusing scattered secondary electrons that were produced in the air by the incident photon beam. The simulations also show that the electron dose profile can be manipulated by the appropriate control of the beam energy together with the strength and displacement of the longitudinal magnetic field.
11. Boltzmann equation and Monte Carlo studies of electron transport in resistive plate chambers
Bošnjaković, D.; Petrović, Z. Lj; White, R. D.; Dujko, S.
2014-10-01
A multi-term theory for solving the Boltzmann equation and a Monte Carlo simulation technique are used to investigate electron transport in Resistive Plate Chambers (RPCs) that are used for timing and triggering purposes in many high energy physics experiments at CERN and elsewhere. Using cross sections for electron scattering in C2H2F4, iso-C4H10 and SF6 as input to our Boltzmann and Monte Carlo codes, we have calculated data for electron transport as a function of reduced electric field E/N in various C2H2F4/iso-C4H10/SF6 gas mixtures used in RPCs in the ALICE, CMS and ATLAS experiments. Emphasis is placed upon the explicit and implicit effects of non-conservative collisions (e.g. electron attachment and/or ionization) on the drift and diffusion. Among many interesting and atypical phenomena induced by the explicit effects of non-conservative collisions, we note the existence of negative differential conductivity (NDC) in the bulk drift velocity component, with no indication of any NDC for the flux component, in the ALICE timing RPC system. We systematically study the origin and mechanisms of such phenomena as well as the possible physical implications which arise from their explicit inclusion into models of RPCs. Spatially resolved electron transport properties are calculated using a Monte Carlo simulation technique in order to understand these phenomena.
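The flux/bulk distinction central to the NDC discussion above amounts to two lines of post-processing on sampled swarm data; a sketch with hypothetical inputs:

```python
import numpy as np

def flux_drift_velocity(vz):
    """Flux component: ensemble-average velocity of the electrons."""
    return float(np.mean(vz))

def bulk_drift_velocity(times, z_centroid):
    """Bulk component: d<z>/dt of the swarm centre of mass. Attachment and
    ionization reshape the swarm, so this can differ from the flux value
    and exhibit NDC even when the flux component does not."""
    return float(np.polyfit(times, z_centroid, 1)[0])
```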
12. A computationally efficient moment-preserving Monte Carlo electron transport method with implementation in Geant4
Dixon, D. A.; Prinja, A. K.; Franke, B. C.
2015-09-01
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
13. Epp - A C++ EGSnrc user code for Monte Carlo simulation of radiation transport
Cui, Congwu; Lippuner, Jonas; Ingleby, Harry R.; Di Valentino, David N. M.; Elbakri, Idris A.
2010-04-01
Easy particle propagation (Epp) is a Monte Carlo simulation EGSnrc user code that we have developed for dose calculation in a voxelized volume and for generating images of an arbitrary geometry irradiated by a particle source. The dose calculation aspect is a reimplementation of the function of DOSXYZnrc, with new features added and some restrictions removed. Epp is designed for x-ray applications, but can be readily extended to trace other kinds of particles. Epp is based on the EGSnrc C++ class library (egspp), which makes modeling particle sources and simulation geometries simpler than in DOSXYZnrc and other BEAM user codes based on the EGSnrc code system. With Epp, geometries can be modeled analytically, or voxelized geometries such as those in DOSXYZnrc can be used. Compared to DOSXYZnrc (slightly modified from the official version to save phase space information of photons leaving the geometry), Epp is at least two times faster. Photon propagation to the image plane is integrated into Epp (other particles are possible with minor extensions to the current code) with an ideal detector defined. When only the resultant images are needed, there is no need to save the particle data. This results in significant savings in data storage space, network load, and time for file I/O. Epp was validated against DOSXYZnrc for imaging and dose calculation by comparing simulation results with the same input. Epp can be used as a Monte Carlo simulation tool for faster imaging and radiation dose applications.
14. Monte Carlo study of the energy and angular dependence of the response of plastic scintillation detectors in photon beams
SciTech Connect
Wang, Lilie L. W.; Klein, David; Beddar, A. Sam
2010-10-15
Purpose: By using Monte Carlo simulations, the authors investigated the energy and angular dependence of the response of plastic scintillation detectors (PSDs) in photon beams. Methods: Three PSDs were modeled in this study: a plastic scintillator (BC-400) and a scintillating fiber (BCF-12), both attached to a plastic-core optical fiber stem, and a plastic scintillator (BC-400) attached to an air-core optical fiber stem with a silica tube coated with silver. The authors then calculated, with low statistical uncertainty, the energy and angular dependences of the PSDs' responses in a water phantom. For energy dependence, the response of the detectors is calculated as the detector dose per unit water dose. The perturbation caused by the optical fiber stem connected to the PSD to guide the optical light to a photodetector was studied in simulations using different optical fiber materials. Results: For the energy dependence of the PSDs in photon beams, the PSDs with plastic-core fiber have excellent energy independence, within about 0.5% at photon energies ranging from 300 keV (monoenergetic) to 18 MV (linac beam). The PSD with an air-core optical fiber with a silica tube also has good energy independence, within 1% over the same photon energy range. For the angular dependence, the relative response of all three modeled PSDs is within 2% for all angles in a 6 MV photon beam. This is also true in a 300 keV monoenergetic photon beam for the PSDs with plastic-core fiber. For the PSD with an air-core fiber with a silica tube in the 300 keV beam, the relative response varies within 1% for most angles, except when the fiber stem points directly at the radiation source, in which case the PSD may over-respond by more than 10%. Conclusions: At the ±1% level, no beam energy correction is necessary for the response of the three PSDs modeled in this study at photon energies ranging from 200 keV (monoenergetic) to 18 MV (linac beam). The PSD would be even closer to water equivalence if there were a silica tube around the sensitive volume. The angular dependence of the response of the three PSDs in a 6 MV photon beam is not of concern at the 2% level.
15. Exponentially-convergent Monte Carlo for the 1-D transport equation
SciTech Connect
Peterson, J. R.; Morel, J. E.; Ragusa, J. C.
2013-07-01
We define a new exponentially-convergent Monte Carlo method for solving the one-speed 1-D slab-geometry transport equation. This method is based upon the use of a linear discontinuous finite-element trial space in space and direction to represent the transport solution. A space-direction h-adaptive algorithm is employed to restore exponential convergence after stagnation occurs due to inadequate trial-space resolution. This method uses jumps in the solution at cell interfaces as an error indicator. Computational results are presented demonstrating the efficacy of the new approach. (authors)
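The jump error indicator mentioned in the abstract can be sketched compactly: in a discontinuous trial space each cell carries its own edge values, and a large mismatch between neighboring cells flags under-resolution. The Python fragment below is a minimal illustration of that idea under assumed data structures, not the authors' implementation.

```python
def refine_by_jumps(left_vals, right_vals, tol=1e-3):
    """Flag cells for h-refinement where the discontinuous solution
    jumps strongly at interfaces (a minimal sketch, not the paper's code).

    left_vals[i], right_vals[i]: trial-space solution at the left/right
    edge of cell i (linear-discontinuous representation).
    Returns indices of cells to split in half.
    """
    n = len(left_vals)
    flagged = []
    for i in range(n):
        jump = 0.0
        if i > 0:                       # interface with left neighbor
            jump = max(jump, abs(left_vals[i] - right_vals[i - 1]))
        if i < n - 1:                   # interface with right neighbor
            jump = max(jump, abs(right_vals[i] - left_vals[i + 1]))
        if jump > tol:
            flagged.append(i)
    return flagged
```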
16. Development of A Monte Carlo Radiation Transport Code System For HEDS: Status Update
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Gabriel, Tony A.; Miller, Thomas M.
2003-01-01
Modifications of the Monte Carlo radiation transport code HETC are underway to extend the code to include transport of energetic heavy ions, such as are found in the galactic cosmic ray spectrum in space. The new HETC code will be available for use in radiation shielding applications associated with missions, such as the proposed manned mission to Mars. In this work the current status of code modification is described. Methods used to develop the required nuclear reaction models, including total, elastic and nuclear breakup processes, and their associated databases are also presented. Finally, plans for future work on the extended HETC code system and for its validation are described.
17. GPU-Accelerated Monte Carlo Electron Transport Methods: Development and Application for Radiation Dose Calculations Using Six GPU cards
Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George
2014-06-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous EnviRonments - is being developed at Rensselaer Polytechnic Institute as a software testbed for emerging heterogeneous high-performance computers that utilize accelerators such as GPUs. This paper presents the preliminary code development and testing involving radiation dose related problems. In particular, the paper discusses the electron transport simulations using the class-II condensed history method. The considered electron energies range from a few hundred keV to 30 MeV. For the photon part, the photoelectric effect, Compton scattering, and pair production were modeled. Voxelized geometry was supported. A serial CPU code was first written in C++. The code was then ported to the GPU using the CUDA C 5.0 standards. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla™ M2090 GPUs. The code was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6×10⁶ electron histories were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively. On-going work continues to test the code for different medical applications such as radiotherapy and brachytherapy.
18. A Monte Carlo tool for combined photon and proton treatment planning verification
Seco, J.; Jiang, H.; Herrup, D.; Kooy, H.; Paganetti, H.
2007-06-01
Photons and protons are usually used independently to treat cancer. However, at MGH patients can be treated with both photons and protons since both modalities are available on site. A combined therapy can be advantageous in cancer therapy due to the skin-sparing ability of photons and the sharp Bragg peak fall-off of protons beyond the tumor. In the present work, we demonstrate how to implement a combined 3D MC toolkit for photon and proton (ph-pr) therapy, which can be used for verification of the treatment plan. The commissioning of a MC system for combined ph-pr involves initially the development of a MC model of both the photon and proton treatment heads. The MC dose tool was evaluated on a head and neck patient treated with combined photon and proton beams. The combined ph-pr dose agreed with measurements in a solid water phantom to within 3%/3 mm. Comparison with the commercial planning system's pencil beam prediction agrees to within 3% (except for air cavities and bone regions).
19. Topological Photonic Quasicrystals: Fractal Topological Spectrum and Protected Transport
Bandres, Miguel A.; Rechtsman, Mikael C.; Segev, Mordechai
2016-01-01
We show that it is possible to have a topological phase in two-dimensional quasicrystals without any magnetic field applied, but instead introducing an artificial gauge field via dynamic modulation. This topological quasicrystal exhibits scatter-free unidirectional edge states that are extended along the system's perimeter, contrary to the states of an ordinary quasicrystal system, which are characterized by power-law decay. We find that the spectrum of this Floquet topological quasicrystal exhibits a rich fractal (self-similar) structure of topological "minigaps," manifesting an entirely new phenomenon: fractal topological systems. These topological minigaps form only when the system size is sufficiently large because their gapless edge states penetrate deep into the bulk. Hence, the topological structure emerges as a function of the system size, contrary to periodic systems where the topological phase can be completely characterized by the unit cell. We demonstrate the existence of this topological phase both by using a topological index (Bott index) and by studying the unidirectional transport of the gapless edge states and its robustness in the presence of defects. Our specific model is a Penrose lattice of helical optical waveguides—a photonic Floquet quasicrystal; however, we expect this new topological quasicrystal phase to be universal.
20. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
SciTech Connect
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-05-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
1. A bone composition model for Monte Carlo x-ray transport simulations
SciTech Connect
Zhou, Hu; Keall, Paul J.; Graves, Edward E.
2009-03-15
In the megavoltage energy range, although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of the other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo dose calculations performed using this model and the single-bone model were compared, demonstrating that at kilovoltage energies the discrepancy could be more than 100% in bone dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated from the published compositions to within 2.2% for the kV spectra and 1.5% for the MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones.
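The core of the proposed model, fitting calcium and phosphorus mass fractions as polynomial functions of bone density, can be reproduced in a few lines. In the sketch below the density/composition pairs are placeholders standing in for the literature values the authors used, and the polynomial order is an assumption.

```python
import numpy as np

# Placeholder (density, calcium mass fraction) pairs standing in for the
# literature bone compositions used by the authors; values are illustrative.
rho  = np.array([1.18, 1.33, 1.46, 1.61, 1.92])   # g/cm^3
w_ca = np.array([0.05, 0.10, 0.14, 0.18, 0.25])   # Ca mass fraction

# Fit the calcium fraction as a polynomial in bone density, as the paper
# proposes (quadratic here; the actual order is the authors' choice).
coeff = np.polyfit(rho, w_ca, 2)

def calcium_fraction(density):
    """Analytical bone composition: Ca mass fraction vs. density."""
    return np.polyval(coeff, density)

print(calcium_fraction(1.5))  # model Ca fraction for a 1.5 g/cm^3 bone
```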
2. A bone composition model for Monte Carlo x-ray transport simulations.
PubMed
Zhou, Hu; Keall, Paul J; Graves, Edward E
2009-03-01
In the megavoltage energy range, although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of the other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo dose calculations performed using this model and the single-bone model were compared, demonstrating that at kilovoltage energies the discrepancy could be more than 100% in bone dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated from the published compositions to within 2.2% for the kV spectra and 1.5% for the MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones. PMID:19378761
3. High-speed evaluation of track-structure Monte Carlo electron transport simulations.
PubMed
Pasciak, A S; Ford, J R
2008-10-01
There are many instances where Monte Carlo simulation using the track-structure method for electron transport is necessary for the accurate analytical computation and estimation of dose and other tally data. Because of the large electron interaction cross-sections and highly anisotropic scattering behavior, the track-structure method requires an enormous amount of computation time. For microdosimetry, radiation biology and other applications involving small site and tally sizes, low electron energies or high-Z/low-Z material interfaces where the track-structure method is preferred, a computational device called a field-programmable gate array (FPGA) is capable of executing track-structure Monte Carlo electron-transport simulations as fast as or faster than a standard computer can complete an identical simulation using the condensed history (CH) technique. In this paper, data from FPGA-based track-structure electron-transport computations are presented for five test cases, from simple slab-style geometries to radiation biology applications involving electrons incident on endosteal bone surface cells. For the most complex test case presented, an FPGA is capable of evaluating track-structure electron-transport problems more than 500 times faster than a standard computer can perform the same track-structure simulation and with comparable accuracy. PMID:18780958
4. Ion beam transport in tissue-like media using the Monte Carlo code SHIELD-HIT.
PubMed
Gudowska, Irena; Sobolevsky, Nikolai; Andreo, Pedro; Belkić, Dževad; Brahme, Anders
2004-05-21
The development of the Monte Carlo code SHIELD-HIT (heavy ion transport) for the simulation of the transport of protons and heavier ions in tissue-like media is described. The code SHIELD-HIT, a spin-off of SHIELD (available as RSICC CCC-667), extends the transport of hadron cascades from standard targets to that of ions in arbitrary tissue-like materials, taking into account ionization energy-loss straggling and multiple Coulomb scattering effects. The consistency of the results obtained with SHIELD-HIT has been verified against experimental data and other existing Monte Carlo codes (PTRAN, PETRA), as well as with deterministic models for ion transport, comparing depth distributions of energy deposition by protons, 12C and 20Ne ions impinging on water. The SHIELD-HIT code yields distributions consistent with a proper treatment of nuclear inelastic collisions. Energy depositions up to and well beyond the Bragg peak due to nuclear fragmentations are well predicted. Satisfactory agreement is also found with experimental determinations of the number of fragments of a given type, as a function of depth in water, produced by 12C and 14N ions of 670 MeV u⁻¹, although less favourable agreement is observed for heavier projectiles such as 16O ions of the same energy. The calculated neutron spectra differential in energy and angle produced in a mimic of a Martian rock by irradiation with 12C ions of 290 MeV u⁻¹ also show good agreement with experimental data. It is concluded that a careful analysis of stopping power data for different tissues is necessary for radiation therapy applications, since an incorrect estimation of the position of the Bragg peak might lead to a significant deviation from the prescribed dose in small target volumes. The results presented in this study indicate the usefulness of the SHIELD-HIT code for Monte Carlo simulations in the field of light ion radiation therapy. PMID:15214534
5. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
6. Surface dose reduction from bone interface in kilovoltage X-ray radiation therapy: a Monte Carlo study of photon spectra.
PubMed
Chow, James C L; Owrangi, Amir M
2012-01-01
This study evaluated the dosimetric impact of surface dose reduction due to the loss of backscatter from the bone interface in kilovoltage (kV) X-ray radiation therapy. Monte Carlo simulation was carried out using the EGSnrc code. An inhomogeneous phantom containing a thin water layer (0.5-5 mm) on top of a bone (thickness = 1 cm) was irradiated by a clinical 105 kVp photon beam produced by a Gulmay D3225 X-ray machine. Field sizes of 2, 5, and 10 cm diameter and a source-to-surface distance of 20 cm were used. Surface doses for the different phantom configurations were calculated using the DOSXYZnrc code. Photon energy spectra at the phantom surface and bone were determined from the phase-space files at the particle scoring planes, which included the multiple crossers. For comparison, all Monte Carlo simulations were repeated in a phantom with the bone replaced by water. Surface dose reduction was found when a bone was underneath the water layer. When the water thickness was equal to 1 mm for the circular field of 5 cm diameter, a surface dose reduction of 6.3% was found. The dose reduction decreased to 4.7% and 3.4% when the water thickness increased to 3 and 5 mm, respectively. This shows that the impact of the surface dose uncertainty decreased as the water thickness over the bone increased. This result was supported by the decrease in the relative intensity of lower-energy photons in the energy spectrum when the bone was present beneath the water layer, compared to without the bone. We conclude that a surface dose reduction of 7.8% to 1.1% occurs as the water thickness increases from 0.5 to 5 mm for circular fields with diameters ranging from 2 to 10 cm. This decrease in surface dose results in an overestimation of the prescribed dose at the patient's surface, and might be a concern when using kV photon beams to treat skin tumors in sites such as the forehead, chest wall, and kneecap. PMID:22955657
7. Monte Carlo simulation of the operational quantities at the realistic mixed neutron-photon radiation fields CANEL and SIGMA.
PubMed
Lacoste, V; Gressier, V
2007-01-01
The Institute for Radiological Protection and Nuclear Safety owns two facilities producing realistic mixed neutron-photon radiation fields, CANEL, an accelerator driven moderator modular device, and SIGMA, a graphite moderated americium-beryllium assembly. These fields are representative of some of those encountered at nuclear workplaces, and the corresponding facilities are designed and used for calibration of various instruments, such as survey meters, personal dosimeters or spectrometric devices. In the framework of the European project EVIDOS, irradiations of personal dosimeters were performed at CANEL and SIGMA. Monte Carlo calculations were performed to estimate the reference values of the personal dose equivalent at both facilities. The Hp(10) values were calculated for three different angular positions, 0 degrees, 45 degrees and 75 degrees, of an ICRU phantom located at the position of irradiation. PMID:17578872
8. On Monte Carlo modeling of megavoltage photon beams: A revisited study on the sensitivity of beam parameters
SciTech Connect
Chibani, Omar; Moftah, Belal; Ma, C.-M. Charlie
2011-01-15
Purpose: To commission Monte Carlo beam models for five Varian megavoltage photon beams (4, 6, 10, 15, and 18 MV). The goal is to closely match measured dose distributions in water for a wide range of field sizes (from 2×2 to 35×35 cm²). The second objective is to reinvestigate the sensitivity of the calculated dose distributions to variations in the primary electron beam parameters. Methods: The GEPTS Monte Carlo code is used for photon beam simulations and dose calculations. The linear accelerator geometric models are based on (i) manufacturer specifications, (ii) corrections made by Chibani and Ma ["On the discrepancies between Monte Carlo dose calculations and measurements for the 18 MV Varian photon beam," Med. Phys. 34, 1206-1216 (2007)], and (iii) more recent drawings. Measurements were performed using pinpoint and Farmer ionization chambers, depending on the field size. Phase space calculations for small fields were performed with and without angle-based photon splitting. In addition to the three commonly used primary electron beam parameters (the mean energy E_av, the energy spectrum broadening FWHM, and the beam radius R), the angular divergence (θ) of the primary electrons is also considered. Results: The calculated and measured dose distributions agreed to within 1% local difference at any depth beyond 1 cm for different energies and for field sizes varying from 2×2 to 35×35 cm². In the penumbra regions, the distance to agreement is better than 0.5 mm, except for 15 MV (0.4-1 mm). The measured and calculated output factors agreed to within 1.2%. The 6, 10, and 18 MV beam models use θ = 0°, while the 4 and 15 MV beam models require θ = 0.5° and 0.6°, respectively. The parameter sensitivity study shows that varying the beam parameters around the solution can lead to 5% differences with measurements for small (e.g., 2×2 cm²) and large (e.g., 35×35 cm²) fields, while perfect agreement is maintained for the 10×10 cm² field. The influence of R on the central-axis depth dose and the strong influence of θ on the lateral dose profiles are demonstrated. Conclusions: Dose distributions for very small and very large fields proved to be more sensitive to variations in E_av, R, and θ in comparison with the 10×10 cm² field. Monte Carlo beam models need to be validated for a wide range of field sizes, including small field sizes (e.g., 2×2 cm²).
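To make the four source parameters concrete, the sketch below samples one primary electron from a source parameterized by mean energy E_av, spectrum width FWHM, beam radius R, and angular divergence θ. It is a hypothetical illustration of the parameterization, not the GEPTS source model, and the default values are placeholders.

```python
import numpy as np

rng = np.random.default_rng()

def sample_primary_electron(E_av=6.2, fwhm=1.0, R=0.1, theta_deg=0.5):
    """Sample one primary electron hitting the bremsstrahlung target.

    E_av      -- mean energy (MeV); fwhm -- Gaussian spectrum width (MeV)
    R         -- beam radius (cm), sampled uniformly over the spot
    theta_deg -- angular divergence (degrees) about the beam axis
    All default values are placeholders, not commissioned numbers.
    """
    E = rng.normal(E_av, fwhm / 2.3548)           # FWHM -> sigma
    r, phi = R * np.sqrt(rng.random()), 2 * np.pi * rng.random()
    x, y = r * np.cos(phi), r * np.sin(phi)       # uniform over a disk
    t = np.radians(theta_deg) * rng.random()      # simple divergence model
    psi = 2 * np.pi * rng.random()
    u, v, w = np.sin(t) * np.cos(psi), np.sin(t) * np.sin(psi), np.cos(t)
    return E, (x, y, 0.0), (u, v, w)
```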
9. SAF values for internal photon emitters calculated for the RPI-P pregnant-female models using Monte Carlo methods
SciTech Connect
Shi, C. Y.; Xu, X. George; Stabin, Michael G.
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
10. A 3D photon superposition/convolution algorithm and its foundation on results of Monte Carlo calculations.
PubMed
Ulmer, W; Pyyry, J; Kaissl, W
2005-04-21
Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using the Monte Carlo codes GEANT-Fluka (versions 95, 98, 2002) and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab geometry was used as a phantom model to test computed results by measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes ≤ 5.5 cm² and densities ≤ 0.25 g cm⁻³, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung or cavities. PMID:15815095
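A minimal numerical illustration of the triple-Gaussian pencil-beam idea: the lateral kernel is a weighted sum of three Gaussians, and the dose in one plane is the incident fluence convolved with that kernel. Weights, widths, and grid values below are assumptions for demonstration, not the paper's fitted parameters.

```python
import numpy as np

def triple_gaussian_kernel(x, y, c=(0.6, 0.3, 0.1), sigma=(0.2, 0.6, 2.0)):
    """Lateral pencil-beam kernel as a sum of three Gaussians.
    Weights c and widths sigma (cm) are illustrative placeholders."""
    r2 = x**2 + y**2
    return sum(ci / (2 * np.pi * si**2) * np.exp(-r2 / (2 * si**2))
               for ci, si in zip(c, sigma))

# Superpose pencil beams over a field: convolve the incident fluence
# with the kernel via FFT (one depth plane of the 3D algorithm).
n, d = 128, 0.1                                   # grid points, spacing (cm)
ax = (np.arange(n) - n // 2) * d
X, Y = np.meshgrid(ax, ax)
fluence = ((np.abs(X) < 2) & (np.abs(Y) < 2)).astype(float)  # 4x4 cm field
K = triple_gaussian_kernel(X, Y)
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence)
                            * np.fft.fft2(np.fft.ifftshift(K)))) * d * d
```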
11. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.
PubMed
Jabbari, Keyvan; Seuntjens, Jan
2014-07-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for the generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code was evaluated against MCNPX as a reference code. While analytical pencil-beam transport shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10⁶ particles on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
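The pre-generated track idea can be sketched as follows: a track is computed once in water as a list of (step length, energy deposit) pairs, then replayed through the patient geometry with step lengths rescaled by a material-to-water stopping-power ratio. This 1D Python fragment is a conceptual sketch under those assumptions, not the authors' code.

```python
def replay_track(track, z0, material_of, deposit, s_ratio):
    """Replay one pre-generated water track along the beam axis.

    track       -- (step_cm, edep_MeV) pairs generated once in water
    material_of -- callable mapping depth z to a material index
    deposit     -- callable scoring energy edep at depth z
    s_ratio[m]  -- linear stopping-power ratio, material m / water
    The water step is compressed or stretched by the local ratio and the
    same energy is deposited -- the core of the track-repeating method.
    Geometry is reduced to 1D for clarity.
    """
    z = z0
    for step, edep in track:
        m = material_of(z)
        z += step / s_ratio[m]   # shorter geometric step in denser media
        deposit(z, edep)         # deposit the pre-computed energy loss
```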
12. MONTE CARLO SIMULATION MODEL OF ENERGETIC PROTON TRANSPORT THROUGH SELF-GENERATED ALFVEN WAVES
SciTech Connect
Afanasiev, A.; Vainio, R.
2013-08-15
A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfvén waves is presented. The key point of the model is that, unlike previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfvén waves, the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of the isotropic scattering on which the previous Monte Carlo models were based. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfvén waves reveal that anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., it can affect the scattering mean free path of the accelerated particles in, and the size of, the foreshock region.
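For orientation, the "full form" of the resonance condition referred to above is, in its standard quasilinear statement (written here from textbook form, not quoted from the paper), for a proton of speed v, pitch-angle cosine μ and Lorentz factor γ interacting with a parallel-propagating Alfvén wave of frequency ω and parallel wavenumber k∥:

```latex
\omega - k_\parallel v \mu = \frac{n\,\Omega}{\gamma}, \qquad n = \pm 1,
```

so that with the dispersion relation $\omega = k_\parallel V_\mathrm{A}$ the resonant wavenumber $k_\parallel = n\Omega / [\gamma\,(V_\mathrm{A} - v\mu)]$ retains an explicit dependence on μ, which is what enables anisotropic scattering.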
13. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX
PubMed Central
Jabbari, Keyvan; Seuntjens, Jan
2014-01-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for the generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code was evaluated against MCNPX as a reference code. While analytical pencil-beam transport shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10⁶ particles on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
14. Analysis of atmospheric gamma-ray flashes detected in near space with allowance for the transport of photons in the atmosphere
Babich, L. P.; Donskoy, E. N.; Kutsyk, I. M.
2008-07-01
Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.
15. Analysis of atmospheric gamma-ray flashes detected in near space with allowance for the transport of photons in the atmosphere
SciTech Connect
Babich, L. P.; Donskoy, E. N.; Kutsyk, I. M.
2008-07-15
Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.
16. New nuclear data for high-energy all-particle Monte Carlo transport
SciTech Connect
Cox, L.J.; Chadwick, M.B.; Resler, D.A.
1994-06-01
We are extending the LLNL nuclear data libraries to 250 MeV for neutron and proton interactions with biologically important nuclei, i.e., H, C, N, O, F, P, and Ca. Because of the large number of reaction channels that open with increasing energy, the data are generated in particle production cross section format with energy-angle correlated distributions for the outgoing particles in the laboratory frame of reference. The new Production Cross Section data Library (PCSL) will be used in PEREGRINE -- the new all-particle Monte Carlo transport code being developed at LLNL for dose calculation in radiation therapy planning.
17. Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport
Dureau, David; Poëtte, Gaël
2014-06-01
This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, using both an Adaptive Mesh Refinement (AMR) algorithm and a Monte-Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared-memory and distributed-memory ones such as Domain Replication and Domain Decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousand cores of the petaflop supercomputer Tera100.
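Domain replication, the simplest of the strategies named above, duplicates the full problem on every rank: each rank runs an independent stream of histories and only the tallies are combined at the end. The toy mpi4py sketch below illustrates that pattern; the tally and history routine are illustrative stand-ins, not the paper's code.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

def run_histories(n, seed):
    """Stand-in for a per-rank Monte Carlo batch: returns a local tally."""
    rng = np.random.default_rng(seed)
    tally = np.zeros(10)
    for _ in range(n):
        cell = rng.integers(0, 10)     # toy 'collision' location
        tally[cell] += rng.random()    # toy scored quantity
    return tally

# Domain replication: every rank owns the full geometry, runs an
# independent history stream, and the tallies are summed globally.
local = run_histories(100_000, seed=comm.Get_rank())
total = comm.allreduce(local, op=MPI.SUM)
if comm.Get_rank() == 0:
    print(total / (100_000 * comm.Get_size()))   # per-history mean tally
```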
18. Monte Carlo evaluation of electron transport in heterojunction bipolar transistor base structures
Maziar, C. M.; Klausmeier-Brown, M. E.; Bandyopadhyay, S.; Lundstrom, M. S.; Datta, S.
1986-07-01
Electron transport through base structures of Al(x)Ga(1-x)As heterojunction bipolar transistors is evaluated by Monte Carlo simulation. Simulation results demonstrate the effectiveness of both ballistic launching ramps and graded bases for reducing base transit time. Both techniques are limited, however, in their ability to maintain short transit times across the wide bases that are desirable for reduction of base resistance. Simulation results demonstrate that neither technique is capable of maintaining a 1-ps transit time across a 0.25-micron base. The physical mechanisms responsible for limiting the performance of each structure are identified and a promising hybrid structure is described.
19. Evaluation of PENFAST--a fast Monte Carlo code for dose calculations in photon and electron radiotherapy treatment planning.
PubMed
Habib, B; Poumarede, B; Tola, F; Barthe, J
2010-01-01
The aim of the present study is to demonstrate the potential of accelerated dose calculations using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons) and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within ±1% or 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree to within 1% or 1 mm with those produced by PENELOPE, and to within 2% or 2 mm with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between the two MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE. PMID:19342258
20. Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.
PubMed
Piels, Molly; Zibar, Darko
2016-02-01
The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters measured using this data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation. PMID:26906783
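As an illustration of the approach, the sketch below runs a random-walk Metropolis sampler over the parameters of a deliberately simplified series-RC equivalent circuit; the circuit model, noise level, and synthetic "measurement" are all placeholders, not the devices or models of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_model(f, Rs, C, Z0=50.0):
    """Reflection coefficient of a hypothetical series-RC equivalent circuit."""
    Z = Rs + 1.0 / (2j * np.pi * f * C)
    return (Z - Z0) / (Z + Z0)

# Synthetic 'measurement' (placeholder, not data from the paper).
f = np.linspace(0.1e9, 20e9, 101)
meas = gamma_model(f, Rs=30.0, C=0.2e-12) \
       + 0.01 * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))

def log_post(theta, sigma=0.01):
    """Gaussian likelihood on the complex residual, flat positive prior."""
    Rs, C = theta
    if Rs <= 0 or C <= 0:
        return -np.inf
    r = meas - gamma_model(f, Rs, C)
    return -np.sum(np.abs(r) ** 2) / (2 * sigma ** 2)

# Random-walk Metropolis over (Rs, C).
theta, lp = np.array([20.0, 0.1e-12]), -np.inf
chain = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.5, 0.005e-12])
    lp_new = log_post(prop)
    if np.log(rng.random()) < lp_new - lp:   # accept/reject step
        theta, lp = prop, lp_new
    chain.append(theta)
print(np.mean(chain[5000:], axis=0))         # posterior mean after burn-in
```

Unlike a single nonlinear least-squares fit, the retained chain directly exposes the (generally non-Gaussian) parameter uncertainty the abstract is concerned with.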
1. 3D imaging using combined neutron-photon fan-beam tomography: A Monte Carlo study.
PubMed
Hartman, J; Yazdanpanah, A Pour; Barzilov, A; Regentova, E
2016-05-01
The application of combined neutron-photon tomography to 3D imaging is examined using MCNP5 simulations for objects of simple shapes and different materials. Two-dimensional transmission projections were simulated for fan-beam scans using 2.5 MeV deuterium-deuterium and 14 MeV deuterium-tritium neutron sources, and high-energy X-ray sources of 1 MeV, 6 MeV and 9 MeV. Photons enable assessment of electron density and the related mass density, while neutrons aid in estimating the product of density and the material-specific microscopic cross section; the ratio between the two provides the composition, while CT allows shape evaluation. Using the developed imaging technique, objects and their material compositions have been visualized. PMID:26953978
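The density-cancellation argument in the abstract can be made concrete: both transmissions are converted to linear attenuation coefficients via Beer-Lambert, and their ratio, which is roughly independent of mass density, indexes the material. A toy Python sketch with placeholder lookup values follows.

```python
import numpy as np

def attenuation(T, thickness):
    """Convert a transmission measurement to a linear attenuation
    coefficient via Beer-Lambert: T = exp(-mu * t)."""
    return -np.log(T) / thickness

# Illustrative lookup: the neutron-to-photon attenuation ratio separates
# materials of similar density (values are placeholders, not from the paper).
known_ratios = {"water": 3.6, "aluminum": 0.6, "steel": 0.9}

def classify(T_photon, T_neutron, thickness):
    mu_p = attenuation(T_photon, thickness)    # ~ density (electron density)
    mu_n = attenuation(T_neutron, thickness)   # ~ density x microscopic xs
    ratio = mu_n / mu_p                        # density largely cancels
    return min(known_ratios, key=lambda m: abs(known_ratios[m] - ratio))
```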
2. Coupling 3D Monte Carlo light transport in optically heterogeneous tissues to photoacoustic signal generation.
PubMed
Jacques, Steven L
2014-12-01
The generation of photoacoustic signals for imaging objects embedded within tissues depends on how well light can penetrate to and deposit energy within an optically absorbing object, such as a blood vessel. This report couples a 3D Monte Carlo simulation of light transport to stress wave generation to predict the acoustic signals received by a detector at the tissue surface. The Monte Carlo simulation allows modeling of optically heterogeneous tissues, and a simple MATLAB acoustic algorithm predicts the signals reaching a surface detector. An example simulation considers a skin with a pigmented epidermis, a dermis with a background blood perfusion, and a 500-µm-diameter blood vessel centered at a 1-mm depth in the skin. The simulation yields the acoustic signals received by a surface detector, generated by a pulsed 532-nm laser exposure before and after inserting the blood vessel. A MATLAB version of the acoustic algorithm and a link to the 3D Monte Carlo website are provided. PMID:25426426
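The coupling step amounts to converting the Monte Carlo absorbed-energy map into an initial pressure distribution and propagating it to the detector. The Python fragment below is a heavily simplified delay-and-sum stand-in for the paper's MATLAB acoustic algorithm, with an assumed Grüneisen parameter and geometry.

```python
import numpy as np

def initial_pressure(absorbed_energy, grueneisen=0.2):
    """p0 = Grueneisen parameter x absorbed energy density; the absorbed
    energy map would come from the 3D Monte Carlo light-transport step."""
    return grueneisen * absorbed_energy

def detector_signal(p0, voxel_pos, det_pos, c=1.5e5, dt=1e-8, nt=400):
    """Toy delay-and-sum acoustic model: each source voxel contributes its
    initial pressure at its time of flight to the surface detector.

    voxel_pos -- (N, 3) voxel centers in cm; det_pos -- (3,) detector (cm)
    c -- speed of sound (cm/s); dt, nt -- time bin width (s) and count
    """
    sig = np.zeros(nt)
    r = np.linalg.norm(voxel_pos - det_pos, axis=1)   # distances in cm
    for pi, ri in zip(p0, r):
        k = int(ri / c / dt)                          # time-of-flight bin
        if k < nt:
            sig[k] += pi / max(ri, 1e-6)              # 1/r spherical spreading
    return sig
```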
3. A direction-selective flattening filter for clinical photon beams. Monte Carlo evaluation of a new concept
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-07-01
A new concept for the design of flattening filters applied in the generation of 6 and 15 MV photon beams by clinical linear accelerators is evaluated by Monte Carlo simulation. The beam head of the Siemens Primus accelerator has been taken as the starting point for the study of the conceived beam head modifications. The direction-selective filter (DSF) system developed in this work is midway between the classical flattening filter (FF), by which homogeneous transversal dose profiles have been established, and the flattening filter-free (FFF) design, by which advantages such as increased dose rate and reduced production of leakage photons and photoneutrons per Gy in the irradiated region have been achieved, whereas dose profile flatness was abandoned. The DSF concept is based on the selective attenuation of bremsstrahlung photons depending on their direction of emission from the bremsstrahlung target, accomplished by means of newly designed small conical filters arranged close to the target. This results in the capture of large-angle scattered Compton photons from the filter in the primary collimator. Beam flatness has been obtained for any field cross section that does not exceed a circle of 15 cm diameter at 100 cm focal distance, such as 10 × 10 cm², 4 × 14.5 cm² or less. This flatness offers simplicity of dosimetric verification, online controls and plausibility estimates of the dose to the target volume. The concept can be utilized when the application of small- and medium-sized homogeneous fields is sufficient, e.g. in the treatment of prostate, brain, salivary gland, larynx and pharynx as well as pediatric tumors, and for cranial or extracranial stereotactic treatments. Significant dose rate enhancement has been achieved compared with the FF system, with enhancement factors of 1.67 (DSF) and 2.08 (FFF) for 6 MV, and 2.54 (DSF) and 3.96 (FFF) for 15 MV. Shortening the delivery time per fraction matters with regard to workflow in a radiotherapy department, patient comfort, reduction of errors due to patient movement, and a slight, probably just noticeable improvement of the treatment outcome for radiobiological reasons. In comparison with the FF system, the number of head leakage photons per Gy in the irradiated region has been reduced at 15 MV by factors of 1/2.54 (DSF) and 1/3.96 (FFF), and the source strength of photoneutrons was reduced by factors of 1/2.81 (DSF) and 1/3.49 (FFF).
4. A deterministic electron, photon, proton and heavy ion transport suite for the study of the Jovian moon Europa
Badavi, Francis F.; Blattnig, Steve R.; Atwell, William; Nealy, John E.; Norman, Ryan B.
2011-02-01
A Langley Research Center (LaRC)-developed deterministic suite of radiation transport codes describing the propagation of electrons, photons, protons and heavy ions in condensed media is used to simulate the exposure from the spectral distribution of the aforementioned particles in the Jovian radiation environment. Based on the measurements by the Galileo probe (1995-2003) heavy ion counter (HIC), the choice of trapped heavy ions is limited to carbon, oxygen and sulfur (COS). The deterministic particle transport suite consists of a coupled electron-photon algorithm (CEPTRN) and a coupled light/heavy ion algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide the spacecraft design community with a means to rapidly perform the numerous repetitive calculations essential for electron, photon, proton and heavy ion exposure assessment in a complex space structure. In this paper, the reference radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron and proton spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the JPL-provided Europa mission fluence spectrum is used to produce the corresponding depth-dose curve in silicon behind a default aluminum shield of 100 mils (0.7 g/cm²). The transport suite can also accept a geometry-describing ray-traced thickness file from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point within the interior of the vehicle. In that regard, using a low-fidelity CAD model of the Galileo probe generated by the authors, the transport suite was verified against Monte Carlo (MC) simulation for orbits JOI-J35 of the Galileo probe extended mission. For the upcoming EJSM mission with an expected launch date of 2020, the transport suite is used to compute the depth-dose profile for the traditional aluminum-silicon shield-target combination, as well as to simulate the shielding response of a high atomic number (Z) material such as tantalum (Ta). Finally, a shield optimization algorithm is discussed which can guide instrument designers and fabrication personnel in the selection and analysis of graded-Z shields.
5. Single photon transport along a one-dimensional waveguide with a side manipulated cavity QED system.
PubMed
Yan, Cong-Hua; Wei, Lian-Fu
2015-04-20
An external mirror coupling to a cavity with a two-level atom inside is put forward to control the photon transport along a one-dimensional waveguide. Using a full quantum theory of photon transport in real space, it is shown that the Rabi splittings of the photonic transmission spectra can be controlled by the cavity-mirror couplings; the splittings could still be observed even when the cavity-atom system works in the weak coupling regime, and the transmission probability of the resonant photon can be modulated from 0 to 100%. Additionally, our numerical results show that the appearance of Fano resonance is related to the strengths of the cavity-mirror coupling and the dissipations of the system. An experimental demonstration of the proposal with the current photonic crystal waveguide technique is suggested. PMID:25969078
6. Output correction factors for nine small field detectors in 6 MV radiation therapy photon beams: A PENELOPE Monte Carlo study
SciTech Connect
Benmakhlouf, Hamza; Sempau, Josep; Andreo, Pedro
2014-04-15
Purpose: To determine detector-specific output correction factors, $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$, in 6 MV small photon beams for air and liquid ionization chambers, silicon diodes, and diamond detectors from two manufacturers. Methods: Field output factors, defined according to the international formalism published by Alfonso et al. [Med. Phys. 35, 5179-5186 (2008)], relate the dosimetry of small photon beams to that of the machine-specific reference field; they include a correction to measured ratios of detector readings, conventionally used as output factors in broad beams. Output correction factors were calculated with the PENELOPE Monte Carlo (MC) system with a statistical uncertainty (type A) of 0.15% or lower. The geometries of the detectors were coded using blueprints provided by the manufacturers, and phase-space files for field sizes between 0.5 × 0.5 cm² and 10 × 10 cm² from a Varian Clinac iX 6 MV linac were used as sources. The output correction factors were determined by scoring the absorbed dose within a detector and to a small water volume in the absence of the detector, both at a depth of 10 cm, for each small field and for the reference beam of 10 × 10 cm². Results: The Monte Carlo calculated output correction factors for the liquid ionization chamber and the diamond detector were within about ±1% of unity even for the smallest field sizes. Corrections were found to be significant for small air ionization chambers due to their cavity dimensions, as expected. The correction factors for silicon diodes varied with the detector type (shielded or unshielded), confirming the findings of other authors; different corrections for the detectors from the two manufacturers were obtained. The differences in the calculated factors for the various detectors were analyzed thoroughly and, whenever possible, the results were compared to published data, often calculated for different accelerators and using the EGSnrc MC system. The differences were used to estimate a type-B uncertainty for the correction factors. Together with the type-A uncertainty from the Monte Carlo calculations, an estimate of the combined standard uncertainty was made and assigned to the mean correction factors from the various estimates. Conclusions: The present work provides a consistent and specific set of data for the output correction factors of a broad set of detectors in a Varian Clinac iX 6 MV accelerator and contributes to improving the understanding of the physics of small photon beams. The correction factors cannot in general be neglected for any detector and, as expected, their magnitude increases with decreasing field size. Due to the reduced number of clinical accelerator types currently available, it is suggested that detector output correction factors be given specifically for linac models and field sizes, rather than for a beam quality specifier that necessarily varies with accelerator type and field size due to the different electron spot dimensions and photon collimation systems used by each accelerator model.
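For reference, the correction factor named above is defined in the Alfonso et al. formalism as a double ratio of absorbed dose to water D_w and detector reading M in the clinical field f_clin and the machine-specific reference field f_msr (beam qualities Q_clin, Q_msr):

```latex
k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}
  = \frac{ D_{w,Q_\mathrm{clin}}^{f_\mathrm{clin}} \,/\, M_{Q_\mathrm{clin}}^{f_\mathrm{clin}} }
         { D_{w,Q_\mathrm{msr}}^{f_\mathrm{msr}} \,/\, M_{Q_\mathrm{msr}}^{f_\mathrm{msr}} }
```

i.e., it converts a measured ratio of detector readings into a true field output factor.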
7. Unified single-photon and single-electron counting statistics: From cavity QED to electron transport
SciTech Connect
Lambert, Neill; Chen, Yueh-Nan; Nori, Franco
2010-12-15
A key ingredient of cavity QED is the coupling between the discrete energy levels of an atom and photons in a single-mode cavity. The addition of periodic ultrashort laser pulses allows one to use such a system as a source of single photons, a vital ingredient in quantum information and optical computing schemes. Here we analyze and time-adjust the photon-counting statistics of such a single-photon source and show that the photon statistics can be described by a simple transport-like nonequilibrium model. We then show that there is a one-to-one correspondence of this model to that of nonequilibrium transport of electrons through a double quantum dot nanostructure, unifying the fields of photon-counting statistics and electron-transport statistics. This correspondence empowers us to adapt several tools previously used for detecting quantum behavior in electron-transport systems (e.g., super-Poissonian shot noise and an extension of the Leggett-Garg inequality) to single-photon-source experiments.
8. A Deterministic Electron, Photon, Proton and Heavy Ion Radiation Transport Suite for the Study of the Jovian System
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William
2011-01-01
A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describe the transport of electrons, photons, protons, and heavy ions in condensed media, is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the 105-day at-Europa mission fluence energy spectra provided by JPL are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (0.7 g/cm²). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparing with Monte Carlo (MC) simulations for orbits JOI-J35 of the Galileo extended mission (1996-2001). For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute the traditional aluminum-silicon dose-depth calculation as a standard shield-target combination output, as well as the shielding response of high charge (Z) shields such as tantalum (Ta). Finally, a shield optimization algorithm is used to guide the instrument designer in the choice and analysis of graded-Z shields.
9. Dynamic Monte-Carlo modeling of hydrogen isotope reactive diffusive transport in porous graphite
Schneider, R.; Rai, A.; Mutzke, A.; Warrier, M.; Salonen, E.; Nordlund, K.
2007-08-01
An equal mixture of deuterium and tritium will be the fuel used in a fusion reactor. It is important to study the recycling and mixing of these hydrogen isotopes in graphite from several points of view: (i) the impact on the ratio of deuterium to tritium in a reactor, (ii) the continued use of graphite as a first wall and divertor material, and (iii) the reaction with carbon atoms and the transport of hydrocarbons, which provide insight into chemical erosion. Dynamic Monte-Carlo techniques are used to study the reactive-diffusive transport of hydrogen isotopes and interstitial carbon atoms in a 3D porous graphite structure irradiated with hydrogen and deuterium, and the results are compared with published experimental results for hydrogen re-emission and isotope exchange.
10. Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC
SciTech Connect
Sun, C. C.; You, A. H.; Wong, E. K.
2010-07-07
The Monte Carlo (MC) simulation of electron transport properties in the high electric field region of 4H- and 6H-SiC is presented. The MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon scattering, optical phonon scattering and acoustic phonon scattering, are evaluated. The electron drift velocity, energy and free flight time are simulated as functions of the applied electric field at an impurity concentration of 1 × 10¹⁸ cm⁻³ at room temperature. The simulated dependence of drift velocity on electric field is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows the complete electron transport properties of 4H- and 6H-SiC.
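Monte Carlo codes of this kind typically sample free-flight times with the self-scattering (constant-Γ rejection) technique, which the sketch below illustrates; the bounding rate and the form of total_rate are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng()

def free_flight(energy, total_rate, gamma_max=1e14):
    """Sample a free-flight time with the self-scattering (rejection)
    technique used in semiconductor Monte Carlo transport.

    total_rate(E) -- total real scattering rate at energy E (1/s)
    gamma_max     -- constant bounding rate, Gamma >= total_rate(E)
    Returns (flight_time, is_real_scattering); False means a
    self-scattering event that leaves the electron state unchanged.
    """
    t = -np.log(rng.random()) / gamma_max     # exponential flight time
    real = rng.random() < total_rate(energy) / gamma_max
    return t, real
```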
11. Monte Carlo simulations of electron transport for electron beam-induced deposition of nanostructures
Salvat-Pujol, Francesc; Jeschke, Harald O.; Valentí, Roser
2013-03-01
Tungsten hexacarbonyl, W(CO)6, is a particularly interesting precursor molecule for electron beam-induced deposition of nanoparticles, since it yields deposits whose electronic properties can be tuned from metallic to insulating. However, the growth of tungsten nanostructures poses experimental difficulties: the metal content of the nanostructure is variable. Furthermore, fluctuations in the tungsten content of the deposits seem to trigger the growth of the nanostructure. Monte Carlo simulations of electron transport have been carried out with the radiation-transport code PENELOPE in order to study the charge and energy deposition of the electron beam in the deposit and in the substrate. These simulations allow us to examine the conditions under which nanostructure growth takes place and to highlight the relevant parameters in the process.
12. A full-band Monte Carlo model for hole transport in silicon
Jallepalli, S.; Rashed, M.; Shih, W.-K.; Maziar, C. M.; Tasch, A. F., Jr.
1997-03-01
Hole transport in bulk silicon is explored using an efficient and accurate Monte Carlo (MC) tool based on the local pseudopotential band structure. Acoustic and optical phonon scattering, ionized impurity scattering, and impact ionization are the dominant scattering mechanisms that have been included. In the interest of computational efficiency, momentum relaxation times have been used to describe ionized impurity scattering and self-scattering rates have been computed in a dynamic fashion. The temperature and doping dependence of low-field hole mobility is obtained and good agreement with experimental data has been observed. MC extracted impact ionization coefficients are also shown to agree well with published experimental data. Momentum and energy relaxation times are obtained as a function of the average hole energy for use in moment based hydrodynamic simulators. The MC model is suitable for studying both low-field and high-field hole transport in silicon.
13. Comparison of generalized transport and Monte-Carlo models of the escape of a minor species
NASA Technical Reports Server (NTRS)
Demars, H. G.; Barakat, A. R.; Schunk, R. W.
1993-01-01
The steady-state diffusion of a minor species through a static background species is studied using a Monte Carlo model and a generalized 16-moment transport model. The two models are in excellent agreement in the collision-dominated region and in the 'transition region'. In the 'collisionless' region the 16-moment solution contains two singularities, and physical meaning cannot be assigned to the solution in their vicinity. In all regions, agreement between the models is best for the distribution function and for the lower-order moments and is less good for higher-order moments. Moments of order higher than the heat flow and hence beyond the level of description provided by the transport model have a noticeable effect on the shape of distribution functions in the collisionless region.
14. Monte Carlo Calculation of Slow Electron Beam Transport in Solids:. Reflection Coefficient Theory Implications
Bentabet, A.
The reflection coefficient theory developed by Vicanek and Urbassek showed that the backscattering coefficient of light ions impinging on semi-infinite solid targets is strongly related to the range and to the first transport cross-section. In this work we show that, in the electron case, not only the backscattering coefficient but also most electron transport quantities (such as the mean penetration depth, the diffusion polar angles, the final backscattering energy, etc.) are strongly correlated with these two quantities (the range and the first transport cross-section). In addition, most electron transport quantities are only weakly correlated with the distribution of the scattering angle and with the total elastic cross-section. To make our study as straightforward and clear as possible, we have used different input data for the elastic cross-sections and ranges in our Monte Carlo code to study the mean penetration depth and the backscattering coefficient of slow electrons impinging on semi-infinite aluminum and gold targets in the energy range up to 10 keV. The present study can be extended to other materials and other transport quantities using the same models.
15. Monte Carlo simulation and Boltzmann equation analysis of non-conservative positron transport in H2
Banković, A.; Dujko, S.; White, R. D.; Buckman, S. J.; Petrović, Z. Lj.
2012-05-01
This work reports on a new series of calculations of positron transport properties in molecular hydrogen under the influence of a spatially homogeneous electric field. Calculations are performed using a Monte Carlo simulation technique and a multi-term theory for solving the Boltzmann equation. Values and general trends of the mean energy, drift velocity and diffusion coefficients as functions of the reduced electric field E/n0 are reported. Emphasis is placed on the explicit and implicit effects of positronium (Ps) formation on the drift velocity and diffusion coefficients. Two important phenomena arise: first, for certain regions of E/n0 the bulk and flux components of the drift velocity and of the longitudinal diffusion coefficient are markedly different, both qualitatively and quantitatively. Second, and contrary to previous experience in electron swarm physics, there is a negative differential conductivity (NDC) effect in the bulk drift velocity component, with no indication of any NDC for the flux component. In order to understand this atypical manifestation of the drift and diffusion of positrons in H2 under the influence of an electric field, spatially dependent positron transport properties such as the number of positrons, the average energy and velocity, and the spatially resolved Ps-formation rate are calculated using a Monte Carlo simulation technique. The spatial variation of the positron average energy and the extreme skewing of the spatial profile of the positron swarm are shown to play a central role in understanding these phenomena.
16. Hybrid two-dimensional Monte-Carlo electron transport in self-consistent electromagnetic fields
SciTech Connect
Mason, R.J.; Cranfill, C.W.
1985-01-01
The physics and numerics of the hybrid electron transport code ANTHEM are described. The need for hybrid modeling of laser-generated electron transport is outlined, and a general overview of the hybrid implementation in ANTHEM is provided. ANTHEM treats the background ions and electrons in a laser target as coupled fluid components moving relative to a fixed Eulerian mesh. The laser converts cold electrons to an additional hot electron component which evolves on the mesh as either a third coupled fluid or as a set of Monte Carlo PIC particles. The fluids and particles move in two dimensions through electric and magnetic fields calculated via the Implicit Moment method. The hot electrons are coupled to the background thermal electrons by Coulomb drag, and both the hot and cold electrons undergo Rutherford scattering against the ion background. Subtleties of the implicit E- and B-field solutions, the coupled hydrodynamics, and large-time-step Monte Carlo particle scattering are discussed. Sample applications are presented.
17. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport
SciTech Connect
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
18. Single-photon transport through an atomic chain coupled to a one-dimensional nanophotonic waveguide
Liao, Zeyang; Zeng, Xiaodong; Zhu, Shi-Yao; Zubairy, M. Suhail
2015-08-01
We study the dynamics of a single-photon pulse traveling through a linear atomic chain coupled to a one-dimensional (1D) single-mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real-time evolution of the photon transport and the atomic excitations. Our analytical result is consistent with previous numerical calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity of a photon can be tuned by changing the ratio of the coupling strength to the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single-photon pulse with finite bandwidth can even approach 100%. The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Many interesting physical phenomena can occur in this system, such as photonic band-gap effects, quantum entanglement generation, Fano-like interference, and superradiant effects. In engineering terms, this system may serve as a single-photon frequency filter or modulator, and may find important applications in quantum information.
19. Monte Carlo model of the transport in the atmosphere of relativistic electrons and γ-rays associated with TGFs
Sarria, D.; Forme, F.; Blelly, P.
2013-12-01
Onboard the TARANIS satellite, the CNES mission dedicated to the study of TLEs and TGFs, IDEE and XGRE are the two instruments that will measure relativistic electrons and X- and gamma-rays. At the altitude of the satellite, the fluxes have been significantly altered by the filtering of the atmosphere, and the satellite only measures a subset of the particles. Therefore, the inverse problem, retrieving information on the sources and on the mechanisms responsible for these emissions, is rather tough to tackle, especially if we want to take advantage of the other instruments, which will provide indirect information on those particles. The only reasonable way to solve this problem is to embed in the data processing a theoretical approach using a numerical model of the generation and transport of these burst emissions. For this purpose, we have started to develop a numerical Monte Carlo model which solves the transport in the atmosphere of both relativistic electrons and gamma-rays. After a brief presentation of the model and its validation by comparison with GEANT4, we discuss how the photons and electrons may be spatially dispersed as a function of their energy at the altitude of the satellite, depending on the source properties, and the impact that this could have on detection by the satellite. Then, we give preliminary results on the interaction of the energetic particles with the neutral atmosphere, mainly in terms of the production rates of excited states, which will be accessible through the MCP experiment, and of ionized species, which are important for the electrodynamics.
20. Jet transport and photon bremsstrahlung via longitudinal and transverse scattering
Qin, Guang-You; Majumder, Abhijit
2015-04-01
We study the effect of multiple scatterings on the propagation of hard partons and the production of jet-bremsstrahlung photons inside a dense medium in the framework of deep-inelastic scattering off a large nucleus. We include the momentum exchanges in both longitudinal and transverse directions between the hard partons and the constituents of the medium. Keeping up to the second order in a momentum gradient expansion, we derive the spectrum for the photon emission from a hard quark jet when traversing dense nuclear matter. Our calculation demonstrates that the photon bremsstrahlung process is influenced not only by the transverse momentum diffusion of the propagating hard parton, but also by the longitudinal drag and diffusion of the parton momentum. A notable outcome is that the longitudinal drag tends to reduce the amount of stimulated emission from the hard parton.
1. Ballistic transport in one-dimensional random dimer photonic crystals
Cherid, Samira; Bentata, Samir; Zitouni, Ali; Djelti, Radouan; Aziz, Zoubir
2014-04-01
Using the transfer-matrix technique and the Kronig-Penney model, we numerically and analytically investigate the effect of the short-range correlated disorder of the Random Dimer Model (RDM) on the transmission properties of light in one-dimensional photonic crystals made of three different materials. Such systems consist of two different structures randomly distributed along the growth direction, with the additional constraint that one kind of layer always appears in pairs. It is shown that one-dimensional random dimer photonic crystals support two types of extended modes. By shifting the dimer resonance toward the host fundamental stationary resonance state, we demonstrate the existence of a ballistic response in these systems.
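For readers unfamiliar with the technique, the transfer-matrix calculation the abstract refers to reduces to multiplying one 2×2 characteristic matrix per layer and reading the transmission off the product. The sketch below, in Python, applies it to a toy random-dimer sequence; the refractive indices, thicknesses and wavelength are assumed placeholders, not the three materials studied in the paper.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic 2x2 matrix of a homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmission(ns, ds, lam, n_in=1.0, n_out=1.0):
    """Intensity transmission of the whole layer stack."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(ns, ds):
        M = M @ layer_matrix(n, d, lam)
    t = 2 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1]
                    + M[1, 0] + n_out * M[1, 1])
    return (n_out / n_in) * abs(t) ** 2

# Random dimer constraint: one layer species only ever appears in pairs.
rng = np.random.default_rng(1)
seq = []
while len(seq) < 60:
    seq.extend(["A"] if rng.random() < 0.5 else ["B", "B"])
ns = [{"A": 1.5, "B": 2.3}[s] for s in seq]   # assumed refractive indices
ds = [100.0] * len(seq)                        # nm, assumed thicknesses
print(transmission(ns, ds, lam=600.0))
```

Sweeping `lam` around the dimer resonance is how one would see the extended-mode transmission peaks the abstract describes.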
2. Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations
SciTech Connect
Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W.
2011-04-15
Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0×4.0 to 30.0×30.0 cm^2) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0% (σ_MC = 0.8%) in lung (ρ = 0.24 g cm^-3) and within ±2.9% (σ_MC = 0.8%) in low-density lung (ρ = 0.1 g cm^-3). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ = 0.001 g cm^-3) were found to be within the range of ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σ_MC = 0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2 mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset. Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with an accuracy only previously achievable with Monte Carlo techniques.
3. a Test Particle Model for Monte Carlo Simulation of Plasma Transport Driven by Quasineutrality
Kuhl, Nelson M.
1995-11-01
This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. We perform numerical experiments to validate a mathematical model of P. R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The simulations are made using a transport code written by O. Betancourt and M. Taylor, with changes to incorporate our case studies. We adopt a test particle model naturally suggested by the problem of tracking particles in plasma physics. The statistics due to collisions are modeled by a drift kinetic equation whose numerical solution is based on the Monte Carlo method of A. Boozer and G. Kuo-Petravic. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian. It is shown that details of the collision operator other than its dependence on the collision frequency and temperature matter little for transport, and the role of conservation of momentum is investigated. Exponential decay makes it possible to find the confinement times of both ions and electrons by high performance computing. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. We make a convergence study of the method, derive scaling laws that are in good agreement with predictions from experimental data, and present a comparison with the JET experiment.
4. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.
2016-02-01
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
5. Epithelial cancers and photon migration: Monte Carlo simulations and diffuse reflectance measurements
Tubiana, Jerome; Kass, Alex J.; Newman, Maya Y.; Levitz, David
2015-07-01
Detecting pre-cancer in epithelial tissues such as the cervix is a challenging task in low-resource settings. In an effort to achieve a low-cost cervical cancer screening and diagnostic method for use in low-resource settings, mobile colposcopes that use a smartphone as their engine have been developed. Designing image analysis software suited for this task requires proper modeling of light propagation from the abnormalities inside the tissue to the camera of the smartphone. Different simulation methods have been developed in the past, by solving light diffusion equations or by running Monte Carlo simulations. Several algorithms exist for the latter, including MCML and the recently developed MCX. For imaging purposes, the observable parameter of interest is the reflectance profile of a tissue under a specific pattern of illumination and optical setup. Extensions of the MCX algorithm to simulate this observable under these conditions were developed. These extensions were validated against MCML and diffusion theory for the simple case of contact measurements, and reflectance profiles under colposcopy imaging geometry were also simulated. To validate this model, the diffuse reflectance profiles of various homogeneous tissue phantoms were measured with a spectrometer under several illumination and optical settings. The measured reflectance profiles showed a non-trivial deviation across the spectrum. Measurements from an added-absorber experiment on a series of phantoms showed that the absorption of the dye scales linearly when fit to both the MCX and diffusion models. More work is needed to integrate a pupil into the experiment.
6. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes
Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with either deterministic or stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions which do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows the use of a track-length estimation process, potentially offering even further improvement to the tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and must therefore be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. This method is then tested in a pin-cell problem and in a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy is retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra is significantly improved.
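The variance-reduction idea here, scoring the expectation over the known outgoing distribution instead of a single sampled outcome, can be illustrated in a few lines. The sketch below uses a toy angular distribution and contrasts the analog estimator with an expected-value score; it illustrates the principle only, not the NDPP/OpenMC implementation.

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(0)

def P(l, mu):
    """Legendre polynomial P_l evaluated at mu."""
    c = np.zeros(l + 1)
    c[l] = 1.0
    return legval(mu, c)

# Toy outgoing angular distribution f(mu) = (1 + mu)/2 on [-1, 1], standing
# in for the outgoing physics that the pre-processing step makes available.
def sample_mu(n):
    return 2.0 * np.sqrt(rng.random(n)) - 1.0   # inverse-CDF sampling of f

l, n_events = 1, 1000

# Analog estimator: score P_l(mu) at each individually sampled cosine.
analog = P(l, sample_mu(n_events)).mean()

# Expected-value estimator: since f(mu) is known a priori, every collision
# can score the exact angular integral, removing the angular sampling noise.
grid = np.linspace(-1.0, 1.0, 2001)
expected = np.trapz(P(l, grid) * (1.0 + grid) / 2.0, grid)

print(analog, expected)   # both estimate the l = 1 moment (exact value 1/3)
```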
7. Monte Carlo analysis of high-frequency non-equilibrium transport in mercury-cadmium-telluride for infrared detection
Palermo, Christophe; Varani, Luca; Vaissière, Jean-Claude
2004-04-01
We present a theoretical analysis of both static and small-signal electron transport in Hg0.8Cd0.2Te in order to study the high-frequency behaviour of this material, which is usually employed for infrared detection. First, we simulate static conditions using a Monte Carlo simulation in order to extract transport parameters. Then, an analytical method based on hydrodynamic equations is used to perform the small-signal study by modelling the high-frequency differential mobility. This approach allows a full study of the frequency response for arbitrary electric fields starting only from static parameters, and overcomes technical problems of direct Monte Carlo simulations.
8. Monte Carlo Study of Fetal Dosimetry Parameters for 6 MV Photon Beam
PubMed Central
Atarod, Maryam; Shokrani, Parvaneh
2013-01-01
Because of the adverse effects of ionizing radiation on fetuses, the fetal dose should be estimated prior to radiotherapy of pregnant patients. Fetal dose has been studied by several authors at different depths in phantoms with various abdomen thicknesses (ATs). In this study, the effect of maternal AT and depth on fetal dosimetry was investigated using peripheral dose (PD) distribution evaluations. A BEAMnrc model of an Oncor linac including out-of-beam components was used for dose calculations outside the field border. A 6 MV photon beam was used to irradiate a chest phantom. Measurements were made using EBT2 radiochromic film in a RW3 phantom serving as the abdomen. The following were measured for different ATs: depth PD profiles at two distances from the field edge, and in-plane PD profiles at two depths. The results of this study show that the PD is depth dependent near the field edge. An increase in AT does not change the depth of maximum PD or its distribution as a function of distance from the field edge. It is concluded that estimating the maximum fetal dose using a flat phantom, i.e., without taking the AT into account, is possible. Furthermore, an in-plane profile measured at any depth can represent the dose variation as a function of distance. However, in order to estimate the maximum PD, the out-of-field depth of maximum dose should be used for the in-plane profile measurement. PMID:24083135
9. Study of the response of plastic scintillation detectors in small-field 6 MV photon beams by Monte Carlo simulations
SciTech Connect
Wang, Lilie L. W.; Beddar, Sam
2011-03-15
Purpose: To investigate the response of plastic scintillation detectors (PSDs) in a 6 MV photon beam of various field sizes using Monte Carlo simulations. Methods: Three PSDs were simulated: a BC-400 and a BCF-12, each attached to a plastic-core optical fiber, and a BC-400 attached to an air-core optical fiber. PSD response was calculated as the detector dose per unit water dose for field sizes ranging from 10×10 down to 0.5×0.5 cm^2 for both perpendicular and parallel orientations of the detectors to an incident beam. Similar calculations were performed for a CC01 compact chamber. The off-axis dose profiles were calculated in the 0.5×0.5 cm^2 photon beam and were compared to the dose profile calculated for the CC01 chamber and that calculated in water without any detector. The angular dependence of the PSDs' responses in a small photon beam was studied. Results: In the perpendicular orientation, the response of the BCF-12 PSD varied by only 0.5% as the field size decreased from 10×10 to 0.5×0.5 cm^2, while the response of the BC-400 PSD attached to a plastic-core fiber varied by more than 3% at the smallest field size because of its longer sensitive region. In the parallel orientation, the response of both PSDs attached to a plastic-core fiber varied by less than 0.4% for the same range of field sizes. For the PSD attached to an air-core fiber, the response varied by at most 2% for both orientations. Conclusions: The responses of all the PSDs investigated in this work vary by only 1%-2% irrespective of field size and orientation of the detector, provided the sensitive region is no more than 2 mm long and the optical fiber stems are prevented from pointing directly at the incident source.
10. Influence of electrodes on the photon energy deposition in CVD-diamond dosimeters studied with the Monte Carlo code PENELOPE.
PubMed
Górka, B; Nilsson, B; Fernández-Varea, J M; Svensson, R; Brahme, A
2006-08-01
A new dosimeter, based on chemical vapour deposited (CVD) diamond as the active detector material, is being developed for dosimetry in radiotherapeutic beams. CVD-diamond is a very interesting material, since its atomic composition is close to that of human tissue and in principle it can be designed to introduce negligible perturbations to the radiation field and the dose distribution in the phantom due to its small size. However, non-tissue-equivalent structural components, such as electrodes, wires and encapsulation, need to be carefully selected as they may induce severe fluence perturbation and angular dependence, resulting in erroneous dose readings. By introducing metallic electrodes on the diamond crystals, interface phenomena between high- and low-atomic-number materials are created. Depending on the direction of the radiation field, an increased or decreased detector signal may be obtained. The small dimensions of the CVD-diamond layer and electrodes (around 100 µm and smaller) imply a higher sensitivity to the lack of charged-particle equilibrium and may cause severe interface phenomena. In the present study, we investigate the variation of energy deposition in the diamond detector for different photon-beam qualities, electrode materials and geometric configurations using the Monte Carlo code PENELOPE. The prototype detector was produced from a 50 µm thick CVD-diamond layer with 0.2 µm thick silver electrodes on both sides. The mean absorbed dose to the detector's active volume was modified in the presence of the electrodes by 1.7%, 2.1%, 1.5%, 0.6% and 0.9% for 1.25 MeV monoenergetic photons, a complete (i.e. shielded) 60Co photon source spectrum and 6, 18 and 50 MV bremsstrahlung spectra, respectively. The shift in mean absorbed dose increases with increasing atomic number and thickness of the electrodes, and diminishes with increasing thickness of the diamond layer. From a dosimetric point of view, graphite would be an almost perfect electrode material. This study shows that, for the considered therapeutic beam qualities, the perturbation of the detector signal due to charge-collecting graphite electrodes of thicknesses between 0.1 and 700 µm is negligible within the calculation uncertainty of 0.2%. PMID:16861769
11. Mesh-based Monte Carlo method for fibre-optic optogenetic neural stimulation with direct photon flux recording strategy.
PubMed
Shin, Younghoon; Kwon, Hyuk-Sang
2016-03-21
We propose a Monte Carlo (MC) method based on a direct photon flux recording strategy using an inhomogeneous, meshed rodent brain atlas. This MC method was inspired by and is dedicated to fibre-optics-based optogenetic neural stimulation, thus providing an accurate and direct solution for light intensity distributions in brain regions with different optical properties. Our model was used to estimate the 3D light intensity attenuation for the close proximity between an implanted optical fibre source and a neural target area typical of optogenetics applications. Interestingly, there are discrepancies with studies using a diffusion-based light intensity prediction model, perhaps due to the use of improper light scattering models developed for far-field problems. Our solution was validated by comparison with the gold-standard MC model, and it enabled accurate calculations of internal intensity distributions in an inhomogeneous domain near the light source. Thus our strategy can be applied to studying how illuminated light spreads through an inhomogeneous brain area, or to determining the amount of light required for optogenetic manipulation of a specific neural target area. PMID:26914289
12. Gel dosimetry measurements and Monte Carlo modeling for external radiotherapy photon beams: Comparison with a treatment planning system dose distribution
Valente, M.; Aon, E.; Brunetto, M.; Castellano, G.; Gallivanone, F.; Gambarini, G.
2007-09-01
Gel dosimetry has proved to be useful to determine absorbed dose distributions in radiotherapy, as well as to validate treatment plans. Gel dosimetry allows dose imaging and is particularly helpful for non-uniform dose distribution measurements, as may occur when multiple-field irradiation techniques are employed. In this work, we report gel-dosimetry measurements and Monte Carlo (PENELOPE®) calculations for the dose distribution inside a tissue-equivalent phantom exposed to a typical multiple-field irradiation. Irradiations were performed with a 10 MV photon beam from a Varian® Clinac 18 accelerator. The employed dosimeters consisted of layers of Fricke Xylenol Orange radiochromic gel. The method for absorbed dose imaging was based on analysis of visible light transmittance, usually detected by means of a CCD camera. With the aim of finding a simple method for light transmittance image acquisition, a commercial flatbed-like scanner was employed. The experimental and simulated dose distributions have been compared with those calculated with a commercially available treatment planning system, showing a reasonable agreement.
13. On the Monte Carlo simulation of small-field micro-diamond detectors for megavoltage photon dosimetry
Andreo, Pedro; Palmans, Hugo; Marteinsdóttir, Maria; Benmakhlouf, Hamza; Carlsson-Tedgren, Åsa
2016-01-01
Monte Carlo (MC) calculated detector-specific output correction factors for small photon beam dosimetry are commonly used in clinical practice. The technique, with a geometry description based on manufacturer blueprints, offers certain advantages over experimentally determined values but is not free of weaknesses. Independent MC calculations of output correction factors for a PTW-60019 micro-diamond detector were made using the EGSnrc and PENELOPE systems. Compared with published experimental data the MC results showed substantial disagreement for the smallest field size simulated (5 mm × 5 mm). To explain the difference between the two datasets, a detector was imaged with x rays searching for possible anomalies in the detector construction or details not included in the blueprints. A discrepancy between the dimension stated in the blueprints for the active detector area and that estimated from the electrical contact seen in the x-ray image was observed. Calculations were repeated using the estimate of a smaller volume, leading to results in excellent agreement with the experimental data. MC users should become aware of the potential differences between the design blueprints of a detector and its manufacturer production, as they may differ substantially. The constraint is applicable to the simulation of any detector type. Comparison with experimental data should be used to reveal geometrical inconsistencies and details not included in technical drawings, in addition to the well-known QA procedure of detector x-ray imaging.
14. On the Monte Carlo simulation of small-field micro-diamond detectors for megavoltage photon dosimetry.
PubMed
Andreo, Pedro; Palmans, Hugo; Marteinsdóttir, Maria; Benmakhlouf, Hamza; Carlsson-Tedgren, Åsa
2016-01-01
Monte Carlo (MC) calculated detector-specific output correction factors for small photon beam dosimetry are commonly used in clinical practice. The technique, with a geometry description based on manufacturer blueprints, offers certain advantages over experimentally determined values but is not free of weaknesses. Independent MC calculations of output correction factors for a PTW-60019 micro-diamond detector were made using the EGSnrc and PENELOPE systems. Compared with published experimental data the MC results showed substantial disagreement for the smallest field size simulated (5 mm × 5 mm). To explain the difference between the two datasets, a detector was imaged with x rays searching for possible anomalies in the detector construction or details not included in the blueprints. A discrepancy between the dimension stated in the blueprints for the active detector area and that estimated from the electrical contact seen in the x-ray image was observed. Calculations were repeated using the estimate of a smaller volume, leading to results in excellent agreement with the experimental data. MC users should become aware of the potential differences between the design blueprints of a detector and its manufacturer production, as they may differ substantially. The constraint is applicable to the simulation of any detector type. Comparison with experimental data should be used to reveal geometrical inconsistencies and details not included in technical drawings, in addition to the well-known QA procedure of detector x-ray imaging. PMID:26630437
15. Understanding the lateral dose response functions of high-resolution photon detectors by reverse Monte Carlo and deconvolution analysis.
PubMed
Looe, Hui Khee; Harder, Dietrich; Poppe, Björn
2015-08-21
The purpose of the present study is to understand the mechanism underlying the perturbation of the field of the secondary electrons, which occurs in the presence of a detector in water as the surrounding medium. By means of 'reverse' Monte Carlo simulation, the points of origin of the secondary electrons contributing to the detector's signal are identified and associated with the detector's mass density, electron density and atomic composition. The spatial pattern of the origin of these secondary electrons, in addition to the formation of the detector signal by components from all parts of its sensitive volume, determines the shape of the lateral dose response function, i.e. of the convolution kernel K(x,y) linking the lateral profile of the absorbed dose in the undisturbed surrounding medium with the associated profile of the detector's signal. The shape of the convolution kernel is shown to vary essentially with the electron density of the detector's material, and to be attributable to the relative contribution by the signal-generating secondary electrons originating within the detector's volume to the total detector signal. Finally, the representation of the over- or underresponse of a photon detector by this density-dependent convolution kernel will be applied to provide a new analytical expression for the associated volume effect correction factor. PMID:26267311
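As a concrete picture of the signal model described above, the sketch below convolves an idealized lateral dose profile with a Gaussian stand-in for the lateral dose response function K; the kernel shape and width are assumptions for illustration, not the density-dependent kernels derived in the paper.

```python
import numpy as np

# The detector reading is modeled as the true lateral dose profile D(x)
# convolved with a lateral dose response function K. A Gaussian K with an
# assumed 2 mm width stands in for the paper's density-dependent kernels;
# the field width is likewise an arbitrary example.
x = np.linspace(-30.0, 30.0, 601)             # lateral position, mm
dose = (np.abs(x) <= 10.0).astype(float)      # idealized 20 mm field
sigma = 2.0                                   # mm, assumed kernel width
K = np.exp(-0.5 * (x / sigma) ** 2)
K /= K.sum()                                  # normalized kernel
signal = np.convolve(dose, K, mode="same")    # simulated detector profile
print(dose.sum(), signal.sum())               # area preserved, penumbra blurred
```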
16. Guiding electromagnetic waves around sharp corners: topologically protected photonic transport in meta-waveguides (Presentation Recording)
Shvets, Gennady B.; Khanikaev, Alexander B.; Ma, Tzuhsuan; Lai, Kueifu
2015-09-01
Science thrives on analogies, and a considerable number of inventions and discoveries have been made by pursuing an unexpected connection to a very different field of inquiry. For example, photonic crystals have been referred to as "semiconductors of light" because of the far-reaching analogies between electron propagation in a crystal lattice and light propagation in a periodically modulated photonic environment. However, two aspects of electron behavior, its spin and helicity, escaped emulation by photonic systems until the recent invention of photonic topological insulators (PTIs). The impetus for these developments in photonics came from the discovery of topologically nontrivial phases in condensed matter physics enabling edge states immune to scattering. The realization of topologically protected transport in photonics would circumvent a fundamental limitation imposed by the wave equation: the inability of light to propagate reflection-free along a sharply bent pathway. Topologically protected electromagnetic states could be used for transporting photons without any scattering, potentially underpinning new revolutionary concepts in applied science and engineering. I will demonstrate that a PTI can be constructed by applying three types of perturbations: (a) finite bianisotropy, (b) a gyromagnetic inclusion breaking the time-reversal (T) symmetry, and (c) asymmetric rods breaking the parity (P) symmetry. We will experimentally demonstrate (i) the existence of a full topological bandgap in a bianisotropic PTI, and (ii) the reflectionless nature of wave propagation along the interface between two PTIs with opposite signs of the bianisotropy.
17. Bone and mucosal dosimetry in skin radiation therapy: a Monte Carlo study using kilovoltage photon and megavoltage electron beams
Chow, James C. L.; Jiang, Runqing
2012-06-01
This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Monte Carlo simulations (EGSnrc-based codes) using the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and the pencil-beam algorithm (Pinnacle3 treatment planning system) using the MeV electron beams were performed in dose calculations. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6 MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter were generated using the BEAMnrc code, and verified using measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers, which were varied in thickness in the ranges of 0.5-2 cm and 0.2-1 cm. A uniform water layer of bolus with thickness equal to the depth of maximum dose (dmax) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%) when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering the 0.5 cm thin uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, it was found that the 105 kVp beam produced more than twice the dose to bone than the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller when the thicknesses of the uniform water and bone layers in the phantom increased. Dose in the second bone layer interfacing with air was found to be higher for the 220 kVp beam than for the 105 kVp beam when the bone thickness was 1 cm. In this study, dose deviations of 18% and 17% in the bone and mucosal layers were found between our Monte Carlo results and the pencil-beam algorithm, which overestimated the doses. Relative depth, bone and mucosal doses were studied by varying the beam nature, beam energy and thicknesses of the bone and uniform water using an inhomogeneous phantom to model the oral or nasal cavity. While the dose distribution in the pharynx region is unavailable due to the lack of a commercial treatment planning system commissioned for kVp beam planning in skin radiation therapy, our study provides essential insight for radiation staff to justify and estimate bone and mucosal doses.
18. Improved cache performance in Monte Carlo transport calculations using energy banding
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
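A minimal sketch of the banding idea follows, assuming a toy cross-section table and a uniform band structure: particles are grouped by energy band, and each band's slice of the table is the only lookup data touched while that group is processed, which is what buys the temporal cache reuse.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_bands = 100_000, 8
xs_table = rng.random(800_000)         # toy read-only cross-section grid
energies = rng.random(n_particles)     # particle energies, normalized to [0, 1)

# Group particles by energy band so each band's table slice stays cache-hot.
band = np.minimum((energies * n_bands).astype(int), n_bands - 1)
order = np.argsort(band, kind="stable")

chunk = len(xs_table) // n_bands
for b in range(n_bands):
    members = order[band[order] == b]
    band_slice = xs_table[b * chunk:(b + 1) * chunk]   # only data touched now
    # Map each member's energy to an index within the band's slice, then look up.
    j = ((energies[members] * n_bands - b) * chunk).astype(int)
    sigma = band_slice[np.clip(j, 0, chunk - 1)]       # banded table lookup
```

In a real transport code the particles would be advanced only through collisions resolvable within the current band before being handed to the next one; the sketch shows just the data-locality pattern.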
19. Massively parallel kinetic Monte Carlo simulations of charge carrier transport in organic semiconductors
van der Kaap, N. J.; Koster, L. J. A.
2016-02-01
A parallel, lattice-based kinetic Monte Carlo simulation is developed that runs on a GPGPU board and includes Coulomb-like particle-particle interactions. The performance of this computationally expensive problem is improved by modifying the interaction potential due to nearby particle moves, instead of fully recalculating it. This modification is achieved by adding dipole correction terms that represent the particle move. Exact evaluation of these terms is guaranteed by representing all interactions as 32-bit floating-point numbers, of which only the integers between -2^22 and 2^22 are used. We validate our method by modelling the charge transport in disordered organic semiconductors, including Coulomb interactions between charges. Performance is mainly governed by the particle density in the simulation volume, and improves for increasing densities. Our method allows calculations on large volumes including particle-particle interactions, which is important in the field of organic semiconductors.
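For orientation, a single lattice-KMC step in the usual BKL/Gillespie form is sketched below; the rates are toy placeholders, and the paper's GPU parallelization and dipole-corrected Coulomb updates are deliberately not reproduced.

```python
import numpy as np

def kmc_step(rates, rng):
    """One BKL/Gillespie step: choose a hop with probability rate/total and
    advance the simulation clock by an exponential waiting time."""
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.random() * total)
    dt = -np.log(rng.random()) / total
    return event, dt

rng = np.random.default_rng(0)
rates = rng.random(16)        # toy rates, one per candidate hop
event, dt = kmc_step(rates, rng)
print(event, dt)
```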
20. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access
SciTech Connect
Romano, Paul K; Brown, Forrest B; Forget, Benoit
2010-01-01
One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.
1. Monte Carlo simulation of ballistic transport in high-mobility channels
Sabatini, G.; Marinchio, H.; Palermo, C.; Varani, L.; Daoud, T.; Teissier, R.; Rodilla, H.; González, T.; Mateos, J.
2009-11-01
By means of Monte Carlo simulations coupled with a two-dimensional Poisson solver, we directly evaluate the possibility of using high-mobility materials in ultrafast devices exploiting ballistic transport. To this purpose, we have calculated specific physical quantities such as the transit time, the transit velocity, the free-flight time and the mean free path as functions of the applied voltage in InAs channels with different lengths, from 2000 nm down to 50 nm. In this way the transition from diffusive to ballistic transport is carefully described. We remark a high value of the mean transit velocity, with a maximum of 14×10^5 m/s for a 50 nm-long channel, and a transit time shorter than 0.1 ps, corresponding to a cutoff frequency in the terahertz domain. The percentage of ballistic electrons and the number of scattering events as functions of distance are also reported, showing the strong influence of quasi-ballistic transport in the shorter channels.
2. Monte Carlo modeling of transport in PbSe nanocrystal films
SciTech Connect
Carbone, I.; Carter, S. A.; Zimanyi, G. T.
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values as the electron simulations, hole mobility simulations confirm measurements that increase monotonically with particle size over two orders of magnitude.
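A hop event in such a model is typically parameterized by a thermally activated, distance-decaying rate. The sketch below uses the Miller-Abrahams form as an assumed stand-in (the abstract does not state the paper's exact rate expression), with placeholder parameters.

```python
import numpy as np

kT = 0.025      # eV, room temperature
nu0 = 1e12      # s^-1, attempt frequency (assumed)
beta = 1.0      # nm^-1, tunneling decay constant (assumed)

def hop_rate(dE, d):
    """Miller-Abrahams rate for a hop of distance d (nm) with a change in
    site energy dE (eV): uphill hops pay a Boltzmann factor, downhill don't."""
    boltzmann = np.exp(-dE / kT) if dE > 0 else 1.0
    return nu0 * np.exp(-2.0 * beta * d) * boltzmann

print(hop_rate(0.05, 5.0), hop_rate(-0.05, 5.0))   # uphill vs downhill hop
```

The exponential distance term is what couples mobility to nanocrystal diameter in such models: larger particles mean fewer, longer hops with different tunneling amplitudes, qualitatively matching the size trends the abstract reports.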
3. Cartesian Meshing Impacts for PWR Assemblies in Multigroup Monte Carlo and Sn Transport
Manalo, K.; Chin, M.; Sjoden, G.
2014-06-01
Hybrid methods of neutron transport have increased greatly in use, for example, in applications using both Monte Carlo and deterministic transport to calculate quantities of interest, such as flux and eigenvalue in a nuclear reactor. Many 3D parallel Sn codes apply a Cartesian mesh, so for nuclear reactors the representation of curved fuel shapes (cylinders, spheres, etc.) is compromised, affecting both the fuel mass and the exact geometry. For a PWR assembly eigenvalue problem, we explore the errors associated with this Cartesian discrete mesh representation, and perform an analysis to calculate a slope parameter that relates the pcm to the percent areal/volumetric deviation (areal corresponds to 2D and volumetric to 3D, respectively). Our initial analysis demonstrates a linear relationship between pcm change and areal/volumetric deviation using Multigroup MCNP on a PWR assembly compared to a reference exact combinatorial MCNP geometry calculation. For the same multigroup problems, we also intend to characterize this linear relationship in discrete ordinates (3D PENTRAN) and discuss issues related to transport cross-comparison. In addition, we discuss auto-conversion techniques with our 3D Cartesian mesh generation tools to allow for full generation of MCNP5 inputs (Cartesian mesh and Multigroup XS) from a basis PENTRAN Sn model.
4. Observing gas and dust in simulations of star formation with Monte Carlo radiation transport on Voronoi meshes
Hubber, D. A.; Ercolano, B.; Dale, J.
2016-02-01
Ionizing feedback from massive stars dramatically affects the interstellar medium local to star-forming regions. Numerical simulations are now starting to include enough complexity to produce morphologies and gas properties that are not too dissimilar from observations. The comparison between the density fields produced by hydrodynamical simulations and observations at given wavelengths relies however on photoionization/chemistry and radiative transfer calculations. We present here an implementation of Monte Carlo radiation transport through a Voronoi tessellation in the photoionization and dust radiative transfer code MOCASSIN. We show for the first time a synthetic spectrum and synthetic emission line maps of a hydrodynamical simulation of a molecular cloud affected by massive stellar feedback. We show that the approach on which previous work is based, which remapped hydrodynamical density fields on to Cartesian grids before performing radiative transfer/photoionization calculations, results in significant errors in the temperature and ionization structure of the region. Furthermore, we describe the mathematical process of tracing photon energy packets through a Voronoi tessellation, including optimizations, treating problematic cases and boundary conditions. We perform various benchmarks using both the original version of MOCASSIN and the modified version using the Voronoi tessellation. We show that for uniform grids, or equivalently a cubic lattice of cell generating points, the new Voronoi version gives the same results as the original Cartesian grid version of MOCASSIN for all benchmarks. For non-uniform initial conditions, such as using snapshots from smoothed particle hydrodynamics simulations, we show that the Voronoi version performs better than the Cartesian grid version, resulting in much better resolution in dense regions.
5. The validity of the density scaling method in primary electron transport for photon and electron beams
SciTech Connect
Woo, M.K.; Cunningham, J.R.
1990-03-01
In the convolution/superposition method of photon beam dose calculations, inhomogeneities are usually handled by using some form of scaling involving the relative electron densities of the inhomogeneities. In this paper the accuracy of density scaling as applied to primary electrons generated in photon interactions is examined. Monte Carlo calculations are compared with density scaling calculations for air and cork slab inhomogeneities. For individual primary photon kernels as well as for photon interactions restricted to a thin layer, the results can differ significantly, by up to 50%, between the two calculations. However, for realistic photon beams where interactions occur throughout the whole irradiated volume, the discrepancies are much less severe. The discrepancies for the kernel calculation are attributed to the scattering characteristics of the electrons and the consequent oversimplified modeling used in the density scaling method. A technique called the kernel integration technique is developed to analyze the general effects of air and cork inhomogeneities. It is shown that the discrepancies become significant only under rather extreme conditions, such as immediately beyond the surface after a large air gap. In electron beams all the primary electrons originate from the surface of the phantom and the errors caused by simple density scaling can be much more significant. Various aspects relating to the accuracy of density scaling for air and cork slab inhomogeneities are discussed.
6. Technical Note: Study of the electron transport parameters used in PENELOPE for the Monte Carlo simulation of Linac targets
SciTech Connect
Rodriguez, Miguel; Sempau, Josep; Brualla, Lorenzo
2015-06-15
Purpose: The Monte Carlo simulation of electron transport in Linac targets using the condensed history technique is known to be problematic owing to a potential dependence of absorbed dose distributions on the electron step length. In the PENELOPE code, the step length is partially determined by the transport parameters C1 and C2. The authors have investigated the effect of the values given to these parameters in the target on the absorbed dose distribution. Methods: A monoenergetic 6.26 MeV electron pencil beam from a point source was simulated impinging normally on a cylindrical tungsten target. Electrons leaving the tungsten were discarded. Radial absorbed dose profiles were obtained at a depth of 1.5 cm in a water phantom located at 100 cm for values of C1 and C2 in the target both equal to 0.1, 0.01, or 0.001. A detailed simulation case was also considered and taken as the reference. Additionally, lateral dose profiles were estimated and compared with experimental measurements for a 6 MV photon beam of a Varian Clinac 2100 for the cases of C1 and C2 both set to 0.1 or 0.001 in the target. Results: On the central axis, the dose obtained for the case C1 = C2 = 0.1 shows a deviation of (17.2% ± 1.2%) with respect to the detailed simulation. This difference decreases to (3.7% ± 1.2%) for the case C1 = C2 = 0.01. The case C1 = C2 = 0.001 produces a radial dose profile that is equivalent to that of the detailed simulation within the reached statistical uncertainty of 1%. The effect is also appreciable in the crossline dose profiles estimated for the realistic geometry of the Linac. In another simulation, it was shown that the error made by choosing inappropriate transport parameters can be masked by tuning the energy and focal spot size of the initial beam. Conclusions: The use of large path lengths for the condensed simulation of electrons in a Linac target with PENELOPE leads to deviations of the dose in the patient or phantom. Based on the results obtained in this work, values of C1 and C2 larger than 0.001 should not be used in Linac targets without further investigation.
7. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy
Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2014-10-01
Monte Carlo (MC) simulation is commonly considered as the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on a concept of phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL where it was from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam on which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of dmax dose for those open fields tested was improved averagely from 70.56 to 99.36% for 2%/2 mm criteria and from 32.22 to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73 to 99.70% and from 82.16 to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method and a similar level of accuracy was achieved.
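The commissioning step described above is, at its core, a regularized non-negative least-squares fit of the PSL weights. The sketch below solves a toy instance with a generic projected-gradient iteration rather than the augmented-Lagrangian scheme of the paper; the matrix sizes and data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_psl = 200, 30
D = rng.random((n_vox, n_psl))          # column j: precomputed dose of PSL j
w_true = rng.random(n_psl)
d = D @ w_true + 0.01 * rng.normal(size=n_vox)   # "measured" dose (toy)

lam = 0.1
L = np.eye(n_psl) - np.eye(n_psl, k=1)  # first-difference smoothness penalty
A = D.T @ D + lam * L.T @ L             # normal equations of the fit
b = D.T @ d

# Projected gradient descent keeps the weights non-negative.
w = np.full(n_psl, 0.5)
step = 1.0 / np.linalg.norm(A, 2)
for _ in range(5000):
    w = np.maximum(w - step * (A @ w - b), 0.0)

print(np.linalg.norm(D @ w - d) / np.linalg.norm(d))   # relative dose misfit
```

Because the per-PSL doses are precomputed, the whole fit reduces to linear algebra on small matrices, which is what makes this style of commissioning cheap to repeat per machine.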
8. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy.
PubMed
Tian, Zhen; Graves, Yan Jiang; Jia, Xun; Jiang, Steve B
2014-11-01
Monte Carlo (MC) simulation is commonly considered as the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on a concept of phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL where it was from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam on which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of dmax dose for those open fields tested was improved averagely from 70.56 to 99.36% for 2%/2 mm criteria and from 32.22 to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73 to 99.70% and from 82.16 to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method and a similar level of accuracy was achieved. PMID:25295381
9. Monte Carlo based method for conversion of in-situ gamma ray spectra obtained with a portable Ge detector to an incident photon flux energy distribution.
PubMed
Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J
1998-02-01
A Monte Carlo based method for the conversion of an in-situ gamma-ray spectrum obtained with a portable Ge detector to a photon flux energy distribution is proposed. The spectrum is first stripped of the partial-absorption and cosmic-ray events, leaving only the events corresponding to the full absorption of a gamma ray. Applying to the resulting spectrum the full-absorption efficiency curve of the detector, determined from calibrated point sources and Monte Carlo simulations, the photon flux energy distribution is deduced. The events corresponding to partial absorption in the detector are determined by Monte Carlo simulations for different incident photon energies and angles using CERN's GEANT library. Using the detector's characteristics given by the manufacturer as input, it was impossible to reproduce experimental spectra obtained with point sources. A transition zone of increasing charge collection efficiency had to be introduced in the simulation geometry, after the inactive Ge layer, in order to obtain good agreement between the simulated and experimental spectra. The functional form of the charge collection efficiency is deduced from a diffusion model. PMID:9450590
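Numerically, the final conversion step described above is a division of the stripped full-absorption spectrum by the full-absorption efficiency curve (plus live time and detector area, to turn counts into a flux). A toy sketch, in which the line energies, counts, efficiencies and detector area are invented placeholders rather than the paper's calibration data:

import numpy as np

energies = np.array([186.0, 609.3, 1120.3, 1460.8, 2614.5])       # keV
counts_full_abs = np.array([1200.0, 950.0, 400.0, 800.0, 150.0])  # stripped spectrum
eff = np.array([0.012, 0.006, 0.004, 0.0033, 0.002])  # full-absorption efficiency (assumed)
live_time = 3600.0   # s
area = 20.0e-4       # m^2, effective detector cross-section (assumed)

flux = counts_full_abs / (eff * live_time * area)     # photons / (m^2 s)
for E, f in zip(energies, flux):
    print(f"{E:7.1f} keV : {f:10.1f} photons / (m^2 s)")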
10. Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations
Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.
2014-06-01
Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate their accuracy in the presence of numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.
11. Pre-conditioned backward Monte Carlo solutions to radiative transport in planetary atmospheres. Fundamentals: Sampling of propagation directions in polarising media
García Muñoz, A.; Mills, F. P.
2015-01-01
Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. Like classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. By construction, backward integration provides a better control than forward integration over the planet region contributing to the solution, and this presents a clear advantage when estimating the disk-integrated signal at moderate and large phase angles. A one-slab, plane-parallel version of the PBMC algorithm is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A72
12. The lower timing resolution bound for scintillators with non-negligible optical photon transport time in time-of-flight PET.
PubMed
Vinke, Ruud; Olcott, Peter D; Cates, Joshua W; Levin, Craig S
2014-10-21
In this work, a method is presented that can calculate the lower bound of the timing resolution for large scintillation crystals with non-negligible photon transport. The timing resolution bound can thereby be calculated directly from Monte Carlo generated arrival times of the scintillation photons. This method extends timing resolution bound calculations based on analytical equations, as crystal geometries can be evaluated that do not have closed-form solutions for the arrival time distributions. The timing resolution bounds are calculated for an exemplary 3 mm × 3 mm × 20 mm LYSO crystal geometry, with scintillation centers exponentially spread along the crystal length as well as with scintillation centers at fixed distances from the photosensor. Pulse shape simulations further show that analog photosensors intrinsically operate near the timing resolution bound, which can be attributed to the finite single-photoelectron pulse rise time. PMID:25255807
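For context, the analytic limit this method generalizes is the Cramér-Rao bound: for N detected photons whose arrival-time density is f(t - θ), no unbiased estimate of θ has variance below 1/(N·I), where I is the Fisher information of f. The sketch below evaluates that bound numerically for an assumed bi-exponential scintillation pulse smeared by Gaussian photosensor jitter; the pulse parameters and photon count are illustrative, not the paper's Monte Carlo data.

import numpy as np

tau_r, tau_d, sigma_tts = 0.09, 40.0, 0.07       # ns; LYSO-like values (assumed)
dt = 0.002
t = np.arange(0.0, 200.0, dt)                    # ns
pulse = np.exp(-t / tau_d) - np.exp(-t / tau_r)  # emission-time density (unnormalized)

kt = np.arange(-5 * sigma_tts, 5 * sigma_tts + dt, dt)
kernel = np.exp(-0.5 * (kt / sigma_tts) ** 2)    # photosensor transit-time spread
kernel /= kernel.sum()

f = np.convolve(pulse, kernel, mode="same")      # detected arrival-time density
f = np.clip(f, 1e-30, None)
f /= np.trapz(f, t)

fisher = np.trapz(np.gradient(f, t) ** 2 / f, t)   # Fisher information per photon

n_photons = 4000                                 # detected photons (assumed)
sigma = 1.0 / np.sqrt(n_photons * fisher)        # Cramer-Rao lower bound
print(f"timing bound: {1e3 * sigma:.1f} ps sigma, {2.355 * 1e3 * sigma:.1f} ps FWHM")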
13. Experimental verification of a commercial Monte Carlo-based dose calculation module for high-energy photon beams.
PubMed
Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar
2009-12-21
The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogenous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near tissue equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% ± 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 × 10 cm² field at the first density interface from tissue to lung equivalent material. Small fields (2 × 2 cm²) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the heterogeneous phantom. For the clinical test cases, the average dose discrepancy was 0.5% ± 1.1%. Relative dose investigations of the transverse plane for clinical beam arrangements were performed with a 2D γ-evaluation procedure. For 3% dose difference and 3 mm DTA criteria, the average value for γ>1 was 4.7% ± 3.7%, the average γ1% value was 1.19 ± 0.16 and the mean 2D γ-value was 0.44 ± 0.07 in the heterogeneous phantom. The iPlan MC algorithm leads to accurate dosimetric results under clinical test conditions. PMID:19934489
14. Experimental verification of a commercial Monte Carlo-based dose calculation module for high-energy photon beams
Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar
2009-12-01
The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogenous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near tissue equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% ± 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 × 10 cm2 field at the first density interface from tissue to lung equivalent material. Small fields (2 × 2 cm2) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the heterogeneous phantom. For the clinical test cases, the average dose discrepancy was 0.5% ± 1.1%. Relative dose investigations of the transverse plane for clinical beam arrangements were performed with a 2D γ-evaluation procedure. For 3% dose difference and 3 mm DTA criteria, the average value for γ>1 was 4.7% ± 3.7%, the average γ1% value was 1.19 ± 0.16 and the mean 2D γ-value was 0.44 ± 0.07 in the heterogeneous phantom. The iPlan MC algorithm leads to accurate dosimetric results under clinical test conditions.
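Both versions of this entry quote γ-index pass rates; for reference, here is a minimal generic 1D implementation of the test (dose difference and distance-to-agreement combined in quadrature; a point passes when gamma <= 1). The profiles are synthetic, and this is a textbook sketch, not the analysis software used in the study.

import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
    """Gamma for each reference point; dd is fractional, dta in mm."""
    d_norm = dd * d_ref.max()                    # global dose criterion
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        cap = ((x_eval - xr) / dta) ** 2 + ((d_eval - dr) / d_norm) ** 2
        gammas[i] = np.sqrt(cap.min())           # minimize over evaluated points
    return gammas

x = np.linspace(0.0, 100.0, 501)                 # mm
measured = np.exp(-((x - 50.0) / 15.0) ** 2)
calculated = 1.01 * np.exp(-((x - 50.5) / 15.0) ** 2)   # small shift and scaling
g = gamma_1d(x, measured, x, calculated)
print(f"pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")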
15. MCNP: Photon benchmark problems
SciTech Connect
Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.
1991-09-01
The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically. Results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems.
16. One-dimensional hopping transport in disordered organic solids. II. Monte Carlo simulations
Kohary, K.; Cordes, H.; Baranovskii, S. D.; Thomas, P.; Yamasaki, S.; Hensel, F.; Wendorff, J.-H.
2001-03-01
Drift mobility of charge carriers in strongly anisotropic disordered organic media is studied by Monte Carlo computer simulations. Results for nearest-neighbor hopping are in excellent agreement with those of the analytic theory (Cordes et al., preceding paper). It is widely believed that the low-field drift mobility in disordered organic solids has the form μ ~ exp[-(T0/T)²], with the characteristic temperature T0 depending solely on the scale of the energy distribution of localized states responsible for transport. Taking into account electron transitions to sites more distant than the nearest neighbors, we show that this dependence is not universal: the parameter T0 also depends on the concentration of localized states and on the decay length of the electron wave function in localized states. The computer simulations show that correlations in the distribution of localized states essentially influence not only the field dependence, as known from the literature, but also the temperature dependence of the drift mobility. In particular, strong space-energy correlations diminish the role of long-range hopping transitions in the charge carrier transport.
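A toy version of such a hopping simulation is short: sites with Gaussian-distributed energies, Miller-Abrahams-style rates, and a weak field tilting the landscape; the drift velocity then yields a mobility whose temperature dependence can be compared with the μ ~ exp[-(T0/T)²] form. This sketch keeps only 1D nearest-neighbor hops, with invented parameters, and is far simpler than the simulations in the paper.

import numpy as np

rng = np.random.default_rng(1)

def drift_velocity(T, n_sites=2000, n_hops=50_000, sigma=0.1, F=1e-3, nu0=1.0):
    """sigma: width of the Gaussian site-energy distribution (eV);
    F: energy drop per site due to the field (eV); nu0: attempt rate."""
    kT = 8.617e-5 * T                             # eV
    E = rng.normal(0.0, sigma, n_sites)           # random site energies
    pos, t = 0, 0.0
    for _ in range(n_hops):
        i = pos % n_sites
        dE_r = E[(i + 1) % n_sites] - E[i] - F    # hop along the field
        dE_l = E[(i - 1) % n_sites] - E[i] + F    # hop against it
        w_r = nu0 * np.exp(-max(dE_r, 0.0) / kT)  # Miller-Abrahams style rates
        w_l = nu0 * np.exp(-max(dE_l, 0.0) / kT)
        t += rng.exponential(1.0 / (w_r + w_l))   # KMC residence time
        pos += 1 if rng.random() < w_r / (w_r + w_l) else -1
    return pos / t

for T in (150, 200, 250, 300):
    print(f"T = {T:3d} K: mobility (arb. units) = {drift_velocity(T) / 1e-3:.3e}")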
17. A new Monte Carlo program for simulating light transport through Port Wine Stain skin.
PubMed
Lister, T; Wright, P A; Chappell, P H
2014-05-01
A new Monte Carlo program is presented for simulating light transport through clinically normal skin and skin containing Port Wine Stain (PWS) vessels. The program consists of an eight-layer mathematical skin model constructed from optical coefficients described previously. A simulation including diffuse illumination at the surface and subsequent light transport through the model is carried out using a radiative transfer theory ray-tracing technique. Total reflectance values over 39 wavelengths are scored by the addition of simulated light returning to the surface within a specified region and surface reflections (calculated using Fresnel's equations). These reflectance values are compared to measurements from individual participants, and characteristics of the model are adjusted until adequate agreement is produced between simulated and measured skin reflectance curves. The absorption and scattering coefficients of the epidermis are adjusted through changes in the simulated concentrations and mean diameters of epidermal melanosomes to reproduce non-lesional skin colour. Pseudo-cylindrical horizontal vessels are added to the skin model, and their simulated mean depths, diameters and number densities are adjusted to reproduce measured PWS skin colour. Accurate reproductions of colour measurement data are produced by the program, resulting in realistic predictions of melanin and PWS blood vessel parameters. Using a modest personal computer, the simulation currently requires an average of five and a half days to complete. PMID:24142045
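One concrete ingredient named above, the surface reflection computed with Fresnel's equations, fits in a few lines. This sketch evaluates the standard unpolarized Fresnel reflectance at a smooth boundary; the skin refractive index of 1.4 is a typical literature value, assumed here rather than taken from the paper.

import numpy as np

def fresnel_unpolarized(n1, n2, theta_i):
    """Reflectance for unpolarized light at a smooth boundary (angles in rad)."""
    sin_t = n1 * np.sin(theta_i) / n2
    if sin_t >= 1.0:
        return 1.0                                # total internal reflection
    theta_t = np.arcsin(sin_t)
    rs = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
    rp = (n1 * np.cos(theta_t) - n2 * np.cos(theta_i)) / (n1 * np.cos(theta_t) + n2 * np.cos(theta_i))
    return 0.5 * (rs ** 2 + rp ** 2)

print(f"normal incidence, air to skin: {fresnel_unpolarized(1.0, 1.4, 0.0):.4f}")
print(f"60 degrees,       air to skin: {fresnel_unpolarized(1.0, 1.4, np.radians(60)):.4f}")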
18. Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
SciTech Connect
Mei, S.; Knezevic, I.; Maurer, L. N.; Aksamija, Z.
2014-10-28
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/(m K). This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
19. Oxygen transport properties estimation by classical trajectory–direct simulation Monte Carlo
SciTech Connect
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
2015-05-15
Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort outlined in early applications of the method can be compensated by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing the computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES) obtained from molecular-beam scattering. The PES accuracy is assessed by calculating molecular oxygen transport properties by different equilibrium and non-equilibrium CT-DSMC based simulations that provide close values of the transport properties. Comparisons with available experimental data are presented and discussed in the temperature range 300–900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.
20. Experimental validation of a coupled neutron-photon inverse radiation transport solver
Mattingly, John; Mitchell, Dean J.; Harding, Lee T.
2011-10-01
Sandia National Laboratories has developed an inverse radiation transport solver that applies nonlinear regression to coupled neutron-photon deterministic transport models. The inverse solver uses nonlinear regression to fit a radiation transport model to gamma spectrometry and neutron multiplicity counting measurements. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5 kg sphere of α-phase, weapons-grade plutonium. The source was measured bare and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses between 1.27 and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to evaluate the solver's ability to correctly infer the configuration of the source from its measured radiation signatures.
1. Experimental validation of a coupled neutron-photon inverse radiation transport solver
SciTech Connect
Mattingly, John K.; Mitchell, Dean James; Harding, Lee T.
2010-08-01
Sandia National Laboratories has developed an inverse radiation transport solver that applies nonlinear regression to coupled neutron-photon deterministic transport models. The inverse solver uses nonlinear regression to fit a radiation transport model to gamma spectrometry and neutron multiplicity counting measurements. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5 kg sphere of α-phase, weapons-grade plutonium. The source was measured bare and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses between 1.27 and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to evaluate the solver's ability to correctly infer the configuration of the source from its measured radiation signatures.
2. Suppression of population transport and control of exciton distributions by entangled photons
Schlawin, Frank; Dorfman, Konstantin E.; Fingerhut, Benjamin P.; Mukamel, Shaul
2013-04-01
Entangled photons provide an important tool for secure quantum communication, computing and lithography. Low intensity requirements for multi-photon processes make them ideally suited for minimizing damage in imaging applications. Here we show how their unique temporal and spectral features may be used in nonlinear spectroscopy to reveal properties of multiexcitons in chromophore aggregates. Simulations demonstrate that they provide unique control tools for two-exciton states in the bacterial reaction centre of Blastochloris viridis. Population transport in the intermediate single-exciton manifold may be suppressed by the absorption of photon pairs with short entanglement time, thus allowing the manipulation of the distribution of two-exciton states. The quantum nature of the light is essential for achieving this degree of control, which cannot be reproduced by stochastic or chirped light. Classical light is fundamentally limited by the frequency-time uncertainty, whereas entangled photons have independent temporal and spectral characteristics not subjected to this uncertainty.
3. Development of a photon-cell interactive monte carlo simulation for non-invasive measurement of blood glucose level by Raman spectroscopy.
PubMed
Sakota, Daisuke; Kosaka, Ryo; Nishida, Masahiro; Maruyama, Osamu
2015-08-01
Turbidity variation is one of the major limitations in Raman spectroscopy for quantifying blood components, such as glucose, non-invasively. To overcome this limitation, we have developed a Raman scattering simulation using a photon-cell interactive Monte Carlo (pciMC) model that tracks photon migration in both the extra- and intracellular spaces without relying on the macroscopic scattering phase function and anisotropy factor. The interaction of photons at the plasma-cell boundary of randomly oriented three-dimensionally biconcave red blood cells (RBCs) is modeled using geometric optics. The validity of the developed pciMCRaman was investigated by comparing simulation and experimental results of Raman spectroscopy of glucose level in a bovine blood sample. The scattering of the excitation laser at a wavelength of 785 nm was simulated considering the changes in the refractive index of the extracellular solution. Based on the excitation laser photon distribution within the blood, the Raman photon derived from the hemoglobin and glucose molecule at the Raman shift of 1140 cm⁻¹ (862 nm) was generated, and the photons reaching the detection area were counted. The simulation and experimental results showed good correlation. It is speculated that pciMCRaman can provide information about the ability and limitations of the measurement of blood glucose level. PMID:26737759
4. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams
Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei
2015-06-01
The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends very strongly on the accuracy of the accompanying high-energy photon dose. During the dose derivation, an important issue is to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among them, together with carefully measured values, for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Energy-dependent response functions of the two chambers were also calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons, using both an optimal simple spherical model and a detailed IC model. The measurements were performed in well-defined fields: (a) the four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) a primary ⁶⁰Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron hospital LINACs, and (e) a BNCT clinical-trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed a lower response than the other codes in the photon energy region below 0.1 MeV and a similar response above 0.2 MeV (agreement within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the ⁶⁰Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams; for the Mg(Ar) chamber, however, the deviations reached 7.8–16.5% for X-ray beams below 120 kVp. In this study we were especially interested in BNCT doses, where the low-energy photon contribution cannot be ignored; the MCNP model is recognized as the most suitable for simulating the responses of the paired ICs over widely distributed photon, electron and neutron energies. MCNP also provides the best prediction for BNCT source adjustment from the detector's neutron and photon responses.
5. Event-by-event Monte Carlo simulation of radiation transport in vapor and liquid water
Papamichael, Georgios Ioannis
A Monte Carlo simulation is presented for radiation transport in water. This process is of utmost importance, having applications in oncology and cancer therapy, in protecting people and the environment, in waste management, in radiation chemistry, and in some solid-state detectors. It is also a phenomenon of interest for microelectronics on satellites in orbit, which are subject to solar radiation, and for spacecraft design for deep-space missions receiving background radiation. The interaction of charged particles with the medium is primarily due to their electromagnetic field. Three types of interaction events are considered: elastic scattering, impact excitation and impact ionization. Secondary particles (electrons) can be generated by ionization. At each stage, along with the primary particle, we explicitly follow all secondary electrons (and subsequent generations). Theoretical, semi-empirical and experimental formulae with suitable corrections have been used in each case to model the cross sections governing the quantum mechanical interaction processes, thus determining stochastically the energy and direction of outgoing particles following an event. Monte Carlo sampling techniques have been applied to accurate probability distribution functions describing the primary particle track and all secondary particle-medium interactions. A simple account of the simulation code and a critical exposition of its underlying assumptions (often missing in the relevant literature) are also presented with reference to the model cross sections. Model predictions are in good agreement with existing computational data and experimental results. By relying heavily on a theoretical formulation, instead of merely fitting data, it is hoped that the model will be of value in a wider range of applications. Possible future directions that are the object of further research are pointed out.
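The inner loop implied by such an event-by-event scheme is: sample a free-flight distance from the total cross section, then choose elastic scattering, impact excitation or impact ionization with probability proportional to the partial cross sections. A minimal sketch of that bookkeeping follows; the cross-section values are rough placeholders, not the thesis' models.

import numpy as np

rng = np.random.default_rng(2)

n_density = 3.34e22       # water molecules per cm^3
sigma = {"elastic": 1.2e-16, "excitation": 0.3e-16, "ionization": 0.9e-16}  # cm^2 (placeholders)
sigma_tot = sum(sigma.values())
mfp = 1.0 / (n_density * sigma_tot)       # mean free path, cm

counts = {k: 0 for k in sigma}
total_path = 0.0
for _ in range(100_000):
    total_path += rng.exponential(mfp)    # free flight to the next interaction
    u = rng.random() * sigma_tot          # pick the event type
    acc = 0.0
    for kind, s in sigma.items():
        acc += s
        if u <= acc:
            counts[kind] += 1
            break

print(f"mean free path: {mfp:.3e} cm, total track length: {total_path:.2f} cm")
print({k: v / 100_000 for k, v in counts.items()})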
6. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.
7. A Monte Carlo evaluation of dose enhancement by cisplatin and titanocene dichloride chemotherapy drugs in brachytherapy with photon emitting sources.
PubMed
Yahya Abadi, Akram; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Knaup, Courtney
2014-06-01
Some chemotherapy drugs contain a high-Z element in their structure that can be used for tumour dose enhancement in radiotherapy. In the present study, dose enhancement factors (DEFs) for cisplatin and titanocene dichloride agents in brachytherapy were quantified based on Monte Carlo simulation. Six photon-emitting brachytherapy sources were simulated, and their dose rate constant and radial dose function were determined and compared with published data. The dose enhancement factor was obtained for 1, 3 and 5% concentrations of cisplatin and titanocene dichloride chemotherapy agents in a tumour, in a soft tissue phantom. The results for the dose rate constant and radial dose function showed good agreement with published data. Our results have shown that, depending on the type of chemotherapy agent and brachytherapy source, DEF increases with increasing chemotherapy drug concentration. The maximum in-tumour averaged DEFs for cisplatin and titanocene dichloride are 4.13 and 1.48, respectively, reached with 5% concentrations of the agents and a ¹²⁵I source. The dose enhancement factor is considerably higher for both chemotherapy agents with ¹²⁵I, ¹⁰³Pd and ¹⁶⁹Yb sources, compared to ¹⁹²Ir, ¹⁹⁸Au and ⁶⁰Co sources. At similar concentrations, dose enhancement for cisplatin is higher compared with titanocene dichloride. Based on the results of this study, combining brachytherapy and chemotherapy with agents containing a high-Z element resulted in a higher radiation dose to the tumour. Therefore, concurrent use of chemotherapy and brachytherapy with high atomic number drugs can have the potential benefit of dose enhancement. However, more preclinical evaluations in this area are necessary before clinical application of this method. PMID:24706342
8. Monte Carlo Neutrino Transport through Remnant Disks from Neutron Star Mergers
Richers, Sherwood; Kasen, Daniel; O'Connor, Evan; Fernández, Rodrigo; Ott, Christian D.
2015-11-01
We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the cases of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45° from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentially leading to a stronger neutrino-driven wind. Neutrino cooling in the dense midplane of the disk is stronger when using MC transport, leading to a globally higher cooling rate by a factor of a few and a larger leptonization rate by an order of magnitude. We calculate neutrino pair annihilation rates and estimate that an energy of 2.8 × 10⁴⁶ erg is deposited within 45° of the symmetry axis over 300 ms when a central BH is present. Similarly, 1.9 × 10⁴⁸ erg is deposited over 3 s when an HMNS sits at the center, but neither estimate is likely to be sufficient to drive a gamma-ray burst jet.
9. Anisotropy collision effect on ion transport in cold gas discharges with Monte Carlo simulation
SciTech Connect
1995-12-31
Ion-molecule collision cross sections and transport and reaction coefficients are among the basic data needed for discharge modelling of non-thermal cold plasmas. In the literature, numerous methods are devoted to the experimental and theoretical determination of these basic data. However, data on ion-molecule collision cross sections are very sparse and, in certain cases, practically nonexistent for the low and intermediate ion energy range. The aim of this communication is therefore to give, for two ions in their parent gases (N₂⁺/N₂ and O₂⁺/O₂), the set of collision cross sections involving momentum transfer, symmetric charge transfer and also inelastic (vibration and ionisation) cross sections. The differential collision cross section is also given in order to take into account the strong anisotropy of elastic collisions of ions, which are scattered mainly in the forward direction in the intermediate energy range. The differential cross sections are fully calculated from a polarization interaction potential in the low energy range, and from Lennard-Jones potentials for N₂⁺/N₂ and a modified form for O₂⁺/O₂ at higher energies; then, using a swarm unfolding technique, they are fitted until the best agreement is obtained between the transport and reaction coefficients measured in classical swarm experiments and those calculated from a Monte Carlo simulation of ion transport over a large range of reduced electric field E/N.
10. Design of a hybrid computational fluid dynamics-monte carlo radiation transport methodology for radioactive particulate resuspension studies.
PubMed
Ali, Fawaz; Waller, Ed
2014-10-01
11. PENGEOM-A general-purpose geometry package for Monte Carlo simulation of radiation transport in material systems defined by quadric surfaces
Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc
2016-02-01
The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
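The kernel of any quadric tracker is the ray-surface intersection: substituting the ray o + t·d into the quadric equation x·A·x + 2 b·x + c = 0 gives a quadratic in t. The sketch below shows that generic calculation, tested on a unit sphere; it is textbook geometry, not the PENGEOM Fortran, and it omits PENGEOM's fuzzy-surface treatment of round-off.

import numpy as np

def ray_quadric(o, d, A, b, c):
    """Smallest positive t with (o + t*d) on the quadric, or None."""
    qa = d @ A @ d
    qb = d @ A @ o + b @ d
    qc = o @ A @ o + 2.0 * b @ o + c
    if abs(qa) < 1e-14:               # degenerate: equation is linear in t
        if abs(qb) < 1e-14:
            return None
        t = -qc / (2.0 * qb)
        return t if t > 1e-9 else None
    disc = qb * qb - qa * qc
    if disc < 0.0:
        return None                   # ray misses the quadric
    r = np.sqrt(disc)
    hits = [t for t in ((-qb - r) / qa, (-qb + r) / qa) if t > 1e-9]
    return min(hits) if hits else None

# Unit sphere: x^2 + y^2 + z^2 - 1 = 0, i.e. A = I, b = 0, c = -1
A, b, c = np.eye(3), np.zeros(3), -1.0
o, d = np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])
print("hit at t =", ray_quadric(o, d, A, b, c))   # expect 2.0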
12. SU-E-T-142: Effect of the Bone Heterogeneity On the Unflattened and Flattened Photon Beam Dosimetry: A Monte Carlo Comparison
SciTech Connect
Chow, J; Owrangi, A
2014-06-01
Purpose: This study compared the dependence of depth dose on bone heterogeneity of unflattened photon beams to that of flattened beams. Monte Carlo simulations (the EGSnrc-based codes) were used to calculate depth doses in phantom with a bone layer in the buildup region of the 6 MV photon beams. Methods: Heterogeneous phantom containing a bone layer of 2 cm thick at a depth of 1 cm in water was irradiated by the unflattened and flattened 6 MV photon beams (field size = 10 × 10 cm²). Phase-space files of the photon beams based on the Varian TrueBeam linac were generated by the Geant4 and BEAMnrc codes, and verified by measurements. Depth doses were calculated using the DOSXYZnrc code with beam angles set to 0° and 30°. For dosimetric comparison, the above simulations were repeated in a water phantom using the same beam geometry with the bone layer replaced by water. Results: Our results showed that the beam output of unflattened photon beams was about 2.1 times larger than the flattened beams in water. Comparing the water phantom to the bone phantom, larger doses were found in water above and below the bone layer for both the unflattened and flattened photon beams. When both beams were turned 30°, the deviation of depth dose between the bone and water phantom became larger compared to that with beam angle equal to 0°. Dose ratio of the unflattened and flattened photon beams showed that the unflattened beam has larger depth dose in the buildup region compared to the flattened beam. Conclusion: Although the unflattened photon beam had different beam output and quality compared to the flattened, dose enhancements due to the bone scatter were found similar. However, we discovered that depth dose deviation due to the presence of bone was sensitive to the beam obliquity.
13. Dosimetric advantage of using 6 MV over 15 MV photons in conformal therapy of lung cancer: Monte Carlo studies in patient geometries.
PubMed
Wang, Lu; Yorke, Ellen; Desobry, Gregory; Chui, Chen-Shou
2002-01-01
Many lung cancer patients who undergo radiation therapy are treated with higher energy photons (15-18 MV) to obtain deeper penetration and better dose uniformity. However, the longer range of the higher energy recoil electrons in the low-density medium may cause lateral electronic disequilibrium and degrade the target coverage. To compare the dose homogeneity achieved with lower versus higher energy photon beams, we performed a dosimetric study of 6 and 15 MV three-dimensional (3D) conformal treatment plans for lung cancer using an accurate, patient-specific dose-calculation method based on a Monte Carlo technique. A 6 and 15 MV 3D conformal treatment plan was generated for each of two patients with target volumes exceeding 200 cm³ on an in-house treatment planning system in routine clinical use. Each plan employed four conformally shaped photon beams. Each dose distribution was recalculated with the Monte Carlo method, utilizing the same beam geometry and patient-specific computed tomography (CT) images. Treatment plans using the two energies were compared in terms of their isodose distributions and dose-volume histograms (DVHs). The 15 MV dose distributions and DVHs generated by the clinical treatment planning calculations were as good as, or slightly better than, those generated for 6 MV beams. However, the Monte Carlo dose calculation predicted increased penumbra width with increased photon energy resulting in decreased lateral dose homogeneity for the 15 MV plans. Monte Carlo calculations showed that all target coverage indicators were significantly worse for 15 MV than for 6 MV; particularly the portion of the planning target volume (PTV) receiving at least 95% of the prescription dose (V95) dropped dramatically for the 15 MV plan in comparison to the 6 MV. Spinal cord and lung doses were clinically equivalent for the two energies. In treatment planning of tumors that abut lung tissue, lower energy (6 MV) photon beams should be preferred over higher energies (15-18 MV) because of the significant loss of lateral dose equilibrium for high-energy beams in the low-density medium. Any gains in radial dose uniformity across steep density gradients for higher energy beams must be weighed carefully against the lateral beam degradation due to penumbra widening. PMID:11818004
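Coverage indicators such as V95 come straight from the cumulative dose-volume histogram: for each dose level, the fraction of PTV voxels receiving at least that dose. A minimal sketch with an invented dose array (the bookkeeping only, not the planning-system code):

import numpy as np

rng = np.random.default_rng(3)
prescription = 60.0                                # Gy
ptv_dose = rng.normal(60.0, 2.5, size=20_000)      # per-voxel PTV doses (made up)

bins = np.linspace(0.0, 75.0, 301)
dvh = np.array([(ptv_dose >= b).mean() for b in bins])  # cumulative DVH

v95 = (ptv_dose >= 0.95 * prescription).mean()
d95 = bins[dvh >= 0.95][-1]        # highest dose level still covering 95% of the PTV
print(f"V95 = {100 * v95:.1f}% of the PTV receives >= 95% of the prescription")
print(f"D95 = {d95:.2f} Gy, median dose = {np.median(ptv_dose):.2f} Gy")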
14. penMesh--Monte Carlo radiation transport simulation in a triangle mesh geometry.
PubMed
2009-12-01
We have developed a general-purpose Monte Carlo simulation code, called penMesh, that combines the accuracy of the radiation transport physics subroutines from PENELOPE and the flexibility of a geometry based on triangle meshes. While the geometric models implemented in most general-purpose codes, such as PENELOPE's quadric geometry, impose some limitations on the shape of the objects that can be simulated, triangle meshes can be used to describe any free-form (arbitrary) object. Triangle meshes are extensively used in computer-aided design and computer graphics. We took advantage of the sophisticated tools already developed in these fields, such as an octree structure and an efficient ray-triangle intersection algorithm, to significantly accelerate the triangle mesh ray-tracing. A detailed description of the new simulation code and its ray-tracing algorithm is provided in this paper. Furthermore, we show how it can be readily used in medical imaging applications thanks to the detailed anatomical phantoms already available. In particular, we present a whole body radiography simulation using a triangulated version of the anthropomorphic NCAT phantom. An example simulation of scatter fraction measurements using a standardized abdomen and lumbar spine phantom, and a benchmark of the triangle mesh and quadric geometries in the ray-tracing of a mathematical breast model, are also presented to show some of the capabilities of penMesh. PMID:19435677
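The efficient ray-triangle intersection mentioned above is conventionally the Möller-Trumbore test; a compact generic version follows (the standard published algorithm, not the penMesh source).

import numpy as np

def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Distance t to the hit point along dirn, or None if the ray misses."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(dirn, e2)
    det = e1 @ p
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = orig - v0
    u = (s @ p) * inv                  # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (dirn @ q) * inv               # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None

tri = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
print(ray_triangle(np.zeros(3), np.array([0.1, 0.1, 1.0]), *tri))   # expect ~1.0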
15. Robust volume calculations for Constructive Solid Geometry (CSG) components in Monte Carlo transport calculations
SciTech Connect
Millman, D. L.; Griesheimer, D. P.; Nease, B. R.; Snoeyink, J.
2012-07-01
In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities.
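The decomposition idea reads naturally as a recursion: bisect a bounding box, resolve boxes that are entirely inside or outside the component, and fall back to sampling once a box is smaller than the tolerance. The sketch below shows that control flow for an invented sphere-minus-cube component; note that it classifies boxes with a point-probe heuristic, so unlike the paper's algorithm it is not guaranteed robust.

import numpy as np

rng = np.random.default_rng(4)

def inside(p):                          # CSG component: unit sphere minus centered cube
    return bool(p @ p <= 1.0 and not np.all(np.abs(p) <= 0.5))

def volume(lo, hi, tol):
    size = hi - lo
    vol = float(np.prod(size))
    probes = [lo + size * np.array(k, dtype=float) for k in
              [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0),
               (1, 0, 1), (0, 1, 1), (1, 1, 1), (0.5, 0.5, 0.5)]]
    flags = [inside(p) for p in probes]
    if size.max() <= tol:               # small box: cheap stochastic estimate
        hits = sum(inside(lo + size * rng.random(3)) for _ in range(32))
        return vol * hits / 32.0
    if all(flags):                      # probe heuristic: treat as fully inside
        return vol
    if not any(flags):                  # probe heuristic: treat as fully outside
        return 0.0
    ax = int(np.argmax(size))           # mixed box: bisect the longest axis
    mid = 0.5 * (lo[ax] + hi[ax])
    hi1, lo2 = hi.copy(), lo.copy()
    hi1[ax], lo2[ax] = mid, mid
    return volume(lo, hi1, tol) + volume(lo2, hi, tol)

v = volume(np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0]), tol=0.125)
print(f"estimated volume {v:.3f} vs analytic {4.0 / 3.0 * np.pi - 1.0:.3f}")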
16. Monte Carlo simulation of radiation transport in human skin with rigorous treatment of curved tissue boundaries
Majaron, Boris; Milanič, Matija; Premru, Jan
2015-01-01
In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries.
17. Proton transport in water and DNA components: A Geant4 Monte Carlo simulation
Champion, C.; Incerti, S.; Tran, H. N.; Karamitros, M.; Shin, J. I.; Lee, S. B.; Lekadir, H.; Bernal, M.; Francis, Z.; Ivanchenko, V.; Fojón, O. A.; Hanssen, J.; Rivarola, R. D.
2013-07-01
Accurate modeling of DNA damages resulting from ionizing radiation remains a challenge of today's radiobiology research. An original set of physics processes has been recently developed for modeling the detailed transport of protons and neutral hydrogen atoms in liquid water and in DNA nucleobases using the Geant4-DNA extension of the open source Geant4 Monte Carlo simulation toolkit. The theoretical cross sections as well as the mean energy transfers during the different ionizing processes were taken from recent works based on classical as well as quantum mechanical predictions. Furthermore, in order to compare energy deposition patterns in liquid water and DNA material, we here propose a simplified cellular nucleus model made of spherical voxels, each containing randomly oriented nanometer-size cylindrical targets filled with either liquid water or DNA material (DNA nucleobases), both with a density of 1 g/cm³. These cylindrical volumes have dimensions comparable to genetic material units of mammalian cells, namely, 25 nm (diameter) × 25 nm (height) for chromatin fiber segments, 10 nm (d) × 5 nm (h) for nucleosomes and 2 nm (d) × 2 nm (h) for DNA segments. Frequencies of energy deposition in the cylindrical targets are presented and discussed.
18. Monte Carlo simulation of radiation transport in human skin with rigorous treatment of curved tissue boundaries.
PubMed
Majaron, Boris; Milanič, Matija; Premru, Jan
2015-01-01
In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries. PMID:25604544
19. Comparison of the Angular Dependence of Monte Carlo Particle Transport Modeling Software
Chancellor, Jeff; Guetersloh, Stephen
2011-04-01
Modeling nuclear interactions is relevant to cancer radiotherapy, space mission dosimetry and the use of heavy ion research beams. In heavy ion radiotherapy, fragmentation of the primary ions has the unwanted effect of reducing dose localization, contributing to a non-negligible dose outside the volume of tissue being treated. Fragmentation in spaceship walls, hardware and human tissue can lead to large uncertainties in estimates of radiation risk inside the crew habitat. Radiation protection mandates very conservative dose estimations, and reduction of uncertainties is critical to avoid limitations on allowed mission duration and maximize shielding design. Though fragment production as a function of scattering angle has not been well characterized, experimental simulation with Monte Carlo particle transport models have shown good agreement with data obtained from on-axis detectors with large acceptance angles. However, agreement worsens with decreasing acceptance angle, attributable in part to incorrect transverse momentum assumptions in the models. We will show there is an unacceptable angular discrepancy in modeling off-axis fragments produced by inelastic nuclear interaction of the primary ion. The results will be compared to published measurements of 400 MeV/nucleon carbon beams interacting in C, CH2, Al, Cu, Sn, and Pb targets.
20. Comparison of the Angular Dependence of Monte Carlo Particle Transport Modeling Software
Chancellor, Jeff; Guetersloh, Stephen
2011-03-01
Modeling nuclear interactions is relevant to cancer radiotherapy, space mission dosimetry and the use of heavy ion research beams. In heavy ion radiotherapy, fragmentation of the primary ions has the unwanted effect of reducing dose localization, contributing to a non-negligible dose outside the volume of tissue being treated. Fragmentation in spaceship walls, hardware and human tissue can lead to large uncertainties in estimates of radiation risk inside the crew habitat. Radiation protection mandates very conservative dose estimations, and reduction of uncertainties is critical to avoid limitations on allowed mission duration and maximize shielding design. Though fragment production as a function of scattering angle has not been well characterized, experimental simulation with Monte Carlo particle transport models have shown good agreement with data obtained from on-axis detectors with large acceptance angles. However, agreement worsens with decreasing acceptance angle, attributable in part to incorrect transverse momentum assumptions in the models. We will show there is an unacceptable angular discrepancy in modeling off-axis fragments produced by inelastic nuclear interaction of the primary ion. The results will be compared to published measurements of 400 MeV/nucleon carbon beams interacting in C, CH2, Al, Cu, Sn, and Pb targets.
1. Kinetic Monte Carlo (KMC) simulation of fission product silver transport through TRISO fuel particle
de Bellefon, G. M.; Wirth, B. D.
2011-06-01
A mesoscale kinetic Monte Carlo (KMC) model developed to investigate the diffusion of silver through the pyrolytic carbon and silicon carbide containment layers of a TRISO fuel particle is described. The release of radioactive silver from TRISO particles has been studied for nearly three decades, yet the mechanisms governing silver transport are not fully understood. This model atomically resolves Ag, but provides a mesoscale medium of carbon and silicon carbide, which can include a variety of defects including grain boundaries, reflective interfaces, cracks, and radiation-induced cavities that can either accelerate silver diffusion or slow diffusion by acting as traps for silver. The key input parameters to the model (diffusion coefficients, trap binding energies, interface characteristics) are determined from available experimental data, or parametrically varied, until more precise values become available from lower length scale modeling or experiment. The predicted results, in terms of the time/temperature dependence of silver release during post-irradiation annealing and the variability of silver release from particle to particle, have been compared to available experimental data from the German HTR Fuel Program (Gontard and Nabielek [1]) and Minato and co-workers (Minato et al. [2]).
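The trap picture described above maps onto a very small kinetic Monte Carlo model: a walker hops between lattice sites with Arrhenius rates, and trap sites add a binding energy to the escape barrier. This 1D sketch, with invented energies, trap fraction and layer thickness, shows the mechanics (residence-time sampling, temperature dependence) rather than the paper's mesoscale model.

import numpy as np

rng = np.random.default_rng(5)
kB = 8.617e-5                          # eV/K

def transit_time(T, n_sites=100, e_mig=1.0, e_trap=0.5, trap_frac=0.05, nu0=1e13):
    """Random-walk time for one atom to cross the layer (seconds)."""
    is_trap = rng.random(n_sites) < trap_frac
    pos, t = 0, 0.0
    while pos < n_sites:               # walk until the atom exits the layer
        barrier = e_mig + (e_trap if is_trap[pos] else 0.0)
        rate = nu0 * np.exp(-barrier / (kB * T))   # total Arrhenius escape rate
        t += rng.exponential(1.0 / rate)
        pos = max(pos + (1 if rng.random() < 0.5 else -1), 0)  # reflective inner wall
    return t

for T in (1273, 1473, 1673):           # post-irradiation annealing temperatures, K
    times = [transit_time(T) for _ in range(20)]
    print(f"T = {T} K: mean transit time ~ {np.mean(times):.2e} s")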
2. Poster — Thur Eve — 48: Dosimetric dependence on bone backscatter in orthovoltage radiotherapy: A Monte Carlo photon fluence spectral study
SciTech Connect
Chow, J; Grigor, G
2014-08-15
This study investigated dosimetric impact due to the bone backscatter in orthovoltage radiotherapy. Monte Carlo simulations were used to calculate depth doses and photon fluence spectra using the EGSnrc-based code. Inhomogeneous bone phantom containing a thin water layer (1–3 mm) on top of a bone (1 cm) to mimic the treatment sites of forehead, chest wall and kneecap was irradiated by the 220 kVp photon beam produced by the Gulmay D3225 x-ray machine. Percentage depth doses and photon energy spectra were determined using Monte Carlo simulations. Results of percentage depth doses showed that the maximum bone dose was about 210–230% larger than the surface dose in the phantoms with different water thicknesses. Surface dose was found to be increased from 2.3 to 3.5%, when the distance between the phantom surface and bone was increased from 1 to 3 mm. This increase of surface dose on top of a bone was due to the increase of photon fluence intensity, resulting from the bone backscatter in the energy range of 30–120 keV, when the water thickness was increased. This was also supported by the increase of the intensity of the photon energy spectral curves at the phantom and bone surface as the water thickness was increased. It is concluded that if the bone inhomogeneity during the dose prescription in the sites of forehead, chest wall and kneecap with soft tissue thickness = 1–3 mm is not considered, there would be an uncertainty in the dose delivery.
3. Improved Hybrid Monte Carlo/n-Moment Transport Equations Model for the Polar Wind
Barakat, A. R.; Ji, J.; Schunk, R. W.
2013-12-01
In many space plasma problems (e.g. terrestrial polar wind, solar wind, etc.), the plasma gradually evolves from dense collision-dominated into rarefied collisionless conditions. For decades, numerous attempts were made to address this type of problem using simulations based on one of two approaches. These approaches are: (1) the (fluid-like) Generalized Transport Equations, GTE, and (2) the particle-based Monte Carlo (MC) techniques. In contrast to the computationally intensive MC, the GTE approach can be considerably more efficient, but its validity is questionable outside the collision-dominated region, depending on the number of transport parameters considered. There have been several attempts to develop hybrid models that combine the strengths of both approaches. In particular, low-order GTE formulations were applied within the collision-dominated region, while an MC simulation was applied within the collisionless region and in the collisional-to-collisionless transition region. However, attention must be paid to assuring the consistency of the two approaches in the region where they are matched. Contrary to all previous studies, our model pays special attention to the 'matching' issue, and hence eliminates the discontinuities/inaccuracies associated with mismatching. As an example, we applied our technique to the Coulomb-Milne problem because of its relevance to the problem of space plasma flow from high- to low-density regions. We will compare the velocity distribution function and its moments (density, flow velocity, temperature, etc.) from the following models: (1) the pure MC model, (2) our hybrid model, and (3) previously published hybrid models. We will also consider a wide range of the test-to-background mass ratio.
4. Consequences of removing the flattening filter from linear accelerators in generating high dose rate photon beams for clinical applications: A Monte Carlo study verified by measurement
Ishmael Parsai, E.; Pearson, David; Kvale, Thomas
2007-08-01
An Elekta SL-25 medical linear accelerator (Elekta Oncology Systems, Crawley, UK) has been modelled using Monte Carlo simulations with the photon flattening filter removed. It is hypothesized that intensity modulated radiation therapy (IMRT) treatments may be carried out after the removal of this component, despite its criticality to standard treatments. Measurements using a scanning water phantom were also performed after the flattening filter had been removed. Both simulated and measured beam profiles showed that dose on the central axis increased, with the Monte Carlo simulations showing an increase by a factor of 2.35 for 6 MV and 4.18 for 10 MV beams. A further consequence of removing the flattening filter was the softening of the photon energy spectrum, leading to a steeper reduction in dose at depths greater than the depth of maximum dose. A comparison of the points at the field edge showed that dose was reduced at these points by as much as 5.8% for larger fields. In conclusion, the greater photon fluence is expected to result in shorter treatment times, while the reduction in dose outside of the treatment field is strongly suggestive of more accurate dose delivery to the target.
5. Experimental validation of a coupled neutron-photon inverse radiation transport solver.
SciTech Connect
Mattingly, John K.; Harding, Lee; Mitchell, Dean James
2010-03-01
Forward radiation transport is the problem of calculating the radiation field given a description of the radiation source and transport medium. In contrast, inverse transport is the problem of inferring the configuration of the radiation source and transport medium from measurements of the radiation field. As such, the identification and characterization of special nuclear materials (SNM) is a problem of inverse radiation transport, and numerous techniques to solve this problem have been previously developed. The authors have developed a solver based on nonlinear regression applied to deterministic coupled neutron-photon transport calculations. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5-kg sphere of alpha-phase, weapons-grade plutonium. The source was measured in six different configurations: bare, and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses of 1.27, 2.54, 3.81, 7.62, and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to characterize the solver's ability to correctly infer the configuration of the source from its measured signatures.
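As a toy illustration of the regression step only (not the authors' coupled neutron-photon solver), the sketch below infers an attenuation parameter and a source strength from synthetic count rates at three reflector thicknesses, using scipy.optimize.least_squares. The exponential forward model and all numbers are assumptions for illustration.

    import numpy as np
    from scipy.optimize import least_squares

    # Stand-in forward model: count rate behind a reflector of given thickness.
    # A real inverse-transport solver would run a deterministic coupled
    # neutron-photon transport calculation here instead.
    def forward(thickness_cm, mu, source):
        return source * np.exp(-mu * thickness_cm)

    thicknesses = np.array([1.27, 2.54, 3.81])      # HDPE shell thicknesses (cm)
    measured = np.array([6400.0, 4100.0, 2650.0])   # synthetic "measurements"

    def residuals(params):
        mu, source = params
        return forward(thicknesses, mu, source) - measured

    fit = least_squares(residuals, x0=[0.1, 5000.0])
    mu_hat, source_hat = fit.x   # inferred configuration parameters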
6. Dependences of mucosal dose on photon beams in head-and-neck intensity-modulated radiation therapy: a Monte Carlo study
SciTech Connect
Chow, James C.L.; Owrangi, Amir M.
2012-07-01
Dependences of mucosal dose in the oral or nasal cavity on the beam energy, beam angle, multibeam configuration, and mucosal thickness were studied for small photon fields using Monte Carlo simulations (EGSnrc-based code), which were validated by measurements. Cylindrical mucosa phantoms (mucosal thickness = 1, 2, and 3 mm) with and without bone and air inhomogeneities were irradiated by 6- and 18-MV photon beams (field size = 1 × 1 cm²) with gantry angles equal to 0°, 90°, and 180°, and multibeam configurations using 2, 4, and 8 photon beams in different orientations around the phantom. Doses along the central beam axis in the mucosal tissue were calculated. The mucosal surface doses were found to decrease slightly (1% for the 6-MV photon beam and 3% for the 18-MV beam) with an increase of mucosal thickness from 1 to 3 mm when the beam angle was 0°. The variation of mucosal surface dose with thickness became insignificant when the beam angle was changed to 180°, but the dose at the bone-mucosa interface was found to increase (28% for the 6-MV photon beam and 20% for the 18-MV beam) with the mucosal thickness. For different multibeam configurations, the dependence of mucosal dose on thickness became insignificant as the number of photon beams around the mucosal tissue was increased. The mucosal dose in the presence of bone thus varied with the beam energy, beam angle, multibeam configuration and mucosal thickness for a small segmental photon field. These dosimetric variations are important to consider in improving the treatment strategy, so that mucosal complications in head-and-neck intensity-modulated radiation therapy can be minimized.
7. Kinetic Monte Carlo Model of Charge Transport in Hematite (α-Fe2O3)
SciTech Connect
Kerisit, Sebastien N.; Rosso, Kevin M.
2007-09-28
The mobility of electrons injected into iron oxide minerals via abiotic and biotic electron-transfer processes is one of the key factors that control the reductive dissolution of such minerals. Building upon our previous work on the computational modeling of elementary electron transfer reactions in iron oxide minerals using ab initio electronic structure calculations and parameterized molecular dynamics simulations, we have developed and implemented a kinetic Monte Carlo model of charge transport in hematite that integrates previous findings. The model aims to simulate the interplay between electron transfer processes for extended periods of time in lattices of increasing complexity. The electron transfer reactions considered here involve the II/III valence interchange between nearest-neighbor iron atoms via a small polaron hopping mechanism. The temperature dependence and anisotropic behavior of the electrical conductivity as predicted by our model are in good agreement with experimental data on hematite single crystals. In addition, we characterize the effect of electron polaron concentration and that of a range of defects on the electron mobility. Interaction potentials between electron polarons and fixed defects (iron substitution by divalent, tetravalent, and isovalent ions and iron and oxygen vacancies) are determined from atomistic simulations, based on the same model used to derive the electron transfer parameters, and show little deviation from the Coulombic interaction energy. Integration of the interaction potentials in the kinetic Monte Carlo simulations allows the electron polaron diffusion coefficient and density and residence time around defect sites to be determined as a function of polaron concentration in the presence of repulsive and attractive defects. The decrease in diffusion coefficient with polaron concentration follows a logarithmic function up to the highest concentration considered, i.e., ~2% of iron(III) sites, whereas the presence of repulsive defects has a linear effect on the electron polaron diffusion. Attractive defects are found to significantly affect electron polaron diffusion at low polaron to defect ratios due to trapping on nanosecond to microsecond time scales. This work indicates that electrons can diffuse away from the initial site of interfacial electron transfer at a rate that is consistent with measured electrical conductivities but that the presence of certain kinds of defects will severely limit the mobility of donated electrons.
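The diffusion coefficient such simulations report is conventionally extracted from the mean squared displacement of the walkers via the Einstein relation; a generic post-processing sketch follows, in which the array shapes and the fitting window are assumptions rather than details from the paper.

    import numpy as np

    def diffusion_coefficient(positions, times, dim=3):
        # Einstein relation: MSD(t) ~ 2 * dim * D * t at long times.
        # positions: (n_walkers, n_times, dim) trajectories from a KMC run.
        disp = positions - positions[:, :1, :]
        msd = np.mean(np.sum(disp**2, axis=2), axis=0)  # ensemble-averaged MSD
        slope = np.polyfit(times[1:], msd[1:], 1)[0]    # long-time slope
        return slope / (2.0 * dim)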
8. Monte Carlo neutral particle transport through a binary stochastic mixture using chord length sampling
Donovan, Timothy J.
A Monte Carlo algorithm is developed to estimate the ensemble-averaged behavior of neutral particles within a binary stochastic mixture. A special-case stochastic mixture is examined, in which non-overlapping spheres of constant radius are uniformly mixed in a matrix material. Spheres are chosen to represent the stochastic volumes due to their geometric simplicity and because spheres are a common approximation in a large number of applications. The boundaries of the mixture are impenetrable, meaning that spheres in the stochastic mixture cannot be assumed to overlap the mixture boundaries. The algorithm employs a method called Limited Chord Length Sampling (LCLS). While in the matrix material, LCLS uses chord-length sampling to sample the distance to the next stochastic interface. After a surface crossing into a stochastic sphere, transport is treated explicitly until the particle exits or is killed. This capability eliminates the need to explicitly model a representation of the random geometry of the mixture. The algorithm is first proposed and tested against benchmark results for a two-dimensional, fixed-source model using stand-alone Monte Carlo codes. The algorithm is then implemented and tested in a test version of the Los Alamos Monte Carlo N-Particle code MCNP. This prototype MCNP version has the capability to calculate LCLS results for both fixed-source and multiplied-source (i.e., eigenvalue) problems. Problems analyzed with MCNP range from simple binary mixtures, designed to test LCLS over a range of optical thicknesses, to a detailed High Temperature Gas Reactor fuel element, which tests the value of LCLS in a current problem of practical significance. Comparisons of LCLS and benchmark results include both accuracy and efficiency comparisons. To ensure conservative efficiency comparisons, the statistical basis for the benchmark technique is derived and a formal method for optimizing the benchmark calculations is developed. LCLS results are compared to results obtained through other methods to gauge accuracy and efficiency. The LCLS model is efficient and provides a high degree of accuracy through a wide range of conditions.
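The core of chord-length sampling is that, while in the matrix, the distance to the next sphere interface is drawn from an assumed (Markovian) exponential distribution rather than computed from an explicit geometry. A minimal sketch follows, using the standard mean-chord relation for monodisperse spheres; the paper's "limited" variant adds boundary corrections not shown here.

    import math, random

    def matrix_flight(radius, volume_fraction):
        # Mean matrix chord between spheres of radius R at volume fraction f:
        # lambda = 4R(1 - f) / (3f). Sample an exponential flight with that mean.
        mean_chord = 4.0 * radius * (1.0 - volume_fraction) / (3.0 * volume_fraction)
        return -mean_chord * math.log(1.0 - random.random())

    # On entering a sphere, transport would then be treated explicitly
    # until the particle exits or is killed, as described above.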
9. ITS Version 4.0: Electron/photon Monte Carlo transport codes
SciTech Connect
Halbleib, J.A.; Kensek, R.P.; Seltzer, S.M.
1995-07-01
The current publicly released version of the Integrated TIGER Series (ITS), Version 3.0, has been widely distributed both domestically and internationally, and feedback has been very positive. This feedback, as well as our own experience, has convinced us to upgrade the system in order to honor specific user requests for new features and to implement other new features that will improve the physical accuracy of the system and permit additional variance reduction. In this presentation we will focus on components of the upgrade that (1) improve the physical model, (2) provide new and extended capabilities for the three-dimensional combinatorial geometry (CG) of the ACCEPT codes, and (3) permit significant variance reduction in an important class of radiation effects applications.
10. Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads
Souza, Fabrício M.; Carrara, Thiago L.; Vernek, E.
2011-09-01
We investigate real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While an external bias voltage is taken constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using nonequilibrium Green’s function technique we find that spin polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be photon transferred from the left to the right terminal, with spin dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending upon the leads magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect.
11. Transparent and Nonflammable Ionogel Photon Upconverters and Their Solute Transport Properties.
PubMed
Murakami, Yoichi; Himuro, Yuki; Ito, Toshiyuki; Morita, Ryoutarou; Niimi, Kazuki; Kiyoyanagi, Noriko
2016-02-01
Photon upconversion based on triplet-triplet annihilation (TTA-UC) is a technology to convert presently wasted sub-bandgap photons to usable higher-energy photons. In this paper, ionogel TTA-UC samples are first developed by gelatinizing ionic liquids containing triplet-sensitizing and light-emitting molecules using an ionic gelator, resulting in transparent and nonflammable ionogel photon upconverters. The photophysical properties of the ionogel samples are then investigated, and the results suggest that the effect of gelation on the diffusion of the solutes is negligibly small. To further examine this suggestion and acquire fundamental insight into the solute transport properties of the samples, the diffusion of charge-neutral solute species over much longer distances than microscopic interpolymer distances is measured by electrochemical potential-step chronoamperometry. The results reveal that the diffusion of solute species is not affected by gelation within the tested gelator concentration range, supporting our interpretation of the initial results of the photophysical investigations. Overall, our results show that the advantage of nonfluidity can be imparted to ionic-liquid-based photon upconverters without sacrificing molecular diffusion, optical transparency, and nonflammability.
12. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A
Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael H.; Sobolevsky, Nikolai; Thomsen, Bjarne; Bassler, Niels
2015-03-01
The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment was applied. Furthermore, the Fermi-Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories that have been developed and give better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of these new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi-Teller Z-law, even though experimental data indicate that the Z-law describes annihilation on compounds inadequately. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.
13. Update on the Status of the FLUKA Monte Carlo Transport Code
NASA Technical Reports Server (NTRS)
Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.
2004-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions from lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator to handle the interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also allow a better consistency to be achieved between the nucleus-nucleus section and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, the program's input and scoring capabilities are continually being upgraded. In particular we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import of the FLUKA output files into ROOT for analysis and to deploy a user-friendly GUI input interface.
14. Predicting the timing properties of phosphor-coated scintillators using Monte Carlo light transport simulation.
PubMed
Roncali, Emilie; Schmall, Jeffrey P; Viswanath, Varsha; Berg, Eric; Cherry, Simon R
2014-04-21
Current developments in positron emission tomography focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm³ crystals coupled to a photomultiplier tube. Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested. Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a range of treatments and materials attached to the surface.
15. Monte Carlo solution for uncertainty propagation in particle transport with a stochastic Galerkin method
SciTech Connect
Franke, B. C.; Prinja, A. K.
2013-07-01
The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
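For contrast with the intrusive SGM, the brute-force alternative mentioned above simply samples the uncertain data and re-evaluates the model, estimating output moments directly. A sketch follows, with a toy attenuation function standing in for the transport solve; the distribution and parameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def solve(sigma, thickness=2.0):
        # Stand-in for a deterministic transport solve with uncertain data.
        return np.exp(-sigma * thickness)

    sigma = rng.normal(1.0, 0.1, size=100_000)  # sampled uncertain cross section
    out = solve(sigma)
    mean, var = out.mean(), out.var(ddof=1)     # first two moments of the result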
16. Monte Carlo simulation of ion transport of the high strain ionomer with conducting powder electrodes
He, Xingxi; Leo, Donald J.
2007-04-01
The transport of charge due to electric stimulus is the primary mechanism of actuation for a class of polymeric active materials known as ionomeric polymer transducers (IPT). At low frequency, the strain response is strongly related to charge accumulation at the electrodes. Experimental results have demonstrated that using conducting powders, such as single-walled carbon nanotubes (SWNT), polyaniline (PANI) powders, high-surface-area RuO2, carbon black, etc., in an electrode increases the mechanical deformation of the IPT by increasing the capacitance of the material. In this paper, a Monte Carlo simulation of a two-dimensional ion hopping model has been built to describe ion transport in the IPT. The shape of the conducting powder is assumed to be a sphere. A step voltage is applied between the electrodes of the IPT, causing thermally-activated hopping between multiwell energy structures. The energy barrier height includes three parts: the height due to the external electric potential, the intrinsic energy, and the height due to ion interactions. The finite element software ANSYS is employed to calculate the static electric potential distribution inside the material with the powder sphere in varied locations. The interaction between ions and the electrodes, including powder electrodes, is determined by using the method of images. At each simulation step, the energy of each cation is updated to compute the ion hopping rate, which directly relates to the probability of an ion moving to its neighboring site. The simulation ends when the current drops to zero. Periodic boundary conditions are applied when ions hop in the direction perpendicular to the external electric field: when an ion moves out of the simulation region, its periodic replica enters from the opposite side. In the direction of the external electric field, parallel programming is achieved in C, augmented with functions that perform message-passing between processors using the Message Passing Interface (MPI) standard. The effects of conducting powder size, location and amount are discussed by studying the stationary charge density and ion distribution plots.
17. Unidirectional transport in electronic and photonic Weyl materials by Dirac mass engineering
Bi, Ren; Wang, Zhong
2015-12-01
Unidirectional transport has been observed in two-dimensional systems; however, so far it has not been observed experimentally in three-dimensional bulk materials. In this theoretical work, we show that the recently discovered Weyl materials provide a platform for unidirectional transport inside bulk materials. With high experimental feasibility, a complex Dirac mass can be generated and manipulated in photonic Weyl crystals, creating unidirectionally propagating modes observable in transmission experiments. A possible realization in (electronic) Weyl semimetals is also studied. We show in a lattice model that, with a short-range interaction, the desired form of the Dirac mass can be spontaneously generated in a first-order transition.
18. Photonics
Hiruma, Teruo
1993-04-01
After developing various kinds of photodetectors such as phototubes, photomultiplier tubes, image pickup tubes, solid state photodetectors and a variety of light sources, we also started to develop integrated systems utilizing new detectors or imaging devices. These led us to the technology for single-photon-counting imaging and the detection of picosecond and femtosecond phenomena. Through those experiences, we gained the understanding that the photon is a paste of substances, and yet we know so little about the photon. By developing technologies for many fields such as analytical chemistry, high energy physics, medicine, biology, brain science, astronomy, etc., we are beginning to understand that the mind and life are based on the same matter, that is, substance. Since humankind has so little knowledge about the substance concerning the mind and life, this causes some confusion on these subjects at this moment. If we explore photonics more deeply, many problems we now have in the world could be solved. By creating new knowledge and technology, I believe we will be able to solve the problems of illness, aging, energy, environment, human capability, and finally, the essential healthiness of the six billion human beings in the world.
19. Monte Carlo Benchmark
Energy Science and Technology Software Center (ESTSC)
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
1. Report of the AAPM Task Group No. 105: Issues associated with clinical implementation of Monte Carlo-based photon and electron external beam treatment planning
SciTech Connect
Chetty, Indrin J.; Curran, Bruce; Cygler, Joanna E.; DeMarco, John J.; Ezzell, Gary; Faddegon, Bruce A.; Kawrakow, Iwan; Keall, Paul J.; Liu, Helen; Ma, C.-M. Charlie; Rogers, D. W. O.; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V.
2007-12-15
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.
2. Using FLUKA Monte Carlo transport code to develop parameterizations for fluence and energy deposition data for high-energy heavy charged particles
Brittingham, John; Townsend, Lawrence; Barzilla, Janet; Lee, Kerry
2012-03-01
Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to perform calculations with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm2. Heavy charged particles, including ions from Z=1 to Z=26 with energies from 0.1 to 10 GeV/nucleon, were simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared to well-known data and to the calculations of other deterministic and Monte Carlo codes. Results will be presented.
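Once such a table exists, a transport result is obtained by interpolation rather than by rerunning FLUKA. A generic lookup sketch follows; the grid, units and table values are placeholders, not the actual tabulated data.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    energies = np.array([0.1, 1.0, 10.0])  # GeV/nucleon grid
    depths = np.linspace(0.0, 100.0, 11)   # areal density grid (g/cm^2)
    dose_table = np.ones((3, 11))          # placeholder for tabulated MC output

    dose = RegularGridInterpolator((energies, depths), dose_table)
    value = dose([[1.0, 25.0]])            # dose at 1 GeV/n, 25 g/cm^2, no transport run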
3. A generalized framework for in-line energy deposition during steady-state Monte Carlo radiation transport
SciTech Connect
Griesheimer, D. P.; Stedry, M. H.
2013-07-01
A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)
4. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media
SciTech Connect
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
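The MLMC estimator behind this acceleration is the telescoping sum E[P_L] = E[P_0] + sum over l of E[P_l - P_(l-1)], with most samples spent on cheap coarse levels. A schematic sketch follows, in which solve(level, omega) is a hypothetical stand-in for the streamline flow/transport solve on a given grid level; the sample counts and the toy functional are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)

    def solve(level, omega):
        # Stand-in for a solve on grid `level`; its discretization error
        # shrinks as the level is refined.
        return omega**2 + 2.0 ** -level * np.sin(40.0 * omega)

    def mlmc(samples_per_level=(4000, 1000, 250)):
        estimate = 0.0
        for level, n in enumerate(samples_per_level):
            omegas = rng.normal(size=n)             # same inputs on both grids
            fine = np.array([solve(level, w) for w in omegas])
            if level == 0:
                estimate += fine.mean()             # coarse-level mean
            else:
                coarse = np.array([solve(level - 1, w) for w in omegas])
                estimate += (fine - coarse).mean()  # telescoping correction
        return estimate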
5. Simultaneous enhancements in photon absorption and charge transport of bismuth vanadate photoanodes for solar water splitting.
PubMed
Kim, Tae Woo; Ping, Yuan; Galli, Giulia A; Choi, Kyoung-Shin
2015-01-01
n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (~2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ~0.2 eV but also increases the majority carrier density and mobility, enhancing electron-hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge.
6. Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads
Souza, Fabricio M.; Carrara, Thiago L.; Vernek, Edson
2012-02-01
Time-dependent transport in quantum dot systems (QDs) has received significant attention due to a variety of new quantum physical phenomena emerging on transient time scales [1]. In the present work [2] we investigate real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While an external bias voltage is taken constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using the nonequilibrium Green's function technique we find that spin-polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be photon-transferred from the left to the right terminal, with spin-dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending upon the leads' magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect. [1] F. M. Souza, Phys. Rev. B 76, 205315 (2007). [2] F. M. Souza, T. L. Carrara, and E. Vernek, Phys. Rev. B 84, 115322 (2011).
7. Monte-Carlo-derived insights into dose-kerma-collision kerma inter-relationships for 50 keV-25 MeV photon beams in water, aluminum and copper
Kumar, Sudhir; Deshpande, Deepak D.; Nahum, Alan E.
2015-01-01
The relationships between D, K and Kcol are of fundamental importance in radiation dosimetry. These relationships are critically influenced by secondary electron transport, which makes Monte-Carlo (MC) simulation indispensable; we have used the MC codes DOSRZnrc and FLURZnrc. Computations of the ratios D/K and D/Kcol in three materials (water, aluminum and copper) for large field sizes with energies from 50 keV to 25 MeV (including 6-15 MV) are presented. Beyond the depth of maximum dose D/K is almost always less than or equal to unity and D/Kcol greater than unity, and these ratios are virtually constant with increasing depth. The difference between K and Kcol increases with energy and with the atomic number of the irradiated materials. D/K in sub-equilibrium small megavoltage photon fields decreases rapidly with decreasing field size. A simple analytical expression for $\overline{X}$, the distance upstream from a given voxel to the mean origin of the secondary electrons depositing their energy in this voxel, is proposed: $\overline{X}_{\mathrm{emp}} \approx 0.5\,R_{\mathrm{csda}}(\overline{E_0})$, where $\overline{E_0}$ is the mean initial secondary electron energy. These $\overline{X}_{\mathrm{emp}}$ agree well with exact MC-derived values for photon energies from 5-25 MeV for water and aluminum. An analytical expression for D/K is also presented and evaluated for 50 keV-25 MeV photons in the three materials, showing close agreement with the MC-derived values.
8. NASA astronaut dosimetry: Implementation of scalable human phantoms and benchmark comparisons of deterministic versus Monte Carlo radiation transport
9. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code
SciTech Connect
O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry, prior to running the transport calculation, can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively-parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the simulation is running.
10. Verification of Three Dimensional Triangular Prismatic Discrete Ordinates Transport Code ENSEMBLE-TRIZ by Comparison with Monte Carlo Code GMVP
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with verification of the three dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe electric power sodium cooled reactor. Nuclear characteristics are calculated at beginning of cycle of an initial core and at beginning and end of cycle of an equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% relative in the control rod reactivity, and 1% in the sodium void reactivity.
11. Production and dosimetry of simultaneous therapeutic photons and electrons beam by linear accelerator: A Monte Carlo study
SciTech Connect
2015-02-24
Depending on the location and depth of the tumor, electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissues beyond the tumor; photon beams, on the other hand, are used for treating deep targets. Both of these beams have some limitations, for example the dependence of the penumbra on depth, and the lack of lateral equilibrium for small electron beam fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the Percent Depth Dose (PDD) and profiles of the simulation against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays, while a fraction of the electrons pass through the holes; as a result, we obtain a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10×10 and 4×4 cm2 field sizes. This study showed the advantages of mixing the electron and photon beams: the dependence of the pure electron penumbra on depth is reduced, especially for small fields, and the dramatic changes of the PDD curve with irradiation field size are decreased.
12. Use of single scatter electron monte carlo transport for medical radiation sciences
DOEpatents
Svatos, Michelle M. (Oakland, CA)
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
13. Thermal-to-fusion neutron convertor and Monte Carlo coupled simulation of deuteron/triton transport and secondary products generation
Wang, Guan-bo; Liu, Han-gang; Wang, Kan; Yang, Xin; Feng, Qi-jie
2012-09-01
A thermal-to-fusion neutron convertor has been studied at the China Academy of Engineering Physics (CAEP). Current Monte Carlo codes, such as MCNP and GEANT, are inadequate when applied to such multi-step reaction problems. A Monte Carlo tool, RSMC (Reaction Sequence Monte Carlo), has been developed to simulate this coupled problem, from neutron absorption, to charged particle ionization and secondary neutron generation. A "forced particle production" variance reduction technique has been implemented to improve the calculation speed markedly by making deuteron/triton-induced secondary products play a major role. Nuclear data are taken from ENDF or TENDL, and stopping powers from SRIM, which better describes low-energy deuteron/triton interactions. As a validation, an accelerator-driven mono-energetic 14 MeV fusion neutron source is employed, which has been deeply studied and includes deuteron transport and secondary neutron generation. Various parameters, including the fusion neutron angular distribution, the average neutron energy at different emission directions, and differential and integral energy distributions, are calculated with our tool and with a traditional deterministic method as reference. As a result, we present the calculated results for the convertor with RSMC, including the conversion ratio of 1 mm 6LiD under typical thermal neutron (Maxwell spectrum) incidence, and the fusion neutron spectrum, which will be used for our experiment.
14. Coupling of kinetic Monte Carlo simulations of surface reactions to transport in a fluid for heterogeneous catalytic reactor modeling.
PubMed
Schaefer, C; Jansen, A P J
2013-02-01
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
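The mass-balance correction described above amounts to the smallest least-squares adjustment of the stochastic rates that restores the linear conservation constraints exactly. A minimal sketch for a single conservation row follows; the species, masses and rate values are illustrative assumptions, not the paper's reaction networks.

    import numpy as np

    def mass_balance_correct(rates, constraints):
        # Smallest-norm adjustment so that constraints @ rates == 0 exactly.
        A = np.atleast_2d(constraints)
        correction = A.T @ np.linalg.solve(A @ A.T, A @ rates)
        return rates - correction

    # Net production of two species from a noisy KMC estimate; mass
    # conservation says the mass-weighted sum should vanish but does not.
    noisy = np.array([0.05, -0.03])
    masses = np.array([1.0, 1.0])
    fixed = mass_balance_correct(noisy, masses)  # -> [0.04, -0.04], sums to zero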
15. High-power beam transport through a hollow-core photonic bandgap fiber.
PubMed
Jones, D C; Bennett, C R; Smith, M A; Scott, A M
2014-06-01
We investigate the use of a seven-cell hollow-core photonic bandgap fiber for transport of CW laser radiation from a single-mode, narrow-linewidth, high-power fiber laser amplifier. Over 90% of the amplifier output was coupled successfully and transmitted through the fiber in a near-Gaussian mode, with negligible backreflection into the source. 100 W of power was transmitted continuously without damage, and 160 W of power was transmitted briefly before the onset of thermal lensing in the coupling optics.
16. Program EPICP: Electron photon interaction code, photon test module. Version 94.2
SciTech Connect
Cullen, D.E.
1994-09-01
The computer code EPICP performs Monte Carlo photon transport calculations in a simple one-zone cylindrical detector. Results include deposition within the detector, transmission, reflection and lateral leakage from the detector, as well as events and energy deposition as a function of depth into the detector. EPICP is part of the EPIC (Electron Photon Interaction Code) system. EPICP is designed to perform both normal transport calculations and diagnostic calculations involving only photons, with the objective of developing optimum algorithms for later use in EPIC. The EPIC system includes other modules designed to develop optimum algorithms for later use in EPIC; this includes electron and positron transport (EPICE), neutron transport (EPICN), charged particle transport (EPICC), geometry (EPICG), and source sampling (EPICS). This is a modular system that, once optimized, can be linked together to consider a wide variety of particles, geometries, sources, etc. By design EPICP only considers photon transport. In particular it does not consider electron transport, so that later EPICP and EPICE can be used to quantitatively evaluate the importance of electron transport when starting from photon sources. In this report I will merely mention where we expect the results obtained considering only photon transport to differ significantly from those obtained using coupled electron-photon transport.
NASA Technical Reports Server (NTRS)
Wasilewski, A.; Krys, E.
1985-01-01
Results of Monte-Carlo simulation of electromagnetic cascade development in lead and lead-scintillator sandwiches are analyzed. It is demonstrated that the structure function for the core approximation is not applicable in cases in which the primary energy is higher than 100 GeV. The simulation data have shown that introducing an inhomogeneous chamber structure results in a subsequent reduction of secondary particles.
2. Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
NASA Technical Reports Server (NTRS)
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, radiation transport codes being considered (HZETRN, UPROP, FLUKA, GEANT4), space radiation cases being considered (SPE, GCR), results for slab geometry, results for spherical geometry, and summary.
3. Monte Carlo study of coherent scattering effects of low-energy charged particle transport in Percus-Yevick liquids
Tattersall, W. J.; Cocks, D. G.; Boyle, G. J.; Buckman, S. J.; White, R. D.
2015-04-01
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly nonequilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 381 (2002), 10.1016/S0009-2614(02)01177-6], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and we develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially varying electric fields. All of the results are found to be in excellent agreement with an independent multiterm Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
4. Monte Carlo study of coherent scattering effects of low-energy charged particle transport in Percus-Yevick liquids.
PubMed
Tattersall, W J; Cocks, D G; Boyle, G J; Buckman, S J; White, R D
2015-04-01
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly nonequilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 381 (2002)], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and we develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially varying electric fields. All of the results are found to be in excellent agreement with an independent multiterm Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems. PMID:25974609
5. Modeling Positron Transport in Gaseous and Soft-condensed Systems with Kinetic Theory and Monte Carlo
Boyle, G.; Tattersall, W.; Robson, R. E.; White, Ron; Dujko, S.; Petrovic, Z. Lj.; Brunger, M. J.; Sullivan, J. P.; Buckman, S. J.; Garcia, G.
2013-09-01
An accurate quantitative understanding of the behavior of positrons in gaseous and soft-condensed systems is important for many technological applications as well as to fundamental physics research. Optimizing Positron Emission Tomography (PET) technology and understanding the associated radiation damage requires knowledge of how positrons interact with matter prior to annihilation. Modeling techniques developed for electrons can also be employed to model positrons, and these techniques can also be extended to account for the structural properties of the medium. Two complementary approaches have been implemented in the present work: kinetic theory and Monte Carlo simulations. Kinetic theory is based on the multi-term Boltzmann equation, which has recently been modified to include the positron-specific interaction processes of annihilation and positronium formation. Simultaneously, a Monte Carlo simulation code has been developed that can likewise incorporate positron-specific processes. Funding support from ARC (CoE and DP schemes).
6. Thermal Scattering Law Data: Implementation and Testing Using the Monte Carlo Neutron Transport Codes COG, MCNP and TART
SciTech Connect
Cullen, D E; Hansen, L F; Lent, E M; Plechaty, E F
2003-05-17
Recently we implemented the ENDF/B-VI thermal scattering law data in our neutron transport codes COG and TART. Our objective was to convert the existing ENDF/B data into double differential form in the Livermore ENDL format. This will allow us to use the ENDF/B data in any neutron transport code, be it a Monte Carlo, or deterministic code. This was approached as a multi-step project. The first step was to develop methods to directly use the thermal scattering law data in our Monte Carlo codes. The next step was to convert the data to double-differential form. The last step was to verify that the results obtained using the data directly are essentially the same as the results obtained using the double differential data. Part of the planned verification was intended to insure that the data as finally implemented in the COG and TART codes, gave the same answer as the well known MCNP code, which includes thermal scattering law data. Limitations in the treatment of thermal scattering law data in MCNP have been uncovered that prevented us from performing this part of our verification.
7. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors
Bauer, Thilo; Jäger, Christof M.; Jordan, Meredith J. T.; Clark, Timothy
2015-07-01
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.
8. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S.; Harrendorf, Marco A.; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-01
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
9. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors.
PubMed
Bauer, Thilo; Jäger, Christof M; Jordan, Meredith J T; Clark, Timothy
2015-07-28
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves. PMID:26233114
10. A Modified Treatment of Sources in Implicit Monte Carlo Radiation Transport
SciTech Connect
Gentile, N A; Trahan, T J
2011-03-22
We describe a modification of the treatment of photon sources in the IMC algorithm. We describe this modified algorithm in the context of thermal emission in an infinite medium test problem at equilibrium and show that it completely eliminates statistical noise.
11. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in both criticality and fixed source modes. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU thread blocks, keeps the SIMD units as full as possible, and eliminates using memory bandwidth to check whether a neutron in the batch has been terminated or not. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes, where the GPU can effectively eliminate the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and a third contains pointers to energy distributions. This linked-list type of layout increases memory usage, but lowers the number of data loads needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA performance primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums, the CURAND library is used to seed the linear congruential random number generators, and the OptiX ray tracing framework is used for geometry representation.
OptiX is a highly-optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon-like algorithm. WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP has limited capabilities. Despite WARP's lack of features, its novel algorithm implementations show that high performance can be achieved on a GPU despite the inherently divergent program flow and sparse data access patterns. WARP is not ready for everyday nuclear reactor calculations, but is a good platform for further development of GPU-accelerated Monte Carlo neutron transport. In its current state, it may be a useful tool for multiplication factor searches, i.e. determining reactivity coefficients by perturbing material densities or temperatures, since these types of calculations typically do not require many flux tallies. (Abstract shortened by UMI.)
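The remapping-vector idea is easy to prototype outside a GPU. The sketch below is an illustration in NumPy, not WARP source code: a stable key sort over reaction types stands in for the parallel radix sort, and a "done" key sorts finished histories out of the active batch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
# Reaction type sampled for each history this iteration:
# 0 = scatter, 1 = fission, 2 = capture, 3 = history finished.
reaction = rng.integers(0, 4, size=n)

# The remapping vector: indices sorted by reaction type, so threads
# that process the same reaction read contiguous entries.
remap = np.argsort(reaction, kind='stable')

# Finished histories (key 3) sort to the end; shrinking the active
# count removes them from the transport cycle without moving data.
active = int(np.searchsorted(reaction[remap], 3))
print(remap[:active])   # indices of histories still being transported
```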
12. Monte Carlo evaluation of the effect of inhomogeneities on dose calculation for low energy photons intra-operative radiation therapy in pelvic area.
PubMed
Chiavassa, Sophie; Buge, François; Hervé, Chloé; Delpon, Gregory; Rigaud, Jérôme; Lisbona, Albert; Supiot, Stéphane
2015-12-01
The aim of this study was to evaluate the effect of inhomogeneities on dose calculation for low energy photon intra-operative radiation therapy (IORT) in the pelvic area. A GATE Monte Carlo model of the INTRABEAM was adapted for the study. Simulations were performed in the CT scan of a cadaver considering a homogeneous segmentation (water) and an inhomogeneous segmentation (5 tissues from ICRU44). Measurements were performed in the cadaver using EBT3 Gafchromic films. The impact of inhomogeneities on dose calculation in the cadaver was 6% for soft tissues and greater than 300% for bone tissues. EBT3 measurements showed a better agreement with calculation for inhomogeneous media. However, the dose discrepancy in soft tissues led to a sub-millimeter (0.65 mm) shift in the effective point dose in depth. Except for bone tissues, the effect of inhomogeneities on dose calculation for low energy photon intra-operative radiation therapy in the pelvic area was not significant for the studied anatomy. PMID:26420445
13. Tests of the Monte Carlo simulation of the photon-tagger focal-plane electronics at the MAX IV Laboratory
Preston, M. F.; Myers, L. S.; Annand, J. R. M.; Fissum, K. G.; Hansen, K.; Isaksson, L.; Jebali, R.; Lundin, M.
2014-04-01
Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.
14. Monte Carlo simulation of the IRSN CANEL/T400 realistic mixed neutron-photon radiation field.
PubMed
Lacoste, V; Gressier, V
2004-01-01
The calibration of dosemeters and spectrometers in realistic neutron fields simulating those encountered at workplaces is of high necessity to provide true and reliable dosimetric information to exposed nuclear workers. The CANEL assembly was set up at IRSN to produce such neutron fields. It comprises a depleted uranium shell, to produce fission neutrons, then iron and water to moderate them, and a polyethylene duct. The newly presented CANEL facility is used with 3.3 MeV neutrons. Calculations were performed with the MCNP4C code to characterise this mixed neutron-photon expanded radiation field at the position where calibrations are usually performed. The neutron fluence energy and direction distributions were calculated and the operational quantities were derived from these distributions. The photon fluence and corresponding ambient dose equivalent were also estimated. Comparison with experimental results showed an overall good agreement. PMID:15353634
15. Optical photon transport in powdered-phosphor scintillators. Part II. Calculation of single-scattering transport parameters
SciTech Connect
Poludniowski, Gavin G.; Evans, Philip M.
2013-04-15
Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd2O2S:Tb screens; (ii) investigate the impact of size-distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical optics-based models for these parameters and compare to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd2O2S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size-distribution (σ = 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.). Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.
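For readers who want a feel for the magnitudes involved, the sketch below computes Mie efficiencies for phosphor-like grains. It assumes the third-party miepython package and an illustrative complex refractive index; neither the index nor the wavelength is taken from the paper.

```python
import numpy as np
import miepython  # third-party: pip install miepython

wavelength = 0.545      # um, near the green Tb emission of Gd2O2S:Tb
m = 2.3 - 1e-6j         # assumed grain refractive index (weakly absorbing)

for radius in (0.5, 1.0, 5.0):             # grain radii in um
    x = 2.0 * np.pi * radius / wavelength  # Mie size parameter
    qext, qsca, qback, g = miepython.mie(m, x)
    # Q_sct, Q_abs = Q_ext - Q_sca, and anisotropy g, as in the paper
    print(f"a={radius} um: Q_sct={qsca:.3f}  Q_abs={qext-qsca:.2e}  g={g:.3f}")
```

The fine structure the authors warn about appears if the radius is swept on a fine grid: the efficiencies oscillate rapidly with the size parameter, which is why size- and wavelength-averaging matters.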
16. SU-E-J-09: A Monte Carlo Analysis of the Relationship Between Cherenkov Light Emission and Dose for Electrons, Protons, and X-Ray Photons
SciTech Connect
Glaser, A; Zhang, R; Gladstone, D; Pogue, B
2014-06-01
Purpose: A number of recent studies have proposed that light emitted by the Cherenkov effect may be used for a number of radiation therapy dosimetry applications. Here we investigate the fundamental nature and accuracy of the technique for the first time by using a theoretical and Monte Carlo based analysis. Methods: Using the GEANT4 architecture for medically-oriented simulations (GAMOS) and BEAMnrc for phase space file generation, the light yield, material variability, field size and energy dependence, and overall agreement between the Cherenkov light emission and dose deposition for electron, proton, and flattened, unflattened, and parallel opposed x-ray photon beams was explored. Results: Due to the exponential attenuation of x-ray photons, Cherenkov light emission and dose deposition were identical for monoenergetic pencil beams. However, polyenergetic beams exhibited errors with depth due to beam hardening, with the error being inversely related to beam energy. For finite field sizes, the error with depth was inversely proportional to field size, and lateral errors in the umbra were greater for larger field sizes. For opposed beams, the technique was most accurate due to an averaging out of beam hardening in a single beam. The technique was found to be not suitable for measuring electron beams, except for relative dosimetry of a plane at a single depth. Due to a lack of light emission, the technique was found to be unsuitable for proton beams. Conclusions: The results from this exploratory study suggest that optical dosimetry by the Cherenkov effect may be most applicable to near monoenergetic x-ray photon beams (e.g. Co-60), dynamic IMRT and VMAT plans, as well as narrow beams used for SRT and SRS. For electron beams, the technique would be best suited for superficial dosimetry, and for protons the technique is not applicable due to a lack of light emission. NIH R01CA109558 and R21EB017559.
17. Dosimetry of interface region near closed air cavities for Co-60, 6 MV and 15 MV photon beams using Monte Carlo simulations
PubMed Central
Joshi, Chandra P.; Darko, Johnson; Vidyasagar, P. B.; Schreiner, L. John
2010-01-01
18. Dosimetry of interface region near closed air cavities for Co-60, 6 MV and 15 MV photon beams using Monte Carlo simulations.
PubMed
Joshi, Chandra P; Darko, Johnson; Vidyasagar, P B; Schreiner, L John
2010-04-01
19. Influence of photon energy spectra from brachytherapy sources on Monte Carlo simulations of kerma and dose rates in water and air
SciTech Connect
Rivard, Mark J.; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo
2010-02-15
Purpose: For a given radionuclide, there are several photon spectrum choices available to dosimetry investigators for simulating the radiation emissions from brachytherapy sources. This study examines the dosimetric influence of selecting the spectra for 192Ir, 125I, and 103Pd on the final estimations of kerma and dose. Methods: For 192Ir, 125I, and 103Pd, the authors considered from two to five published spectra. Spherical sources approximating common brachytherapy sources were assessed. Kerma and dose results from GEANT4, MCNP5, and PENELOPE-2008 were compared for water and air. The dosimetric influence of 192Ir, 125I, and 103Pd spectral choice was determined. Results: For the spectra considered, there were no statistically significant differences between kerma or dose results based on Monte Carlo code choice when using the same spectrum. Water-kerma differences of about 2%, 2%, and 0.7% were observed due to spectrum choice for 192Ir, 125I, and 103Pd, respectively (independent of radial distance), when accounting for photon yield per Bq. Similar differences were observed for air-kerma rate. However, their ratio (as used in the dose-rate constant) did not significantly change when the various photon spectra were selected, because the differences compensated each other when dividing dose rate by air-kerma strength. Conclusions: Given the standardization of radionuclide data available from the National Nuclear Data Center (NNDC) and the rigorous infrastructure for performing and maintaining the data set evaluations, NNDC spectra are suggested for brachytherapy simulations in medical physics applications.
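The compensation the authors describe can be seen in a back-of-the-envelope kerma sum. The sketch below uses placeholder line energies, yields, and mass energy-transfer coefficients (not evaluated nuclear data) just to show how a small change in published yields shifts water kerma and air kerma together, leaving their ratio nearly unchanged.

```python
import numpy as np

def kerma_per_decay(E_keV, yields, mutr_over_rho):
    # K per decay is proportional to sum_i y_i * E_i * (mu_tr/rho)(E_i)
    return float(np.sum(yields * E_keV * mutr_over_rho))

E = np.array([27.2, 31.0, 35.5])        # placeholder line energies, keV
y_a = np.array([0.40, 0.26, 0.07])      # spectrum evaluation A (assumed)
y_b = np.array([0.41, 0.25, 0.07])      # spectrum evaluation B (assumed)
mutr_w = np.array([0.20, 0.14, 0.10])   # placeholder water coefficients
mutr_air = np.array([0.18, 0.13, 0.09]) # placeholder air coefficients

kw_a, kw_b = (kerma_per_decay(E, y, mutr_w) for y in (y_a, y_b))
ka_a, ka_b = (kerma_per_decay(E, y, mutr_air) for y in (y_a, y_b))
print("water-kerma shift:", (kw_b - kw_a) / kw_a)                   # small
print("ratio shift:", (kw_b / ka_b - kw_a / ka_a) / (kw_a / ka_a))  # ~10x smaller
```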
20. Decoupling initial electron beam parameters for Monte Carlo photon beam modelling by removing beam-modifying filters from the beam path
De Smedt, B.; Reynaert, N.; Flachet, F.; Coghe, M.; Thompson, M. G.; Paelinck, L.; Pittomvils, G.; De Wagter, C.; De Neve, W.; Thierens, H.
2005-12-01
A new method is presented to decouple the parameters of the incident e- beam hitting the target of the linear accelerator, which consists essentially in optimizing the agreement between measurements and calculations when the difference filter, which is an additional filter inserted in the linac head to obtain uniform lateral dose-profile curves for the high energy photon beam, and flattening filter are removed from the beam path. This leads to lateral dose-profile curves, which depend only on the mean energy of the incident electron beam, since the effect of the radial intensity distribution of the incident e- beam is negligible when both filters are absent. The location of the primary collimator and the thickness and density of the target are not considered as adjustable parameters, since a satisfactory working Monte Carlo model is obtained for the low energy photon beam (6 MV) of the linac using the same target and primary collimator. This method was applied to conclude that the mean energy of the incident e- beam for the high energy photon beam (18 MV) of our Elekta SLi Plus linac is equal to 14.9 MeV. After optimizing the mean energy, the modelling of the filters, in accordance with the information provided by the manufacturer, can be verified by positioning only one filter in the linac head while the other is removed. It is also demonstrated that the parameter setting for Bremsstrahlung angular sampling in BEAMnrc ('Simple' using the leading term of the Koch and Motz equation or 'KM' using the full equation) leads to different dose-profile curves for the same incident electron energy for the studied 18 MV beam. It is therefore important to perform the calculations in 'KM' mode. Note that both filters are not physically removed from the linac head. All filters remain present in the linac head and are only rotated out of the beam. This makes the described method applicable for practical usage since no recommissioning process is required.
1. Consistent treatment of transport properties for five-species air direct simulation Monte Carlo/Navier-Stokes applications
Stephani, K. A.; Goldstein, D. B.; Varghese, P. L.
2012-07-01
A general approach for achieving consistency in the transport properties between direct simulation Monte Carlo (DSMC) and Navier-Stokes (CFD) solvers is presented for five-species air. Coefficients of species diffusion, viscosity, and thermal conductivities are considered. The transport coefficients that are modeled in CFD solvers are often obtained by expressions involving sets of collision integrals, which are obtained from more realistic intermolecular potentials (i.e., ab initio calculations). In this work, the self-consistent effective binary diffusion and Gupta et al.-Yos transport models are considered. The DSMC transport coefficients are approximated from Chapman-Enskog theory, in which the collision integrals are computed using either the variable hard sphere (VHS) or variable soft sphere (VSS) phenomenological collision cross section models. The VHS and VSS parameters are then used to adjust the DSMC transport coefficients in order to achieve a best fit to the coefficients computed from more realistic intermolecular potentials over a range of temperatures. The best-fit collision model parameters are determined for both collision-averaged and collision-specific pairing approaches using the Nelder-Mead simplex algorithm. A consistent treatment of the diffusion, viscosity, and thermal conductivities is presented, and recommended sets of best-fit VHS and VSS collision model parameters are provided for a five-species air mixture.
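A minimal version of the best-fit step might look like the following. The "reference" viscosity curve is a synthetic placeholder standing in for the collision-integral results; only the fitting mechanics (a Nelder-Mead simplex over VHS-style parameters) mirror the paper.

```python
import numpy as np
from scipy.optimize import minimize

T = np.linspace(1000.0, 10000.0, 40)        # temperature range, K
mu_target = 1.8e-5 * (T / 273.0) ** 0.74    # placeholder reference curve

def cost(p):
    mu_ref, omega = p
    # Chapman-Enskog/VHS form: viscosity follows a power law in T
    model = mu_ref * (T / 273.0) ** omega
    return np.sum((model - mu_target) ** 2)

fit = minimize(cost, x0=[1.0e-5, 0.8], method='Nelder-Mead')
mu_ref, omega = fit.x
print(mu_ref, omega)  # omega maps directly to the VHS temperature exponent
```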
2. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory
SciTech Connect
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2010-01-01
3. A Monte Carlo study of electron-hole scattering and steady-state minority-electron transport in GaAs
Sadra, K.; Maziar, C. M.; Streetman, B. G.; Tang, D. S.
1988-11-01
We report the first bipolar Monte Carlo calculations of steady-state minority-electron transport in room-temperature p-GaAs including multiband electron-hole scattering with and without hole overlap factors. Our results show how such processes, which make a significant contribution to the minority-electron energy loss rate, can affect steady-state minority-electron transport. Furthermore, we discuss several other issues which we believe should be investigated before present Monte Carlo treatments of electron-hole scattering can provide quantitative information.
4. From force-fields to photons: MD simulations of dye-labeled nucleic acids and Monte Carlo modeling of FRET
Goldner, Lori
2012-02-01
Fluorescence resonance energy transfer (FRET) is a powerful technique for understanding the structural fluctuations and transformations of RNA, DNA and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a different, faster, time scale. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground state dyes and linkers in our study of a 16mer double-stranded RNA. Water is included explicitly in the simulation. Cyanine dyes are attached at either the 3' or 5' ends with a 3 carbon linker, and differences in labeling schemes are discussed. Work done in collaboration with Peker Milas, Benjamin D. Gamari, and Louis Parrot.
5. SIMULATION OF ION CONDUCTION IN α-HEMOLYSIN NANOPORES WITH COVALENTLY ATTACHED β-CYCLODEXTRIN BASED ON BOLTZMANN TRANSPORT MONTE CARLO MODEL
PubMed Central
Toghraee, Reza; Lee, Kyu-Il; Papke, David; Chiu, See-Wing; Jakobsson, Eric; Ravaioli, Umberto
2009-01-01
Ion channels, as nature's solution to regulating biological environments, are particularly interesting to device engineers seeking to understand how natural molecular systems realize device-like functions, such as stochastic sensing of organic analytes. What's more, attaching molecular adaptors in desired orientations inside genetically engineered ion channels enhances the system functionality as a biosensor. In general, a hierarchy of simulation methodologies is needed to study different aspects of a biological system like ion channels. Biology Monte Carlo (BioMOCA), a three-dimensional coarse-grained particle ion channel simulator, offers a powerful and general approach to study ion channel permeation. BioMOCA is based on the Boltzmann Transport Monte Carlo (BTMC) and Particle-Particle-Particle-Mesh (P3M) methodologies developed at the University of Illinois at Urbana-Champaign. In this paper, we have employed BioMOCA to study two engineered mutations of α-HL, namely (M113F)6(M113C-D8RL2)1-β-CD and (M113N)6(T117C-D8RL3)1-β-CD. The channel conductance calculated by BioMOCA is slightly higher than experimental values. Permanent charge distributions and the geometrical shape of the channels give rise to selectivity towards anions and also an asymmetry in I-V curves, promoting a rectification largely for cations. PMID:20938493
6. Comparison of dose estimates using the buildup-factor method and a Baryon transport code (BRYNTRN) with Monte Carlo results
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.
1990-01-01
Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement, but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. A future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of doses, such as those from the secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
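The buildup-factor method referred to here reduces, in its simplest point-kernel form, to scaling the exponentially attenuated primary dose by a factor B that accounts for secondary-particle buildup. The sketch below uses a generic linear buildup form with a placeholder coefficient; it is not the parameterization used in BRYNTRN.

```python
import numpy as np

def dose_behind_shield(d0, mu, x, a=0.5):
    # Point-kernel estimate: D(x) = D0 * B(mu*x) * exp(-mu*x).
    # mu*x is the shield thickness in mean free paths; B = 1 + a*mu*x
    # is a common low-order buildup form (a is a placeholder here).
    mfp = mu * x
    return d0 * (1.0 + a * mfp) * np.exp(-mfp)

for x in (0.0, 5.0, 10.0, 20.0):   # shield thicknesses (arbitrary units)
    print(x, dose_behind_shield(d0=1.0, mu=0.1, x=x))
```

Setting a = 0 recovers pure attenuation, which makes explicit how much of the transmitted dose the buildup term is responsible for.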
7. Ionization chamber dosimetry of small photon fields: a Monte Carlo study on stopping-power ratios for radiosurgery and IMRT beams
Sánchez-Doblado, F.; Andreo, P.; Capote, R.; Leal, A.; Perucha, M.; Arráns, R.; Núñez, L.; Mainegra, E.; Lagares, J. I.; Carrasco, E.
2003-07-01
Absolute dosimetry with ionization chambers of the narrow photon fields used in stereotactic techniques and IMRT beamlets is constrained by lack of electron equilibrium in the radiation field. It is questionable that stopping-power ratios in dosimetry protocols, obtained for broad photon beams and quasi-electron equilibrium conditions, can be used in the dosimetry of narrow fields while keeping the uncertainty at the same level as for the broad beams used in accelerator calibrations. Monte Carlo simulations have been performed for two 6 MV clinical accelerators (Elekta SL-18 and Siemens Mevatron Primus), equipped with radiosurgery applicators and MLC. Narrow circular and Z-shaped on-axis and off-axis fields, as well as broad IMRT configured beams, have been simulated together with reference 10 × 10 cm2 beams. Phase-space data have been used to generate 3D dose distributions which have been compared satisfactorily with experimental profiles (ion chamber, diodes and film). Photon and electron spectra at various depths in water have been calculated, followed by Spencer-Attix (Δ = 10 keV) stopping-power ratio calculations which have been compared to those used in the IAEA TRS-398 code of practice. For water/air and PMMA/air stopping-power ratios, agreements within 0.1% have been obtained for the 10 × 10 cm2 fields. For radiosurgery applicators and narrow MLC beams, the calculated sw,air values agree with the reference within +/-0.3%, well within the estimated standard uncertainty of the reference stopping-power ratios (0.5%). Ionization chamber dosimetry of narrow beams at the photon qualities used in this work (6 MV) can therefore be based on stopping-power ratio data in dosimetry protocols. For a modulated 6 MV broad beam used in clinical IMRT, sw,air agrees within 0.1% with the value for 10 × 10 cm2, confirming that at low energies IMRT absolute dosimetry can also be based on data for open reference fields. At higher energies (24 MV) the difference in sw,air was up to 1.1%, indicating that the use of protocol data for narrow beams in such cases is less accurate than at low energies, and detailed calculations of the dosimetry parameters involved should be performed if similar accuracy to that of 6 MV is sought.
8. Mathematical simulations of photon interactions using Monte Carlo analysis to evaluate the uncertainty associated with in vivo K X-ray fluorescence measurements of stable lead in bone
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was demonstrated to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg by up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.
9. Study of the response of a lithium yttrium borate scintillator based neutron rem counter by Monte Carlo radiation transport simulations
Sunil, C.; Tyagi, Mohit; Biju, K.; Shanbhag, A. A.; Bandyopadhyay, T.
2015-12-01
The scarcity and the high cost of 3He have spurred the use of various detectors for neutron monitoring. A new lithium yttrium borate scintillator developed in BARC has been studied for its use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm lead layer introduced to enhance gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am-Be neutron source, shows promise for use as a rem counter.
10. Enhancements to the Combinatorial Geometry Particle Tracker in the Mercury Monte Carlo Transport Code: Embedded Meshes and Domain Decomposition
SciTech Connect
Greenman, G M; O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
11. An improved empirical approach to introduce quantization effects in the transport direction in multi-subband Monte Carlo simulations
Palestri, P.; Lucci, L.; Dei Tos, S.; Esseni, D.; Selmi, L.
2010-05-01
In this paper we propose and validate a simple approach to empirically account for quantum effects in the transport direction of MOS transistors (i.e. source and drain tunneling and the delocalized nature of the carrier wavepacket) in multi-subband Monte Carlo simulators, which already account for quantization in the direction normal to the semiconductor-oxide interface by solving the 1D Schrödinger equation in each section of the device. The model has been validated and calibrated against ballistic non-equilibrium Green's function simulations over a wide range of gate lengths, voltage biases and temperatures. The proposed model has just one adjustable parameter and our results show that it can achieve a good agreement with the NEGF approach.
12. Galerkin-based meshless methods for photon transport in the biological tissue.
PubMed
Qin, Chenghu; Tian, Jie; Yang, Xin; Liu, Kai; Yan, Guorui; Feng, Jinchao; Lv, Yujie; Xu, Min
2008-12-01
As an important small animal imaging technique, optical imaging has attracted increasing attention in recent years. However, the photon propagation process is extremely complicated owing to the highly scattering nature of biological tissue. Furthermore, the light transport simulation in tissue has a significant influence on inverse source reconstruction. In this contribution, we present two Galerkin-based meshless methods (GBMM) to determine the light exitance on the surface of the diffusive tissue. The two methods are both based on moving least squares (MLS) approximation, which requires only a series of nodes in the region of interest, so the complicated meshing task can be avoided compared with the finite element method (FEM). Moreover, MLS shape functions are further modified to satisfy the delta function property in one method, which can simplify the processing of boundary conditions in comparison with the other. Finally, the performance of the proposed methods is demonstrated with numerical and physical phantom experiments. PMID:19065170
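A one-dimensional toy version of the MLS approximation at the heart of these Galerkin meshless methods is sketched below; the node locations, weight width and basis are illustrative choices, not the paper's settings.

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)    # scattered nodes, no mesh needed
u_nodes = np.sin(np.pi * nodes)      # nodal data to approximate

def mls_value(x, nodes, u, h=0.2):
    # MLS fit u(x) ~ p(x).a(x) with linear basis and Gaussian weight.
    w = np.exp(-((x - nodes) / h) ** 2)                # node weights at x
    P = np.column_stack([np.ones_like(nodes), nodes])  # basis p = [1, x]
    A = P.T @ (w[:, None] * P)                         # moment matrix
    b = P.T @ (w * u)
    a = np.linalg.solve(A, b)                          # local coefficients
    return a[0] + a[1] * x

print(mls_value(0.5, nodes, u_nodes))  # close to sin(pi/2) = 1
```

The delta-property modification the authors mention amounts to altering the weight function (e.g. making it singular at the nodes) so the shape functions interpolate nodal values exactly, which lets essential boundary conditions be imposed directly.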
13. Correlated Cooper pair transport and microwave photon emission in the dynamical Coulomb blockade
Leppäkangas, Juha; Fogelström, Mikael; Marthaler, Michael; Johansson, Göran
2016-01-01
We study theoretically electromagnetic radiation emitted by inelastic Cooper-pair tunneling. We consider a dc-voltage-biased superconducting transmission line terminated by a Josephson junction. We show that the generated continuous-mode electromagnetic field can be expressed as a function of the time-dependent current across the Josephson junction. The leading-order expansion in the tunneling coupling, similar to the P(E) theory, has previously been used to investigate the photon emission statistics in the limit of sequential (independent) Cooper-pair tunneling. By explicitly evaluating the system characteristics up to the fourth order in the tunneling coupling, we account for dynamics between consecutively tunneling Cooper pairs. Within this approach we investigate how temporal correlations in the charge transport can be seen in the first- and second-order coherences of the emitted microwave radiation.
14. Parallel FE Electron-Photon Transport Analysis on 2-D Unstructured Mesh
SciTech Connect
Drumm, C.R.; Lorenz, J.
1999-03-02
A novel solution method has been developed to solve the coupled electron-photon transport problem on an unstructured triangular mesh. Instead of tackling the first-order form of the linear Boltzmann equation, this approach is based on the second-order form in conjunction with the conventional multi-group discrete-ordinates approximation. The highly forward-peaked electron scattering is modeled with a multigroup Legendre expansion derived from the Goudsmit-Saunderson theory. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, a method that is well suited for massively parallel computers.
15. Coupling of a single diamond nanocrystal to a whispering-gallery microcavity: Photon transport benefitting from Rayleigh scattering
Liu, Yong-Chun; Xiao, Yun-Feng; Li, Bei-Bei; Jiang, Xue-Feng; Li, Yan; Gong, Qihuang
2011-07-01
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.
16. Coupling of a single diamond nanocrystal to a whispering-gallery microcavity: Photon transport benefitting from Rayleigh scattering
SciTech Connect
Liu Yongchun; Xiao Yunfeng; Li Beibei; Jiang Xuefeng; Li Yan; Gong Qihuang
2011-07-15
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.
17. Radial quasiballistic transport in time-domain thermoreflectance studied using Monte Carlo simulations
SciTech Connect
Ding, D.; Chen, X.; Minnich, A. J.
2014-04-07
Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.
18. Radial quasiballistic transport in time-domain thermoreflectance studied using Monte Carlo simulations
Ding, D.; Chen, X.; Minnich, A. J.
2014-04-01
Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.
19. Neutron secondary-particle production cross sections and their incorporation into Monte-Carlo transport codes
SciTech Connect
Brenner, D.J.; Prael, R.E.; Little, R.C.
1987-01-01
Realistic simulations of the passage of fast neutrons through tissue require a large quantity of cross-sectional data. What are needed are differential (in particle type, energy and angle) cross sections. A computer code is described which produces such spectra for neutrons above approximately 14 MeV incident on light nuclei such as carbon and oxygen. Comparisons have been made with experimental measurements of double-differential secondary charged-particle production on carbon and oxygen at energies from 27 to 60 MeV; they indicate that the model is adequate in this energy range. In order to utilize fully the results of these calculations, they should be incorporated into a neutron transport code. This requires defining a generalized format for describing charged-particle production, putting the calculated results in this format, interfacing the neutron transport code with these data, and charged-particle transport. The design and development of such a program is described. 13 refs., 3 figs.
20. The effect of voxel size on dose distribution in Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
Yani, Sitti; Dirgayussa, I. Gde E.; Rhani, Moh. Fadhillah; Haryanto, Freddy; Arif, Idam
2015-09-01
Recently, the Monte Carlo (MC) calculation method has been reported as the most accurate method for predicting dose distributions in radiotherapy. The MC code system (especially DOSXYZnrc) has been used to investigate the effect of different voxel (volume element) sizes on the accuracy of dose distributions. To investigate this effect on dosimetry parameters, calculations were made with three different voxel sizes: 1 × 1 × 0.1 cm3, 1 × 1 × 0.5 cm3, and 1 × 1 × 0.8 cm3. A total of 1 × 10^9 histories were simulated in order to reach statistical uncertainties of 2%. This simulation takes about 9-10 hours to complete. Measurements were made with a field size of 10 × 10 cm2 for the 6 MV photon beams with a Gaussian intensity distribution of FWHM 0.1 cm and SSD 100.1 cm. Dose distributions were simulated with MC and measured in a water phantom. The outputs of this simulation, i.e. the percent depth dose and the dose profile at dmax from the three sets of calculations, are presented, and comparisons are made with experimental data from TTSH (Tan Tock Seng Hospital, Singapore) over 0-5 cm depth. The dose scored in a voxel is a volume-averaged estimate of the dose at the center of the voxel. The results of this study show that the difference between the Monte Carlo simulation and the experimental data depends on the voxel size, both for percent depth dose (PDD) and for the dose profile. For the PDD scan along the Z axis (depth) of the water phantom, the largest difference, about 17%, was obtained for the 1 × 1 × 0.8 cm3 voxel size. In this study, the dose profile analysis focused on the high dose-gradient area. For the dose profile scanned along the Y axis, the largest difference, about 12%, was obtained for the 1 × 1 × 0.1 cm3 voxel size. This study demonstrates that the choice of voxel size in Monte Carlo simulation is important.
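The voxel-size dependence reported here is largely a volume-averaging effect, which a short sketch makes concrete. The analytic depth-dose curve below is an illustrative stand-in, not the measured TTSH data.

```python
import numpy as np

def pdd(z):
    # Toy percent-depth-dose: fast build-up followed by slow attenuation
    return 100.0 * (1.0 - np.exp(-4.0 * z)) * np.exp(-0.04 * z)

z0 = 0.5                       # depth of interest in the build-up region, cm
for dz in (0.1, 0.5, 0.8):     # voxel depth extents studied, cm
    zs = np.linspace(z0 - dz / 2.0, z0 + dz / 2.0, 101)
    # The scored voxel dose approximates the mean over the voxel extent,
    # so coarse bins flatten the steep build-up region.
    print(dz, round(pdd(z0), 2), round(pdd(zs).mean(), 2))
```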
1. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations
Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca
2007-03-01
A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms, with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
2. SU-E-CAMPUS-I-02: Estimation of the Dosimetric Error Caused by the Voxelization of Hybrid Computational Phantoms Using Triangle Mesh-Based Monte Carlo Transport
SciTech Connect
2014-06-15
Purpose: A computational voxel phantom provides realistic anatomy, but the voxel structure may result in dosimetric error compared to real anatomy composed of perfect surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing the voxel-based doses at different resolutions with triangle mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into a Monte Carlo transport code, penMesh, which supports triangle meshes. We calculated energy deposition to selected organs of interest for parallel photon beams with three mono energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved for higher voxel resolution, but for many organs the differences were small. The difference in the energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The number of photon histories processed per second in voxels was 6.4 × 10^4, 3.3 × 10^4, and 1.3 × 10^4 for 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0 × 10^4 histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin layer tissues (e.g., the radiosensitive layer in the gastrointestinal tract) which cannot be modeled by voxel phantoms.
3. Monte-Carlo Simulation of Bacterial Transport in a Heterogeneous Aquifer With Correlated Hydrologic and Reactive Properties
Scheibe, T. D.
2003-12-01
It has been widely observed in field experiments that the apparent rate of bacterial attachment, particularly as parameterized by the collision efficiency in filtration-based models, decreases with transport distance (i.e., exhibits scale-dependency). This effect has previously been attributed to microbial heterogeneity; that is, variability in cell-surface properties within a single monoclonal population. We demonstrate that this effect could also be interpreted as a field-scale manifestation of local-scale correlation between physical heterogeneity (hydraulic conductivity variability) and reaction heterogeneity (attachment rate coefficient variability). A field-scale model of bacterial transport developed for the South Oyster field research site located near Oyster, Virginia, and observations from field experiments performed at that site, are used as the basis for this study. Three-dimensional Monte Carlo simulations of bacterial transport were performed under four alternative scenarios: 1) homogeneous hydraulic conductivity (K) and attachment rate coefficient (Kf), 2) heterogeneous K, homogeneous Kf, 3) heterogeneous K and Kf with local correlation based on empirical and theoretical relationships, and 4) heterogeneous K and Kf without local correlation. The results of the 3D simulations were analyzed using 1D model approximations following conventional methods of field data analysis. An apparent decrease with transport distance of effective collision efficiency was observed only in the case where the local properties were both heterogeneous and correlated. This effect was observed despite the fact that the local collision efficiency was specified as a constant in the 3D model, and can therefore be interpreted as a scale effect associated with the local correlated heterogeneity as manifested at the field scale.
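The sketch below illustrates, under stated assumptions, what "correlated versus uncorrelated local heterogeneity" means in scenarios 3 and 4: a correlated lognormal conductivity field is generated, an attachment rate is tied to it through a made-up power law, and the correlation is then destroyed by permutation. The geostatistical parameters and the K-Kf relation are hypothetical, not the South Oyster values.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 200                        # 1D transect of cells
mu_lnK, sigma_lnK = -4.0, 1.0  # hypothetical ln-K statistics

# correlated Gaussian field via moving-average smoothing of white noise
white = rng.standard_normal(n + 20)
kernel = np.ones(10) / np.sqrt(10.0)   # ~10-cell correlation length
ln_K = mu_lnK + sigma_lnK * np.convolve(white, kernel, mode="valid")[:n]
K = np.exp(ln_K)

# scenario 3: attachment rate locally correlated with K (made-up power law)
Kf_corr = 0.05 * K ** (-0.5)

# scenario 4: same marginal distribution, but local correlation destroyed
Kf_uncorr = rng.permutation(Kf_corr)

print("corr(ln K, ln Kf):",
      np.corrcoef(ln_K, np.log(Kf_corr))[0, 1],
      np.corrcoef(ln_K, np.log(Kf_uncorr))[0, 1])
```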
4. The TORT three-dimensional discrete ordinates neutron/photon transport code (TORT version 3)
SciTech Connect
1997-10-01
TORT calculates the flux or fluence of neutrons and/or photons throughout three-dimensional systems due to particles incident upon the system's external boundaries, due to fixed internal sources, or due to sources generated by interaction with the system materials. The transport process is represented by the Boltzmann transport equation. The method of discrete ordinates is used to treat the directional variable, and a multigroup formulation treats the energy dependence. Anisotropic scattering is treated using a Legendre expansion. Various methods are used to treat spatial dependence, including nodal and characteristic procedures that have been especially adapted to resist numerical distortion. A method of body overlay assists in material zone specification, or the specification can be generated by an external code supplied by the user. Several special features are designed to concentrate machine resources where they are most needed. The directional quadrature and Legendre expansion can vary with energy group. A discontinuous mesh capability has been shown to reduce the size of large problems by a factor of roughly three in some cases. The emphasis in this code is a robust, adaptable application of time-tested methods, together with a few well-tested extensions.
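As a minimal illustration of the discrete ordinates method that TORT applies in three dimensions, the sketch below solves a one-group, 1D slab problem with a diamond-difference sweep and source iteration. All cross sections and sources are illustrative; TORT's nodal and characteristic treatments are far more elaborate.

```python
import numpy as np

# One-group, 1D slab discrete ordinates (S8) with isotropic scattering.
N, L = 100, 10.0                  # cells, slab thickness (cm)
dx = L / N
sig_t, sig_s = 1.0, 0.5           # total / scattering cross sections (1/cm)
q = np.full(N, 1.0)               # uniform fixed source

mu, w = np.polynomial.legendre.leggauss(8)   # S8 angles and weights
phi = np.zeros(N)
for it in range(200):             # source iteration
    src = 0.5 * (sig_s * phi + q)            # isotropic emission density
    phi_new = np.zeros(N)
    for mu_m, w_m in zip(mu, w):
        psi_edge = 0.0                       # vacuum boundary
        cells = range(N) if mu_m > 0 else range(N - 1, -1, -1)
        for i in cells:                      # diamond-difference sweep
            a = abs(mu_m) / dx
            psi_c = (src[i] + 2.0 * a * psi_edge) / (sig_t + 2.0 * a)
            psi_edge = 2.0 * psi_c - psi_edge
            phi_new[i] += w_m * psi_c        # accumulate scalar flux
    converged = np.max(np.abs(phi_new - phi)) < 1e-8 * np.max(phi_new)
    phi = phi_new
    if converged:
        break
print("midplane scalar flux:", phi[N // 2])
```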
5. Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.
PubMed
Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred
2012-02-01
Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A number of equipment models are implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency of in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulations including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing. PMID:22217596
6. Monte Carlo study of alpha (α) particle transport in nanoscale gallium arsenide semiconductor materials
Amir, Haider F. Abdul; Chee, Fuei Pien
2012-09-01
Space and ground level electronic equipment with semiconductor devices are always subjected to deleterious effects of radiation. The study of ion-solid interaction can show the radiation effects of scattering and stopping of high-speed atomic particles when passing through matter. This study has been of theoretical interest and of practical importance in recent years, driven by the need to control material properties at the nanoscale. This paper attempts to present the calculations of the final 3D distribution of the ions and all kinetic phenomena associated with the ion's energy loss: target damage, sputtering, ionization, and phonon production of alpha (α) particles in Gallium Arsenide (GaAs) material. This calculation is simulated using the Monte Carlo simulation SRIM (Stopping and Range of Ions in Matter). The comparison of radiation tolerance between the conventional-scale and nanoscale GaAs layers is discussed as well. From the findings, it is observed that most of the damage formed in the GaAs layer is induced by the production of lattice defects in the form of vacancies, defect clusters and dislocations. However, when the GaAs layer is scaled down (nanoscaling), it is found that the GaAs layer can withstand higher radiation energy, in terms of displacement damage.
7. High-resolution monte carlo simulation of flow and conservative transport in heterogeneous porous media 1. Methodology and flow results
USGS Publications Warehouse
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Finally, specific features of the velocity field itself are illuminated in this first paper. In particular, these data give one the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.
8. Measurements of photon and neutron leakage from medical linear accelerators and Monte Carlo simulation of tenth value layers of concrete used for intensity modulated radiation therapy treatment
The x ray leakage from the housing of a therapy x ray source is regulated to be <0.1% of the useful beam exposure at a distance of 1 m from the source. The x ray leakage in the backward direction has been measured from linacs operating at 4, 6, 10, 15, and 18 MV using a 100 cm3 ionization chamber and track-etch detectors. The leakage was measured at nine different positions over the rear wall using a 3 x 3 matrix with a 1 m separation between adjacent positions. In general, the leakage was less than the canonical value, but the exact value depends on energy, gantry angle, and measurement position. Leakage at 10 MV for some positions exceeded 0.1%. Electrons with energy greater than about 9 MeV have the ability to produce neutrons. Neutron leakage has been measured around the head of electron accelerators at a distance of 1 m from the target, at azimuthal angles of 0, 46, 90, 135, and 180 degrees, for electron energies of 9, 12, 15, 16, 18, and 20 MeV and for 10, 15, and 18 MV x ray photon beams, using a neutron bubble detector of type BD-PND and track-etch detectors. The highest neutron dose equivalent per unit electron dose was at 0 degrees for all electron energies. The neutron leakage from photon beams was the highest among all the machines. Intensity modulated radiation therapy (IMRT) delivery consists of a summation of small beamlets having different weights that make up each field. A linear accelerator room designed exclusively for IMRT use would require different, probably lower, tenth value layers (TVL) for determining the required wall thicknesses for the primary barriers. The first, second, and third TVL of 60Co gamma rays and of photons from 4, 6, 10, 15, and 18 MV x ray beams in concrete have been determined and modeled using a Monte Carlo technique (MCNP version 4C2) for cone beams of half-opening angles of 0, 3, 6, 9, 12, and 14 degrees.
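A small sketch of the standard barrier-thickness arithmetic that such TVL data feed into (the NCRP-style rule using a first TVL plus an equilibrium TVL for the remainder); the attenuation factor and TVL values below are hypothetical, not the measured ones.

```python
import math

def barrier_thickness(attenuation_factor, tvl1, tvl_e):
    """Barrier thickness giving the required attenuation, using the
    first TVL and an equilibrium TVL for the remaining tenth-value
    layers."""
    n = math.log10(1.0 / attenuation_factor)  # number of TVLs needed
    return tvl1 + (n - 1.0) * tvl_e if n > 1.0 else n * tvl1

# hypothetical: attenuate to 1e-4 with TVL1 = 44 cm, TVLe = 41 cm
print(f"{barrier_thickness(1e-4, 44.0, 41.0):.1f} cm of concrete")
```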
9. Assessment of uncertainties in the lung activity measurement of low-energy photon emitters using Monte Carlo simulation of ICRP male thorax voxel phantom.
PubMed
Nadar, M Y; Akar, D K; Rao, D D; Kulkarni, M S; Pradeepkumar, K S
2015-12-01
Assessment of intake due to long-lived actinides by the inhalation pathway is carried out by lung monitoring of radiation workers inside a totally shielded steel room using sensitive detection systems such as a Phoswich and an array of HPGe detectors. In this paper, uncertainties in the lung activity estimation due to positional errors, chest wall thickness (CWT) and detector background variation are evaluated. First, calibration factors (CFs) of the Phoswich and an array of three HPGe detectors are estimated by incorporating the ICRP male thorax voxel phantom and detectors in the Monte Carlo code 'FLUKA'. CFs are estimated for a uniform source distribution in the lungs of the phantom for various photon energies. The variation in the CFs for positional errors of 0.5, 1 and 1.5 cm in the horizontal and vertical directions along the chest is studied. The positional errors are also evaluated by resizing the voxel phantom. Combined uncertainties are estimated at different energies using the uncertainties due to CWT, detector positioning, detector background variation of an uncontaminated adult person and counting statistics in the form of scattering factors (SFs). SFs are found to decrease with increase in energy. With the HPGe array, the highest SF of 1.84 is found at 18 keV. It reduces to 1.36 at 238 keV. PMID:25468992
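A minimal sketch of the generic step behind a combined uncertainty estimate: combining independent relative uncertainties in quadrature. The values below are illustrative placeholders, not the paper's, and the paper's SFs are defined on their own terms.

```python
import numpy as np

# Combine independent relative uncertainty sources in quadrature.
sources = {
    "chest wall thickness": 0.12,
    "detector positioning": 0.05,
    "background variation": 0.08,
    "counting statistics":  0.10,
}
combined = np.sqrt(sum(u ** 2 for u in sources.values()))
print(f"combined relative uncertainty: {combined:.3f}")
```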
10. Monte Carlo study of the effect of collimator thickness on Tc-99m source response in single photon emission computed tomography.
PubMed
2012-05-01
In single photon emission computed tomography (SPECT), the collimator is a crucial element of the imaging chain and controls the noise-resolution tradeoff of the collected data. The current study is an evaluation of the effects of different thicknesses of a low-energy high-resolution (LEHR) collimator on tomographic spatial resolution in SPECT. In the present study, the SIMIND Monte Carlo program was used to simulate a SPECT system equipped with an LEHR collimator. A point source of (99m)Tc, an acrylic cylindrical Jaszczak phantom with cold spheres and rods, and a human anthropomorphic torso phantom (4D-NCAT phantom) were used. Simulated planar images and reconstructed tomographic images were evaluated both qualitatively and quantitatively. Based on the tabulated calculated detector parameters, the contributions of Compton scattering and photoelectric reactions, and the peak-to-Compton (P/C) area in the energy spectra obtained from scanning the sources with 11 collimator thicknesses (ranging from 2.400 to 2.410 cm), we identified a thickness of 2.405 cm as the proper LEHR parallel-hole collimator thickness. The image quality analyses by the structural similarity index (SSIM) algorithm and by visual inspection showed that images of suitable quality were obtained with a collimator thickness of 2.405 cm. Both image quality and the analyzed performance parameters were suitable for the projections and reconstructed images prepared with the 2.405 cm LEHR collimator thickness, compared with the other collimator thicknesses. PMID:23372440
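The SSIM comparison mentioned above can be reproduced in outline with scikit-image; the sketch below scores a synthetic "test" reconstruction against a reference. The images are random stand-ins, not SPECT data.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                      # stand-in image
test = reference + 0.05 * rng.standard_normal((128, 128))

score = structural_similarity(reference, test,
                              data_range=test.max() - test.min())
print(f"SSIM = {score:.3f}")   # 1.0 would mean identical images
```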
11. A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
12. A MONTE CARLO CODE FOR RELATIVISTIC RADIATION TRANSPORT AROUND KERR BLACK HOLES
SciTech Connect
Schnittman, Jeremy D.; Krolik, Julian H. E-mail: [email protected]
2013-11-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
13. Transport map-accelerated Markov chain Monte Carlo for Bayesian parameter inference
Marzouk, Y.; Parno, M.
2014-12-01
We introduce a new framework for efficient posterior sampling in Bayesian inference, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use transport maps to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods, Hessian-preconditioned Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map—i.e., a Knothe-Rosenblatt re-arrangement—using information from previous MCMC states, via the solution of an optimization problem. Crucially, this optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using Newton or quasi-Newton methods, but the formulation is such that these methods require no derivative information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates using the alternating direction method of multipliers enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems involving both ordinary and partial differential equations show multiple order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per model evaluation and per unit of wallclock time.
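A heavily simplified sketch of the idea: here the "map" is only a linear lower-triangular factor (a Cholesky factor of an empirical covariance) used to precondition a random-walk Metropolis proposal, adapted once from early samples. The target density and tuning constants are made up; the paper's maps are nonlinear Knothe-Rosenblatt rearrangements built by convex optimization.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # banana-shaped density, an assumed example target
    return -0.5 * (x[0] ** 2 / 4.0 + (x[1] + 0.5 * x[0] ** 2) ** 2)

x = np.zeros(2)
samples = []
L = np.eye(2)                        # lower-triangular linear "map"
for i in range(20000):
    prop = x + L @ rng.standard_normal(2) * 0.8   # preconditioned walk
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                     # Metropolis accept
    samples.append(x.copy())
    if i == 2000:                    # adapt the map once from early samples
        cov = np.cov(np.array(samples).T) + 1e-6 * np.eye(2)
        L = np.linalg.cholesky(cov)  # Cholesky factor is lower triangular
print("posterior mean estimate:", np.asarray(samples)[5000:].mean(axis=0))
```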
14. Comparison of Two Accelerators for Monte Carlo Radiation Transport Calculations, NVIDIA Tesla M2090 GPU and Intel Xeon Phi 5110p Coprocessor: A Case Study for X-ray CT Imaging Dose Calculation
Liu, Tianyu; Xu, X. George; Carothers, Christopher D.
2014-06-01
Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, NVIDIA Tesla M2090 GPU and Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT_CPU, ARCHER-CT_GPU and ARCHER-CT_COP, to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89 to 4.49 and 3.01 to 3.23 times faster than the parallel ARCHER-CT_CPU running with 12 hyperthreads.
15. Numerical modeling of photon migration in the cerebral cortex of the living rat using the radiative transport equation
2015-03-01
Accurate modeling and efficient calculation of photon migration in biological tissues is required for the determination of the optical properties of living tissues by in vivo experiments. This study develops a calculation scheme of photon migration for determination of the optical properties of the rat cerebral cortex (ca. 0.2 cm thick) based on the three-dimensional time-dependent radiative transport equation, assuming a homogeneous object. It is shown that the time-resolved profiles calculated by the developed scheme agree with the profiles measured by in vivo experiments using near infrared light. Also, an efficient calculation method is tested using the delta-Eddington approximation of the scattering phase function.
16. Controlling resonant photonic transport along optical waveguides by two-level atoms
SciTech Connect
Yan Conghua; Wei Lianfu; Jia Wenzhi; Shen, Jung-Tsung
2011-10-15
Recent works [Shen et al., Phys. Rev. Lett. 95, 213001 (2005); Zhou et al., Phys. Rev. Lett. 101, 100501 (2008)] showed that the incident photons cannot transmit along an optical waveguide containing a resonant two-level atom (TLA). Here we propose an approach to overcome such a difficulty by using asymmetric couplings between the photons and a TLA. Our numerical results show that the transmission spectrum of the photon depends on both the frequency of the incident photons and the photon-TLA couplings. Consequently, this system can serve as a controllable photon attenuator, by which the transmission probability of the resonantly incident photons can be changed from 0% to 100%. A possible application to explain the recent experimental observations [Astafiev et al., Science 327, 840 (2010)] is also discussed.
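For orientation, the sketch below evaluates the textbook Shen-Fan single-photon transmission for the symmetric-coupling case, T(Δ) = Δ²/(Δ² + (Γ/2)²), which reproduces the complete reflection on resonance that the abstract starts from; the asymmetric couplings proposed in the paper modify this lineshape.

```python
import numpy as np

gamma = 1.0                                  # decay rate into the guide
delta = np.linspace(-5.0, 5.0, 11) * gamma   # detuning from resonance
T = delta ** 2 / (delta ** 2 + (gamma / 2.0) ** 2)
for d, t in zip(delta, T):
    print(f"detuning {d:+.1f}: T = {t:.3f}")  # T = 0 exactly on resonance
```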
17. Monte Carlo portal dosimetry
SciTech Connect
Chin, P.W. E-mail: [email protected]
2005-10-15
This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verifications. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes were used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm × 5 cm to 20 cm × 20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.
18. Epidermal photonic devices for quantitative imaging of temperature and thermal transport characteristics of the skin.
PubMed
Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Webb, R Chad; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A
2014-01-01
Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or 'epidermal', photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively. PMID:25234839
19. Design studies of volume-pumped photolytic systems using a photon transport code
Prelas, M. A.; Jones, G. L.
1982-01-01
The use of volume sources, such as nuclear pumping, presents some unique features in the design of photolytically driven systems (e.g., lasers). In systems such as these, for example, a large power deposition is not necessary. However, certain restrictions, such as self-absorption, limit the ability of photolytically driven systems to scale by volume. A photon transport computer program was developed at the University of Missouri-Columbia to study these limitations. The development of this code is important, perhaps necessary, for the design of photolytically driven systems. With the aid of this code, a photolytically driven iodine laser was designed for utilization with a 3He nuclear-pumped system with a TRIGA reactor as the neutron source. Calculations predict a peak power output of 0.37 kW. Using the same design, it is also anticipated that the system can achieve a 14-kW output using a fast burst-type reactor neutron source, and a 0.65-kW peak output using 0.1 Torr of the alpha emitter radon-220 as part of the fill. The latter would represent a truly portable laser system.
20. Epidermal photonic devices for quantitative imaging of temperature and thermal transport characteristics of the skin
Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Chad Webb, R.; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A.
2014-09-01
Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or ‘epidermal’, photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively.
1. Status of Monte Carlo at Los Alamos
SciTech Connect
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.
2. Monte Carlo investigation of the increased radiation deposition due to gold nanoparticles using kilovoltage and megavoltage photons in a 3D randomized cell model
SciTech Connect
Douglass, Michael; Bezak, Eva; Penfold, Scott
2013-07-15
Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4; a single 400 nm diameter gold cluster randomly positioned in the cytoplasm and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell including cytoplasm, membrane, and nucleus was simulated. Two Geant4 physics lists were tested: the default Livermore and a custom built Livermore/DNA hybrid physics list. 10^6 particles were simulated for the 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 μm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra of 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model has demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced a statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum. Based on simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p > 0.05) effect on the overall dose increase in the cell. The low energy of the Auger electrons produced prevents them from propagating more than 250-500 nm from the gold cluster and, therefore, has a negligible effect on the overall dose increase due to GNP. Conclusions: The results presented in the current work show that the primary dose enhancement is due to the production of additional photoelectrons.
3. Elucidating the electron transport in semiconductors via Monte Carlo simulations: an inquiry-driven learning path for engineering undergraduates
Persano Adorno, Dominique; Pizzolato, Nicola; Fazio, Claudio
2015-09-01
Within the context of higher education for science or engineering undergraduates, we present an inquiry-driven learning path aimed at developing a more meaningful conceptual understanding of the electron dynamics in semiconductors in the presence of applied electric fields. The electron transport in a nondegenerate n-type indium phosphide bulk semiconductor is modelled using a multivalley Monte Carlo approach. The main characteristics of the electron dynamics are explored under different values of the driving electric field, lattice temperature and impurity density. Simulation results are presented by following a question-driven path of exploration, starting from the validation of the model and moving up to reasoned inquiries about the observed characteristics of electron dynamics. Our inquiry-driven learning path, based on numerical simulations, represents a viable example of how to integrate a traditional lecture-based teaching approach with effective learning strategies, providing science or engineering undergraduates with practical opportunities to enhance their comprehension of the physics governing the electron dynamics in semiconductors. Finally, we present a general discussion about the advantages and disadvantages of using an inquiry-based teaching approach within a learning environment based on semiconductor simulations.
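A minimal single-electron Monte Carlo in the spirit of such simulations, assuming a parabolic band, a constant total scattering rate, and isotropic elastic scattering; all parameter values are illustrative, not InP, and a multivalley model would add valley-dependent rates and energies.

```python
import numpy as np

rng = np.random.default_rng(3)

# Free flights in a uniform field, interrupted by isotropic elastic
# scattering at a constant total rate; no energy-loss mechanism.
q, m = 1.602e-19, 0.08 * 9.109e-31   # charge (C), effective mass (kg)
E = 1.0e5                            # electric field (V/m)
rate = 1.0e13                        # total scattering rate (1/s)
a = -q * E / m                       # electron acceleration along x

v = np.zeros(3)
t_total = x_total = 0.0
for _ in range(100_000):
    tau = -np.log(rng.random()) / rate        # free-flight duration
    x_total += v[0] * tau + 0.5 * a * tau**2  # drift during the flight
    v[0] += a * tau
    t_total += tau
    speed = np.linalg.norm(v)                 # elastic: keep the speed,
    cos_t = 2.0 * rng.random() - 1.0          # randomize the direction
    phi = 2.0 * np.pi * rng.random()
    sin_t = np.sqrt(1.0 - cos_t**2)
    v = speed * np.array([cos_t, sin_t * np.cos(phi), sin_t * np.sin(phi)])
print(f"drift velocity: {x_total / t_total:.3e} m/s")
```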
4. An investigation of the depth dose in the build-up region, and surface dose for a 6-MV therapeutic photon beam: Monte Carlo simulation and measurements
PubMed Central
Apipunyasopon, Lukkana; Srisatit, Somyot; Phaisangittisakul, Nakorn
2013-01-01
The percentage depth dose in the build-up region and the surface dose for the 6-MV photon beam from a Varian Clinac 23EX medical linear accelerator was investigated for square field sizes of 5 × 5, 10 × 10, 15 × 15 and 20 × 20 cm2 using the EGS4nrc Monte Carlo (MC) simulation package. The depth dose was found to change rapidly in the build-up region, and the percentage surface dose increased proportionally with the field size from approximately 10% to 30%. The measurements were also taken using four common detectors: TLD chips, PFD dosimeter, parallel-plate and cylindrical ionization chamber, and compared with MC simulated data, which served as the gold standard in our study. The surface doses obtained from each detector were derived from the extrapolation of the measured depth doses near the surface and were all found to be higher than that of the MC simulation. The lowest and highest over-responses in the surface dose measurement were found with the TLD chip and the CC13 cylindrical ionization chamber, respectively. Increasing the field size increased the percentage surface dose almost linearly in the various dosimeters and also in the MC simulation. Interestingly, the use of the CC13 ionization chamber eliminates the high gradient feature of the depth dose near the surface. The correction factors for the measured surface dose from each dosimeter for square field sizes of between 5 × 5 and 20 × 20 cm2 are introduced. PMID:23104898
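The surface-dose values above come from extrapolating measured build-up doses to zero depth; a minimal sketch of that step, with hypothetical readings:

```python
import numpy as np

# Extrapolate measured build-up depth doses to depth zero, as is done
# when estimating surface dose; depths and doses are made-up readings.
depth_mm = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pdd = np.array([42.0, 61.0, 74.0, 83.0, 89.0])   # percent depth dose

coeff = np.polyfit(depth_mm, pdd, 2)             # quadratic fit
surface_dose = np.polyval(coeff, 0.0)
print(f"extrapolated surface dose: {surface_dose:.1f}%")
```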
5. A feasibility study to calculate unshielded fetal doses to pregnant patients in 6-MV photon treatments using Monte Carlo methods and anatomically realistic phantoms
SciTech Connect
Bednarz, Bryan; Xu, X. George
2008-07-15
A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques that are based on AAPM Task Group Report 36 published in 1995 [M. Stovall et al., Med. Phys. 22, 63-82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented into the MCNPX code together with a detailed accelerator model that is capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation. This was due to the decrease in distance between the fetal body and field edge with increasing stage of gestation. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage of the pregnant patient, due to the increasing size of the fetus and relative constant distance between the field edge and fetal body for each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility to accurately determine the absorbed organ doses in the mother and fetus as part of the treatment planning and eventually in risk management.
6. Study of water transport phenomena on cathode of PEMFCs using Monte Carlo simulation
Soontrapa, Karn
This dissertation deals with the development of a three-dimensional computational model of water transport phenomena in the cathode catalyst layer (CCL) of PEMFCs. The catalyst layer in the numerical simulation was developed using an optimized sphere packing algorithm. The optimization technique, named the adaptive random search technique (ARSET), was employed in this packing algorithm. The ARSET algorithm generates the initial locations of the spheres and allows them to move in random directions with a variable moving distance, randomly selected from the sampling range, based on the Lennard-Jones potential of the current and new configurations. The solid fraction values obtained from this developed algorithm are in the range of 0.631 to 0.6384, while the actual processing time can be significantly reduced by 8% to 36% based on the number of spheres. The initial random number sampling range was investigated, and the appropriate sampling range value is equal to 0.5. This numerically developed cathode catalyst layer has been used to simulate the diffusion processes of protons, in the form of hydronium, and oxygen molecules through the cathode catalyst layer. The movements of hydronium ions and oxygen molecules are controlled by random vectors, and all of these moves have to obey the Lennard-Jones potential energy constraint. A chemical reaction between these two species will happen when they share the same neighborhood, resulting in the creation of water molecules. Like the hydronium ions and oxygen molecules, these newly formed water molecules also diffuse through the cathode catalyst layer. It is important to study the distributions of hydronium, oxygen and water molecules during the diffusion process in order to understand the lifetime of the cathode catalyst layer. The effect of fuel flow rate on the water distribution has also been studied by varying the hydronium and oxygen molecule input. Based on the results of these simulations, the hydronium:oxygen input ratio of 3:2 has been found to be the best choice for this study. To study the effects of metal impurity and gas contamination on the cathode catalyst layer, the cathode catalyst layer structure is modified by adding metal impurities, and the gas contamination is introduced with the oxygen input. In this study, gas contamination has very little effect on the electrochemical reaction inside the cathode catalyst layer because this simulation is transient in nature and the percentage of gas contamination is small, in the range of 0.0005% to 0.0015% for CO and 0.028% to 0.04% for CO2. Metal impurities seem to have more effect on the performance of the PEMFC because they not only change the structure of the developed cathode catalyst layer but also affect the movement of fuel and water product. Aluminum has the worst effect on the cathode catalyst layer structure because it yields the lowest amount of newly formed water and the largest amount of trapped water product compared to iron of the same impurity percentage. The iron impurity shows some positive effect on the lifetime of the cathode catalyst layer. At 0.75 wt% iron impurity, the amount of newly formed water is 6.59% lower than for the pure carbon catalyst layer, but the amount of trapped water product is 11.64% lower than for the pure catalyst layer. The lifetime of the impure cathode catalyst layer is longer than that of the pure one because the amount of water that remains trapped inside the pure cathode catalyst layer is higher than that of the impure one. Even though the impure cathode catalyst layer has a longer lifetime, it sacrifices electrical power output because the electrochemical reaction occurrence inside the impure catalyst layer is lower.
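A toy version of the packing idea, assuming a greedy variant that accepts only energy-lowering random moves under a Lennard-Jones potential; the sphere count, box, potential parameters and move range are all made up, and ARSET's variable sampling range and acceptance rule are richer than this.

```python
import numpy as np

rng = np.random.default_rng(5)

def lj_energy(pos, sigma=0.15, eps=1.0):
    """Total Lennard-Jones energy of a sphere configuration."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    r = np.clip(d[iu], 0.3 * sigma, None)   # guard against the singularity
    return np.sum(4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6))

n = 40
pos = rng.random((n, 3))                    # random initial placement
energy = lj_energy(pos)
for step in range(5000):
    i = rng.integers(n)                     # pick one sphere
    move = rng.uniform(-0.05, 0.05, 3)      # random move, variable range
    trial = pos.copy()
    trial[i] = np.clip(trial[i] + move, 0.0, 1.0)
    e_new = lj_energy(trial)
    if e_new < energy:                      # accept energy-lowering moves
        pos, energy = trial, e_new
print(f"final LJ energy: {energy:.3f}")
```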
7. Overview of the MCU Monte Carlo Software Package
Kalugin, M. A.; Oleynik, D. S.; Shkarovsky, D. A.
2014-06-01
MCU (Monte Carlo Universal) is a project on development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides the information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented.
8. Vectorizing and macrotasking Monte Carlo neutral particle algorithms
SciTech Connect
Heifetz, D.B.
1987-04-01
Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches in as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, macrotasking may be taken to its limit, with each test flight, and each split test flight, being a separate task.
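The array-compaction trick for removing completed flights can be sketched with NumPy boolean masks; the toy below transports isotropically scattering particles through a purely scattering slab, compacting the flight arrays each step so no gaps remain. The cross section and slab thickness are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

sig_t, slab, n0 = 1.0, 5.0, 100_000
x = np.zeros(n0)                              # particle positions
mu = rng.uniform(-1.0, 1.0, n0)               # direction cosines
transmitted = 0
while x.size:
    x = x + mu * (-np.log(rng.random(x.size)) / sig_t)  # free flights
    transmitted += np.count_nonzero(x > slab)
    alive = (x >= 0.0) & (x <= slab)          # flights still in the slab
    x = x[alive]                              # compact away finished flights
    mu = rng.uniform(-1.0, 1.0, x.size)       # isotropic re-scatter
print("transmission fraction:", transmitted / n0)
```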
9. Monte-Carlo simulation for an aerogel Cherenkov counter
Suda, R.; Watanabe, M.; Enomoto, R.; Iijima, T.; Adachi, I.; Hattori, H.; Kuniya, T.; Ooba, T.; Sumiyoshi, T.; Yoshida, Y.
1998-02-01
We have developed a Monte-Carlo simulation code for an aerogel Cherenkov counter which is operated under a strong magnetic field such as 1.5 T. This code consists of two parts: photon transportation inside aerogel tiles, and one-dimensional amplification in a fine-mesh photomultiplier tube. It simulates the output photo-electron yields as accurately as 5% with only a single free parameter. This code is applied to simulations for a B-factory particle identification system.
10. The All Particle Monte Carlo method: Atomic data files
SciTech Connect
Rathkopf, J.A.; Cullen, D.E.; Perkins, S.T.
1990-11-06
Development of the All Particle Method, a project to simulate the transport of particles via the Monte Carlo method, has proceeded on two fronts: data collection and algorithm development. In this paper we report on the status of the data libraries. The data collection is nearly complete with the addition of electron, photon, and atomic data libraries to the existing neutron, gamma ray, and charged particle libraries. The contents of these libraries are summarized.
11. A Monte Carlo neutron transport code for eigenvalue calculations on a dual-GPU system and CUDA environment
SciTech Connect
Liu, T.; Ding, A.; Ji, W.; Xu, X. G.; Carothers, C. D.; Brown, F. B.
2012-07-01
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually straightforward due to the 'embarrassingly parallel' nature of MC codes. However, the situation is different for eigenvalue calculations in that they are performed on a generation-by-generation basis and the thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system respectively; the speedup factor was further increased by a factor of ~2 on a dual GPU system. The upper limit of device-level parallelism is analyzed, and a possible method to enhance the thread-level parallelism is proposed. (authors)
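The generation-by-generation structure of an eigenvalue calculation can be seen in a deterministic power iteration on a small fission-transfer matrix; the 3x3 matrix below is a made-up stand-in for what a Monte Carlo code would tally stochastically each generation.

```python
import numpy as np

F = np.array([[0.6, 0.2, 0.0],
              [0.2, 0.7, 0.2],
              [0.0, 0.2, 0.6]])
source = np.ones(3) / 3.0                     # initial fission source guess
k = 0.0
for generation in range(100):
    new_source = F @ source
    k = new_source.sum() / source.sum()       # k estimate this generation
    source = new_source / new_source.sum()    # renormalize the source bank
print(f"k estimate: {k:.5f}  "
      f"(dominant eigenvalue: {np.linalg.eigvalsh(F).max():.5f})")
```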
12. Assessment of Parametric Uncertainty using Markov Chain Monte Carlo Methods for Surface Complexation Models in Groundwater Reactive Transport Modeling
Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.
2010-12-01
Parametric uncertainty in groundwater modeling is commonly assessed using the first-order-second-moment method, which yields the linear confidence/prediction intervals. More advanced techniques are able to produce the nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods are restricted to certain assumptions, such as normality in model parameters. We developed a Markov Chain Monte Carlo (MCMC) method to directly investigate the parametric distributions and confidence/prediction intervals. The MCMC results are used to evaluate the accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium (VI). The breakthrough data of Kohler et al. (1996), obtained from a series of column experiments, are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and the fractions of functional groups. The Morris method sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. The parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models. In comparison with the linear and nonlinear prediction intervals, the prediction intervals of MCMC are more robust in simulating the breakthrough curves that are not used for the parameter calibration and estimation of parameter distributions.
13. Time-correlated photon-counting probe of singlet excitation transport and restricted rotation in Langmuir-Blodgett monolayers
SciTech Connect
Anfinrud, P.A.; Hart, D.E.; Struve, W.S.
1988-07-14
Fluorescence depolarization was monitored by time-correlated single-photon counting in organized monolayers of octadecylrhodamine B (ODRB) in dioleoylphosphatidylcholine (DOL) at air-water interfaces. At low ODRB density, the depolarization was dominated by restricted rotational diffusion. Increases in surface pressure reduced both the angular range and the diffusion constant for rotational motion. At higher ODRB densities, additional depolarization was observed due to electronic excitation transport. A two-dimensional two-particle theory developed by Baumann and Fayer was found to provide an excellent description of the transport dynamics for reduced chromophore densities up to approximately 5.0. The testing of transport theories proves to be relatively insensitive to the orientational distribution assumed for the ODRB transition moments in these two-dimensional systems.
14. The transport character of quantum state in one-dimensional coupled-cavity arrays: effect of the number of photons and entanglement degree
Ma, Shao-Qiang; Zhang, Guo-Feng
2015-12-01
The transport properties of photons injected into one-dimensional coupled-cavity arrays (CCAs) are studied. It is found that the number of photons cannot change the evolution cycle of the system or the time points at which W states and NOON states are obtained with a relatively higher probability. Transport dynamics in the CCAs shows that entanglement-enhanced state transmission is the more effective phenomenon, and we show that a quantum state with maximum concurrence can be transmitted completely when photon loss is not considered.
15. Monte Carlo fundamentals
SciTech Connect
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
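One of the fundamentals those notes cover, inverse-CDF sampling of free paths plus a simple tally, fits in a few lines; the slab problem below is illustrative and checks the estimate against the analytic answer.

```python
import numpy as np

rng = np.random.default_rng(11)

# Transmission through a purely absorbing slab: sample free paths by
# inverting the exponential CDF, tally transmissions, compare to exact.
sig_t, slab, n = 0.8, 3.0, 1_000_000
paths = -np.log(rng.random(n)) / sig_t       # inverse-CDF sampling
transmitted = np.count_nonzero(paths > slab)
est = transmitted / n
err = np.sqrt(est * (1 - est) / n)           # binomial standard error
print(f"MC: {est:.5f} +/- {err:.5f}, exact: {np.exp(-sig_t * slab):.5f}")
```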
16. Transport calculations for a 14.8 MeV neutron beam in a water phantom
Goetsch, S. J.
A coupled neutron/photon Monte Carlo radiation transport code (MORSE-CG) was used to calculate neutron and photon doses in a water phantom irradiated by 14.8 MeV neutrons from a gas target neutron source. The source-collimator-phantom geometry was carefully simulated. Results of calculations utilizing two different statistical estimators (next collision and track length) are presented.
17. Verification by Monte Carlo methods of a power law tissue-air ratio algorithm for inhomogeneity corrections in photon beam dose calculations.
PubMed
Webb, S; Fox, R A
1980-03-01
A Monte Carlo computer program has been used to calculate axial and off-axis depth dose distributions arising from the interaction of an external beam of 60Co radiation with a medium containing inhomogeneities. An approximation for applying the Monte Carlo data to the configuration where the lateral extent of the inhomogeneity is less than the beam area, is also presented. These new Monte Carlo techniques rely on integration over the dose distributions from constituent sub-beams of small area and the accuracy of the method is thus independent of beam size. The power law correction equation (Batho equation) describing the dose distribution in the presence of tissue inhomogeneities is derived in its most general form. By comparison with Monte Carlo reference data, the equation is validated for routine patient dosimetry. It is explained why the Monte Carlo data may be regarded as a fundamental reference point in performing these tests of the extension to the Batho equation. Other analytic correction techniques, e.g. the equivalent radiological path method, are shown to be less accurate. The application of the generalised power law equation in conjunction with CT scanner data is discussed. For ease of presentation, the details of the Monte Carlo techniques and the analytic formula have been separated into appendices. PMID:7384209
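One common statement of the power-law (Batho) correction for a point lying below a slab inhomogeneity is CF = [TAR(d1)/TAR(d2)]^(ρ−1), with d1 the distance from the point up to the top of the slab, d2 the distance up to its bottom, and ρ the slab's relative density; the paper derives the most general form. A sketch with hypothetical TAR values:

```python
def batho_cf(tar_d1, tar_d2, rel_density):
    """Classic Batho power-law correction factor for a point below a
    slab inhomogeneity, water elsewhere; d1/d2 measured from the point
    up to the top/bottom of the slab."""
    return (tar_d1 / tar_d2) ** (rel_density - 1.0)

# hypothetical tissue-air ratios, lung-like slab (rho = 0.3)
cf = batho_cf(tar_d1=0.62, tar_d2=0.78, rel_density=0.3)
print(f"correction factor: {cf:.3f}")  # >1: less attenuation through lung
```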
18. Guiding Electromagnetic Waves around Sharp Corners: Topologically Protected Photonic Transport in Metawaveguides
Ma, Tzuhsuan; Khanikaev, Alexander B.; Mousavi, S. Hossein; Shvets, Gennady
2015-03-01
The wave nature of radiation prevents its reflections-free propagation around sharp corners. We demonstrate that a simple photonic structure based on a periodic array of metallic cylinders attached to one of the two confining metal plates can emulate spin-orbit interaction through bianisotropy. Such a metawaveguide behaves as a photonic topological insulator with complete topological band gap. An interface between two such structures with opposite signs of the bianisotropy supports topologically protected surface waves, which can be guided without reflections along sharp bends of the interface.
19. Guiding electromagnetic waves around sharp corners: topologically protected photonic transport in metawaveguides.
PubMed
Ma, Tzuhsuan; Khanikaev, Alexander B; Mousavi, S Hossein; Shvets, Gennady
2015-03-27
The wave nature of radiation prevents its reflections-free propagation around sharp corners. We demonstrate that a simple photonic structure based on a periodic array of metallic cylinders attached to one of the two confining metal plates can emulate spin-orbit interaction through bianisotropy. Such a metawaveguide behaves as a photonic topological insulator with complete topological band gap. An interface between two such structures with opposite signs of the bianisotropy supports topologically protected surface waves, which can be guided without reflections along sharp bends of the interface. PMID:25860770
20. MCNP/X TRANSPORT IN THE TABULAR REGIME
SciTech Connect
2007-01-08
The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross sections sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.
1. An Electron/Photon/Relaxation Data Library for MCNP6
SciTech Connect
2015-08-07
The capabilities of the MCNP6 Monte Carlo code in simulation of electron transport, photon transport, and atomic relaxation have recently been significantly expanded. The enhancements include not only the extension of existing data and methods to lower energies, but also the introduction of new categories of data and methods. Support of these new capabilities has required major additions to and redesign of the associated data tables. In this paper we present the first complete documentation of the contents and format of the new electron-photon-relaxation data library now available with the initial production release of MCNP6.
2. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs
Rodriguez, M.; Sempau, J.; Brualla, L.
2012-05-01
A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named ‘splitting-roulette’, was implemented on the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented on any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two modes of splitting: simple splitting and ‘selective splitting’. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.
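Although the paper's splitting-roulette combination is specific to linac geometries, the underlying weight bookkeeping can be sketched generically: split particles whose statistical weight is too high, roulette those whose weight is too low, conserving total weight in expectation. The thresholds and weights below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(13)

def apply_weight_window(weights, w_low=0.2, w_high=2.0, w_survive=1.0):
    """Split heavy particles and play Russian roulette with light ones,
    conserving total statistical weight in expectation."""
    out = []
    for w in weights:
        if w > w_high:                       # split into n equal copies
            n = int(np.ceil(w / w_survive))
            out.extend([w / n] * n)
        elif w < w_low:                      # roulette: survive with p
            p = w / w_survive
            if rng.random() < p:
                out.append(w_survive)
        else:
            out.append(w)
    return out

bank = list(rng.lognormal(0.0, 1.5, 10))
print("before:", round(sum(bank), 3),
      "after:", round(sum(apply_weight_window(bank)), 3),
      "(equal only on average)")
```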
3. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs.
PubMed
Rodriguez, M; Sempau, J; Brualla, L
2012-05-21
A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented on the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented on any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two modes of splitting: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45. PMID:22538321
4. Design and fabrication of hollow-core photonic crystal fibers for high-power ultrashort pulse transportation and pulse compression.
TOXLINE Toxicology Bibliographic Information
Wang YY; Peng X; Alharbi M; Dutin CF; Bradley TD; Gérôme F; Mielke M; Booth T; Benabid F
2012-08-01
We report on the recent design and fabrication of kagome-type hollow-core photonic crystal fibers for the purpose of high-power ultrashort pulse transportation. The fabricated seven-cell three-ring hypocycloid-shaped large-core fiber exhibits the lowest attenuation reported to date among all kagome fibers, 40 dB/km, over a broadband transmission window centered at 1500 nm. We show that the large core size, low attenuation, broadband transmission, single-mode guidance, and low dispersion make it an ideal host for high-power laser beam transportation. By filling the fiber with helium gas, a 74 μJ, 850 fs, 40 kHz repetition rate ultrashort pulse at 1550 nm has been faithfully delivered at the fiber output with little pulse distortion during propagation. Compression of a 105 μJ laser pulse from 850 fs down to 300 fs has been achieved by operating the fiber in ambient air.
5. FW-CADIS Method for Global and Semi-Global Variance Reduction of Monte Carlo Radiation Transport Calculations
SciTech Connect
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
6. Inclusion of photon production and transport and (e+e-) pair production in a particle-in-cell code for astrophysical applications
SciTech Connect
Sulkanen, M.E.; Gisler, G.R.
1989-01-01
The present study constitutes the first attempt to include, in a particle-in-cell code, the effects of radiation losses, photon production and transport, and charged-particle production by photons scattering in an intense background magnetic field. We discuss the physics and numerical issues that had to be addressed in including these effects in the ISIS code. We then present a test simulation of the propagation of a pulse of high-energy photons across an intense magnetic field using this modified version of ISIS. This simulation demonstrates dissipation of the photon pulse with charged-particle production, emission of secondary synchrotron and curvature photons and the concomitant momentum dissipation of the charged particles, and subsequent production of lower-energy pairs. 5 refs.
7. Utilizing Monte-Carlo radiation transport and spallation cross sections to estimate nuclide dependent scaling with altitude
Argento, D.; Reedy, R. C.; Stone, J.
2010-12-01
Cosmogenic Nuclides (CNs) are a critical new tool for geomorphology, allowing researchers to date Earth surface events and measure process rates [1]. Prior to CNs, many of these events and processes had no absolute method for measurement and relied entirely on relative methods [2]. Continued improvements in CN methods are necessary for expanding analytic capability in geomorphology. In the last two decades, significant progress has been made in refining these methods and reducing analytic uncertainties [1,3]. Calibration data and scaling methods are being developed to provide a self-consistent platform for use in interpreting nuclide concentration values into geologic data [4]. However, nuclide-dependent scaling has been difficult to address due to analytic uncertainty and sparseness in altitude transects. Artificial target experiments are underway, but these experiments take considerable time for nuclide buildup at lower altitudes. In this study, a Monte Carlo radiation transport code, MCNPX, is used to model the galactic cosmic-ray radiation impinging on the upper atmosphere and track the resulting secondary particles through a model of the Earth's atmosphere and lithosphere. To address the issue of nuclide-dependent scaling, the neutron flux values determined by the MCNPX simulation are folded in with estimated cross-section values [5,6]. Preliminary calculations indicate that the scaling of nuclide production potential in free air is a function of both altitude and nuclide production pathway. At 0 g/cm2 (sea level) all neutron spallation pathways have attenuation lengths within 1% of 130 g/cm2. However, the differences in attenuation length grow with increasing altitude. At 530 g/cm2 atmospheric height (~5,500 m), the apparent attenuation lengths for aggregate SiO2(n,x)10Be, aggregate SiO2(n,x)14C and K(n,x)36Cl become 149.5 g/cm2, 151 g/cm2 and 148 g/cm2, respectively. At 700 g/cm2 atmospheric height (~8,400 m, close to the highest possible sampling altitude), the apparent attenuation lengths become 171 g/cm2, 174 g/cm2 and 165 g/cm2, respectively, a difference of +/-5%. Based on these preliminary data, there may be up to 6% error in production rate scaling. Proton spallation is a small yet important component of spallation events; these data will also be presented along with the neutron results. While the differences between attenuation lengths for individual nuclides are small at sea level, they are systematic and grow with altitude. Until now, there has been no numeric analysis of this phenomenon; therefore the global scaling schemes for CNs have been missing an aspect of physics critical for achieving close agreement between empirical calibration data and physics-based models. [1] T. J. Dunai, "Cosmogenic Nuclides: Principles, Concepts and Applications in the Earth Surface Sciences", Cambridge University Press, Cambridge, 2010 [2] D. Lal, Annual Rev of Earth Planet Sci, 1988, pp. 355-388 [3] J. Gosse and F. Phillips, Quaternary Science Rev, 2001, pp. 1475-1560 [4] F. Phillips et al., (Proposal to the National Science Foundation), 2003 [5] K. Nishiizumi et al., Geochimica et Cosmochimica Acta, 2009, pp. 2163-2176 [6] R. C. Reedy, personal communication.
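To make the altitude dependence above concrete, the snippet below converts the quoted apparent attenuation lengths into production-rate scaling factors under a simple exponential scaling model, S = exp(dz/Lambda). This is only an illustration of the size of the nuclide-dependent effect: the sea-level atmospheric depth of 1033 g/cm2 is an assumed round value, and the actual study uses full MCNPX flux calculations rather than a single exponential.

import math

# Apparent attenuation lengths at 700 g/cm^2 atmospheric depth
# (values quoted in the abstract above), in g/cm^2.
lam = {'10Be': 171.0, '14C': 174.0, '36Cl': 165.0}

sea_level_depth = 1033.0   # g/cm^2, assumed approximate sea-level depth
sample_depth = 700.0       # g/cm^2 (~8,400 m)
dz = sea_level_depth - sample_depth

# Simple exponential altitude scaling relative to sea level.
scale = {k: math.exp(dz / L) for k, L in lam.items()}
for k, s in scale.items():
    print(f"{k}: scaling factor {s:.2f}")

# Relative spread between nuclides at this altitude (~10% here),
# illustrating why a nuclide-independent scaling scheme breaks down.
spread = (max(scale.values()) - min(scale.values())) / min(scale.values())
print(f"nuclide-dependent spread: {spread:.1%}")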
8. The role of plasma evolution and photon transport in optimizing future advanced lithography sources
SciTech Connect
Sizyuk, Tatyana; Hassanein, Ahmed
2013-08-28
Laser produced plasma (LPP) sources for extreme ultraviolet (EUV) photons are currently based on using small liquid tin droplets as targets, which has many advantages including generation of stable continuous targets at high repetition rate, a larger photon collection angle, and reduced contamination and damage to the optical mirror collection system from plasma debris and energetic particles. The ideal target generates a source of maximum EUV radiation output and collection in the 13.5 nm range with minimum atomic debris. Based on recent experimental results and our modeling predictions, the smallest efficient droplets have diameters in the range of 20-30 μm in LPP devices with the dual-beam technique. Such devices can produce EUV sources with conversion efficiency around 3% and with collected EUV power of 190 W or more, which can satisfy current requirements for high volume manufacturing. One of the most important characteristics of these devices is the low amount of atomic debris produced, due to the small initial mass of the droplets and the significant vaporization rate during the pre-pulse stage. In this study, we analyzed in detail plasma evolution processes in LPP systems using small spherical tin targets to predict the optimum droplet size yielding maximum EUV output. We identified several important processes during laser-plasma interaction that can affect conditions for optimum EUV photon generation and collection. The importance and accurate description of modeling these physical processes increase with decreasing target size and simulation domain.
9. The effect of biological shielding on fast neutron and photon transport in the VVER-1000 mock-up model placed in the LR-0 reactor.
PubMed
Košťál, Michal; Cvachovec, František; Milčák, Ján; Mravec, Filip
2013-05-01
The paper is intended to show the effect of a biological shielding simulator on fast neutron and photon transport in its vicinity. The fast neutron and photon fluxes were measured by means of scintillation spectroscopy using a 45 × 45 mm and a 10 × 10 mm cylindrical stilbene detector. The neutron spectrum was measured in the range of 0.6-10 MeV and the photon spectrum in 0.2-9 MeV. The results of the experiment are compared with calculations. The calculations were performed with various nuclear data libraries. PMID:23434890
10. Utilization of Monte Carlo Calculations in Radiation Transport Analyses to Support the Design of the U.S. Spallation Neutron Source (SNS)
SciTech Connect
Johnson, J.O.
2000-10-23
The Department of Energy (DOE) has given the Spallation Neutron Source (SNS) project approval to begin Title I design of the proposed facility to be built at Oak Ridge National Laboratory (ORNL), and construction is scheduled to commence in FY01. The SNS initially will consist of an accelerator system capable of delivering an approximately 0.5 microsecond pulse of 1 GeV protons, at a 60 Hz frequency, with 1 MW of beam power, into a single target station. The SNS will eventually be upgraded to a 2 MW facility with two target stations (a 60 Hz station and a 10 Hz station). The radiation transport analysis, which includes the neutronic, shielding, activation, and safety analyses, is critical to the design of an intense high-energy accelerator facility like the proposed SNS, and the Monte Carlo method is the cornerstone of the radiation transport analyses.
11. Comparison of experimental and Monte-Carlo simulation of MeV particle transport through tapered/straight glass capillaries and circular collimators
Hespeels, F.; Tonneau, R.; Ikeda, T.; Lucas, S.
2015-11-01
This study compares the capabilities of three different passive collimation devices to produce micrometer-sized beams for proton and alpha particle beams (1.7 MeV and 5.3 MeV, respectively): classical platinum TEM-like collimators, straight glass capillaries, and tapered glass capillaries. In addition, we developed a Monte-Carlo code, based on Rutherford scattering theory, which simulates particle transport through collimating devices. The simulation results match the experimental observations of beam transport through collimators both in air and in vacuum. This research shows the focusing effect of tapered capillaries, which clearly enables higher transmission flux. Nevertheless, capillary alignment with the incident beam is a prerequisite but is tedious, which makes the TEM collimator the easiest way to produce a 50 μm microbeam.
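The abstract does not give the code's internals; as a hedged sketch of the kind of sampling kernel such a Rutherford-scattering Monte Carlo needs, the snippet below draws scattering-angle cosines from the screened Rutherford distribution by analytic inversion of its CDF. The screening parameter eta is an assumed, energy- and material-dependent input, not a value from the paper.

import random

def sample_screened_rutherford_mu(eta):
    """Sample mu = cos(theta) from the screened Rutherford distribution
    p(mu) = 2*eta*(1+eta) / (1 + 2*eta - mu)^2 on [-1, 1],
    by inverting its CDF analytically. `eta` is the atomic screening
    parameter (small eta -> strongly forward-peaked scattering)."""
    xi = random.random()
    return 1.0 + 2.0 * eta - 2.0 * eta * (1.0 + eta) / (xi + eta)

# Example: a few samples with a small screening parameter; most values
# are close to +1, i.e. near-forward scattering, as expected.
print([round(sample_screened_rutherford_mu(1e-3), 6) for _ in range(5)])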
12. Monte Carlo tests of small-world architecture for coarse-grained networks of the United States railroad and highway transportation systems
Aldrich, Preston R.; El-Zabet, Jermeen; Hassan, Seerat; Briguglio, Joseph; Aliaj, Enela; Radcliffe, Maria; Mirza, Taha; Comar, Timothy; Nadolski, Jeremy; Huebner, Cynthia D.
2015-11-01
Several studies have shown that human transportation networks exhibit small-world structure, meaning they have high local clustering and are easily traversed. However, some have concluded this without statistical evaluations, and others have compared observed structure to globally random rather than planar models. Here, we use Monte Carlo randomizations to test US transportation infrastructure data for small-worldness. Coarse-grained network models were generated from GIS data wherein nodes represent the 3105 contiguous US counties and weighted edges represent the number of highway or railroad links between counties; thus, we focus on linkage topologies and not geodesic distances. We compared railroad and highway transportation networks with a simple planar network based on county edge-sharing, and with networks that were globally randomized and those that were randomized while preserving their planarity. We conclude that terrestrial transportation networks have small-world architecture, as it is classically defined relative to global randomizations. However, this topological structure is sufficiently explained by the planarity of the graphs, and in fact the topological patterns established by the transportation links actually serve to reduce the amount of small-world structure.
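The Monte Carlo randomization logic described above can be sketched in a few lines: compute clustering and path length for the observed network, then compare against an ensemble of randomized graphs. The sketch below uses networkx with a synthetic stand-in graph rather than the authors' GIS-derived county networks, and it implements the classical degree-preserving global null; the paper's stricter planarity-preserving null would need a different randomization step.

import networkx as nx

def small_world_stats(G):
    """Average clustering coefficient and characteristic path length."""
    return nx.average_clustering(G), nx.average_shortest_path_length(G)

def null_distribution(G, n_reps=20, swaps_per_edge=10):
    """Monte Carlo null: degree-preserving double-edge swaps."""
    stats = []
    for _ in range(n_reps):
        R = G.copy()
        nx.double_edge_swap(R, nswap=swaps_per_edge * R.number_of_edges(),
                            max_tries=100 * R.number_of_edges())
        if nx.is_connected(R):  # path length is only defined if connected
            stats.append(small_world_stats(R))
    return stats

# Stand-in for a coarse-grained transportation network.
G = nx.connected_watts_strogatz_graph(200, 6, 0.1, seed=1)
C, L = small_world_stats(G)
null = null_distribution(G)
C_rand = sum(c for c, _ in null) / len(null)
L_rand = sum(l for _, l in null) / len(null)
print(f"small-world index sigma = {(C / C_rand) / (L / L_rand):.2f}")  # >> 1 suggests small-world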
13. MORSE Monte Carlo code
SciTech Connect
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
14. Effective QCD and transport description of dilepton and photon production in heavy-ion collisions and elementary processes
Linnyk, O.; Bratkovskaya, E. L.; Cassing, W.
2016-03-01
In this review we address the dynamics of relativistic heavy-ion reactions and in particular the information obtained from electromagnetic probes that stem from the partonic and hadronic phases. The out-of-equilibrium description of strongly interacting relativistic fields is based on the theory of Kadanoff and Baym. For the modeling of the partonic phase we introduce an effective dynamical quasiparticle model (DQPM) for QCD in equilibrium. In the DQPM, the widths and masses of the dynamical quasiparticles are controlled by transport coefficients that can be compared to the corresponding quantities from lattice QCD. The resulting off-shell transport approach is denoted by Parton-Hadron-String Dynamics (PHSD) and includes covariant dynamical transition rates for hadronization and keeps track of the hadronic interactions in the final phase. It is shown that the PHSD captures the bulk dynamics of heavy-ion collisions from lower SPS to LHC energies and thus provides a solid basis for the evaluation of the electromagnetic emissivity, which is calculated on the basis of the same dynamical parton propagators that are employed for the dynamical evolution of the partonic system. The production of direct photons in elementary processes and heavy-ion reactions is discussed and the present status of the photon v2 "puzzle"-a large elliptic flow v2 of the direct photons experimentally observed in heavy-ion collisions-is addressed for nucleus-nucleus reactions at RHIC and LHC energies. The role of hadronic and partonic sources for the photon spectra and the flow coefficients v2 and v3 is considered as well as the possibility to subtract the QGP signal from the experimental observables. Furthermore, the production of e+e- or μ+μ- pairs in elementary processes and A + A reactions is addressed. The calculations within the PHSD from SIS to LHC energies show an increase of the low mass dilepton yield essentially due to the in-medium modification of the ρ-meson and at the lowest energy also due to a multiple regeneration of Δ-resonances. Furthermore, pronounced traces of the partonic degrees-of-freedom are found in the intermediate dilepton mass regime (1.2 GeV < M < 3 GeV) at relativistic energies, which will also shed light on the nature of the very early degrees-of-freedom in nucleus-nucleus collisions.
15. Calculs Monte Carlo en transport d'energie pour le calcul de la dose en radiotherapie sur plateforme graphique hautement parallele [Monte Carlo energy transport calculations for radiotherapy dose computation on a highly parallel graphics platform]
Hissoiny, Sami
Dose calculation is a central part of treatment planning. The dose calculation must be 1) accurate, so that medical physicists and radio-oncologists can make decisions based on results close to reality, and 2) fast enough to allow routine use of dose calculation. The compromise between these two opposing factors gave way to the creation of several dose calculation algorithms, from the most approximate and fast to the most accurate and slow. The most accurate of these algorithms is the Monte Carlo method, since it is based on basic physical principles. Since 2007, a new computing platform has gained popularity in the scientific computing community: the graphics processing unit (GPU). The hardware platform existed before 2007, and certain scientific computations were already carried out on the GPU. The year 2007, however, marks the arrival of the CUDA programming language, which makes it possible to disregard graphics contexts when programming the GPU. The GPU is a massively parallel computing platform and is suited to data-parallel algorithms. This thesis aims at determining how to maximize the use of a GPU to speed up the execution of a Monte Carlo simulation for radiotherapy dose calculation. To answer this question, the GPUMCD platform was developed. GPUMCD implements a coupled photon-electron Monte Carlo simulation and is carried out completely on the GPU. The first objective of this thesis is to evaluate this method for a calculation in external radiotherapy. Simple monoenergetic sources and layered phantoms are used. A comparison with the EGSnrc platform and DPM is carried out. GPUMCD is within a gamma criterion of 2%/2 mm against EGSnrc while being at least 1200x faster than EGSnrc and 250x faster than DPM. The second objective consists in the evaluation of the platform for brachytherapy calculation. Complex sources based on the geometry and the energy spectrum of real sources are used inside a TG-43 reference geometry. Differences of less than 4% are found compared to the BrachyDose platform as well as to TG-43 consensus data. The third objective aims at the use of GPUMCD for dose calculation within an MRI-Linac environment. To this end, the effect of the magnetic field on charged particles has been added to the simulation. It was shown that GPUMCD is within a gamma criterion of 2%/2 mm of two experiments aiming at highlighting the influence of the magnetic field on the dose distribution. The results suggest that the GPU is an interesting computing platform for dose calculations through Monte Carlo simulations and that the GPUMCD software platform makes it possible to achieve fast and accurate results.
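The 2%/2 mm gamma criterion used above to validate GPUMCD can be illustrated with a simplified one-dimensional gamma-index computation. This is a hedged sketch of the standard global-normalization gamma test, not GPUMCD's implementation; the toy depth-dose curves in the usage example are assumptions.

import numpy as np

def gamma_index_1d(dose_ref, dose_eval, x, dose_tol=0.02, dist_tol=2.0):
    """Simplified 1D gamma index with global dose normalization.

    dose_ref, dose_eval: dose arrays sampled on positions x (mm).
    dose_tol: fractional dose criterion (0.02 for 2%).
    dist_tol: distance-to-agreement criterion in mm (2.0 for 2 mm).
    Returns gamma at each reference point; gamma <= 1 passes.
    """
    d_norm = dose_tol * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / d_norm          # dose differences, normalized
        dx = (x - xi) / dist_tol                # distances, normalized
        gammas[i] = np.sqrt(dx**2 + dd**2).min()
    return gammas

x = np.linspace(0, 100, 201)               # depth (mm)
ref = np.exp(-x / 50.0)                    # toy depth-dose curve
ev = np.exp(-(x - 0.5) / 50.0) * 1.01      # slightly shifted and scaled copy
print(f"pass rate: {(gamma_index_1d(ref, ev, x) <= 1).mean():.1%}")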
16. Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning
PubMed Central
Jabbari, Keyvan
2011-01-01
An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed. According to these factors, various generations of treatment planning systems have been developed. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue. The transport of the particle is done using the physics of the interaction of the particles with matter. Other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours; therefore, Monte Carlo techniques are accurate but slow for clinical use. In recent years, with the development of fast Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use. The acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and there are many commercial treatment planning systems that perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661
17. Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se
SciTech Connect
2012-01-15
Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using the photopeak; the latter ignores contributions from Compton scattering and K-fluorescence. Experimental measurements and simulations differ by approximately 2%. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
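This entry and the next compute Swank factors from simulated pulse-height spectra; the standard moment definition, S = M1^2 / (M0 * M2), can be sketched directly. The snippet below assumes a histogrammed PHS and is a generic illustration, not tied to the authors' transport code.

import numpy as np

def swank_factor(pulse_heights, counts):
    """Swank factor from a pulse-height spectrum: S = M1^2 / (M0 * M2),
    where M_k is the k-th moment of the pulse-height distribution."""
    m0 = counts.sum()
    m1 = (counts * pulse_heights).sum()
    m2 = (counts * pulse_heights**2).sum()
    return m1**2 / (m0 * m2)

# Toy PHS: a Gaussian photopeak with 10% relative width.
E = np.linspace(0.0, 2.0, 500)
phs = np.exp(-0.5 * ((E - 1.0) / 0.1)**2)
print(f"Swank factor ~ {swank_factor(E, phs):.3f}")  # close to 1 for a narrow peak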
18. Effect of burst and recombination models for Monte Carlo transport of interacting carriers in a-Se x-ray detectors on Swank noise
SciTech Connect
Fang, Yuan; Karim, Karim S.; Badano, Aldo
2014-01-15
Purpose: The authors describe the modification to a previously developed Monte Carlo model of semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithm are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/μm, while the recombination fraction decreased for the SSA compared to the SUV model at 30 V/μm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated with the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature. For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of electric field compared to the SUV model, and the NN and FH recombination algorithms did not significantly affect simulation results.
20. Thermal photon, dilepton production, and electric charge transport in a baryon rich strongly coupled QGP from holography
Finazzo, Stefano Ivo; Rougemont, Romulo
2016-02-01
We obtain the thermal photon and dilepton production rates in a strongly coupled quark-gluon plasma (QGP) at both zero and nonzero baryon chemical potentials using a bottom-up Einstein-Maxwell-dilaton holographic model that is in good quantitative agreement with the thermodynamics of (2+1)-flavor lattice QCD around the crossover transition for baryon chemical potentials up to 400 MeV, which may be reached in the beam energy scan at RHIC. We find that increasing the temperature T and the baryon chemical potential μB enhances the peak present in both spectra. We also obtain the electric charge susceptibility, the dc and ac electric conductivities, and the electric charge diffusion as functions of T and μB. We find that electric diffusive transport is suppressed as one increases μB. At zero baryon density, we compare our results for the dc electric conductivity and the electric charge diffusion with the latest lattice data available for these observables and find reasonable agreement around the crossover transition. Therefore, our holographic results may be used to constrain the magnitude of the thermal photon and dilepton production rates in a strongly coupled QGP, which we found to be at least 1 order of magnitude below perturbative estimates.
1. Quantum Dot Optical Frequency Comb Laser with Mode-Selection Technique for 1-μm Waveband Photonic Transport System
Yamamoto, Naokatsu; Akahane, Kouichi; Kawanishi, Tetsuya; Katouf, Redouane; Sotobayashi, Hideyuki
2010-04-01
An optical frequency comb was generated from a single quantum dot laser diode (QD-LD) in the 1-μm waveband using an Sb-irradiated InGaAs/GaAs QD active medium. A single-mode-selection technique and an interference injection-seeding technique are proposed for selecting the optical mode of a QD optical frequency comb laser (QD-CML). In the 1-μm waveband, a wavelength-tunable single-mode light source and a multiple-wavelength generator of a comb with 100-GHz spacing and ultrafine teeth are successfully demonstrated by applying the optical-mode-selection techniques to the QD-CML. Additionally, by applying the single-mode-selection technique to the QD-CML, a 10-Gbps clear eye opening for multiple wavelengths in 1-μm waveband photonic transport over a 1.5-km-long holey fiber is obtained.
3. Monte Carlo Simulations of a Human Phantom Radio-Pharmacokinetic Response on a Small Field of View Scintigraphic Device
Burgio, N.; Ciavola, C.; Santagata, A.; Iurlaro, G.; Montani, L.; Scafè, R.
2006-04-01
The limiting factors for scintigraphic clinical application are related to: i) biosource characteristics (pharmacokinetics of the drug distribution between organs); ii) the detection chain (photon transport, scintillation, analog-to-digital signal conversion, etc.); and iii) imaging (signal-to-noise ratio, spatial and energy resolution, linearity, etc.). In this work, by using Monte Carlo time-resolved transport simulations on a mathematical phantom and on a small field of view scintigraphic device, the trade-off between the aforementioned factors was preliminarily investigated.
4. Two-Dimensional Radiation Transport in Cylindrical Geometry: Ray-Tracing Compared to Monte Carlo Solutions for a Two-Level Atom
Apruzese, J. P.; Giuliani, J. L.
2008-11-01
Radiation plays a critical role in the dynamics of Z-pinch implosions. Modeling of Z-pinch experiments therefore needs to include an accurate but efficient algorithm for photon transport. Such algorithms exist for the one-dimensional (1D) approximation. In the present work, we report progress toward this goal in a 2D (r,z) geometry, intended for use in radiation hydrodynamics calculations of dynamically evolving Z pinches. We have tested a radiation transport algorithm that uses discrete ordinate sets for the rays in 3-space and the multifrequency integral solution along each ray. The published solutions of Avery et al. [1] for the line source functions are used as a benchmark to ensure the accuracy of our approach. We discuss the coupling between the radiation field and kinetics that results in large departures from LTE, ruling out use of the diffusion approximation. [1] L. W. Avery, L. L. House, and A. Skumanich, JQSRT 9, 519 (1969).
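For piecewise-constant source functions along a ray, the integral solution mentioned above reduces to a sum over ray segments; the sketch below shows that reduction for a single frequency. This is the generic formal solution of the transfer equation, assumed here for illustration, not the authors' benchmarked 2D algorithm.

import numpy as np

def formal_solution_along_ray(S, dtau, I_back=0.0):
    """Integrate the transfer equation along one ray for piecewise-constant
    source functions S_j over segments with optical depths dtau_j:
        I = I_back * exp(-tau_tot) + sum_j S_j * (exp(-tau_j) - exp(-tau_{j+1}))
    where tau_j is the optical depth from the observer to segment j."""
    tau = np.concatenate(([0.0], np.cumsum(dtau)))
    atten = np.exp(-tau)
    return I_back * atten[-1] + np.sum(S * (atten[:-1] - atten[1:]))

# Sanity check: a uniform source in an optically thick slab gives I ~ S.
print(formal_solution_along_ray(np.full(100, 2.0), np.full(100, 0.5)))  # ~2.0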
5. Weak second-order splitting schemes for Lagrangian Monte Carlo particle methods for the composition PDF/FDF transport equations
SciTech Connect
Wang, Haifeng; Popov, Pavel P.; Pope, Stephen B.
2010-03-01
We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
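Contribution (iii) above, splitting schemes that treat particle motion, mixing, and reaction on separate sub-steps, can be sketched schematically. The model terms below (Euler-Maruyama position update, IEM mixing, first-order reaction) are simplified stand-ins chosen for illustration; the paper's actual schemes, mid-time coefficient freezing, and second-order accuracy analysis are more elaborate.

import numpy as np

rng = np.random.default_rng(0)

def substep_position(X, U, dt, D):
    # Drift + diffusion for particle position on the sub-step.
    return X + U * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(X.shape)

def substep_mixing(phi, phi_mean, dt, omega):
    # IEM-style mixing: relax composition toward the local mean.
    return phi_mean + (phi - phi_mean) * np.exp(-0.5 * omega * dt)

def substep_reaction(phi, dt, k):
    # First-order reaction, integrated exactly on the sub-step.
    return phi * np.exp(-k * dt)

def strang_step(X, phi, dt, U, D, omega, k):
    """One Strang-split step: half transport, half mixing, full reaction,
    half mixing, half transport. Coefficients (U, D, omega, k) are assumed
    frozen at the mid-time of the step."""
    X = substep_position(X, U, 0.5 * dt, D)
    phi = substep_mixing(phi, phi.mean(), 0.5 * dt, omega)
    phi = substep_reaction(phi, dt, k)
    phi = substep_mixing(phi, phi.mean(), 0.5 * dt, omega)
    X = substep_position(X, U, 0.5 * dt, D)
    return X, phi

# Usage sketch: 1000 notional particles, one step.
X = rng.standard_normal(1000)
phi = rng.uniform(0.0, 1.0, 1000)
X, phi = strang_step(X, phi, dt=1e-3, U=1.0, D=0.1, omega=50.0, k=10.0)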
6. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files
SciTech Connect
Randolph Schwarz; Leland L. Carter; Alysia Schwarz
2005-08-23
Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.
7. Fast Monte Carlo for radiation therapy: the PEREGRINE Project
SciTech Connect
Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.
1997-11-11
The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently-used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.
8. Monte Carlo simulation of light transport in turbid medium with embedded object--spherical, cylindrical, ellipsoidal, or cuboidal objects embedded within multilayered tissues.
PubMed
Periyasamy, Vijitha; Pramanik, Manojit
2014-04-01
Monte Carlo modeling of light transport in multilayered tissue (MCML) is modified to incorporate objects of various shapes (sphere, ellipsoid, cylinder, or cuboid) with a refractive-index mismatched boundary. These geometries would be useful for modeling lymph nodes, tumors, blood vessels, capillaries, bones, the head, and other body parts. Mesh-based Monte Carlo (MMC) has also been used to compare the results from the MCML with embedded objects (MCML-EO). Our simulation assumes a realistic tissue model and can also handle the transmission/reflection at the object-tissue boundary due to the mismatch of the refractive index. Simulation of MCML-EO takes a few seconds, whereas MMC takes nearly an hour for the same geometry and optical properties. Contour plots of fluence distribution from MCML-EO and MMC correlate well. This study assists one to decide on the tool to use for modeling light propagation in biological tissue with objects of regular shapes embedded in it. For irregular inhomogeneity in the model (tissue), MMC has to be used. If the embedded objects (inhomogeneity) are of regular geometry (shapes), then MCML-EO is a better option, as simulations like Raman scattering, fluorescent imaging, and optical coherence tomography are currently possible only with MCML. PMID:24727908
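A key detail above is the refractive-index-mismatched boundary between tissue and embedded object. MCML-style codes decide reflection versus transmission from the unpolarized Fresnel reflectance, which the hedged sketch below reproduces; it is a generic boundary kernel for illustration, not the MCML-EO source code.

import math, random

def fresnel_reflectance(n1, n2, cos_i):
    """Unpolarized Fresnel reflectance for a photon hitting an n1 -> n2
    boundary with incidence cosine cos_i (MCML-style boundary handling)."""
    sin_t2 = (n1 / n2)**2 * (1.0 - cos_i**2)
    if sin_t2 >= 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t))**2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i))**2
    return 0.5 * (rs + rp)

def crosses_boundary(n1, n2, cos_i):
    # Photon is transmitted with probability 1 - R, reflected otherwise.
    return random.random() >= fresnel_reflectance(n1, n2, cos_i)

print(fresnel_reflectance(1.37, 1.50, 1.0))  # normal incidence, tissue -> object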
9. Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.
PubMed
Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L
2003-02-01
10. Vesicle Photonics
SciTech Connect
Vasdekis, Andreas E.; Scott, E. A.; Roke, Sylvie; Hubbell, J. A.; Psaltis, D.
2013-04-03
Thin membranes, under appropriate boundary conditions, can self-assemble into vesicles, nanoscale bubbles that encapsulate and hence protect or transport molecular payloads. In this paper, we review the types and applications of light fields interacting with vesicles. By encapsulating light-emitting molecules (e.g. dyes, fluorescent proteins, or quantum dots), vesicles can act as particles and imaging agents. Vesicle imaging can also take place under second harmonic generation from the vesicle membrane, as well as by employing mass spectrometry. Light fields can also be employed to transport vesicles using optical tweezers (photon momentum) or to directly perturb the stability of vesicles and hence trigger the delivery of the encapsulated payload (photon energy).
11. Design and fabrication of hollow-core photonic crystal fibers for high-power ultrashort pulse transportation and pulse compression.
PubMed
Wang, Y Y; Peng, Xiang; Alharbi, M; Dutin, C Fourcade; Bradley, T D; Gérôme, F; Mielke, Michael; Booth, Timothy; Benabid, F
2012-08-01
We report on the recent design and fabrication of kagome-type hollow-core photonic crystal fibers for the purpose of high-power ultrashort pulse transportation. The fabricated seven-cell three-ring hypocycloid-shaped large-core fiber exhibits the lowest attenuation reported to date among all kagome fibers, 40 dB/km, over a broadband transmission window centered at 1500 nm. We show that the large core size, low attenuation, broadband transmission, single-mode guidance, and low dispersion make it an ideal host for high-power laser beam transportation. By filling the fiber with helium gas, a 74 μJ, 850 fs, 40 kHz repetition rate ultrashort pulse at 1550 nm has been faithfully delivered at the fiber output with little propagation pulse distortion. Compression of a 105 μJ laser pulse from 850 fs down to 300 fs has been achieved by operating the fiber in ambient air. PMID:22859102
12. Dopamine Transporter Single-Photon Emission Computerized Tomography Supports Diagnosis of Akinetic Crisis of Parkinsonism and of Neuroleptic Malignant Syndrome
PubMed Central
Martino, G.; Capasso, M.; Nasuti, M.; Bonanni, L.; Onofrj, M.; Thomas, A.
2015-01-01
Abstract Akinetic crisis (AC) is akin to neuroleptic malignant syndrome (NMS) and is the most severe and possibly lethal complication of parkinsonism. Diagnosis is today based only on clinical assessments, yet is often marred by concomitant precipitating factors. Our purpose is to show that AC and NMS can be reliably evidenced by FP/CIT single-photon emission computerized tomography (SPECT) performed during the crisis. Prospective cohort evaluation in 6 patients. In 5 patients, affected by Parkinson disease or Lewy body dementia, the crisis was categorized as AC. One was diagnosed as having NMS because of exposure to risperidone. In all patients, FP/CIT SPECT was performed in the acute phase. SPECT was repeated 3 to 6 months after the acute event in 5 patients. Visual assessments and semiquantitative evaluations of binding potentials (BPs) were used. To exclude the interference of emergency treatments, FP/CIT BP was also evaluated in 4 patients currently treated with apomorphine. During AC or NMS, BP values in the caudate and putamen were reduced by 95% to 80%, to noise level, with a nearly complete loss of striatal dopamine transporter binding, corresponding to the burst striatum pattern. The follow-up re-evaluation in surviving patients showed a recovery of values to the range expected for parkinsonisms of the same disease duration. No binding effects of apomorphine were observed. By showing the outstanding binding reduction, presynaptic dopamine transporter ligands can provide instrumental evidence of AC in parkinsonism and NMS. PMID:25837755
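The semiquantitative binding-potential evaluation mentioned above is, in its simplest ratio form, one line of arithmetic. The sketch below uses the common specific-to-nondisplaceable ratio with an occipital reference region; this is the standard formula assumed for illustration, since the abstract does not give the exact ROI procedure, and the numbers are purely illustrative.

def binding_potential(striatal_counts, reference_counts):
    """Semiquantitative specific binding ratio used in DaT-SPECT:
    BP = (C_striatum - C_reference) / C_reference,
    with an occipital (non-specific) region as reference."""
    return (striatal_counts - reference_counts) / reference_counts

# Illustrative only: near-noise striatal uptake during the crisis versus
# recovered uptake at follow-up.
print(binding_potential(1.1, 1.0))  # crisis: BP ~ 0.1
print(binding_potential(2.0, 1.0))  # follow-up: BP ~ 1.0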
13. Updated version of the DOT 4 one- and two-dimensional neutron/photon transport code
SciTech Connect
1982-07-01
DOT 4 is designed to allow very large transport problems to be solved on a wide range of computers and memory arrangements. Unusual flexibility in both space-mesh and directional-quadrature specification is allowed. For example, the radial mesh in an R-Z problem can vary with axial position. The directional quadrature can vary with both space and energy group. Several features improve performance on both deep penetration and criticality problems. The program has been checked and used extensively.
14. Ensemble Monte Carlo analysis of subpicosecond transient electron transport in cubic and hexagonal silicon carbide for high power SiC-MESFET devices
Belhadji, Youcef; Bouazza, Benyounes; Moulahcene, Fateh; Massoum, Nordine
2015-05-01
In a comparative framework, an ensemble Monte Carlo method was used to elaborate the electron transport characteristics of two different silicon carbide (SiC) polytypes, 3C-SiC and 4H-SiC. The simulation was performed using a three-valley band structure model; the valleys are spherical and nonparabolic. The aim of this work is to follow the trajectories of 20,000 electrons under high field (from 50 kV to 600 kV) and high temperature (from 200 K to 700 K). We note that this model has already been used in studies of many zincblende or wurtzite semiconductors. The obtained results, compared with results found in many previous studies, show a notable drift velocity overshoot. This overshoot appears in the subpicosecond transient regime and is directly tied to the applied electric field and the lattice temperature.
15. Retinoblastoma external beam photon irradiation with a special ‘D’-shaped collimator: a comparison between measurements, Monte Carlo simulation and a treatment planning system calculation
Brualla, L.; Mayorga, P. A.; Flühs, A.; Lallena, A. M.; Sempau, J.; Sauerwein, W.
2012-11-01
Retinoblastoma is the most common eye tumour in childhood. According to the available long-term data, the best outcome regarding tumour control and visual function has been reached by external beam radiotherapy. The benefits of the treatment are, however, jeopardized by a high incidence of radiation-induced secondary malignancies and the fact that irradiated bones grow asymmetrically. In order to better exploit the advantages of external beam radiotherapy, it is necessary to improve current techniques by reducing the irradiated volume and minimizing the dose to the facial bones. To this end, dose measurements and simulated data in a water phantom are essential. A Varian Clinac 2100 C/D operating at 6 MV is used in conjunction with a dedicated collimator for the retinoblastoma treatment. This collimator conforms a ‘D’-shaped off-axis field whose irradiated area can be either 5.2 or 3.1 cm2. Depth dose distributions and lateral profiles were experimentally measured. Experimental results were compared with Monte Carlo simulations run with the penelope code and with calculations performed with the analytical anisotropic algorithm implemented in the Eclipse treatment planning system using the gamma test. penelope simulations agree reasonably well with the experimental data, with discrepancies in the dose profiles less than 3 mm of distance to agreement and 3% of dose. Discrepancies between the results found with the analytical anisotropic algorithm and the experimental data reach 3 mm and 6%. Although the discrepancies between the results obtained with the analytical anisotropic algorithm and the experimental data are notable, it is possible to consider this algorithm for routine treatment planning of retinoblastoma patients, provided the limitations of the algorithm are known and taken into account by the medical physicist and the clinician. Monte Carlo simulation is essential for knowing these limitations. Monte Carlo simulation is required for optimizing the treatment technique and the dedicated collimator.
16. Theoretical and experimental investigations of asymmetric light transport in graded index photonic crystal waveguides
SciTech Connect
Giden, I. H.; Yilmaz, D.; Turduev, M.; Kurt, H.; Çolak, E.; Ozbay, E.
2014-01-20
To provide asymmetric propagation of light, we propose a graded index photonic crystal (GRIN PC) based waveguide configuration that is formed by introducing line and point defects as well as intentional perturbations inside the structure. The designed system utilizes isotropic materials and is purely reciprocal, linear, and time-independent, since neither magneto-optical materials are used nor time-reversal symmetry is broken. The numerical results show that the proposed scheme based on spatial-inversion symmetry breaking has different forward (with a peak value of 49.8%) and backward transmissions (4.11% at most) as well as relatively small round-trip transmission (at most 7.11%) in a large operational bandwidth of 52.6 nm. The signal contrast ratio of the designed configuration is above 0.80 in the telecom wavelengths of 1523.5–1576.1 nm. An experimental measurement is also conducted in the microwave regime: a strong asymmetric propagation characteristic is observed within the frequency interval of 12.8 GHz–13.3 GHz. The numerical and experimental results confirm the asymmetric transmission behavior of the proposed GRIN PC waveguide.
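The reported asymmetry can be checked quickly against the quoted contrast figure, assuming the standard transmission contrast definition C = (T_f - T_b)/(T_f + T_b); the abstract does not spell out its exact definition, so this is a plausibility check rather than a reproduction.

# Peak forward and maximum backward transmissions quoted above.
T_forward, T_backward = 0.498, 0.0411
C = (T_forward - T_backward) / (T_forward + T_backward)
print(f"contrast ratio C = {C:.2f}")  # ~0.85, consistent with "above 0.80"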
18. PENEPMA: a Monte Carlo programme for the simulation of X-ray emission in EPMA
Llovet, X.; Salvat, F.
2016-02-01
The Monte Carlo programme PENEPMA performs simulations of X-ray emission from samples bombarded with electron beams. It is based on the general-purpose Monte Carlo simulation package PENELOPE, an elaborate system for the simulation of coupled electron-photon transport in arbitrary materials, and on the geometry subroutine package PENGEOM, which tracks particles through complex material structures defined by quadric surfaces. In this work, we give a brief overview of the capabilities of the latest version of PENEPMA along with several examples of its application to the modelling of electron probe microanalysis measurements.
19. MCMini: Monte Carlo on GPGPU
SciTech Connect
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the feasibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP and other Monte Carlo codes.
20. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application
Math Help - conditional entropy of function
http://mathhelpforum.com/advanced-statistics/198319-conditional-entropy-function.html
1. conditional entropy of function
How does one show that $H(Y|f(X)) \geq H(Y|X)$, where $f(X)$ is any function of $X$?
2. Re: conditional entropy of function
$f(X)$ is a degraded observation of $X$: since it is a deterministic function of $X$, the pair $(f(X), X)$ carries exactly the same information as $X$ alone, so we have that
$H(Y|f(X),X) = H(Y|X)$
Since conditioning reduces entropy,

$H(Y|f(X)) \geq H(Y|f(X),X) = H(Y|X)$
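As a sanity check, the inequality can be verified numerically on a toy joint distribution; the snippet below is an added illustration with an assumed four-point distribution and $f(x) = x \bmod 2$ as the information-discarding function.

import math
from collections import defaultdict

def cond_entropy(pairs):
    """H(Y|Z) in bits from ((z, y), probability) pairs."""
    pz = defaultdict(float)
    pzy = defaultdict(float)
    for (z, y), p in pairs:
        pz[z] += p
        pzy[(z, y)] += p
    return -sum(p * math.log2(p / pz[z]) for (z, y), p in pzy.items())

# Toy joint distribution over (X, Y); f(x) = x % 2 discards information about X.
joint = {(0, 0): 0.3, (1, 0): 0.1, (2, 1): 0.2, (3, 1): 0.4}
H_Y_given_X = cond_entropy([((x, y), p) for (x, y), p in joint.items()])
H_Y_given_fX = cond_entropy([((x % 2, y), p) for (x, y), p in joint.items()])
print(H_Y_given_X, H_Y_given_fX, H_Y_given_fX >= H_Y_given_X)  # 0.0 0.846... True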