http://www.r-bloggers.com/bivariate-densities-with-n01-margins/
# Bivariate Densities with N(0,1) Margins

February 18, 2014 (This article was first published on Freakonometrics » R-english, and kindly contributed to R-bloggers)

This Monday, in the ACT8595 course, we came back to elliptical distributions and conditional independence (here is an old post on de Finetti's theorem, and the extension to Hewitt-Savage's). I have shown simulations to illustrate those two concepts of dependent variables, but I wanted to spend some time visualizing densities. More specifically, what could the joint density be if we assume that the margins are $\mathcal{N}(0,1)$ distributions?

• The Bivariate Gaussian distribution

Here, we consider a Gaussian random vector, with $\mathcal{N}(0,1)$ margins, and with correlation $r\in[-1,+1]$. This is the standard graph, with elliptical isodensity curves:

```r
r=.5
library(mnormt)
S=matrix(c(1,r,r,1),2,2)
f=function(x,y) dmnorm(cbind(x,y),varcov=S)
vx=seq(-3,3,length=201)
vy=seq(-3,3,length=201)
z=outer(vx,vy,f)
set.seed(1)
X=rmnorm(1500,varcov=S)
xhist <- hist(X[,1], plot=FALSE)
yhist <- hist(X[,2], plot=FALSE)
top <- max(c(xhist$density, yhist$density, dnorm(0)))
nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE)
par(mar=c(3,3,1,1))
image(vx,vy,z,col=rev(heat.colors(101)))
points(X,cex=.2)
par(mar=c(0,3,1,1))
barplot(xhist$density, axes=FALSE, ylim=c(0, top), space=0, col="light green")
lines((density(X[,1])$x-xhist$breaks[1])/diff(xhist$breaks)[1],
      dnorm(density(X[,1])$x), col="red")
par(mar=c(3,0,1,1))
barplot(yhist$density, axes=FALSE, xlim=c(0, top), space=0, horiz=TRUE, col="light green")
lines(dnorm(density(X[,2])$x),
      (density(X[,2])$x-yhist$breaks[1])/diff(yhist$breaks)[1], col="red")
```

That was the simple part.

• The Bivariate Student-t distribution

Consider now another elliptical distribution. But here we want to normalize the margins. Thus, instead of a pair $(X,Y)$, we consider the pair $(\Phi^{-1}(T_\nu(X)),\Phi^{-1}(T_\nu(Y)))$, so that the marginal distributions are $\mathcal{N}(0,1)$. The new density is obtained easily, since the transformation is a one-to-one increasing transformation. Here, we have

```r
k=3
r=.5
G=function(x) qnorm(pt(x,df=k))
dg=function(x) dt(x,df=k)/dnorm(qnorm(pt(x,df=k)))
Ginv=function(x) qt(pnorm(x),df=k)
S=matrix(c(1,r,r,1),2,2)
f=function(x,y) dmt(cbind(Ginv(x),Ginv(y)),S=S,df=k)/(dg(x)*dg(y))
vx=seq(-3,3,length=201)
vy=seq(-3,3,length=201)
z=outer(vx,vy,f)
set.seed(1)
Z=rmt(1500,S=S,df=k)
X=G(Z)
```

Because we considered a nonlinear transformation of the margins, the level curves are no longer elliptical. But there is still some kind of symmetry.

• The Exchangeable Case with Conditionally Independent Random Variables

We considered the case where $X$ and $Y$ are independent random variables, given $\Theta$, and both variables are exponentially distributed, with parameter $\Theta$. As we've seen in class, it might be difficult to visualize that sample, unless we have log scales on both axes. But instead of a log transformation, why not consider a transformation so that the margins will be $\mathcal{N}(0,1)$? The only technical problem is that we do not have the (unconditional) distributions of the margins in closed form. Well, we have them, but they are integral-based.
From a computational point of view, that's not a big deal. Computations might take a while, but we can visualize the density using the following code (here, we assume that $\Theta$ is Gamma distributed):

```r
a=.6
b=1
h=.0001
G=function(x) qnorm(ifelse(x<0,0,integrate(function(z) pexp(x,z)*
  dgamma(z,a,b),lower=0,upper=Inf)$value))
Ginv=function(x) uniroot(function(z) G(z)-x,lower=-40,upper=1e5)$root
dg=function(x) (Ginv(x+h)-Ginv(x-h))/2/h
H=function(xy) integrate(function(z) dexp(xy[2],z)*dexp(xy[1],z)*
  dgamma(z,a,b),lower=0,upper=Inf)$value
f=function(x,y) H(c(Ginv(x),Ginv(y)))*(dg(x)*dg(y))
vx=seq(-3,3,length=151)
vy=seq(-3,3,length=151)
z=matrix(NA,length(vx),length(vy))
for(i in 1:length(vx)){
  for(j in 1:length(vy)){
    z[i,j]=f(vx[i],vy[j])}}
set.seed(1)
Theta=rgamma(1500,a,b)
Z=cbind(rexp(1500,Theta),rexp(1500,Theta))
X=cbind(Vectorize(G)(Z[,1]),Vectorize(G)(Z[,2]))
```

There is a small technical problem, but no big deal. Here, the joint distribution is quite different. The margins are, one more time, standard Gaussian, but the shape of the joint distribution is quite different, with an asymmetry from the lower (left) tail to the upper (right) tail. More details when we introduce copulas; the only difference will be that the margins will be uniform on the unit interval, not standard Gaussian.
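The plotting calls for the transformed samples are not shown above; as a minimal sketch (assuming the objects `vx`, `vy`, `z`, and `X` from either of the two previous blocks), the layout code from the Gaussian case can simply be reused, since the margins are again standard Gaussian:

```r
# Density image with simulated points, plus marginal histograms (margins are N(0,1)).
xhist <- hist(X[,1], plot=FALSE)
yhist <- hist(X[,2], plot=FALSE)
top <- max(c(xhist$density, yhist$density, dnorm(0)))
nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE)
par(mar=c(3,3,1,1))
image(vx, vy, z, col=rev(heat.colors(101)))
points(X, cex=.2)
par(mar=c(0,3,1,1))
barplot(xhist$density, axes=FALSE, ylim=c(0, top), space=0, col="light green")
par(mar=c(3,0,1,1))
barplot(yhist$density, axes=FALSE, xlim=c(0, top), space=0, horiz=TRUE, col="light green")
```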
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8249824643135071, "perplexity": 1130.5618607951483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398455246.70/warc/CC-MAIN-20151124205415-00047-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathcs.chapman.edu/~jipsen/structures/doku.php/monoids
## Monoids

Abbreviation: Mon

### Definition

A monoid is a structure $\mathbf{M}=\langle M,\cdot ,e\rangle$, where $\cdot$ is an infix binary operation, called the monoid product, and $e$ is a constant (nullary operation), called the identity element, such that

$\cdot$ is associative: $(x\cdot y)\cdot z=x\cdot (y\cdot z)$

$e$ is an identity for $\cdot$: $e\cdot x=x$, $x\cdot e=x$.

##### Morphisms

Let $\mathbf{M}$ and $\mathbf{N}$ be monoids. A morphism from $\mathbf{M}$ to $\mathbf{N}$ is a function $h:M\rightarrow N$ that is a homomorphism: $h(x\cdot y)=h(x)\cdot h(y)$, $h(e)=e$.

### Examples

Example 1: $\langle X^{X},\circ ,id_{X}\rangle$, the collection of functions on a set $X$, with composition and the identity map.

Example 2: $\langle M(V)_{n},\cdot ,I_{n}\rangle$, the collection of $n\times n$ matrices over a vector space $V$, with matrix multiplication and the identity matrix.

Example 3: $\langle \Sigma ^{\ast },\cdot ,\lambda \rangle$, the collection of strings over a set $\Sigma$, with concatenation and the empty string. This is the free monoid generated by $\Sigma$.

### Properties

Classtype: variety
Equational theory: decidable in polynomial time
Quasiequational theory: undecidable
First-order theory: undecidable
Locally finite: no
Residual size: unbounded

### Finite members

$f(n)=$ number of members of size $n$:

$\begin{array}{lr} f(1)= &1\\ f(2)= &2\\ f(3)= &7\\ f(4)= &35\\ f(5)= &228\\ f(6)= &2237\\ f(7)= &31559\\ \end{array}$
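A quick computational check of Example 3 (an illustrative sketch, using R with `paste0` for concatenation and the empty string as the identity $\lambda$):

```r
# Free monoid on an alphabet: strings under concatenation (paste0), identity "".
x <- "ab"; y <- "c"; z <- "dd"
identical(paste0(paste0(x, y), z), paste0(x, paste0(y, z)))  # associativity: TRUE
identical(paste0("", x), x)                                  # left identity: TRUE
identical(paste0(x, ""), x)                                  # right identity: TRUE
```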
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9126765131950378, "perplexity": 542.0187361183499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191986.44/warc/CC-MAIN-20170322212951-00290-ip-10-233-31-227.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/186379/axiom-of-union?answertab=oldest
Axiom of Union?

I'm reading Comprehensive Mathematics for Computer Scientists 1. In the second chapter, on axiomatic set theory, the author first states the axiom of the empty set and the axiom of equality, and then proceeds to the axiom of union:

Axiom 3 (Axiom of Union) If $a$ is a set, then there is a set $\{x \mid \text{there exists an element } b\in a \text{ such that } x\in b\}$. This set is denoted by $\bigcup a$ and is called the union of $a$.

Notation 2 If $a=\{b,c\}$ or $a=\{b,c,d\}$, respectively, one also writes $b\cup c$, or $b\cup c\cup d$, respectively, instead of $\bigcup a$.

I learned the definition of union while I was in school, but it wasn't with axioms; they just gave an intuitive example: $a=\{1,2,3\}$, $b=\{4,5\}$, $a\cup b=\{1,2,3,4,5\}$. I can't see how the idea behind this intuitive example shows up in the axiom of union. In my example it's easy to understand because another set is mentioned; where is that mention in this axiom?

– In ZFC, any element of a set is itself a set, so the interpretation of $\bigcup a$ is the union of all the sets in $a$. –  Kris Aug 24 '12 at 20:54

– The exposition in that book is rather terse, especially for a book addressed to non-mathematicians (something noted in its Amazon reviews). There are better examples of the axiom of union at work in Hrbacek and Jech, p. 10, although unlike everything in the answers shown below, the examples of Hrbacek and Jech use "pure" set theory, i.e., without urelements like $a$ or $b$. (In layman's terms, this means only the empty set and braces are used to build sets.) It's somewhat insightful to see how the axiom works out in that context as well. –  Respawned Fluff Apr 12 at 3:16

– Also, if you do have urelements (aka atoms) other than the empty set, then the axiom of union loses the "top level" ones. E.g. $\bigcup \{a,\{b\}\}$ is just $\{b\}$. This is one of the troubles with urelements and why the axiom looks strange with urelements. Hat tip to Tourlakis' book for mentioning this. –  Respawned Fluff Apr 12 at 4:06

The connection between your example and the more general definition is that $\bigcup\{a,b\}=a\cup b$. Written out in all its gory details, this is $$\bigcup\Big\{\{1,2,3\},\{4,5\}\Big\}=\{1,2,3\}\cup\{4,5\}=\{1,2,3,4,5\}\;.$$ Let's check that against the definition: \begin{align*} &\bigcup\Big\{\{1,2,3\},\{4,5\}\Big\}\\ &\qquad=\left\{x:\text{there exists an element }y\in\Big\{\{1,2,3\},\{4,5\}\Big\}\text{ such that }x\in y\right\}\\ &\qquad=\Big\{x:x\in\{1,2,3\}\text{ or }x\in\{4,5\}\Big\}\\ &\qquad=\{1,2,3\}\cup\{4,5\}\\ &\qquad=\{1,2,3,4,5\}\;. \end{align*} Take a slightly bigger example. Let $a,b$, and $c$ be any sets; then \begin{align*} \bigcup\{a,b,c\}&=\Big\{x:\text{there exists an element }y\in\{a,b,c\}\text{ such that }x\in y\Big\}\\ &=\{x:x\in a\text{ or }x\in b\text{ or }x\in c\}\\ &=a\cup b\cup c\;. \end{align*} One more, even bigger: for $n\in\Bbb N$ let $A_n$ be a set, and let $\mathscr{A}=\{A_n:n\in\Bbb N\}$. Then \begin{align*} \bigcup\mathscr{A}&=\Big\{x:\text{there exists an }n\in\Bbb N\text{ such that }x\in A_n\Big\}\\ &=\{x:x\in A_0\text{ or }x\in A_1\text{ or }x\in A_2\text{ or }\dots\}\\ &=A_0\cup A_1\cup A_2\cup\dots\\ &=\bigcup_{n\in\Bbb N}A_n\;. \end{align*}

– If there exists an element $y\in\Big\{\{1,2,3\},\{4,5\}\Big\}$, then shouldn't we have a $\Big\{\{1,2,3\},\{4,5\},y\Big\}$? –  Jesus Christ Aug 24 '12 at 18:15

– @Gustavo: No: $y$ is a dummy name used here to stand for any member of the set $\Big\{\{1,2,3\},\{4,5\}\Big\}$. Here the possible values of $y$ are $\{1,2,3\}$ and $\{4,5\}$. –  Brian M. Scott Aug 24 '12 at 18:17

– @Gustavo: Yes, $x$ in expressions like $\{x:\text{something}\}$ is also a dummy variable; it can stand for anything that satisfies the condition $\text{something}$. –  Brian M. Scott Aug 24 '12 at 18:34

– Then I have to choose one of the sets in $\{\{1,2,3 \},\{4,5 \}\}$ and then choose an element inside the chosen one? –  Jesus Christ Aug 24 '12 at 18:37

– Yep. Your last comment seems to tell me that I should consider all possible options, isn't it? –  Jesus Christ Aug 24 '12 at 18:38

Let $A=\{a,b\}$ (the set whose only elements are $a$ and $b$). Then the union of $a$ and $b$ that you described is what the Axiom of Union produces from $A$.

Remark: Informally, let $A$ be a set whose elements are a bunch of plastic bags with stuff in them (so $A$ is a set of sets). Then the set produced by the Axiom of Union from $A$ dumps the stuff contained in the bags into a single bag. (Duplicates are thrown away.)

– Oh, then $\bigcup a$ acts like a variable. When he says "there is a set...", he refers to a set that has no name; it's only $\{x \mid \text{such that...}\}$ and not $z=\{x \mid \text{such that...}\}$. Then I thought that this nameless set was implicit here: $\bigcup a$; if it had a name $z$, it would be written like $z\bigcup a$. Is this right? –  Jesus Christ Aug 24 '12 at 18:02

– @GustavoBandeira: I am having trouble understanding what you mean. Here $\bigcup$ acts as a unary operator (function). If you apply it to the set $\{a,b,c,d\}$ of sets, it produces $a\cup b\cup c\cup d$. It operates similarly on an infinite collection $\{a_1,a_2,a_3,\dots\}$ of sets. –  André Nicolas Aug 24 '12 at 18:11

– I wasn't aware that I could write it like LISP. –  Jesus Christ Aug 24 '12 at 18:59

– @GustavoBandeira: It is a common notation. In principle, all we use is $\in$ and logical symbols, but in doing set theory it is then useful (indeed almost necessary!) to introduce abbreviations for important constructions. –  André Nicolas Aug 24 '12 at 19:56

When we write $a\cup b$ we actually mean $\bigcup\{a,b\}$. This is a shorthand, instead of writing long formulas every time we want to talk about the union of two sets.

– Yep. Same as I commented here. –  Jesus Christ Aug 24 '12 at 18:04

– I've read your comment with attention now. This reminds me of LISP, where you can write (+ 2 3). –  Jesus Christ Aug 24 '12 at 19:41

– @Gustavo: Think of $\bigcup$ as a LISP function "union": $$(\textrm{union }a\ b\ \ldots)$$ It takes a list of sets and returns their union. The $a\cup b$ notation is a bit like C syntax. –  Asaf Karagila Aug 24 '12 at 20:35

Think of $a$ as a set (or collection, if you like) of other sets. Then $\bigcup a$ is the union of all these sets. So, for instance, in your example: $$\bigcup \lbrace\lbrace 1,2,3\rbrace,\lbrace 4,5\rbrace\rbrace = \lbrace 1,2,3,4,5\rbrace$$ You may think of $A\cup B$ as shorthand for $\bigcup \lbrace A,B\rbrace$.

– Yep, it's the same as I pointed out here. –  Jesus Christ Aug 24 '12 at 18:03

This axiom talks about a set of sets. This is because the axiom states $b\in a$ and $x\in b$: $x\in b$ tells you that $b$ is a set (and is an element of $a$). For example: if $a=\{\{1\},\{2,3\}\}$, then the axiom states that $\{1\}\cup\{2,3\}=\{1,2,3\}$ exists.
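A minimal R sketch of the "union as a function of a list of sets" picture from the comments (illustrative only; a set of sets is represented as a list of vectors, and the big union is a fold):

```r
# \bigcup a for a represented as a list of sets; union() drops duplicates,
# so folding it over the list dumps every element into one set.
a <- list(c(1, 2, 3), c(4, 5))
Reduce(union, a)                              # 1 2 3 4 5, i.e. {1,2,3} U {4,5}
Reduce(union, list(1, c(2, 3), c(3, 4)))      # 1 2 3 4 (duplicates thrown away)
```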
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.89206463098526, "perplexity": 820.3518864556513}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929418.92/warc/CC-MAIN-20150521113209-00233-ip-10-180-206-219.ec2.internal.warc.gz"}
https://arxiv.org/abs/1101.5834
# Thom-Sebastiani & Duality for Matrix Factorizations

Abstract: The derived category of a hypersurface has an action by "cohomology operations" k[t], deg t=-2, underlying the 2-periodic structure on its category of singularities (as matrix factorizations). We prove a Thom-Sebastiani type theorem, identifying the k[t]-linear tensor products of these dg categories with coherent complexes on the zero locus of the sum potential on the product (with a support condition), and identify the dg category of colimit-preserving k[t]-linear functors between Ind-completions with Ind-coherent complexes on the zero locus of the difference potential (with a support condition). These results imply the analogous statements for the 2-periodic dg categories of matrix factorizations. Some applications include: we refine and establish the expected computation of 2-periodic Hochschild invariants of matrix factorizations; we show that the category of matrix factorizations is smooth, and is proper when the critical locus is proper; we show how Calabi-Yau structures on matrix factorizations arise from volume forms on the total space; we establish a version of Knörrer Periodicity for eliminating metabolic quadratic bundles over a base.

Comments: 78 pages. Draft
Subjects: Algebraic Geometry (math.AG); Category Theory (math.CT)
Cite as: arXiv:1101.5834 [math.AG] (or arXiv:1101.5834v1 [math.AG] for this version)

## Submission history

From: Anatoly Preygel
[v1] Sun, 30 Jan 2011 23:34:40 UTC (95 KB)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8709758520126343, "perplexity": 2387.020857849088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669422.96/warc/CC-MAIN-20191118002911-20191118030911-00143.warc.gz"}
https://www.physicsforums.com/threads/black-holes-at-lhc.229888/
# Black holes at LHC

1. Apr 19, 2008

### hammertime

This may seem like a stupid question that's been brought up several times, but it concerns the possible creation of mini black holes (MBHs) at the LHC. It's said that these MBHs pose no threat to the planet because of their small size and the fact that they will evaporate by Hawking radiation (HR). However, while there is much mathematical and theoretical evidence pointing towards HR, it has never been physically observed. So how can we be sure that the MBHs will simply evaporate? Basically, the saying is that, if HR is correct, MBHs evaporate. Isn't that a pretty big if?

2. Apr 19, 2008

### ZapperZ

Staff Emeritus

Last edited: Apr 19, 2008
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8084911704063416, "perplexity": 1114.7017444870166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00245-ip-10-171-10-70.ec2.internal.warc.gz"}
https://link.springer.com/article/10.1007/s00500-016-2353-1
Soft Computing, Volume 22, Issue 2, pp 541–570

# Dynamic differential evolution with combined variants and a repair method to solve dynamic constrained optimization problems: an empirical study

• María-Yaneli Ameca-Alducin
• Efrén Mezura-Montes
• Nicandro Cruz-Ramírez

Methodologies and Application

## Abstract

An empirical study of the algorithm dynamic differential evolution with combined variants with a repair method (DDECV + Repair) in the solution of dynamic constrained optimization problems is presented. Unexplored aspects of the algorithm are of particular interest in this work: (1) the role of each of its elements, (2) its sensitivity to different change frequencies and change severities in the objective function and the constraints, (3) its ability to detect a change and recover after it, besides its diversity handling (percentage of feasible and infeasible solutions) during the search, and (4) its performance with dynamism present in different parts of the problem. Seven performance measures, eighteen recently proposed test problems, and eight algorithms found in the specialized literature are considered in four experiments. The statistically validated results indicate that DDECV + Repair is robust to change frequency and severity variations, and that it is particularly fast to recover after a change in the environment, but depends heavily on its repair method and its memory population to provide competitive results. DDECV + Repair shows evidence of the convenience of keeping a proportion of infeasible solutions in the population when solving dynamic constrained optimization problems. Finally, DDECV + Repair is highly competitive, particularly when dynamism is present in both the objective function and the constraints.

## Keywords

Differential evolution; Constraint handling; Dynamic optimization; Dynamic constrained optimization problem

## Notes

### Acknowledgments

The first author acknowledges support from the Mexican National Council of Science and Technology (CONACyT) through a scholarship to pursue graduate studies at the University of Veracruz. The second author acknowledges support from CONACyT through Project No. 220522. This study was funded by the Mexican National Council of Science and Technology CONACyT (Grant No. 220522).

### Compliance with ethical standards

#### Conflict of interest

María-Yaneli Ameca-Alducin, Efrén Mezura-Montes, and Nicandro Cruz-Ramírez declare that they have no conflict of interest.

#### Human and animal rights

This article does not contain any studies with human participants or animals performed by any of the authors.
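For readers unfamiliar with the base algorithm, here is a minimal R sketch of one generation of classic DE/rand/1/bin, the scheme that differential-evolution variants such as DDECV build on. This is illustrative only: the function name de_step, the parameter values F = 0.5 and CR = 0.9, and the plain greedy selection are textbook choices, not the paper's DDECV + Repair.

```r
# One generation of DE/rand/1/bin (illustrative; not the paper's DDECV + Repair).
# pop: NP x D matrix of candidate solutions; fit: objective function to minimize.
de_step <- function(pop, fit, F = 0.5, CR = 0.9) {
  NP <- nrow(pop); D <- ncol(pop)
  for (i in seq_len(NP)) {
    r <- sample(setdiff(seq_len(NP), i), 3)                  # three distinct partners
    mutant <- pop[r[1], ] + F * (pop[r[2], ] - pop[r[3], ])  # differential mutation
    jrand <- sample.int(D, 1)                                # force >= 1 mutant gene
    cross <- runif(D) < CR; cross[jrand] <- TRUE
    trial <- ifelse(cross, mutant, pop[i, ])                 # binomial crossover
    if (fit(trial) <= fit(pop[i, ])) pop[i, ] <- trial       # greedy selection
  }
  pop
}

# Usage: minimize the sphere function in 5 dimensions.
set.seed(1)
pop <- matrix(runif(20 * 5, -5, 5), nrow = 20)
sphere <- function(x) sum(x^2)
for (g in 1:100) pop <- de_step(pop, sphere)
min(apply(pop, 1, sphere))
```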
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7629518508911133, "perplexity": 18406.81556455122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865023.41/warc/CC-MAIN-20180523004548-20180523024548-00469.warc.gz"}
http://fricas.github.io/api/FunctionSpacePrimitiveElement.html
# FunctionSpacePrimitiveElement(R, F)

primitiveElement(a1, a2) returns [a, q1, q2, q] such that k(a1, a2) = k(a), ai = qi(a), and q(a) = 0. The minimal polynomial for a2 may involve a1, but the minimal polynomial for a1 may not involve a2. This operation uses resultants.

primitiveElement([a1, ..., an]) returns [a, [q1, ..., qn], q] such that k(a1, ..., an) = k(a), ai = qi(a), and q(a) = 0. This operation uses the technique of Groebner bases.
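A worked example of the shape of this result (illustrative, not taken from the FriCAS documentation): for $a_1=\sqrt{2}$ and $a_2=\sqrt{3}$, the classic primitive element is $a=\sqrt{2}+\sqrt{3}$, with

$a_1=\dfrac{a^3-9a}{2},\qquad a_2=\dfrac{11a-a^3}{2},\qquad q(x)=x^4-10x^2+1,$

so that $k(\sqrt{2},\sqrt{3})=k(a)$, $a_i=q_i(a)$ with $q_1(x)=(x^3-9x)/2$ and $q_2(x)=(11x-x^3)/2$, and $q(a)=0$.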
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5293546915054321, "perplexity": 19208.67723018817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425082.56/warc/CC-MAIN-20170725062346-20170725082346-00126.warc.gz"}
https://www.science.gov/topicpages/e/experiencias+creativas+para.html
#### Sample records for experiencias creativas para

1. EXPERIENCIAS RELACIONADAS A UNA INTERVENCIÓN PARA REDUCIR EL ESTIGMA RELACIONADO AL VIH/SIDA ENTRE ESTUDIANTES DE MEDICINA EN PUERTO RICO
PubMed Central. Cintrón-Bou, Francheska; Varas-Díaz, Nelson; Marzán-Rodríguez, Melissa; Neilands, Torsten B. 2016-01-01.
There is stigma attached to HIV. People with HIV/AIDS have their rights violated and their mental and physical well-being undermined. Health professionals are a primary source of support, yet they too stigmatize them. Training health professionals about social stigma is therefore useful. We implemented the intervention to reduce HIV/AIDS-related stigma with 507 medical students. The intervention proved effective: stigma levels dropped following our intervention, with significant differences from the control group (p≤.05). Creating training spaces that address HIV/AIDS-related stigma is relevant to community psychology because it helps reduce stigmatizing attitudes that adversely affect the prevention of new infections, adherence to antiretroviral treatment, and quality of life. PMID:27829690

2. Significant Learning Experiences for English Foreign Language Students (Experiencias significativas para estudiantes de inglés como lengua extranjera)
ERIC Educational Resources Information Center. Becerra, Luz María; McNulty, Maria. 2010-01-01.
This action research examines experiences that students in a grade 10 EFL class had with redesigning a grammar unit into a topic-based unit. Strategies were formulating significant learning goals and objectives, and implementing and reflecting on activities with three dimensions of Dee Fink's (2003) taxonomy of significant learning: the human…

3. En el seno del hogar. Experiencias familiares para desarrollar el alfabetismo (Right at Home. Family Experiences for Building Literacy).
ERIC Educational Resources Information Center. Hansen, Merrily P.; Armstrong, Gloria.
This publication, a Spanish translation of "Right at Home," is a family involvement program in the form of easy-to-read cartoon-style letters to be used at home by parents or other family members with their preschool or kindergarten-age children. The book is designed to be used independently by parents, or to be reproduced and distributed to…

4. Experiencias en Lenguaje Para su Nino ed Edad Pre-escolar. Parte I: Actividades Para la Casa. (Language Experiences for Your Preschooler. Part I: Activities at Home.)
ERIC Educational Resources Information Center. New York State Education Dept., Albany. Bureau of Continuing Education Curriculum Development.
The purpose of this manuscript (written in Spanish) is to encourage the development of communication skills of preschool children by introducing their parents to a number of learning activities suitable for home use. It is written to be used by an instructor who is working with preschool parents. The activities, which are designed to be…

5. FACTORES SOCIO-ESTRUCTURALES Y EL ESTIGMA HACIA EL VIH/SIDA: EXPERIENCIAS DE PUERTORRIQUEÑOS/AS CON VIH/SIDA AL ACCEDER SERVICIOS DE SALUD
PubMed Central. RIVERA-DIAZ, MARINILDA; VARAS-DIAZ, NELSON; REYES-ESTRADA, MARCOS; SURO, BEATRIZ; CORIANO, DORALIS. 2013-01-01.
HIV/AIDS-related stigma continues to affect the delivery of health services and the physical and mental well-being of people living with HIV/AIDS (PLWHA). The scientific literature has recently noted the importance of understanding manifestations of stigma beyond individual interactions. For this reason, recent research in and outside Puerto Rico emphasizes the importance of understanding how socio-structural factors (SSF) influence processes of social stigmatization. To examine the SSF that influence manifestations of HIV/AIDS-related stigma, we conducted and analyzed nine focus groups composed of men and women in HIV/AIDS treatment who had had stigmatizing experiences. Participants identified SSF related to manifestations of stigma, such as the use of specialized housing, the decentralization of health services, and the development of exclusionary administrative protocols in health services. The results demonstrate the importance of considering SSF in the development and implementation of interventions aimed at this population. PMID:24639599

6. SciTech Connect. Cook, Richard D. 2016-05-25.
The ParaDIS_lib software is a project funded by the DOE ASC Program. Its purpose is to provide visualization and analysis capabilities for the existing ParaDIS parallel dislocation dynamics simulation code.

7. For a Child, Life is a Creative Adventure: Supporting Development and Learning through Art, Music, Movement, and Dialogue. A Guide for Parents and Professionals. = Para los ninos, la vida es una aventura creativa: Como estimular el desarrollo y el aprendizaje por medio de las artes visuales, la musica, el movimiento y el dialogo. Guia para padres de familia y profesionales.
ERIC Educational Resources Information Center. Cohen, Elena.
Recognizing that creativity facilitates children's learning and development, the Head Start Program Performance Standards require Head Start programs to include opportunities for creative self-expression. This guide with accompanying videotape, both in English- and Spanish-language versions, encourages and assists adults to support children's…

8. Confieso que Divulgo. Reflexiones y Experiencias de una Astrofísica
Rodríguez Hidalgo, I.
This paper presents some reflections on the popularization of science, developed over the author's professional career, an unfinished journey from intuition to craft. After reviewing the identifying features of scientific outreach, it sets out ideas, experiences, and resources, sifted through practice and subsequent critical analysis. Activities related to Astronomy, which are among the most spectacular and rewarding, are highlighted.

9. Tiempo para un cambio
Woltjer, L. 1987-06-01.
At the meeting held in December of last year, I informed Council of my wish to end my contract as Director General of ESO once the VLT project was approved, which is expected to happen towards the end of this year. When my appointment was renewed three years ago, Council knew of my intention not to complete the five years of the contract, owing to my wish to have more time for other activities. Now that the preparatory phase for the VLT is finished, with the project formally presented to Council on March 31 and its approval expected, very probably, before the end of this year, it seems to me that 1 January 1988 is an excellent date for a change in the administration of ESO.

10. Roy, Arpita; Mahadevan, S.; Chakraborty, A.; Pathan, F. M.; Anandarao, B. G. 2010-01-01.
The Physical Research Laboratory Advanced Radial-velocity All-sky Search (PARAS) is an efficient fiber-fed cross-dispersed high-resolution echelle spectrograph that will see first light in early 2010. This instrument is being built at the Physical Research Laboratory (PRL) and will be attached to the 1.2 m telescope at Gurushikhar Observatory at Mt. Abu, India. PARAS has a single-shot wavelength coverage of 370 nm to 850 nm at a spectral resolution of R ≈ 70000 and will be housed in a vacuum chamber (at 1×10⁻² mbar pressure) in a highly temperature-controlled environment. This renders the spectrograph extremely suitable for exoplanet searches with high velocity precision using the simultaneous Thorium-Argon wavelength calibration method. We are in the process of developing an automated data analysis pipeline for echelle data reduction and precise radial velocity extraction based on the REDUCE package of Piskunov & Valenti (2002), which is especially careful in dealing with CCD defects, extraneous noise, and cosmic ray spikes. Here we discuss the current status of the PARAS project and details and tests of the data analysis procedure, as well as results from ongoing PARAS commissioning activities.

11. Identificación de Intervenciones para el Desarrollo Positivo de la Juventud
PubMed Central. Sardiñas, Lili M.; Padilla, Viviana; Aponte, Mari; Boscio, Ana Morales; Pedrogo, Coralee Pérez; Santiago, Betzaida; Morales, Ángela Pérez; Dávila, Paloma Torres; Cesáreo, Marizaida Sánchez. 2017-01-01.

12. Yo, Ciudadano: Un Curriculo de Experiencias para Educacion Civica. Nivel: Kindergarten (Citizen Me: An Experiential Curriculum for Citizenship Education. Level: Kindergarten).
ERIC Educational Resources Information Center. Vardeman, Lou.
Integrating concepts of basic citizenship education with community involvement, this experiential curriculum provides a means for developing decision making and critical thinking skills within the existing social studies curriculum at the kindergarten level. Consisting of 11 lessons, the guide, written in Spanish, introduces the meaning of rules,…

13. Yo Ciudadano: Un Curriculo de Experiencias para Educacion Civica. Nivel: Cuatro (Citizen Me: An Experiential Curriculum for Citizenship Education. Level: Four).
ERIC Educational Resources Information Center. Lazarine, Dianne.
Integrating concepts of basic citizenship education with community involvement, this experiential curriculum provides a means for developing decision making and critical thinking skills within the existing fourth grade social studies curriculum. The 11 lessons, translated into Spanish, cover the following concepts: responsibility in the care…

14. Yo Ciudadano: Un Curriculo de Experiencias para Educacion Civica. Nivel: Uno (Citizen Me: An Experiential Curriculum for Citizenship Education. Level: One).
ERIC Educational Resources Information Center. Loftin, Richard.
Integrating concepts of basic citizenship education with community involvement, this experiential curriculum, written in Spanish, provides a means for developing decision making and critical thinking skills within the existing social studies curriculum in grade 1. Using short stories, field trips, and class discussions, the 11 lessons on…

15. Yo Ciudadano: Un Curriculo de Experiencias para Educacion Civica. Nivel: Dos (Citizen Me: An Experiential Curriculum for Citizenship Education. Level: Two).
ERIC Educational Resources Information Center. Lantz, Jean.
Integrating concepts of basic citizenship education with community involvement, this experiential curriculum provides a means for developing decision making and critical thinking skills within the existing second grade social studies curriculum. The 10 lessons, translated into Spanish, cover the following concepts: friendly, unfriendly and…

16. Yo Ciudadano: Un Curriculo de Experiencias para Educacion Civica. Nivel: Cinco (Citizen Me: An Experiential Curriculum for Citizenship Education. Level: Five).
ERIC Educational Resources Information Center. Gutierrez, Merri.
Integrating concepts of basic citizenship education with community involvement, this experiential curriculum provides a means for developing decision making and critical thinking skills within the existing fifth grade social studies curriculum. The 12 lessons, translated into Spanish, cover the following concepts: responsibility, rules and laws,…

17. Yo Ciudadano: Un Curriculo de Experiencias Para Educacion Civica. Nivel: Tres (Citizen Me: An Experiential Curriculum for Citizenship Education. Level: Three).
ERIC Educational Resources Information Center. Javora, Angela.
Integrating concepts of basic citizenship education with community involvement, this experiential curriculum provides a means for developing decision making and critical thinking skills within the existing third grade social studies curriculum. The 10 lessons, translated into Spanish, cover school rules as personal safety measures, consequences of…

18. Planar Para Algebras, Reflection Positivity
Jaffe, Arthur; Liu, Zhengwei. 2017-05-01.
We define a planar para algebra, which arises naturally from combining planar algebras with the idea of Z_N para symmetry in physics. A subfactor planar para algebra is a Hilbert space representation of planar tangles with parafermionic defects that are invariant under para isotopy. For each Z_N, we construct a family of subfactor planar para algebras that play the role of Temperley-Lieb-Jones planar algebras. The first example in this family is the parafermion planar para algebra (PAPPA). Based on this example, we introduce parafermion Pauli matrices, quaternion relations, and braided relations for parafermion algebras, which one can use in the study of quantum information. An important ingredient in planar para algebra theory is the string Fourier transform (SFT), which we use on the matrix algebra generated by the Pauli matrices. Two different reflections play an important role in the theory of planar para algebras. One is the adjoint operator; the other is the modular conjugation in Tomita-Takesaki theory. We use the latter to define the double algebra and to introduce reflection positivity. We give a new and geometric proof of reflection positivity by relating the two reflections through the string Fourier transform.

19. Experiencias sobre el impacto del Programa de Formación en Ética de la Investigación Biomédica y Psicosocial en el ámbito de la salud mental y la investigación conductual
PubMed Central. Barrios, Liliana Mondragón. 2012-01-01.
The purpose of this paper is to present the impact and integration that the knowledge acquired in the International Training Program in Ethics of Biomedical and Psychosocial Research at the University of Chile has had on my professional experience, in the field of psychosocial research at a health institute in Mexico. To this end, I will describe three areas in which this impact has been evident: work on ethics committees, development of academic programs in bioethics, and research and publication on ethics and bioethics. What led me to enter the Program was that its teaching links psychosocial research with ethics and bioethics, which allowed me to direct this kind of reflection toward problems such as violence, suicide, addictions, depression, and mental health, and toward new fields such as community studies with at-risk or vulnerable populations, in which the various implications are difficult to investigate. PMID:22754085

20. Mensaje para alumnos y padres
NASA Image and Video Library.
NASA astronaut José Hernández encourages students to follow their dreams. Hernández also talks about the role that parents play in helping their children make…

21. Encefalitis por anticuerpos contra el receptor de NMDA: experiencia con seis pacientes pediátricos. Potencial eficacia del metotrexato
PubMed Central. Bravo-Oro, Antonio; Abud-Mendoza, Carlos; Quezada-Corona, Arturo; Dalmau, Josep; Campos-Guevara, Verónica. 2016-01-01.

22. Programa de conservacion para aves migratorias neotropicales
Treesearch. Deborah Finch; Marcia Wilson; Roberto Roca. 1992.
More than 250 species of land birds migrate to North America during the breeding season to take advantage of temperate ecosystems. Nevertheless, Neotropical migratory birds spend most of their life cycle in the tropical and subtropical habitats of Latin American and Caribbean countries, where they live in close association with resident birds. …

23. Functional Expression of Drosophila para Sodium Channels
PubMed Central. Warmke, Jeffrey W.; Reenan, Robert A.G.; Wang, Peiyi; Qian, Su; Arena, Joseph P.; Wang, Jixin; Wunderler, Denise; Liu, Ken; Kaczorowski, Gregory J.; Ploeg, Lex H.T. Van der; Ganetzky, Barry; Cohen, Charles J. 1997-01-01.
The Drosophila para sodium channel α subunit was expressed in Xenopus oocytes alone and in combination with tipE, a putative Drosophila sodium channel accessory subunit. Coexpression of tipE with para results in elevated levels of sodium currents and accelerated current decay.
9. Functional Expression of Drosophila para Sodium Channels PubMed Central Warmke, Jeffrey W.; Reenan, Robert A.G.; Wang, Peiyi; Qian, Su; Arena, Joseph P.; Wang, Jixin; Wunderler, Denise; Liu, Ken; Kaczorowski, Gregory J.; Ploeg, Lex H.T. Van der; Ganetzky, Barry; Cohen, Charles J. 1997-01-01 The Drosophila para sodium channel α subunit was expressed in Xenopus oocytes alone and in combination with tipE, a putative Drosophila sodium channel accessory subunit. Coexpression of tipE with para results in elevated levels of sodium currents and accelerated current decay. Para/TipE sodium channels have biophysical and pharmacological properties similar to those of native channels. However, the pharmacology of these channels differs from that of vertebrate sodium channels: (a) toxin II from Anemonia sulcata, which slows inactivation, binds to Para and some mammalian sodium channels with similar affinity (Kd ≅ 10 nM), but this toxin causes a 100-fold greater decrease in the rate of inactivation of Para/TipE than of mammalian channels; (b) Para sodium channels are >10-fold more sensitive to block by tetrodotoxin; and (c) modification by the pyrethroid insecticide permethrin is >100-fold more potent for Para than for rat brain type IIA sodium channels. Our results suggest that the selective toxicity of pyrethroid insecticides is due at least in part to the greater affinity of pyrethroids for insect sodium channels than for mammalian sodium channels. PMID:9236205
10. Using ParaPost Tenax fiberglass and ParaCore build-up material to restore severely damaged teeth. PubMed Caicedo, Ricardo; Castellon, Paulino 2005-01-01 This article describes a technique using ParaPost Tenax Fiber White, ParaPost Cement, and ParaPost ParaCore build-up material to restore a tooth with a significant loss of tooth structure. After successful root canal therapy, the posts were bonded in the canals and the core was built using ParaPost ParaCore build-up material. At that point, the tooth was prepared to receive a conventional porcelain-fused-to-metal crown.
11. Calcineurin hydrolysis of para-nitrophenyl phosphorothioate. PubMed Spannaus-Martin, Donna J; Martin, Bruce L 2004-04-01 para-Nitrophenyl phosphorothioate (pNPT) was hydrolyzed by calcineurin at initial rates slightly different from, but comparable to, those for para-nitrophenyl phosphate (pNPP). Kinetic characterization yielded higher estimates for both Km and Vmax compared to pNPP (a minimal fitting sketch is given after the next entry). Metal ion activation of phosphorothioate hydrolysis was more promiscuous: unlike the hydrolysis of pNPP, Ca2+, Mg2+, and Ba2+ activated calcineurin, as did Mn2+.
12. The para-HK/QK correspondence Dyckmanns, Malte; Vaughan, Owen 2017-06-01 We generalise the hyper-Kähler/quaternionic Kähler (HK/QK) correspondence to include para-geometries, and present a new concise proof that the target manifold of the HK/QK correspondence is quaternionic Kähler. As an application, we construct one-parameter deformations of the temporal and Euclidean supergravity c-map metrics and show that they are para-quaternionic Kähler.
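The calcineurin entry above (entry 11) reports Km and Vmax estimates for pNPT versus pNPP. As a minimal sketch of how such constants are typically extracted from initial-rate data, here is a Michaelis-Menten fit by nonlinear least squares in R; the substrate concentrations and rates are synthetic, and the underlying parameter values (Vmax = 2, Km = 0.4) are assumptions of the illustration, not values from the paper.

```
# Michaelis-Menten fit by nonlinear least squares (synthetic data)
S <- c(0.05, 0.1, 0.25, 0.5, 1, 2, 5)        # substrate concentrations (arbitrary units)
set.seed(1)
v <- 2.0 * S / (0.4 + S) + rnorm(length(S), sd = 0.02)  # simulated initial rates
fit <- nls(v ~ Vmax * S / (Km + S), start = list(Vmax = 1, Km = 1))
summary(fit)$coefficients                    # estimates and standard errors for Vmax, Km
```

Fitting the same model to rate data for two substrates and comparing the estimated Km and Vmax side by side is the kind of kinetic characterization the abstract refers to.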
13. Process for para-ethyltoluene dehydrogenation SciTech Connect Chu, C.C. 1986-06-03 A process is described for dehydrogenating para-ethyltoluene to selectively form para-methylstyrene, comprising contacting the para-ethyltoluene under dehydrogenation reaction conditions with a catalyst composition comprising: (a) from about 30% to 60% by weight of iron oxide, calculated as ferric oxide; (b) from about 13% to 48% by weight of a potassium compound, calculated as potassium oxide; and (c) from about 0% to 5% by weight of a chromium compound, calculated as chromic oxide. The described improvement comprises dehydrogenating the para-ethyltoluene with a catalyst composition comprising, in addition to components (a), (b) and (c), a modifying component (d) capable of rendering the para-methylstyrene-containing dehydrogenation reaction effluent especially resistant to the subsequent formation of popcorn polymers when the dehydrogenation is conducted over the modified catalyst, the modifying component (d) being a bismuth compound present to the extent of from about 1% to 20% by weight of the catalyst composition, calculated as bismuth trioxide.
14. A note on para-holomorphic Riemannian-Einstein manifolds Ida, Cristian; Ionescu, Alexandru; Manea, Adelina 2016-06-01 The aim of this note is the study of the Einstein condition for para-holomorphic Riemannian metrics in the para-complex geometry framework. First, we make some general considerations about para-complex Riemannian manifolds (not necessarily para-holomorphic). Next, using a one-to-one correspondence between para-holomorphic Riemannian metrics and para-Kähler-Norden metrics, we study the Einstein condition for a para-holomorphic Riemannian metric and the associated real para-Kähler-Norden metric on a para-complex manifold. Finally, it is shown that every semi-simple para-complex Lie group inherits a natural para-Kählerian-Norden Einstein metric.
15. The ParaScope parallel programming environment NASA Technical Reports Server (NTRS) Cooper, Keith D.; Hall, Mary W.; Hood, Robert T.; Kennedy, Ken; Mckinley, Kathryn S.; Mellor-Crummey, John M.; Torczon, Linda; Warren, Scott K. 1993-01-01 The ParaScope parallel programming environment, developed to support scientific programming of shared-memory multiprocessors, includes a collection of tools that use global program analysis to help users develop and debug parallel programs. This paper focuses on ParaScope's compilation system, its parallel program editor, and its parallel debugging system. The compilation system extends the traditional single-procedure compiler by providing a mechanism for managing the compilation of complete programs. Thus, ParaScope can support both traditional single-procedure optimization and optimization across procedure boundaries. The ParaScope editor brings both compiler analysis and user expertise to bear on program parallelization. It assists the knowledgeable user by displaying and managing analysis and by providing a variety of interactive program transformations that are effective in exposing parallelism. The debugging system detects and reports timing-dependent errors, called data races, in the execution of parallel programs. The system combines static analysis, program instrumentation, and run-time reporting to provide a mechanical system for isolating errors in parallel program executions. Finally, we describe a new project to extend ParaScope to support programming in FORTRAN D, a machine-independent parallel programming language intended for use with both distributed-memory and shared-memory parallel computers.
16. Ortho-para-hydrogen equilibration on Jupiter NASA Technical Reports Server (NTRS) Carlson, Barbara E.; Lacis, Andrew A.; Rossow, William B. 1992-01-01 Voyager IRIS observations reveal that the Jovian para-hydrogen fraction is not in thermodynamic equilibrium near the NH3 cloud top, implying that a vertical gradient exists between the high-temperature equilibrium value of 0.25 at depth and the cloud-top values. The height-dependent para-hydrogen profile is obtained using an anisotropic multiple-scattering radiative transfer model. A vertical correlation is found to exist between the location of the para-hydrogen gradient and the NH3 cloud, strongly suggesting that paramagnetic conversion on NH3 cloud particle surfaces is the dominant equilibration mechanism. Below the NH3 cloud layer, the para fraction is constant with depth and equal to the high-temperature equilibrium value of 0.25. The degree of cloud-top equilibration appears to depend on the optical depth of the NH3 cloud layer. Belt-zone variations in the para-hydrogen profile seem to be due to differences in the strength of the vertical mixing.
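The "high-temperature equilibrium value of 0.25" quoted in the Jupiter entry just above follows directly from the nuclear-spin statistics of H2: para states have even J with spin weight 1, ortho states have odd J with spin weight 3. A minimal R sketch, assuming a rotational temperature of roughly 85 K for H2 (an approximate textbook value, not taken from the paper):

```
# Equilibrium para-H2 fraction from rotational level statistics
para_fraction <- function(temp, theta = 85.3, Jmax = 30) {
  J <- 0:Jmax
  w <- (2 * J + 1) * exp(-theta * J * (J + 1) / temp)  # Boltzmann weights
  z_para  <- sum(w[J %% 2 == 0])                       # even J, spin weight 1
  z_ortho <- 3 * sum(w[J %% 2 == 1])                   # odd J, spin weight 3
  z_para / (z_para + z_ortho)
}
para_fraction(300)   # ~0.25, the high-temperature equilibrium value
para_fraction(120)   # colder gas: equilibrium para fraction rises above 0.25
```

At high temperature the even- and odd-J partition sums become equal, so the para fraction tends to 1/(1+3) = 0.25, which is exactly the deep-atmosphere value the abstract cites.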
17. Mini-mastoidectomía para anastomosis hipogloso-facial con sección parcial del nervio hipogloso PubMed Central Campero, Álvaro; Ajler, Pablo; Socolovsky, Mariano; Martins, Carolina; Rhoton, Albert 2012-01-01 Introduction: Hypoglossal-facial anastomosis is the technique of choice for repairing facial paralysis when a healthy proximal stump of the facial nerve is not available. The anastomosis technique using mastoid drilling and partial section of the hypoglossal nerve minimizes tongue atrophy without sacrificing facial results. Method: The mastoid portion of the facial nerve runs along the anterior wall of the mastoid process (AM), at an average depth of 18 +/- 3 mm with respect to the lateral wall. The supramastoid crest must be identified, from which a vertical line is marked parallel to the major axis of the AM, 1 cm behind the posterior wall of the external auditory canal (CAE). Drilling begins at the mid-mastoid line and proceeds to the posterior wall of the CAE. Once the facial nerve is found in the middle third of the mastoid canal, it is followed proximally and distally. Results: The described approach gives access to the mastoid portion of the intratemporal facial nerve and allows bone drilling without putting the nerve or nearby vascular structures at risk. It is a technically simpler procedure than the wide approaches usually applied to the temporal bone; nevertheless, its use should be restricted mostly to hypoglossal-facial anastomosis. Conclusion: This is a relatively simple technique that can be reproduced by surgeons without extensive experience in the field after practice in the anatomy laboratory. PMID:23596555
18. Para-methylstyrene from toluene and acetaldehyde SciTech Connect Innes, R.A.; Occelli, M.L. 1984-08-01 High yields of para-methylstyrene (PMS) were obtained in this study by coupling toluene and acetaldehyde, then cracking the resultant 1,1-ditolylethane (DTE) to give equimolar amounts of PMS and toluene. In the first step, a total DTE and "trimer" yield of 98% on toluene and 93% on acetaldehyde was obtained using 98% sulfuric acid as catalyst at 5-10 °C. In the second step, a choline chloride-offretite catalyst cracked DTE with 84.0% conversion and 91% selectivity to PMS and toluene. Additional PMS can be obtained by cracking the by-product "trimer" formed by coupling DTE and toluene with acetaldehyde. Zeolite Rho was as active but yielded less PMS (86%) and produced more para-ethyltoluene (PET), an undesirable by-product.
19. Para Bombay phenotype--a case report. PubMed Mathai, J; Sulochana, P V; Sathyabhama, S 1997-10-01 The Bombay phenotype is peculiar in that red cells are not agglutinated by antisera A, B or H, while the serum contains anti-A, anti-B and anti-H. The existence of modifying genes at independent loci with variable expression of ABO genes is postulated. We report here a case of partial suppression in which the antigens could be detected by elution tests and, unlike the classical Bombay type, normal amounts of the appropriate blood group substances were present in saliva. This case of para-Bombay phenotype was detected as a result of a discrepancy between cell and serum grouping, which highlights the importance of both forward and reverse grouping in ABO testing.
20. On q-Deformed Para Oscillators and Para-q Oscillators Kumari, M. Krishna; Shanta, P.; Chaturvedi, S.; Srinivasan, V. Three generalized commutation relations for a single mode of the harmonic oscillator, which contain the para-Bose and q-oscillator commutation relations, are constructed. These are shown to be inequivalent. The coherent states of the annihilation operator for these three cases are also constructed.
1. Allergic contact dermatitis to para-phenylenediamine. PubMed Jenkins, David; Chow, Elizabeth T 2015-02-01 Exposure to hair dye is the most frequent route of sensitisation to para-phenylenediamine (PPD), a common contact allergen. International studies have examined the profile of PPD, but Australian-sourced information is lacking. Patients are often dissatisfied with advice to stop dyeing their hair. This study examines patients' characteristics, patch test results and outcomes of PPD allergy from a single Australian centre, through a retrospective analysis of patch test data from 2006 to 2013 at the Liverpool Hospital Dermatology Department. It reviews the science of hair dye allergy, examines alternative hair dyes and investigates strategies for hair dyeing. Of 584 patients, 11 were allergic to PPD. Our PPD allergy prevalence rate of 2% is at the lower end of internationally reported rates. About half of these patients also reacted to para-toluenediamine (PTD). Affected patients experience a significant lifestyle disturbance. In all, 78% tried alternative hair dyes after the patch test diagnosis and more than half continued to dye their hair. Alternative non-PPD hair dyes are available but the marketplace can be confusing. Although some patients are able to tolerate alternative hair dyes, caution is needed, as the risk of developing an allergy to other hair dye ingredients, especially PTD, is high.
2. Sensitization to para-tertiary-butylphenolformaldehyde resin. PubMed Massone, L; Anonide, A; Borghi, S; Usiglio, D 1996-03-01 Phenolformaldehyde resins, especially para-tertiary-butylphenolformaldehyde resin (PTBP-FR), are widely used in industry and in numerous materials of everyday use, such as glues, adhesives, or inks. They can cause many occupational and nonoccupational cases of dermatitis. Forty-one patients with positive patch test results to PTBP-FR were selected for this study. They were patch-tested with a series of chemically related compounds and cross-reactions were noted. Phenolformaldehyde resin (PF-R) was frequently positive (65.8%), whereas other compounds gave a much smaller number of positive results. Cases of occupational exposure (24.4%), location of the dermatitis (hands were involved in 46.3% of cases), and possible sources of exposure (shoes were the responsible agent in 12.2% of cases) were evaluated. Phenolformaldehyde resins are an important cause of contact dermatitis and must be studied chemically and clinically to improve the prognosis of sensitized patients.
3. [Laparoscopic treatment of para-esophageal hernias]. PubMed Collet, D; Wagner, T; Sa Cunha, A; Rault, A; Masson, B 2006-10-01 This retrospective study aims at analyzing the functional results obtained in patients operated on by laparoscopy for a para-esophageal hernia. From 1994 to 2004, 38 patients underwent a laparoscopic procedure for a symptomatic para-esophageal hiatal hernia involving at least three-quarters of the proximal stomach: 27 females and 11 males, mean age 65 years (range: 22-84). No patient was operated on as an emergency; 4 patients had had at least one episode of intrathoracic volvulus. The operation consisted of reduction of the stomach into the abdominal cavity, excision of the sac, suture of the crura (reinforced with a mesh in 6 patients) and construction of a gastric wrap. A postoperative barium swallow was performed on POD 3 in order to confirm the anatomical result. Mean operating time was 157 minutes (75-480); no case was converted to laparotomy. Four postoperative complications were observed (morbidity 10.8%): one gastric perforation diagnosed on POD 1, two severe dysphagias linked to the wrap, and one atelectasis. There were no deaths in this series. Functional results were evaluated by means of a questionnaire in the 33 patients with more than 6 months of follow-up. Thirty-three questionnaires were sent; 3 patients were lost to follow-up and one had died. Among the 29 patients analyzed, 14 were very satisfied, 11 were satisfied and 3 were disappointed by the operation. The best results are obtained in patients with GERD, dysphagia or postprandial cardiothoracic symptoms. These results, compared with the published data, allow us to discuss the indications for surgery, the need to excise the hernia sac, and the advantages of reinforcing the crura with a nonabsorbable mesh.
4. Time domain para hydrogen induced polarization. PubMed Ratajczyk, Tomasz; Gutmann, Torsten; Dillenberger, Sonja; Abdulhussaein, Safaa; Frydel, Jaroslaw; Breitzke, Hergen; Bommerich, Ute; Trantzschel, Thomas; Bernarding, Johannes; Magusin, Pieter C M M; Buntkowsky, Gerd 2012-01-01 Para hydrogen induced polarization (PHIP) is a powerful hyperpolarization technique, which increases the NMR sensitivity by several orders of magnitude. However, the hyperpolarized signal is created as an anti-phase signal, which necessitates high magnetic field homogeneity and spectral resolution in conventional PHIP schemes. This hampers the application of PHIP enhancement in many fields, for example in food science, materials science or MRI, where low B(0) fields or low B(0) homogeneity decrease spectral resolution, leading to potential signal extinction if in-phase and anti-phase hyperpolarization signals cannot be resolved. Herein, we demonstrate that the echo sequence (45°-τ-180°-τ) enables the acquisition of low-resolution PHIP-enhanced liquid-state NMR signals of phenylpropiolic acid derivatives and phenylacetylene on a low-cost, low-resolution 0.54 T spectrometer. As low-field TD spectrometers are commonly used in industry and biomedicine for the relaxometry of oil-water mixtures, food, nano-particles, or other systems, we compare two variants of para-hydrogen induced polarization with data evaluation in the time domain (TD-PHIP). In both TD-ALTADENA and TD-PASADENA, strong spin echoes could be detected under conditions where usually no anti-phase signals can be measured due to the lack of resolution. The results suggest that the time-domain detection of PHIP-enhanced signals opens up new application areas for low-field PHIP hyperpolarization, such as non-invasive compound detection or new contrast agents and biomarkers in low-field Magnetic Resonance Imaging (MRI). Finally, solid-state NMR calculations are presented, which show that the solid-echo (90y-τ-90x-τ) version of the TD-ALTADENA experiment is able to convert up to 10% of the PHIP signal into visible magnetization.
5. The para-Bombay phenotype in Chinese persons. PubMed Lin-Chu, M; Broadberry, R E; Tsai, S J; Chiou, P W 1987-01-01 The para-Bombay phenotype occurs more frequently in Oriental than in white populations. This report describes the immunohematologic findings in 20 cases of the para-Bombay phenotype detected over a period of about 15 months in the Chinese population of Taiwan.
6. Cooling by Para-to-Ortho-Hydrogen Conversion NASA Technical Reports Server (NTRS) Sherman, A.; Nast, T. 1983-01-01 A catalyst speeds conversion, increasing the capacity of a solid hydrogen cooling system. In a radial-flow catalytic converter, para-hydrogen is converted to an equilibrium mixture of para-hydrogen and ortho-hydrogen as it passes through a porous cylinder of catalyst. Addition of the catalyst increases the capacity of hydrogen sublimation cooling systems for radiation detectors.
7. Requisitos para utilizar el enlace | Smokefree Español Cancer.gov Espanol.smokefree.gov offers support and resources for Americans who speak Spanish and want to quit smoking. The website was created by the Tobacco Control Research Branch of the National Cancer Institute.
8. A nontraumatic para-aortic lymphocele complicating nephrolithiasis. PubMed Hyson, E A; Belleza, N A; Lowman, R M 1977-09-01 Many cases of traumatic para-aortic lymphocele have been reported. Recently, a case of nontraumatic para-aortic lymphocele was investigated. The etiologic consideration for this lymphocele formation is either a localized inflammatory process or fibrosis induced by the prior passage of calculi.
9. Towards a double field theory on para-Hermitian manifolds Vaisman, Izu 2013-12-01 In a previous paper, we have shown that the geometry of double field theory has a natural interpretation on flat para-Kähler manifolds. In this paper, we show that the same geometric constructions can be made on any para-Hermitian manifold. The field is interpreted as a compatible (pseudo-)Riemannian metric. The tangent bundle of the manifold has a natural, metric-compatible bracket that extends the C-bracket of double field theory. In the para-Kähler case, this bracket is equal to the sum of the Courant brackets of the two Lagrangian foliations of the manifold. Then, we define a canonical connection and an action of the field that correspond to similar objects of double field theory. Another section is devoted to the Marsden-Weinstein reduction in double field theory on para-Hermitian manifolds. Finally, we give examples of fields on some well-known para-Hermitian manifolds.
11. Experiencias y repercusión de una formación en ética de investigación PubMed Central Rupaya, Carmen Rosa García 2012-01-01 This article describes the achievements and repercussions of the research ethics training provided by the Interdisciplinary Center for Bioethics Studies of the Universidad de Chile, which serves as stimulus, motivation and guidance for professionals who need to know and apply the norms and reasoning that lead to deliberation on the problems of this discipline. It also describes how this knowledge generates a multiplier effect in areas such as participation in a research ethics committee (REC), the organization of courses, and the creation and development of lines of research, which result in publications produced with graduate students. It further relates the contents and teaching strategies that can be used in ethics and bioethics courses for dentistry students, and concludes by mentioning the practical application of this training in the teaching, institutional and research spheres. PMID:24482556
12. Inclusion of Astronomy Themes in an Innovative Approach of Informal Physics Teaching for High School Students. (Spanish Title: Inclusión de Temas Astronómicos en Una Abordaje Innovadora de la Enseñanza Informal de Física Para Estudiantes de Secundaria.) Inclusão de Temas Astronômicos Numa Abordagem Inovadora do Ensino Informal de Física Para Estudantes do Ensino Médio Tiara Mota, Aline; de Morais Bonomini, Iracema Ariel; Meloni Martins Rosado, Ricardo 2009-12-01 The current work reports on an experience in Astronomy education at the Federal University of Itajubá through an extra-curricular course offered to high school students. This initiative was motivated by the little attention paid to Astronomy at this stage of Brazilian formal education, despite the fact that the National Curricular Parameters (PCN and PCN+, in Brazil) point out the importance of its inclusion.
13. A New Equation of State for Solid para-Hydrogen Wang, Lecheng; Le Roy, Robert J.; Roy, Pierre-Nicholas 2015-06-01 Solid para-H_2 is a popular accommodating host for impurity spectroscopy due to its unique softness and the spherical symmetry of para-H_2 in its J=0 rotational level. To simulate the properties of impurity-doped solid para-H_2, a reliable model for the 'soft' pure solid para-H_2 at different pressures is highly desirable. While a couple of experimental and theoretical studies aimed at elucidating the equation of state (EOS) of solid para-H_2 have been reported, the calculated EOS was shown to be heavily dependent on the potential energy surface (PES) between two para-H_2 molecules used in the simulations. The current study also demonstrates that different choices of the parameters governing the Quantum Monte Carlo simulation can produce different EOS curves. To obtain a reliable model for pure solid para-H_2, we used a new 1-D para-H_2 PES reported by Faruk et al. that was obtained by averaging over Hinde's highly accurate 6-D H_2-H_2 PES. The EOS of pure solid para-H_2 was calculated using the PIMC algorithm with periodic boundary conditions (PBC). To precisely determine the equilibrium density of solid para-H_2, both the value of the PIMC time step (τ) and the number of particles in the PBC cell were extrapolated to convergence. The resulting EOS agreed well with experimental observations, and the hcp-structured solid para-H_2 was found to be more stable than the fcc one at 4.2 K, in agreement with experiment. The vibrational frequency shift of para-H_2 as a function of the density of the pure solid was also calculated, and the value of the shift at the equilibrium density is found to agree well with experiment. T. Momose, H. Honshina, M. Fushitani and H. Katsuki, Vib. Spectrosc. 34, 95 (2004). M. E. Fajardo, J. Phys. Chem. A 117, 13504 (2013). I. F. Silvera, Rev. Mod. Phys. 52, 393 (1980). F. Operetto and F. Pederiva, Phys. Rev. B 73, 184124 (2006). T. Omiyinka and M. Boninsegni, Phys. Rev. B 88, 024112 (2013). N. Faruk, M. Schmidt, H. Li, R. J. Le Roy, and P
14. ParaDiS-FEM dislocation dynamics simulation code primer SciTech Connect Tang, M; Hommes, G; Aubry, S; Arsenlis, A 2011-09-27 The ParaDiS code is developed to study bulk systems with periodic boundary conditions. When we try to perform discrete dislocation dynamics simulations for finite systems such as thin films or cylinders, the ParaDiS code must be extended. First, dislocations need to be contained inside the finite simulation box; second, dislocations inside the finite box experience image stresses due to the free surfaces. We have developed in-house FEM subroutines to couple with the ParaDiS code to deal with free-surface-related issues in the dislocation dynamics simulations. This primer explains how the coupled code was developed, the main changes from the ParaDiS code, and the functions of the new FEM subroutines.
15. Cooling by conversion of para to ortho-hydrogen NASA Technical Reports Server (NTRS) Sherman, A. (Inventor) 1983-01-01 The cooling capacity of a solid hydrogen cooling system is significantly increased by exposing vapor created during evaporation of a solid hydrogen mass to a catalyst, thereby accelerating the endothermic para-to-ortho transition of the vapor to equilibrium hydrogen. Catalysts such as nickel, copper, iron, or metal hydride gels or films in a low-pressure-drop catalytic reactor are suitable for accelerating the endothermic para-to-ortho conversion.
16. Ortho- and para-hydrogen in neutron thermalization SciTech Connect Daemen, L. L.; Brun, T. O. 1998-01-01 The large difference in neutron scattering cross-section at low neutron energies between ortho- and para-hydrogen was recognized early on. In view of this difference (more than an order of magnitude), one might legitimately ask whether the ortho/para ratio has a significant effect on the neutron thermalization properties of a cold hydrogen moderator. Several experiments performed in the 60s and early 70s with a variety of source and (liquid hydrogen) moderator configurations attempted to investigate this. The results tend to show that the ortho/para ratio does indeed have an effect on the energy spectrum of the neutron beam produced. Unfortunately, the results are not always consistent with each other and much unknown territory remains to be explored. The problem has been approached from a computational standpoint, but these isolated efforts are far from having examined the ortho/para-hydrogen problem in neutron moderation in all its complexity. Because of space limitations, the authors cannot cover, even briefly, all the aspects of the ortho/para question here. This paper summarizes experiments meant to investigate the effect of the ortho/para ratio on the neutron energy spectrum produced by liquid hydrogen moderators.
17. ParaText: scalable text modeling and analysis. SciTech Connect Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M. 2010-06-01 Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimension feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols ... from standard web browsers to custom clients written in any language.
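The LSA idea described in the ParaText entry above (documents as term vectors, reduced by a truncated SVD) can be illustrated on a single machine in a few lines of R. ParaText itself is a distributed C++ toolkit, so this is only a sketch of the underlying model, with a toy four-document corpus made up for the example:

```
# Toy latent semantic analysis: term-document matrix + truncated SVD
docs <- c("neutron scattering cross section",
          "neutron moderator liquid hydrogen",
          "text modeling latent semantic analysis",
          "semantic analysis of text collections")
terms <- unique(unlist(strsplit(docs, " ")))
tdm <- sapply(docs, function(d) as.numeric(terms %in% strsplit(d, " ")[[1]]))
rownames(tdm) <- terms                     # binary term-document matrix
k <- 2                                     # number of latent "concept" dimensions
s <- svd(tdm)
doc_coords <- diag(s$d[1:k]) %*% t(s$v[, 1:k])   # documents in concept space
norms <- sqrt(colSums(doc_coords^2))
round(crossprod(doc_coords) / outer(norms, norms), 2)  # cosine similarities
```

The two physics documents and the two text-analysis documents land close together in the reduced space even though each pair shares only some of its terms, which is the "conceptual organization" the abstract describes.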
18. A Comparative Usage-Based Approach to the Reduction of the Spanish and Portuguese Preposition "Para" ERIC Educational Resources Information Center 2013-01-01 This study examines the frequency effect of two-word collocations involving "para" "to," "for" (e.g. "fui para," "para que") on the reduction of "para" to "pa" (in Spanish) and "pra" (in Portuguese). Collocation frequency effects demonstrate that language speakers…
20. Hox and ParaHox genes: a review on molluscs. PubMed Biscotti, Maria Assunta; Canapa, Adriana; Forconi, Mariko; Barucca, Marco 2014-12-01 Hox and ParaHox genes are involved in patterning the anterior-posterior body axis in metazoans during embryo development. Body plan evolution and diversification are affected by variations in the number and sequence of Hox and ParaHox genes, as well as by their expression patterns. For this reason Hox and ParaHox gene investigation in the phylum Mollusca is of great interest, as this is one of the most important taxa of protostomes, characterized by a high morphological diversity. The comparison of the works reviewed here indicates that species of molluscs belonging to different classes share a similar composition of Hox and ParaHox genes. The evidence therefore suggests that the wide morphological diversity of this taxon could be ascribed to differences in Hox gene interactions and expression and to changes in the Hox downstream genes, rather than to Hox cluster composition. Moreover, the data available on Hox and ParaHox genes in molluscs, compared with those of other Lophotrochozoa, shed light on the complex and controversial evolutionary histories that these genes have undergone within protostomes.
1. Detection of the MW Transition Between Ortho and Para States Kanamori, Hideto; Dehghani, Zeinab Tafti; Mizoguchi, Asao; Endo, Yasuki 2017-06-01 Through detailed analysis of the hyperfine-resolved rotational transitions, we have pointed out that there is appreciable interaction between ortho and para states in the molecular Hamiltonian of S_2Cl_2. Using the ortho-para mixed molecular wavefunctions derived from the Hamiltonian, we calculated the transition moments and frequencies of the ortho-para forbidden transitions in the cm- and mm-wave regions, and selected some promising candidate transitions for spectroscopic detection. In the experiment, S_2Cl_2 vapor with Ar buffer gas in a supersonic jet was probed with the FTMW spectrometer at National Chiao Tung University. As a result, seven hyperfine-resolved rotational transitions in the cm-wave region were detected as ortho-para transitions at the predicted frequencies, within the experimental error range. The observed intensity was about 10^{-3} of that of an allowed transition, which is also consistent with the prediction. This is the first time an electric dipole transition between ortho and para states has been detected in a free isolated molecule. A. Mizoguchi, S. Ota, H. Kanamori, Y. Sumiyoshi, and Y. Endo, J. Mol. Spectrosc. 250, 86 (2008). Z. T. Dehghani, S. Ota, A. Mizoguchi and H. Kanamori, J. Phys. Chem. A 117(39), 10041 (2013).
2. Evolution of invertebrate deuterostomes and Hox/ParaHox genes. PubMed Ikuta, Tetsuro 2011-06-01 Transcription factors encoded by Antennapedia-class homeobox genes play crucial roles in controlling development of animals, and are often found clustered in animal genomes. The Hox and ParaHox gene clusters have been regarded as evolutionary sisters and evolved from a putative common ancestral gene complex, the ProtoHox cluster, prior to the divergence of the Cnidaria and Bilateria (bilaterally symmetrical animals). The Deuterostomia is a monophyletic group of animals that belongs to the Bilateria, and a sister group to the Protostomia. The deuterostomes include the vertebrates (to which we belong), invertebrate chordates, hemichordates, echinoderms and possibly xenoturbellids, as well as acoelomorphs. The studies of Hox and ParaHox genes provide insights into the origin and subsequent evolution of the bilaterian animals. Recently, it has become apparent that among the Hox and ParaHox genes there are significant variations in organization on the chromosome, expression pattern, and function. In this review, focusing on invertebrate deuterostomes, I first summarize recent findings about Hox and ParaHox genes. Next, citing unsolved issues, I try to provide clues that might allow us to reconstruct the common ancestor of deuterostomes, as well as to understand the roles of Hox and ParaHox genes in the development and evolution of deuterostomes.
3. β-Cyclodextrin-para-aminosalicylic acid inclusion complexes Roik, N. V.; Belyakova, L. A.; Oranskaya, E. I. 2010-11-01 Complex formation of β-cyclodextrin with para-aminosalicylic acid in buffer solutions is studied by UV spectroscopy. It is found that the stoichiometric proportion of the components in the β-cyclodextrin-para-aminosalicylic acid inclusion complex is 1:1. The Ketelaar equation is used to calculate the stability constants of the inclusion complexes at different temperatures. The thermodynamic parameters of the complex formation process (ΔG, ΔH, ΔS) are calculated using the van't Hoff equation. The 1:1 β-cyclodextrin-para-aminosalicylic acid inclusion complex is prepared in solid form and its characteristics are determined by IR spectroscopic and x-ray diffraction techniques.
4. Quantum simulation of driven para-Bose oscillators Alderete, C. Huerta; Rodríguez-Lara, B. M. 2017-01-01 Quantum mechanics allows paraparticles with mixed Bose-Fermi statistics that have not been experimentally confirmed. We propose a trapped-ion scheme whose effective dynamics are equivalent to a driven para-Bose oscillator of even order. Our mapping suggests highly entangled vibrational and internal ion states as the laboratory equivalent of quantum-simulated parabosons. Furthermore, we show the generation and reconstruction of coherent oscillations and para-Bose analogs of Gilmore-Perelomov coherent states from population-inversion measurements in the laboratory frame. Our proposal, apart from demonstrating an analog quantum simulator of para-Bose oscillators, provides a quantum state engineering tool that foreshadows the potential use of paraparticle dynamics in the design of quantum information systems.
5. Una técnica para filtrar patrones de fringing Ostrov, P. G. A new technique is presented for filtering the fringing patterns produced in RCA-type CCDs. The method consists of building a map of the inclination angles of the fringes at each point of the image. This map is subsequently used to align a narrow window with the interference pattern, over which a median filter is applied. This procedure removes most of the noise from the fringing pattern without destroying it.
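The fringing-filter recipe in the entry just above (an angle map, a narrow window aligned with the fringes, a median filter along it) is easy to prototype. A rough R sketch follows, with a synthetic diagonal fringe pattern standing in for a real CCD frame; the function, window size, and constant angle map are illustrative assumptions, not the author's implementation:

```
# Median filter applied along the local fringe direction
directional_median <- function(img, angle, half = 5) {
  out <- img
  nr <- nrow(img); nc <- ncol(img)
  for (i in 1:nr) for (j in 1:nc) {
    d  <- (-half):half                          # window steps along the fringe
    ii <- round(i + d * sin(angle[i, j]))
    jj <- round(j + d * cos(angle[i, j]))
    ok <- ii >= 1 & ii <= nr & jj >= 1 & jj <= nc
    out[i, j] <- median(img[cbind(ii[ok], jj[ok])])
  }
  out
}
x <- outer(1:64, 1:64, function(i, j) sin((i + j) / 4))   # synthetic fringes
noisy  <- x + matrix(rnorm(64 * 64, sd = 0.3), 64, 64)
smooth <- directional_median(noisy, matrix(atan2(-1, 1), 64, 64))
```

Because the window runs along lines of constant fringe phase, the median suppresses pixel noise while leaving the fringe amplitude essentially intact, which is the point of the technique.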
6. Ortho and para-armalcolite samples in Apollo 17. NASA Technical Reports Server (NTRS) Haggerty, S. E. 1973-01-01 Two paragenetically contrasting forms of armalcolite are present in basalts from the Apollo 17 Taurus-Littrow landing site. These armalcolites differ in optical properties, in crystal habit and in their distribution between coarse- and fine-grained rocks. It is proposed to call the two armalcolite forms ortho-armalcolite and para-armalcolite. Textural relationships and the evidence of experimental melting show that ortho-armalcolite is always the first crystalline phase to appear from unusually titanium-rich magmas. The origin of para-armalcolite is not yet fully understood.
7. Desarrollo y validación de una nueva tecnología, basada en arginina al 1.5%, un compuesto de calcio insoluble y fluoruro, para el uso diario en la prevención y tratamiento de la caries dental (Development and validation of a new technology, based on 1.5% arginine, an insoluble calcium compound and fluoride, for daily use in the prevention and treatment of dental caries). PubMed Cummins, D 2013-10-22
8. On some examples of para-Hermite and para-Kähler Einstein spaces with Λ ≠ 0 2017-02-01 Spaces equipped with two complementary (distinct) congruences of self-dual null strings and at least one congruence of anti-self-dual null strings are considered. It is shown that if such spaces are Einsteinian, then the vacuum Einstein field equations can be reduced to a single nonlinear partial differential equation of the second order. Different forms of this equation are analyzed. Finally, several new explicit metrics of para-Hermite and para-Kähler Einstein spaces with Λ ≠ 0 are presented. A relation of these metrics to a modern approach to mechanical problems is also discussed.
9. Utilice en forma segura los productos con cebo para roedores EPA Pesticide Factsheets If used improperly, rat and mouse poison products can harm you, your children, or your pets. Whenever you use pesticides, read the product label and follow all directions.
10. Analyzing and Visualizing Cosmological Simulations with ParaView SciTech Connect Woodring, Jonathan; Heitmann, Katrin; Ahrens, James P; Fasel, Patricia; Hsu, Chung-Hsing; Habib, Salman; Pope, Adrian 2011-01-01 The advent of large cosmological sky surveys - ushering in the era of precision cosmology - has been accompanied by ever larger cosmological simulations. The analysis of these simulations, which currently encompass tens of billions of particles and up to a trillion particles in the near future, is often as daunting as carrying out the simulations in the first place. Therefore, the development of very efficient analysis tools combining qualitative and quantitative capabilities is a matter of some urgency. In this paper, we introduce new analysis features implemented within ParaView, a fully parallel, open-source visualization toolkit, to analyze large N-body simulations. A major aspect of ParaView is that it can live and operate on the same machines and utilize the same parallel power as the simulation codes themselves. In addition, data movement is a serious bottleneck now and will become even more of an issue in the future; an interactive visualization and analysis tool that can handle data in situ is fast becoming essential. The new features in ParaView include particle readers and a very efficient halo finder that identifies friends-of-friends halos and determines common halo properties, including spherical overdensity properties. In combination with many other functionalities already existing within ParaView, such as histogram routines or interfaces to programming languages like Python, this enhanced version enables fast, interactive, and convenient analyses of large cosmological simulations. In addition, development paths are available for future extensions.
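For readers unfamiliar with the friends-of-friends halo finder mentioned in the ParaView entry above: two particles belong to the same halo whenever they are closer than a linking length, and halos are the connected components of that relation. A deliberately naive O(N^2) R sketch of the idea, on synthetic data (the ParaView implementation is parallel and far more efficient):

```
# Naive friends-of-friends grouping via union-find
fof <- function(pos, b) {
  n <- nrow(pos)
  label <- 1:n
  find <- function(i) { while (label[i] != i) i <- label[i]; i }
  d <- as.matrix(dist(pos))                  # all pairwise distances: O(N^2)
  for (i in 1:(n - 1)) for (j in (i + 1):n)
    if (d[i, j] < b) label[find(i)] <- find(j)   # link the two groups
  sapply(1:n, find)                          # halo id for each particle
}
set.seed(2)
pos <- rbind(matrix(rnorm(60, 0, 0.1), ncol = 3),   # clump near the origin
             matrix(rnorm(60, 3, 0.1), ncol = 3))   # clump near (3,3,3)
table(fof(pos, b = 0.5))                     # two halos of 20 particles each
```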
11. The total neutron cross section of liquid para-hydrogen Celli, M.; Rhodes, N.; Soper, A. K.; Zoppi, M. 1999-12-01 We have measured, using the pulsed neutron source ISIS, the total neutron cross section of liquid para-hydrogen in the vicinity of the triple point. The experimental results compare only qualitatively with the results of the Young and Koppel theory. However, a much better agreement is found once modifications are included in the model which effectively take into account the intermolecular interactions.
12. "Espanol para ti": A Video Program That Works. ERIC Educational Resources Information Center Steele, Elena; Johnson, Holly 2000-01-01 Describes the development of "Espanol para ti," a video program for teaching Spanish at the elementary school level. The program was designed for use in Clark County, Nevada elementary schools and is taught by a certified Spanish teacher via video twice a week, utilizing comprehensible input through visuals, games, and songs that are conducive to…
13. Sistemas Correctores de Campo Para El Telescopio Cassegrain IAC80 Galan, M. J.; Cobos, F. J. 1987-05-01 The most important instrumentation project of the Instituto de Astrofísica de Canarias in recent years has been the design and construction of the IAC80 telescope. It required a joint effort in mechanics, optics and electronics, which facilitated the structuring and growth of the respective working groups, which were later integrated into departments. At its origin (1977), the IAC80 telescope was conceived as a classical Cassegrain system, with a focal ratio of F/11.3 for the Cassegrain system and F/20 for the Coudé system. Later, although the philosophy of keeping F/11.3 as the basic system was maintained, it was considered convenient to design secondaries for focal ratios F/16 and F/32, and the F/20 one was eliminated. However, given the relative importance that a strictly photographic focus has in a modern telescope designed basically for photoelectric photometry, with a useful field of at least 40 arcminutes, it was decided to carry out the design of an F/8 secondary with a field-corrector system formed only by lenses with spherical surfaces, so that its construction would be possible in Spain or in Mexico. The growing use of two-dimensional detectors for astronomical research, and the likelihood that in the near future these will have ever larger sensitive areas, made attractive the idea of designing a field-corrector system for the prime focus (F/3), with a minimum useful field of one degree, likewise with the constraint that its components have only spherical surfaces. Both field-corrector designs were carried out, to a large extent, as part of a collaboration and exchange project in the area of optical system design and evaluation.
14. Kit para aplicar la metodología de Lean en el gobierno EPA Pesticide Factsheets This Lean Government starter kit offers information to help environmental protection agencies plan and implement successful Lean initiatives.
15. Para-acetabular periarthritis calcarea: its radiographic manifestations. PubMed Kawashima, A; Murayama, S; Ohuchida, T; Russell, W J 1988-01-01 On retrospective reviews of radiographs, periarthritis calcarea was distinguished from os acetabula by interval radiographic progression and regression. Among 59 men and 51 women, there were 137 instances of para-acetabular calcifications and ossifications, which were morphologically classified as 58 discrete, 58 amorphous, and 21 segmented types. Correlations with other radiographic abnormalities, symptoms, signs, and laboratory abnormalities were sought, but not established. Of 93 serially imaged opacities, 90 changed, including 37 of the 40 instances (92.5%) of the discrete type and 53 instances (100%) of the amorphous and segmented types, due to periarthritis calcarea. At least 43 of the 90 densities were newly developed. Mean age at first detection was 47.7 years. Three of the discrete densities were unchanged and represented os acetabula. Thus, recognition of para-acetabular periarthritis calcarea is not only of academic importance; it can facilitate proper treatment as well.
16. Focal para-hisian atrial tachycardia with dual exits PubMed Central Lawrance Jesuraj, M.; Sharada, K.; Sridevi, C.; Narasimhan, C. 2013-01-01 Focal atrial tachycardias (AT) in the right atrium (RA) tend to cluster around the crista terminalis, coronary sinus (CS) region, tricuspid annulus, and para-hisian region. In most cases, the AT focus can be identified by careful activation mapping and completely eliminated by radiofrequency (RF) catheter ablation. However, RF ablation near the His bundle (HB) carries a risk of inadvertent damage to the atrioventricular (AV) conduction system. Here we describe a patient with an AT originating in the vicinity of the AV node, which was successfully ablated earlier from the non-coronary aortic cusp (NCC) and recurred with an exit from a para-hisian location. Respiratory excursions of the catheter were associated with migration toward the His bundle region. The tachycardia was successfully ablated during controlled apnoea, using 3D electroanatomic mapping. PMID:23993015
17. Conversion of para and ortho hydrogen in the Jovian planets NASA Technical Reports Server (NTRS) Massie, S. T.; Hunten, D. M. 1982-01-01 A mechanism is proposed which partially equilibrates the para and ortho rotational levels of molecular hydrogen in the atmospheres of Jupiter, Saturn, and Uranus. Catalytic reactions between the free-radical surface sites of aerosol particles and hydrogen molecules yield significant equilibration near 1 bar pressure, if the efficiency of conversion per collision is between 10^-8 and 10^-10 and the effective eddy mixing coefficient is 10,000 cm^2/s. At lower pressures the ortho-para ratio retains the value at the top of the cloud layer, except for a very small effect from conversion in the thermosphere. The influence of conversion on the specific heat and adiabatic lapse rate is also investigated. The effect is found to be generally small, though it can rise to 10% inside the aerosol layer.
18. Production Ratio for Para- and Ortho-Ps in Photodetachment of Ps^- Igarashi, Akinori 2017-01-01 Para- and ortho-Ps atoms are formed in the photodetachment of the positronium negative ion. Since the lifetime against pair annihilation is much shorter for para-Ps(ns) than for ortho-Ps(ns), the production ratio of para- and ortho-Ps atoms is important for photodetachment experiments. We have derived the ratio explicitly.
19. Synthesis of High Molecular Weight Para-Phenylene PBI DTIC Science & Technology 1974-11-01 ... give high molecular weight m-phenylene PBI (Reference 7). The polymer was completely soluble in methanesulfonic acid and 98% formic acid. The polymer with ... monomer is a white crystalline solid which can be quantitatively hydrolyzed in an acid medium to give the free TAB. Stoichiometric quantities of IX ... (Technical report AFML-TR-74-199, November 1974; distribution limited to U.S. Government agencies only, test and evaluation.)
20. Photodissociation dynamics of the ortho- and para-xylyl radicals Pachner, Kai; Steglich, Mathias; Hemberger, Patrick; Fischer, Ingo 2017-08-01 The photodissociation dynamics of the C8H9 isomers ortho- and para-xylyl are investigated in a free jet. The xylyl radicals are generated by flash pyrolysis from 2-(2-methylphenyl)- and 2-(4-methylphenyl) ethyl nitrite and are excited into the D3 state. REMPI spectra show vibronic structure, and the origin of the transition is identified at 32 291 cm-1 for the para- and at 32 132 cm-1 for the ortho-isomer. Photofragment H-atom action spectra show bands at the same energies and thus confirm H-atom loss from the xylyl radicals. To gain further insight into the photodissociation dynamics, velocity map images of the hydrogen-atom photofragments are recorded. Their angular distribution is isotropic and the translational energy release is in agreement with a dissociation to products in their electronic ground state. Photodissociation of para-xylyl leads to the formation of para-xylylene (C8H8), while the data for ortho-xylyl agree much better with the isomer benzocyclobutene as the dominant molecular fragment rather than with ortho-xylylene. In computations we identified a new pathway for the reaction ortho-xylyl → benzocyclobutene + H with a barrier of 3.39 eV (27 340 cm-1), which becomes accessible at the employed excitation energy. It proceeds via a combination of scissoring and rotational motion of the -CH2 and -CH3 groups. However, the observed rate constants, measured by delaying the excitation and ionization lasers with respect to each other, are significantly faster than the computed ones, indicating intrinsic non-RRKM behaviour. A comparatively high value of around 30% of the excess energy is released as translation of the H-atom photofragment.
1. Relative Contributions of Agricultural Drift, Para-Occupational ... EPA Pesticide Factsheets Background: Increased pesticide concentrations in house dust in agricultural areas have been attributed to several exposure pathways, including agricultural drift, para-occupational exposure, and residential use. Objective: To guide future exposure assessment efforts, we quantified the relative contributions of these pathways using meta-regression models of published data on dust pesticide concentrations. Methods: From studies in North American agricultural areas published from 1995-2015, we abstracted dust pesticide concentrations reported as summary statistics (e.g., geometric means (GM)). We analyzed these data using mixed-effects meta-regression models that weighted each summary statistic by its inverse variance (a minimal model-fitting sketch is given after the next entry). Dependent variables were either the log-transformed GM (drift) or the log-transformed ratio of GMs from two groups (para-occupational, residential use). Results: For the drift pathway, predicted GMs decreased sharply and nonlinearly, with GMs 64% lower in homes 250 m versus 23 m from fields (the inter-quartile range of the published data), based on 52 statistics from 7 studies. For the para-occupational pathway, GMs were 2.3 times higher (95% confidence interval [CI]: 1.5-3.3; 15 statistics, 5 studies) in homes of farmers who applied pesticides more versus less recently or frequently. For the residential use pathway, GMs were 1.3 (95% CI: 1.1-1.4) and 1.5 (95% CI: 1.2-1.9) times higher in treated versus untreated homes, when the probability that a pesticide was used for…
2. TELEMEDICINA: UN DESAFÍO PARA AMÉRICA LATINA PubMed Central Litewka, Sergio 2011-01-01 Telemedicine is a growing trend in the provision of medical services. Although the effectiveness of this practice has not been well established, it is likely that developing countries will share this new paradigm with developed ones. Supporters of telemedicine in Latin America maintain that it will be a useful tool for reducing disparities and improving health care accessibility. Although Latin America might become a place for research and investigation of these procedures, it is not clear how telemedicine could contribute to improving accessibility for disadvantaged populations, or coexist with chronically ill-funded public healthcare systems. PMID:21625326
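A minimal sketch of the inverse-variance-weighted meta-regression used for the drift pathway in the agricultural-exposure entry above (entry 1). The data frame below is entirely synthetic, and the metafor package is one standard R tool for such mixed-effects models, not necessarily the one the authors used:

```
# Mixed-effects meta-regression, each log-GM weighted by its inverse variance
library(metafor)
dat <- data.frame(
  logGM = c(1.8, 1.5, 1.2, 0.8, 0.5),       # log geometric means (synthetic)
  vi    = c(0.04, 0.05, 0.03, 0.06, 0.04),  # their sampling variances
  dist  = c(25, 50, 100, 175, 250)          # distance from home to field (m)
)
fit <- rma(yi = logGM, vi = vi, mods = ~ log(dist), data = dat)
summary(fit)
# predicted ratio of GMs between homes 250 m and 23 m from fields:
exp(coef(fit)["log(dist)"] * (log(250) - log(23)))
```

The exponentiated slope contrast is exactly the kind of "GMs x% lower at 250 m versus 23 m" statement reported in the abstract's results.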
4. Determination of the Ratio of Ortho Hydrogen and Para Hydrogen Zhou, D.; Ihas, G. G.; Sullivan, N. S. 2003-03-01 The two different quantum states of hydrogen, ortho-hydrogen and para-hydrogen, possess different properties. The accurate determination of the ortho/para ratio in the gaseous, liquid and solid states is important both for research needs and for applications in cryogenic engineering, such as H2 production, transport and storage. NMR can determine the ratio accurately [1], but it is cumbersome and often not practical. Cryogenic applications need a simple and reliable method. We report on the development of a thermal conductivity gauge employing a pure-metal thin film that serves as both heater and thermometer for the detection of ortho-para hydrogen ratios in the gaseous state. This ratio-meter has been tested and found to have a nearly pressure-independent voltage response over a broad pressure range at constant current. The thermal conductivity of hydrogen and nitrogen was measured and found to agree quantitatively with published data. The new development will be presented. (Thanks to Larry Phelps, Bill Malphurs, Stephen Wood, and David Hernandez; supported by NASA Contract NAG3-2750.) [1] D. Zhou, C. M. Edwards, and N. S. Sullivan, Phys. Rev. Lett. 62, 1528 (1989).
5. Surgical treatment of para-oesophageal hiatal hernia. PubMed Central Rogers, M. L.; Duffy, J. P.; Beggs, F. D.; Salama, F. D.; Knowles, K. R.; Morgan, W. E. 2001-01-01 The development of laparoscopic antireflux surgery has stimulated interest in laparoscopic para-oesophageal hiatal hernia repair. This review of our practice over 10 years using a standard transthoracic technique was undertaken to establish the safety and effectiveness of the open technique, to allow comparison. Sixty patients with para-oesophageal hiatal hernia were operated on between 1989 and 1999. There were 38 women and 22 men with a median age of 69.5 years. There were 47 elective and 13 emergency presentations. The operation consisted of a left thoracotomy, hernia reduction and crural repair. An antireflux procedure was added in selected patients. There were no deaths among the elective cases and one among the emergency cases. Median follow-up time was 19 months. There was one recurrence (1.5%). Seven patients (12%) required a single oesophagoscopy and dilatation up to 2 years postoperatively but have been asymptomatic since. Two patients (3%) developed symptomatic reflux, which has been well controlled on proton-pump inhibitors. Transthoracic para-oesophageal hernia repair can be safely performed with minimal recurrence. PMID:11777134
Although antioxidant activity in the 1,1-diphenyl-2-picrylhydrazyl (DPPH) assay did not differ significantly between the Soxhlet (8.90 ± 1.15%) and reflux (9.02 ± 0.71%) n-hexane extracts, Soxhlet extraction with n-hexane scavenged the 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical significantly (p < 0.05) more potently, with a trolox equivalent antioxidant capacity (TEAC) of 66.54 ± 6.88 mg/100 g oil. This extract was not cytotoxic towards normal human fibroblast cells. In addition, oleic acid and palmitic acid were present at higher contents than in para rubber seed cultivated in Malaysia, although linoleic and stearic acid contents did not differ. This bright yellow extract was further evaluated on other physicochemical characteristics. The specific gravity, refractive index, iodine value, peroxide value and saponification value were in the range of commercial vegetable oils used as cosmetic raw materials. Para rubber seed oil is therefore highlighted as a promising ecological ingredient for cosmetics. Transforming the seed, a by-product of this important industrial crop of Thailand, into cosmetics is accordingly encouraged.

7. [Genetic analysis of an individual with para-Bombay phenotype]
PubMed
Lin, Jia-jin; Huang, Ying; Zhu, Sui-yong
2013-04-01
To study the genetic characteristics of an individual with para-Bombay phenotype and her family members, ABO and H antigens were detected with routine serological techniques. The entire coding region of the FUT1 gene was amplified by polymerase chain reaction (PCR). PCR products were purified by enzyme digestion and directly sequenced. The RBCs of the proband did not agglutinate with H antibody; the proband therefore has a para-Bombay phenotype (Bmh). Direct sequencing indicated that the FUT1 sequence of the proband contained a homozygous 547-552delAG deletion and a heterozygous 814A>G mutation, giving rise to two haplotypes: 547-552delAG alone, and 547-552delAG with 814A>G. The ABO blood types of the proband's mother and sisters were all B. Sequencing of the FUT1 gene found heterozygous 547-552delAG and 814A>G mutations in the mother and elder sister, and a heterozygous 547-552delAG mutation in the younger sister. The FUT1 547-552delAG and 814A>G mutations of the proband were inherited from her mother. A complex mutation of the FUT1 gene consisting of 547-552delAG and 814A>G has been identified in an individual with para-Bombay phenotype.

8. Providing Meaningful Learning for Students of the Sixth Grade of Middle School: a Study on the Moon Phases (Portuguese title: Propiciando Aprendizagem Significativa Para Alunos do Sexto Ano do Ensino Fundamental: um Estudo sobre as Fases da Lua; Spanish title: Propiciando el Aprendizaje Significativo Para Alumnos del Sexto Nivel de la Educación General Básica: un Estudio sobre Las Fases de la Luna)
Darroz, Luiz Marcelo; Samudio Pérez, Carlos Ariel; da Rosa, Cleci Werner; Heineck, Renato
2012-07-01
We relate in this article a didactic experience of studying the Moon phases with a group of middle-school students from a private school in the municipality of Passo Fundo, RS. Based on David Ausubel's Meaningful Learning Theory, we sought to develop a proposal following a didactic model which simulates the phases of the Moon, building on the students' previous conceptions. The signs of learning were evidenced by means of memory registries of the activity.
From the obtained results we believe that the proposal achieved its goals, since the students were able to identify, differentiate and transfer the phenomenon of the Moon phases to new contexts. Thus, we conclude that a methodology focused on content that is meaningful to the students is fundamental to the construction and genuine grasp of what is being learned.

9. The Regulation of para-Nitrophenol Degradation in Pseudomonas putida DLL-E4
PubMed / PubMed Central
Chen, Qiongzhen; Tu, Hui; Luo, Xue; Zhang, Biying; Huang, Fei; Li, Zhoukun; Wang, Jue; Shen, Wenjing; Wu, Jiale; Cui, Zhongli
2016-01-01

11. Studies on barium bis-para-nitrophenolate para-nitrophenol tetrahydrate NLO single crystal by unidirectional growth method
Uthrakumar, R.; Vesta, C.; Jose, M.; Sugandhi, K.; Krishnan, S.; Jerome Das, S.
2010-08-01
The unidirectional crystal growth technique has been employed for the bulk growth of semi-organic nonlinear optical barium bis-para-nitrophenolate para-nitrophenol tetrahydrate single crystals along the (2 2 0) direction, with high solute-crystal conversion efficiency. The grown crystal was subjected to single-crystal and powder XRD analyses to confirm its identity. Optical absorption studies reveal very high transmittance across the entire visible and near-IR region. The presence of the various functional groups is confirmed by FTIR analysis. Low dielectric loss in the high-frequency region is indicative of enhanced optical quality with fewer defects. Photoconductivity measurements carried out on the grown crystal reveal its negative photoconducting nature.

12. ORTHO-PARA SELECTION RULES IN THE GAS-PHASE CHEMISTRY OF INTERSTELLAR AMMONIA
SciTech Connect
Faure, A.; Hily-Blant, P.; Le Gal, R.; Rist, C.
2013-06-10
The ortho-para chemistry of ammonia in the cold interstellar medium is investigated using a gas-phase chemical network. Branching ratios for the primary reaction chain involved in the formation and destruction of ortho- and para-NH3 were derived using angular momentum rules based on the conservation of nuclear spin.
We show that the "anomalous" ortho-to-para ratio of ammonia (≈0.7) observed in various interstellar regions is in fact consistent with nuclear-spin selection rules in a para-enriched H2 gas. This ratio is found to be independent of temperature in the range 5-30 K. We also predict an ortho-to-para ratio of ≈2.3 for NH2. We conclude that a low ortho-to-para ratio of H2 naturally drives the ortho-to-para ratios of the nitrogen hydrides below their statistical values.

13. Amazon Land Wars in the South of Para
NASA Technical Reports Server (NTRS)
Simmons, Cynthia S.; Walker, Robert T.; Arima, Eugenio Y.; Aldrich, Stephen P.; Caldas, Marcellus M.
2007-01-01
The South of Para, located in the heart of the Brazilian Amazon, has become notorious for violent land struggle. Although land conflict has a long history in Brazil, and today impacts many parts of the country, violence is most severe and persistent here. The purpose of this article is to examine why. Specifically, we consider how a particular Amazonian place, the so-called South of Para, has come to be known as Brazil's most dangerous badland. We begin by considering the predominant literature, which attributes land conflict to the frontier expansion process, with intensified struggle emerging in the face of rising property values and demand for private property associated with capitalist development. From this discussion, we distill a concept of the frontier based on notions of property-rights evolution and locational rents. We then empirically test the persistence of place-based violence in the region, and assess the frontier movement through an analysis of transportation costs. The findings from the analyses indicate that the prevalent theorization of frontier violence in Amazonia does little to explain its persistent and pervasive nature in the South of Para. To fill this gap in understanding, we develop an explanation based on the geographic conception of place, and we use contentious politics theory heuristically to elucidate the ways in which general processes interact with place-specific history to engender a landscape of violence. In so doing, we focus on environmental, cognitive, and relational mechanisms (and implicated structures), and attempt to deploy them in an explanatory framework that allows direct observation of the accumulating layers of the region's tragic history. We end by placing our discussion within a political-ecological context, and consider the implications of the Amazon Land War for the environment.
15. INTERVENCIÓN EDUCATIVA EFECTIVA EN VIH PARA MUJERES [Effective educational intervention on HIV for women]
PubMed Central
Miner, Sarah; Poupin, Lauren; Bernales, Margarita; Ferrer, Lilian; Cianelli, Rosina
2016-01-01

16. RVA: A Plugin for ParaView 3.14
SciTech Connect
2015-09-04
RVA is a plugin developed for the 64-bit Windows version of the ParaView 3.14 visualization package. RVA is designed to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data-mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including sophisticated connectivity analysis and cross sections through simulation results between …

17. Para-hydrogen narrow filament evaporation at low temperature
Elizarova, T. G.; Gogolin, A. A.; Montero, S.
2012-11-01
Undercooling of liquid para-hydrogen (pH2) below its equilibrium freezing point (13.8 K) has been demonstrated recently in flowing micro-filaments evaporating in a low-density background gas [M. Kühnel et al., Phys. Rev. Lett. 106, 245301 (2011)]. A hydrodynamical model accounting for this process is reported here. Analytical expressions for the local temperature T of a filament, averaged over its cross section, are obtained as a function of the distance z to the nozzle. Comparison with the experiment is shown. It is also shown that thermocapillary forces induce a parabolic velocity profile across the jet.

19. Biodegradation of Para Amino Acetanilide by Halomonas sp. TBZ3
PubMed Central
2015-01-01
Background: Aromatic compounds are known as a group of highly persistent environmental pollutants. Halomonas sp. TBZ3 was isolated from the highly salty Urmia Lake of Iran. In this study, characterization of a new Halomonas isolate called Halomonas sp. TBZ3 and its employment for biodegradation of para-amino acetanilide (PAA), an aromatic environmental pollutant, is described. Objectives: This study aimed to characterize the TBZ3 isolate and to elucidate its ability as a biodegradative agent that decomposes PAA. Materials and Methods: First, DNA-DNA hybridization between TBZ3, Halomonas denitrificans DSM18045T and Halomonas saccharevitans LMG 23976T was carried out. PAA biodegradation was assessed using spectrophotometry and confirmed by gas chromatography-mass spectrometry (GC-MS). Parameters affecting the biodegradation of PAA were optimized by response surface methodology (RSM). Results: The DNA-DNA hybridization experiments between isolate TBZ3, H. denitrificans and H. saccharevitans revealed relatedness levels of 57% and 65%, respectively. According to the GC-MS results, TBZ3 degrades PAA to benzene, hexyl butanoate, 3-methyl-1-heptanol and hexyl hexanoate. A temperature of 32.92°C, pH 6.76, and salinity of 14% are the optimum conditions for biodegradation, with a confidence level of 95% (α = 0.05). Conclusions: According to our results, Halomonas sp. TBZ3 could be considered a biological agent for bioremediation of PAA and possibly other similar aromatic compounds. PMID:26495103
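For readers unfamiliar with RSM, a minimal R sketch of this kind of optimization follows; the design points, response surface, and noise level are entirely hypothetical, not the study's data, and only the location of the optimum is chosen to echo the reported conditions.

```r
# Hedged sketch: second-order response-surface fit and optimum lookup
# over a hypothetical full-factorial design around the reported optimum.
set.seed(1)
d <- expand.grid(temp = c(25, 33, 41), pH = c(5.5, 6.8, 8.1), salt = c(7, 14, 21))
d$degr <- with(d, 90 - 0.05 * (temp - 33)^2 - 8 * (pH - 6.8)^2 -
                 0.03 * (salt - 14)^2 + rnorm(nrow(d)))
fit <- lm(degr ~ poly(temp, 2) + poly(pH, 2) + poly(salt, 2), data = d)
d[which.max(predict(fit)), c("temp", "pH", "salt")]  # near (33, 6.8, 14)
```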
20. CYP96T1 of Narcissus sp. aff. pseudonarcissus Catalyzes Formation of the Para-Para' C-C Phenol Couple in the Amaryllidaceae Alkaloids
PubMed / PubMed Central
Kilgore, Matthew B.; Augustin, Megan M.; May, Gregory D.; Crow, John A.; Kutchan, Toni M.
2016-01-01
The Amaryllidaceae alkaloids are a family of amino acid derived alkaloids with many biological activities; examples include haemanthamine, haemanthidine, galanthamine, lycorine, and maritidine. Central to the biosynthesis of the majority of these alkaloids is a C-C phenol-coupling reaction that can have para-para', para-ortho', or ortho-para' regiospecificity. Through comparative transcriptomics of Narcissus sp. aff. pseudonarcissus, Galanthus sp., and Galanthus elwesii we have identified a para-para' C-C phenol-coupling cytochrome P450, CYP96T1, capable of forming the products (10bR,4aS)-noroxomaritidine and (10bS,4aR)-noroxomaritidine from 4'-O-methylnorbelladine. CYP96T1 was also shown to catalyze formation of the para-ortho' phenol-coupled product, N-demethylnarwedine, as less than 1% of the total product. CYP96T1 co-expresses with the previously characterized norbelladine 4'-O-methyltransferase. The discovery of CYP96T1 is of special interest because it catalyzes the first major branch in Amaryllidaceae alkaloid biosynthesis. CYP96T1 is also the first phenol-coupling enzyme characterized from a monocot. PMID:26941773
2. Oropouche Virus. III. Entomological Observations from Three Epidemics in Para, Brazil, 1975
DTIC Science & Technology
1979-10-06
Keywords: Oropouche virus; Culicoides paraensis; epidemics; Brazil; Para. Urban epidemics of Oropouche (ORO) fever in three municipalities in Para, Brazil were studied in 1975. Culicoides paraensis Goeldi were collected … (American Journal of Tropical Medicine and Hygiene; Roberts, Donald R.; Hoch, Alfred L.)

3. Crystal growth and DFT insight on sodium para-nitrophenolate para-nitrophenol dihydrate single crystal for NLO applications
Selvakumar, S.; Boobalan, Maria Susai; Anthuvan Babu, S.; Ramalingam, S.; Leo Rajesh, A.
2016-12-01
Single crystals of sodium para-nitrophenolate para-nitrophenol dihydrate (SPPD) were grown by the slow-evaporation technique, and the structure was studied by FT-IR, FT-Raman and single-crystal X-ray diffraction. The optical and electrical properties were characterized by UV-Vis spectroscopy and dielectric studies, respectively. SPPD was thermally stable up to 128 °C, as determined from TG-DTA curves. Using the Kurtz-Perry powder method, the second-harmonic generation efficiency was found to be five times that of KDP. The third-order nonlinear response was studied using the Z-scan technique with a He-Ne laser (632.8 nm), and NLO parameters such as the intensity-dependent refractive index, nonlinear absorption coefficient and third-order susceptibility were estimated. The molecular geometry from the X-ray experiment in the ground state has been compared with density functional theory (DFT) calculations using an appropriate basis set. The first-order hyperpolarizability was also calculated using DFT approaches. The stability of the molecule arising from hyperconjugative interactions, which leads to its nonlinear optical activity, and its charge delocalization were analyzed using the natural bond orbital (NBO) technique, with NBO analysis of the optimized ground-state geometries used to study donor-acceptor interactions. The HOMO-LUMO energy gap suggests the possibility of charge transfer within the molecule.

4. Ortho- and para-hydrogen in dense clouds, protoplanets, and planetary atmospheres
NASA Technical Reports Server (NTRS)
Decampli, W. M.; Cameron, A. G. W.; Bodenheimer, P.; Black, D. C.
1978-01-01
If ortho- and para-hydrogen achieve a thermal ratio on dynamical time scales in a molecular hydrogen cloud, then the specific heat is high enough in the temperature range 35-70 K to possibly induce hydrodynamic collapse. The ortho-para ratio in many interstellar cloud fragments is expected to meet this condition. The same may have been true for the primitive solar nebula. Detailed hydrodynamic and hydrostatic calculations are presented that show the effects of the assumed ortho-para ratio on the evolution of Jupiter during its protoplanetary phase. Some possible consequences of a thermalized ortho-para ratio in the atmospheres of the giant planets are also discussed.
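The equilibrium ortho:para ratio invoked in records 4 and 12 follows directly from the rotational partition functions. A quick illustrative computation in R (assuming the usual ground-state rotational constant of H2, B ≈ 85.3 K in temperature units, and nuclear-spin weights 3:1):

```r
# Equilibrium ortho:para ratio of H2 vs. temperature (illustrative sketch).
# Odd-J levels are ortho (spin weight 3), even-J are para; E_J = B*J*(J+1).
opr_h2 <- function(temp, B = 85.3, Jmax = 20) {
  J <- 0:Jmax
  w <- (2 * J + 1) * exp(-B * J * (J + 1) / temp)
  3 * sum(w[J %% 2 == 1]) / sum(w[J %% 2 == 0])
}
sapply(c(20, 35, 70, 300), opr_h2)
# ~0 at 20 K, rising through the 35-70 K window, ~3 (statistical) at 300 K
```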
5. Nuevos sistemas de frecuencia intermedia para el IAR [New intermediate-frequency systems for the IAR]
Olalde, J. C.; Perilli, D.; Larrarte, J. J.

6. Para-phenylenediamine allergy: current perspectives on diagnosis and management
PubMed Central
Mukkanna, Krishna Sumanth; Stone, Natalie M; Ingram, John R
2017-01-01
Para-phenylenediamine (PPD) is the most common and best-known component of hair dyes. Oxidative hair dyes and dark henna temporary tattoos contain PPD. Individuals may be sensitized to PPD by temporary henna tattooing in addition to hair dyeing. PPD allergy can cause severe reactions and may result in complications. In recent years, the frequency of positive patch-test reactions to PPD has been increasing. Cross-sensitization to other contact allergens may occur, in particular to other hair-dye components. Hairdressers are at high risk of PPD allergy and require counseling on techniques to minimize exposure and on protective measures while handling hair dye. We focus this review on current perspectives in the diagnosis and management of PPD allergy. PMID:28176912

7. Pneumatic protection applied to an airbag for para-gliders
1998-02-01
We present a theory of pneumatic protection based on the laws of thermodynamics, elasticity and fluid mechanics. A general pneumatic protection system is made up of several communicating compartments; the differences in pressure between the compartments generate a transfer of mass and energy between them. This transfer offers interesting possibilities for improving the performance of the system. An example of this type of protection in aerial sport is the airbag for para-gliders, which is used in this paper to illustrate the theory. As the pressure in the airbag depends uniquely on its volume, the geometric model in the theory can be simplified. Experiments carried out with crash-test dummies equipped with sensors have confirmed the theoretical predictions.
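To make the volume-pressure statement concrete, here is a toy single-compartment calculation in R; the adiabatic assumption, the numbers, and the function name are ours for illustration and are not the paper's multi-compartment model (which also exchanges mass and energy between chambers).

```r
# Toy model: pressure in a sealed compartment compressed adiabatically,
# p * V^gamma = p0 * V0^gamma (illustrative assumption only).
p_airbag <- function(V, V0 = 50, p0 = 101.3, gamma = 1.4) {
  p0 * (V0 / V)^gamma  # pressure in kPa, volumes in litres
}
p_airbag(c(50, 40, 30, 20))  # pressure rises steeply as the bag is crushed
```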
8. Substrate-mediated smooth growth of para-sexiphenyl on graphene
Poelsema, Bene; Hlawacek, Gregor; Khokhar, Fawad S.; van Gastel, Raoul; Teichert, Christian
2010-03-01
We report on the layer-by-layer growth of lying para-sexiphenyl (6P) molecules on metal-supported graphene flakes. The formation of multilayers has been monitored in situ by means of LEEM. μ-LEED has been used to reveal a bulk-like structure in the submonolayer, monolayer and multilayer regimes. Graphene is a flexible, highly conductive and transparent electrode material, making it a promising technological substrate for organic semiconductors. 6P is a blue-light-emitting molecule with a high charge-carrier mobility. The combination of an established deposition technique with the unique properties of organic semiconductors and graphene is an enabler for future flexible and cost-efficient devices based on small conjugated molecules.

9. Para-Hydrogen-Enhanced Gas-Phase Magnetic Resonance Imaging
SciTech Connect
Bouchard, Louis-S.; Kovtunov, Kirill V.; Burt, Scott R.; Anwar, M. Sabieh; Koptyug, Igor V.; Sagdeev, Renad Z.; Pines, Alexander
2007-02-23
Herein, we demonstrate magnetic resonance imaging (MRI) in the gas phase using para-hydrogen (p-H2)-induced polarization. A reactant mixture of H2 enriched in the para spin state and propylene gas is flowed through a reactor cell containing a heterogenized catalyst, Wilkinson's catalyst immobilized on modified silica gel. The hydrogenation product, propane gas, is transferred to the NMR magnet and is spin-polarized as a result of the ALTADENA (adiabatic longitudinal transport and dissociation engenders net alignment) effect. A polarization enhancement factor of 300 relative to thermally polarized gas was observed in 1D 1H NMR spectra. Enhancement was also evident in the magnetic resonance images. This is the first demonstration of imaging a hyperpolarized gaseous product formed in a hydrogenation reaction catalyzed by a supported catalyst. This result may lead to several important applications, including flow through porous materials, gas-phase reaction kinetics and adsorption studies, and MRI in low fields, all using catalyst-free polarized fluids.

10. Electron impact ionization dynamics of para-benzoquinone
Jones, D. B.; Ali, E.; Ning, C. G.; Colgan, J.; Ingólfsson, O.; Madison, D. H.; Brunger, M. J.
2016-10-01
Triple differential cross sections (TDCSs) for the electron-impact ionization of the unresolved combination of the 4 highest occupied molecular orbitals (4b3g, 5b2u, 1b1g, and 2b3u) of para-benzoquinone are reported. These were obtained in an asymmetric coplanar geometry with the scattered electron being observed at the angles -7.5°, -10.0°, -12.5° and -15.0°. The experimental cross sections are compared to theoretical calculations performed at the molecular 3-body distorted-wave level, with a marginal level of agreement between them being found. The character of the ionized orbitals, through calculated momentum profiles, provides some qualitative interpretation of the measured angular distributions of the TDCS.

11. ParaText: scalable text analysis and visualization
SciTech Connect
Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.
2010-07-01
Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, and scholarly research. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections becomes essential. In this paper, we present the ParaText text analysis engine, a distributed-memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling, from raw data ingestion to application analysis.

12. Luttinger parameter of quasi-one-dimensional para-H2
Ferré, G.; Gordillo, M. C.; Boronat, J.
2017-02-01
We have studied the ground-state properties of para-hydrogen in one dimension and in quasi-one-dimensional configurations using the path-integral ground-state Monte Carlo method. This method produces zero-temperature exact results for a given interaction and geometry. The quasi-one-dimensional setup has been implemented in two forms: the inner channel inside a carbon nanotube coated with H2, and a harmonic confinement of variable strength. Our main result is the dependence of the Luttinger parameter on the density within the stable regime. Going from one dimension to quasi-one dimension, keeping the linear density constant, produces a systematic increase of the Luttinger parameter. This increase is, however, not enough to reach the superfluid regime, and the system always remains in the quasicrystal regime, according to Luttinger liquid theory.

13. Sponges, Tubules and Modulated Phases of Para-Antinematic Membranes
Fournier, J. B.; Galatola, P.
1997-10-01
We theoretically analyze the behavior of membranes presenting a nematic susceptibility, induced by the presence of anisotropic phospholipids having a quadrupolar nematic symmetry.
This kind of anisotropic phospholipid is either found naturally in some biological membranes, or can be chemically tailored by linking pairs of single surfactants at the level of their polar heads, giving rise to so-called "gemini" surfactants. We predict that such membranes can acquire a non-zero paranematic order induced by the membrane curvature, which in turn produces curvature instabilities. We call the resulting paranematic order para-antinematic, since it is opposite on opposite sides of the membrane. We find phase transitions toward sponge (L3), tubule, or modulated "egg-carton" phases.

15. The Ratio of Ortho- to Para-H2 in Photodissociation Regions
NASA Technical Reports Server (NTRS)
Sternberg, Amiel; Neufeld, David A.
1999-01-01
We discuss the ratio of ortho- to para-H2 in photodissociation regions (PDRs). We draw attention to an apparent confusion in the literature between the ortho-to-para ratio of molecules in FUV-pumped vibrationally excited states and the total H2 ortho-to-para abundance ratio. These ratios are not the same, because the process of FUV pumping of fluorescent H2 emission in PDRs occurs via optically thick absorption lines. Thus gas with an equilibrium ratio of ortho- to para-H2 equal to 3 will yield FUV-pumped vibrationally excited ortho-to-para ratios smaller than 3, because the ortho-H2 pumping rates are preferentially reduced by optical-depth effects. Indeed, if the ortho and para pumping lines are on the "square root" part of the curve of growth, then the expected ratio of ortho and para vibrational line strengths is 3^(1/2) ≈ 1.7, close to the typically observed value. Thus, contrary to what has sometimes been stated in the literature, most previous measurements of the ratio of ortho- to para-H2 in vibrationally excited states are entirely consistent with a total ortho-to-para ratio of 3, the equilibrium value for temperatures greater than 200 K. We present an analysis and several detailed models that illustrate the relationship between the total ratios of ortho- to para-H2 and the vibrationally excited ortho-to-para ratios in PDRs. Recent Infrared Space Observatory measurements of pure rotational and vibrational H2 emissions from the PDR in the star-forming region S140 provide strong observational support for our conclusions.
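The square-root-of-three argument can be written out in one line. On the "square root" portion of the curve of growth the equivalent width of a pumping line scales as the square root of the column density, so (a sketch of the scaling only, not the paper's full radiative-transfer model):

```latex
% Ortho columns are three times para columns when the total OPR is 3;
% on the square-root part of the curve of growth, W \propto \sqrt{N}.
\[
\frac{R^{\mathrm{ortho}}_{\mathrm{pump}}}{R^{\mathrm{para}}_{\mathrm{pump}}}
\propto \frac{\sqrt{N_{\mathrm{ortho}}}}{\sqrt{N_{\mathrm{para}}}}
= \sqrt{\frac{3\,N_{\mathrm{para}}}{N_{\mathrm{para}}}}
= \sqrt{3} \approx 1.7 .
\]
```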
17. Ortho-para mixing hyperfine interaction in the H2O+ ion and nuclear spin equilibration
PubMed
Tanaka, Keiichi; Harada, Kensuke; Oka, Takeshi
2013-10-03
The ortho-to-para conversion of the water ion, H2O+, due to the interaction between the magnetic moments of the unpaired electron and the protons, has been theoretically studied to calculate the spontaneous-emission lifetime between the ortho and para levels. The electron spin-nuclear spin interaction term, T_ab(S_a I_b + S_b I_a), mixes ortho (I = 1) and para (I = 0) levels to cause the "forbidden" ortho-to-para |ΔI| = 1 transition. The mixing term, with T_ab = 72.0 MHz, is 4 orders of magnitude higher for H2O+ than for its neutral counterpart H2O, where the magnetic field interacting with the proton spins is due to molecular rotation rather than the free electron. The resultant 10^8 increase of the ortho-to-para conversion rate possibly makes the effect of conversion in H2O+ measurable in laboratories, and possibly explains the anomalous ortho-to-para ratio recently reported by the Herschel heterodyne instrument for the far-infrared (HIFI) observation.
Results of our calculations show that the ortho-para mixings involving near-degenerate ortho and para levels are high (∼10^-3), but they tend to occur at high energy levels, ∼300 K. Because of the rapid spontaneous emission, such high levels are not populated in diffuse clouds unless the radiative temperature of the environment is very high. The low-lying 1_01 (para) and 1_11 (ortho) levels of H2O+ are mixed by ∼10^-4, making the spontaneous-emission lifetime for the para 1_01 → ortho 0_00 transition 520 years or 5200 years, depending on the F value of the hyperfine structure. Thus the ortho-para conversion due to the unpaired electron is not likely to seriously affect the thermalization of interstellar H2O+ unless either the radiative temperature is very high or the number density of the cloud is very low.

18. USDA-ARS's Scientific Manuscript database
This article reports, for the first time from Peru, a species of the genus Nielsonia Young, 1977, from material collected in the Department of Tumbes. The genus had previously been reported from Ecuador, the only record for South America, and from Central America. The only female specimen found …

19. Guía para la evaluación del riesgo de los polinizadores [Guide for pollinator risk assessment]
EPA Pesticide Factsheets
EPA's pollinator risk assessment guidance is part of a strategy for assessing the risks that pesticides pose to bees, in order to improve the protection of pollinators.

20. Rate of para-aortic lymph node micrometastasis in patients with locally advanced cervical cancer
PubMed Central
Zand, Behrouz; Euscher, Elizabeth D.; Soliman, Pamela T.; Schmeler, Kathleen M.; Coleman, Robert L.; Frumovitz, Michael; Jhingran, Anuja; Ramondetta, Lois M.; Ramirez, Pedro T.
2014-01-01

1. The parA resolvase performs site-specific genomic excision in Arabidopsis
USDA-ARS's Scientific Manuscript database
We have designed a site-specific excision-detection system in Arabidopsis to study the in planta activity of the small serine recombinase ParA. Using a transient expression assay as well as stable transgenic plant lines, we show that the ParA recombinase is catalytically active and capable of perfo…

2. EPA Pesticide Factsheets
EPA has issued a proposal to revise the Certification of Pesticide Applicators rule. The rule will help keep our communities safe, safeguard the environment, and reduce risk for those who apply pesticides.

3. Overproduction and localization of Mycobacterium tuberculosis ParA and ParB proteins
PubMed Central
Maloney, Erin; Madiraju, Murty; Rajagopalan, Malini
2011-01-01
SUMMARY: The ParA and ParB family proteins are required for accurate partitioning of replicated chromosomes. The Mycobacterium tuberculosis genome contains parB, parA and two parA homologs, Rv1708 and Rv3213c. It is unknown whether parA and its homologs are functionally related. To understand the roles of the ParA and ParB proteins in the M. tuberculosis cell cycle, we have evaluated the consequences of their overproduction and visualized their localization patterns in M. smegmatis. We show that cells overproducing ParA, Rv1708, Rv3213c and ParB are filamentous and multinucleoidal, indicating defects in cell-cycle progression. Visualization of green fluorescent protein fusions of ParA and its homologues showed similar localization patterns, with foci at the poles, quarter-cell and midcell positions, and spiral-like structures, indicating that they are functionally related.
On the other hand, the ParB-GFP fusion protein localized only to the cell poles. The cyan and yellow fluorescent fusion proteins of ParA and ParB, respectively, colocalized at the cell poles, indicating that these proteins interact and possibly associate with the chromosomal origin of replication. Collectively, our results suggest that the M. tuberculosis Par proteins play important roles in cell-cycle progression. PMID:20006309

4. Protection of weanling hamsters from experimental infection with wild-type parainfluenza virus type 3 (para 3) by cold-adapted mutants of para 3
PubMed
Crookshanks-Newman, F. K.; Belshe, R. B.
1986-02-01
Parainfluenza virus type 3 (para 3) was adapted to replicate at 20 °C, a nonpermissive temperature for wild-type (wt) para 3. Serial passage at 20 °C resulted in the generation of cold-adapted (ca) and temperature-sensitive (ts) mutants. These mutant viruses have been characterized both in vitro and in vivo [Belshe and Hissom (1982): Journal of Medical Virology 10:235-242; Crookshanks and Belshe (1984): Journal of Medical Virology 13:243-249]. We now report the evaluation of three mutants (clone 1150, passaged 12 times in the cold [cp12]; clone 1146, passaged 18 times in the cold [cp18]; and clone 1328, passaged 45 times in the cold [cp45]) for their ability to protect hamsters from infection by wild-type para 3. Ether-anesthetized male Syrian hamsters were intranasally vaccinated with either wt para 3 (clone 127) or one of the ca para 3 mutants, and on day 28 post-vaccination each animal was intranasally challenged with 10^5.0 PFU of wt para 3. On days 1, 2, 3, and 4 post-challenge, 4 to 13 hamsters from each group were sacrificed, and the quantity of para 3 in the nasal turbinates and lungs was determined. Wt virus induced protection from challenge. cp12, cp18, and cp45 reduced the peak titer of wt replication in the lungs by greater than 100-fold, tenfold, and tenfold, respectively. The duration of virus replication was also shortened by intranasal vaccination with the mutants. These data give evidence of an inverse relationship between the degree of protection induced by vaccination with cold-adapted mutants and the number of passages of the virus in the cold.

5. DYNA3D/ParaDyn Regression Test Suite Inventory
SciTech Connect
Lin, J. I.
2011-01-25
The following table constitutes an initial assessment of feature coverage across the regression test suite used for DYNA3D and ParaDyn. It documents the regression test suite at the time of production release 10.1 in September 2010. The columns of the table represent groupings of functionalities, e.g., material models. Each problem in the test suite is represented by a row in the table. All features exercised by the problem are denoted by a check mark in the corresponding column. The definition of "feature" has not been subdivided to its smallest unit of user input, e.g., algorithmic parameters specific to a particular type of contact surface. This represents a judgment to provide code developers and users a reasonable impression of feature coverage without expanding the width of the table by several multiples. All regression testing is run in parallel, typically with eight processors. Many tests are strictly regression tests, acting as a check that the codes continue to produce adequately repeatable results as development unfolds, compilers change and platforms are replaced. A subset of the tests represents true verification problems that have been checked against analytical or other benchmark solutions.
Users are welcome to submit documented problems for inclusion in the test suite, especially if they heavily exercise, and depend upon, features that are currently underrepresented.

6. Conformation of ionizable poly(para-phenylene ethynylene) in dilute solutions
DOE PAGES
Wijesinghe, Sidath; Maskey, Sabina; Perahia, Dvora; et al.
2015-11-03
The conformation of dinonyl poly(para-phenylene ethynylene)s (PPEs) with carboxylate side chains, equilibrated in solvents of different quality, is studied using molecular dynamics simulations. PPEs are of interest because of their tunable electro-optical properties, chemical diversity, and functionality, which are essential in a wide range of applications. The polymer conformation determines the conjugation length and assembly mode, and affects the electro-optical properties that are critical in their current and potential uses. The current study investigates the effect of the carboxylate fraction on the PPE side chains on the conformation of chains in the dilute limit, in solvents of different quality. The dinonyl PPE chains are modeled atomistically, while the solvents are modeled both implicitly and explicitly. Dinonyl PPEs maintained a stretched-out conformation up to a carboxylate fraction f of 0.7 in all solvents studied. The nonyl side chains are extended and oriented away from the PPE backbone in toluene and in an implicit good solvent, whereas in water and in an implicit poor solvent the nonyl side chains collapse towards the PPE backbone. Rotation around the aromatic ring is fast, and no long-range correlations are seen within the backbone.

7. Evidence for para dechlorination of polychlorobiphenyls by methanogenic bacteria
SciTech Connect
Ye, D.; Quensen, J. F.; Tiedje, J. M.
1995-06-01
When microorganisms eluted from upper Hudson River sediment were cultured without any substrate except polychlorobiphenyl (PCB)-free Hudson River sediment, methane formation was the terminal step of the anaerobic food chain. In sediments containing Aroclor 1242, addition of eubacterium-inhibiting antibiotics, which should have directly inhibited fermentative bacteria and thereby indirectly inhibited methanogens, resulted in no dechlorination activity or methane production. However, when substrates for methanogenic bacteria were provided along with the antibiotics (to free the methanogens from dependence on eubacteria), concomitant methane production and dechlorination of PCBs were observed. The dechlorination of Aroclor 1242 was from the para positions, a pattern distinctly different from, and more limited than, the pattern observed with untreated or pasteurized inocula. Both methane production and dechlorination in cultures amended with antibiotics plus methanogenic substrates were inhibited by 2-bromoethanesulfonic acid. These results suggest that the methanogenic bacteria are among the physiological groups capable of anaerobic dechlorination of PCBs, but that the dechlorination observed with methanogenic bacteria is less extensive than that observed with more complex anaerobic consortia. 27 refs., 5 figs., 1 tab.

8. DYNA3D/ParaDyn Regression Test Suite Inventory
SciTech Connect
Lin, Jerry I.
2016-09-01
The following table constitutes an initial assessment of feature coverage across the regression test suite used for DYNA3D and ParaDyn. It documents the regression test suite at the time of preliminary release 16.1 in September 2016. The columns of the table represent groupings of functionalities, e.g., material models.
Each problem in the test suite is represented by a row in the table. All features exercised by the problem are denoted by a check mark (√) in the corresponding column. The definition of "feature" has not been subdivided to its smallest unit of user input, e.g., algorithmic parameters specific to a particular type of contact surface. This represents a judgment to provide code developers and users a reasonable impression of feature coverage without expanding the width of the table by several multiples. All regression testing is run in parallel, typically with eight processors, except for problems involving features only available in serial mode. Many tests are strictly regression tests, acting as a check that the codes continue to produce adequately repeatable results as development unfolds, compilers change and platforms are replaced. A subset of the tests represents true verification problems that have been checked against analytical or other benchmark solutions. Users are welcome to submit documented problems for inclusion in the test suite, especially if they heavily exercise, and depend upon, features that are currently underrepresented.

9. Para-hydrogen induced polarization in heterogeneous hydrogenation reactions
SciTech Connect
Koptyug, Igor V.; Kovtunov, Kirill; Burt, Scott R.; Anwar, M. Sabieh; Hilty, Christian; Han, Song-I; Pines, Alexander; Sagdeev, Renad Z.
2007-01-31
We demonstrate the creation and observation of para-hydrogen-induced polarization in heterogeneous hydrogenation reactions. Wilkinson's catalyst, RhCl(PPh3)3, supported on either modified silica gel or a polymer, is shown to hydrogenate styrene into ethylbenzene and to produce enhanced spin polarizations, observed through NMR, when the reaction was performed with H2 gas enriched in the para spin isomer. Furthermore, gas-phase para-hydrogenation of propylene to propane with two catalysts, Wilkinson's catalyst supported on modified silica gel and Rh(cod)(sulfos) (cod = cycloocta-1,5-diene; sulfos = -O3S(C6H4)CH2C(CH2PPh2)3) supported on silica gel, demonstrates heterogeneous catalytic conversion resulting in large spin polarizations. These experiments serve as a direct verification of the mechanism of heterogeneous hydrogenation reactions involving immobilized metal complexes, and can potentially be developed into a practical tool for producing catalyst-free fluids with highly polarized nuclear spins for a broad range of hyperpolarized NMR and MRI applications.

10. Para-nitrobenzyl esterases with enhanced activity in aqueous and nonaqueous media
DOE patents
Arnold, F. H.; Moore, J. C.
1999-05-25
A method is disclosed for isolating and identifying modified para-nitrobenzyl esterases which exhibit improved stability and/or esterase hydrolysis activity toward selected substrates, and under selected reaction conditions, relative to the unmodified para-nitrobenzyl esterase. The method involves preparing a library of modified para-nitrobenzyl esterase nucleic acid segments (genes) whose nucleotide sequences differ from the nucleic acid segment encoding the unmodified para-nitrobenzyl esterase. The library of modified para-nitrobenzyl nucleic acid segments is expressed to provide a plurality of modified enzymes. The clones expressing modified enzymes are then screened to identify enzymes with improved esterase activity, by measuring their ability to hydrolyze the selected substrate under the selected reaction conditions.
Specific modified para-nitrobenzyl esterases are disclosed which have improved stability and/or ester hydrolysis activity in aqueous or aqueous-organic media relative to the stability and/or ester hydrolysis activity of the unmodified, naturally occurring para-nitrobenzyl esterase. 43 figs.
14. Females prefer carotenoid-colored males as mates in the pentamorphic livebearing fish, Poecilia parae
PubMed
Bourne, Godfrey R.; Breden, Felix; Allen, Teresa C.
2003-09-01
The first results on female preference and the mating success of chosen males in a new model organism, the pentamorphic livebearing fish Poecilia parae, are presented. Poecilia parae is a relative of the guppy, P. reticulata, and is assumed to have similar reproductive behavior. We tested the hypothesis that P. parae females, like female guppies, prefer carotenoid-colored males as mates. Here we show that females spent significantly more time with males with carotenoid coloration, the red and yellow melanzona, while time spent with these two morphs did not differ. The preferred red and yellow males mated significantly more often with their choosing females than did the non-preferred blue and parae males. The few blue melanzona and parae males that mated did so without performing courtship displays. Some females mated with all phenotypes, including immaculata males, during open group trials. Female P. parae clearly preferred males with carotenoid coloration, thereby corroborating the hypothesis. Alternative male mating tactics by the blue melanzona, parae, and immaculata morphs, and promiscuous mating by females, also resembled features of the reproductive behavior exhibited by guppies.

15. Quantum fluctuations increase the self-diffusive motion of para-hydrogen in narrow carbon nanotubes
PubMed
Kowalczyk, Piotr; Gauden, Piotr A.; Terzyk, Artur P.; Furmaniak, Sylwester
2011-05-28
Quantum fluctuations significantly increase the self-diffusive motion of para-hydrogen adsorbed in narrow carbon nanotubes at 30 K compared with its classical counterpart. Rigorous Feynman path-integral calculations reveal that the self-diffusive motion of para-hydrogen in a narrow (6,6) carbon nanotube at 30 K and pore densities below ∼29 mmol cm(-3) is one order of magnitude faster than for the classical counterpart. We find that zero-point energy and tunneling significantly smooth out the free-energy landscape of para-hydrogen molecules adsorbed in a narrow (6,6) carbon nanotube. This promotes delocalization of the confined para-hydrogen at 30 K (i.e., population of non-classical paths due to quantum effects). By contrast, the self-diffusive motion of classical para-hydrogen molecules in a narrow (6,6) carbon nanotube at 30 K is very slow, because classical para-hydrogen molecules undergo highly correlated movement when their collision diameter approaches the carbon nanotube size (i.e., anomalous diffusion in quasi-one-dimensional pores). On the basis of the current results, we predict that narrow single-walled carbon nanotubes are promising nanoporous molecular sieves, able to separate para-hydrogen molecules from mixtures of classical particles at cryogenic temperatures.
Contextual Factors that Foster or Inhibit Para-Teacher Professional Development: The Case of an Indian, Non-Governmental Organization ERIC Educational Resources Information Center Raval, Harini; McKenney, Susan; Pieters, Jules 2012-01-01 The appointment of para-professionals to overcome skill shortages and/or make efficient use of expensive resources is well established in both developing and developed countries. The present research concerns para-teachers in India. The literature on para-teachers is dominated by training for special needs settings, largely in developed societies.… 17. Irradiation of Pelvic and Para-Aortic Nodes in Carcinoma of the Cervix. PubMed Rotman; Aziz; Eifel 1994-01-01 Extended-field irradiation offers a significant chance of cure for patients with para-aortic node metastases if pelvic disease can be controlled. Prognosis is best for patients with microscopic para-aortic disease or with a single enlarged node. Complications of extended-field irradiation can be minimized with careful radiation therapy technique that uses multiple fields and high-energy beams of 18 MV or greater and by avoiding transperitoneal surgical staging. Although the role of prophylactic para-aortic irradiation is still being defined, randomized trials suggest that extended fields do benefit some patients with locoregionally advanced disease. 18. Endovascular Treatment of a Ruptured Para-Anastomotic Aneurysm of the Abdominal Aorta SciTech Connect Sfyroeras, Giorgos S.; Lioupis, Christos Bessias, Nikolaos; Maras, Dimitris; Pomoni, Maria; Andrikopoulos, Vassilios 2008-07-15 We report a case of a ruptured para-anastomotic aortic aneurysm treated with implantation of a bifurcated stent-graft. A 72-year-old patient, who had undergone aortobifemoral bypass for aortoiliac occlusive disease 16 years ago, presented with a ruptured para-anastomotic aortic aneurysm. A bifurcated stent-graft was successfully deployed into the old bifurcated graft. This is the first report of a bifurcated stent-graft being placed through an 'end-to-side' anastomosed old aortobifemoral graft. Endovascular treatment of ruptured para-anastomotic aortic aneurysms can be accomplished successfully, avoiding open surgery which is associated with increased mortality and morbidity. 19. Measurement of the formaldehyde ortho to para ratio in three molecular clouds NASA Technical Reports Server (NTRS) Kahane, C.; Lucas, R.; Frerking, M. A.; Langer, W. D.; Encrenaz, P. 1984-01-01 Observations of ortho and para H2CO in two types of clouds, a warm cloud (Orion A) and two cold clouds (L183 and TMC1), are presented. The ortho to para ratio in Orion deduced from the H2(C-13)O data is about three, while that for TMC1 is about one and that for L183 is 1-2. The former value is in agreement with the value calculated from chemical models of ortho and para H2CO production. The values for the cold clouds are consistent with thermal equilibrium at a temperature slightly smaller than 10 K.
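The "statistical" value of three and the sub-10 K equilibrium reading both follow from a spin-statistics formula worth making explicit; a generic rigid-rotor sketch (an illustration only, not the chemical models cited in the record above):

$$\mathrm{OPR}(T)=\frac{3\sum_{K_a\,\mathrm{odd}}(2J+1)\,e^{-E_{JK_aK_c}/kT}}{\sum_{K_a\,\mathrm{even}}(2J+1)\,e^{-E_{JK_aK_c}/kT}}\;\longrightarrow\;3\quad(T\to\infty)$$

For H2CO the ortho levels (Ka odd) carry nuclear-spin weight 3 and the para levels (Ka even) weight 1, and the lowest ortho level lies roughly 15 K above the para ground state, so once the gas thermalizes near 10 K the ratio drops well below 3. That is the sense in which ratios of 1-2 indicate equilibrium slightly below 10 K, while a ratio near 3 reflects warm or formation-ratio conditions.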
20. All the three ParaHox genes are present in Nuttallochiton mirandus (Mollusca: Polyplacophora): evolutionary considerations. PubMed Barucca, Marco; Biscotti, Maria A; Olmo, Ettore; Canapa, Adriana 2006-03-15 The ParaHox gene cluster contains three homeobox genes, Gsx, Xlox and Cdx, and has been demonstrated to be an evolutionary sister of the Hox gene cluster. Among deuterostomes the three genes are found in the majority of taxa, whereas among protostomes they have so far been isolated only in the phylum Sipuncula. We report the partial sequences of all three ParaHox genes in the polyplacophoran Nuttallochiton mirandus, the first species of the phylum Mollusca in which all ParaHox genes have been isolated. This finding has phylogenetic implications for the phylum Mollusca and for its relationships with the other lophotrochozoan taxa. 2. Time is of the essence for ParaHox homeobox gene clustering PubMed Central 2013-01-01 ParaHox genes, and their evolutionary sisters the Hox genes, are integral to patterning the anterior-posterior axis of most animals. Like the Hox genes, ParaHox genes can be clustered and exhibit the phenomenon of colinearity - gene order within the cluster matching gene activation. Two new instances of ParaHox clustering provide the first examples of intact clusters outside chordates, with gene expression lending weight to the argument that temporal colinearity is the key to understanding clustering. See research articles: http://www.biomedcentral.com/1741-7007/11/68 and http://www.biomedcentral.com/1471-2148/13/129 PMID:23803337 3. Blood transfusion in the para-Bombay phenotype. PubMed 1990-08-01 The H-deficient phenotypes found in Chinese so far have all been secretors of soluble blood group substances in saliva. The corresponding isoagglutinin activity (e.g. anti-B in OB(Hm) persons) has been found to be weak in all cases. To determine the clinical significance of these weak isoagglutinins, 51Cr red cell survival tests were performed on three OB(Hm) individuals transfused with small volumes (4 ml) of groups B and O RBC. Rapid destruction of most of the RBC occurred whether or not the isoagglutinins of the OB(Hm) individuals were indirect antiglobulin test (IAGT) reactive. When a larger volume (54 ml packed RBC) of group B cells (weakly incompatible by IAGT) was transfused to another OB(Hm) individual with IAGT-active anti-HI, the survival of the transfused RBC was 93% at 24 h, with 30% of the RBC remaining in the circulation at 28 d, in contrast to the 76% expected if survival were normal. Therefore, when whole units of blood of normal ABO blood groups, compatible by IAGT, are transfused, the survival is expected to be almost normal. These weak isoagglutinins may not be very clinically significant, and we suggest that when para-Bombay blood is not available, compatibility testing for OA(Hm) persons should be performed with group A and group O packed RBC; for OB(Hm), with group B and group O packed RBC; and for OAB(Hm), with groups A, B, AB and O packed RBC. For cross-matching, the indirect antiglobulin test with a prewarmed technique should be used. 4. Management of large para-esophageal hiatal hernias. PubMed Collet, D; Luc, G; Chiche, L 2013-12-01 Para-esophageal hernias are relatively rare and typically occur in elderly patients.
The various presenting symptoms are non-specific and often occur in combination. These include symptoms of gastro-esophageal reflux (GERD) in 26 to 70% of cases, microcytic anemia in 17 to 47%, and respiratory symptoms in 9 to 59%. Respiratory symptoms are not completely resolved by surgical intervention. Acute complications such as gastric volvulus with incarceration or strangulation are rare (estimated incidence of 1.2% per patient per year), but gastric ischemia leading to perforation is the main cause of mortality. Only patients with symptomatic hernias should undergo surgery. Prophylactic repair to prevent acute incarceration should only be undertaken in patients younger than 75 in good condition; surgical indications must be discussed individually beyond this age. The laparoscopic approach is now generally accepted. Resection of the hernia sac is associated with a lower incidence of recurrence. Repair of the hiatus can be reinforced with prosthetic material (either synthetic or biologic), but the benefit of prosthetic repair has not been clearly shown. Results of prosthetic reinforcement vary in different studies; it has been variably associated with four times fewer recurrences or with no measurable difference. A Collis-type gastroplasty may be useful to lengthen a foreshortened esophagus, but no objective criteria have been defined to support this approach. The anatomic recurrence rate can be as high as 60% at 12 years, but most recurrences are asymptomatic and do not affect the quality-of-life index. It therefore seems more appropriate to evaluate functional results and quality-of-life measures rather than to gauge success by a strict evaluation of anatomic hernia reduction. 5. La Resonancia J/ψ y Sus Implicaciones Para La Masa Del W (The J/ψ Resonance and Its Implications for the W Mass) SciTech Connect Sanchez-Hernandez, Alberto 1995-01-01 It is a pleasure to thank my advisor, Dr. Heriberto Castilla Valdez, for sharing his knowledge, experience and patience during the development of this thesis; I also thank Professor H.E. Fisk for his financial support during my stay at Fermilab. I likewise thank Drs. Arturo Fernandez Telles, Miguel Angel Perez Angon and Rebeca Juarez Wisozka, who introduced me to the field of experimental high-energy physics. I also thank Maribel Rios Cruz, Ruben Flores Mendieta, Juan Morales Corona, Fabiola Vazquez Valencia, Salvador Carrillo Moreno and Cecilia Uribe Estrada for their friendship and companionship during my master's studies. Special thanks go to Ian Adam and Kina Denisenko for their valuable help, comments and discussions during my stay at Fermilab. Finally, I thank my professors, friends and family, who always supported and encouraged me, as well as the Consejo Nacional de Ciencia y Tecnología and the Physics Department of Cinvestav for their financial support. 6. Cold-plasma assisted hydrophobisation of cellulose fibres with styrene and para-halogenated homologues Gaiolas, C.; Costa, A. P.; Santos Silva, M. J.; Belgacem, M. N. 2012-07-01 Cold-plasma-assisted treatment of additive-free handsheet paper samples with styrene (ST), para-fluorostyrene (FST), para-fluoro-α-methylstyrene (FMST), para-chloro-α-methylstyrene (ClMST) and para-bromostyrene (BrST) was studied; grafting was found to occur efficiently, as established by contact angle measurements.
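The polar surface-energy values quoted in the continuation of this record are conventionally extracted from contact angles via the Owens-Wendt relation; the standard textbook formula is stated here for orientation (not necessarily the authors' exact procedure):

$$\gamma_L(1+\cos\theta)=2\sqrt{\gamma_S^{d}\,\gamma_L^{d}}+2\sqrt{\gamma_S^{p}\,\gamma_L^{p}}$$

Measuring θ with two probe liquids of known dispersive and polar components (γ_L^d, γ_L^p) gives two equations for the solid's (γ_S^d, γ_S^p); a water contact angle climbing from 40° to above 100° is exactly what drives the fitted polar component γ_S^p from about 25 mJ/m2 toward zero.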
Thus, after solvent extraction of the modified substrates in order to remove unbound grafts, the contact angle of a drop of water deposited on the paper surface increased from 40° for the unmodified substrate to 102, 99, 116, 100 and 107° for the ST-, FST-, FMST-, ClMST- and BrST-treated samples, respectively, indicating that the surface had become totally hydrophobic. In fact, the polar component of the surface energy of the treated samples decreased from 25 mJ/m2 to practically zero, indicating that the treated surfaces were rendered totally non-polar. 7. Successful Mnemonics for "por"/"para" and Affirmative Commands with Pronouns. ERIC Educational Resources Information Center Mason, Keith 1992-01-01 Two mnemonic devices, "4A Rule" and "PERFECT," are described to simplify the learning of two grammar points: the placement of object pronouns with respect to commands and the distinction between "por" and "para." (five references) (LB) 8. Genomic organisation of the seven ParaHox genes of coelacanths. PubMed Mulley, John F; Holland, Peter W H 2014-09-01 Human and mouse genomes contain six ParaHox genes implicated in gut and neural patterning. In coelacanths and cartilaginous fish, an additional ParaHox gene exists, Pdx2, which dates back to the genome duplications in early vertebrate evolution. Here we examine the genomic arrangement and flanking genes of all ParaHox genes in coelacanths, to determine the full complement of these genes. We find that coelacanths have seven ParaHox genes in total, in four chromosomal locations, revealing that five gene losses occurred soon after the vertebrate genome duplications. Comparison of intergenic sequences reveals that some Pdx1 regulatory regions associated with the development of pancreatic islets are older than tetrapods, that Pdx1 and Pdx2 share few if any conserved non-coding elements, and that there is very high sequence conservation between coelacanth species. 9. Lampreys, the jawless vertebrates, contain only two ParaHox gene clusters. PubMed Zhang, Huixian; Ravi, Vydianathan; Tay, Boon-Hui; Tohari, Sumanty; Pillai, Nisha E; Prasad, Aravind; Lin, Qiang; Brenner, Sydney; Venkatesh, Byrappa 2017-08-22 ParaHox genes (Gsx, Pdx, and Cdx) are an ancient family of developmental genes closely related to the Hox genes. They play critical roles in the patterning of brain and gut. The basal chordate amphioxus contains a single ParaHox cluster comprising one member of each family, whereas nonteleost jawed vertebrates contain four ParaHox genomic loci with six or seven ParaHox genes. Teleosts, which have experienced an additional whole-genome duplication, contain six ParaHox genomic loci with six ParaHox genes. Jawless vertebrates, represented by lampreys and hagfish, are the most ancient group of vertebrates and are crucial for understanding the origin and evolution of vertebrate gene families. We have previously shown that lampreys contain six Hox gene loci. Here we report that lampreys contain only two ParaHox gene clusters (designated as α- and β-clusters) bearing five ParaHox genes (Gsxα, Pdxα, Cdxα, Gsxβ, and Cdxβ). The order and orientation of the three genes in the α-cluster are identical to those of the single cluster in amphioxus. However, the orientation of Gsxβ in the β-cluster is inverted. Interestingly, Gsxβ is expressed in the eye, unlike its homologs in jawed vertebrates, which are expressed mainly in the brain.
The lamprey Pdxα is expressed in the pancreas, similar to jawed vertebrate Pdx genes, indicating that the pancreatic expression of Pdx was acquired before the divergence of the jawless and jawed vertebrate lineages. It is likely that the lamprey Pdxα plays a crucial role in pancreas specification and insulin production similar to that of the Pdx of jawed vertebrates. 10. Spanish Coastal Patrol Ships for Argentina and Mexico (Guardacostas Espanoles para Argentina y Mejico), DTIC Science & Technology 1983-12-22 In translation. Title: Spanish Coastal Patrol Ships for Argentina and Mexico [Ramirez Gabarrus, M.; Guardacostas españoles para Argentina y Mejico; Tecnologia Militar, No. 4, 1983; pp. 50, 53-54]. ... Mexico, Mr. Alvarez de Vayo, signed a contract with the Mexican War Minister, General Cardenas, to build a series of 10 coastal patrol boats and five 11. Chordate Hox and ParaHox gene clusters differ dramatically in their repetitive element content. PubMed Osborne, Peter W; Ferrier, David E K 2010-02-01 The ParaHox and Hox gene clusters control aspects of animal anterior-posterior development and are related as paralogous evolutionary sisters. Despite this relationship, it is not clear whether the clusters operate in similar ways, with similar constraints. To compare the clusters, we examined the transposable-element (TE) content of the amphioxus and mammalian ParaHox and Hox clusters. Chordate Hox clusters are known to be largely devoid of TEs, possibly due to gene regulation and constraints on clustering in these animals. Here, we describe several novel amphioxus TEs and show that the amphioxus ParaHox cluster is a hotspot for TE insertion. The TE contents of mammalian ParaHox loci are at background levels, in stark contrast to chordate Hox clusters. This marks a significant difference between Hox and ParaHox clusters. The presence of so many potentially disruptive elements implies that selection constrains these ParaHox clusters, as they have not dispersed despite 500 My of evolution in each lineage. 12. para-C-H Borylation of Benzene Derivatives by a Bulky Iridium Catalyst. PubMed Saito, Yutaro; Segawa, Yasutomo; Itami, Kenichiro 2015-04-22 A highly para-selective aromatic C-H borylation has been accomplished. With a new iridium catalyst bearing a bulky diphosphine ligand, Xyl-MeO-BIPHEP, the C-H borylation of monosubstituted benzenes can be effected with para-selectivity up to 91%. This catalytic system is quite different from the usual iridium catalysts, which cannot distinguish the meta- and para-C-H bonds of monosubstituted benzene derivatives, resulting in the preferred formation of meta-products. The para-selectivity increases with increasing bulkiness of the substituent on the arene, indicating that the regioselectivity of the present reaction is primarily controlled by steric repulsion between substrate and catalyst. Caramiphen, an anticholinergic drug used in the treatment of Parkinson's disease, was converted into five derivatives via our para-selective borylation. The present [Ir(cod)OH]2/Xyl-MeO-BIPHEP catalyst represents a unique, sterically controlled, para-selective aromatic C-H borylation system that should find use in streamlined, predictable chemical synthesis and in the rapid discovery and optimization of pharmaceuticals and materials. 13. Diffusion Monte Carlo Study of Para-Diiodobenzene Polymorphism Revisited.
PubMed Hongo, Kenta; Watson, Mark A; Iitaka, Toshiaki; Aspuru-Guzik, Alán; Maezono, Ryo 2015-03-10 We revisit our investigation of the diffusion Monte Carlo (DMC) simulation of para-diiodobenzene (p-DIB) molecular crystal polymorphism. [See J. Phys. Chem. Lett. 2010, 1, 1789-1794.] We perform, for the first time, a rigorous study of finite-size effects and of the choice of nodal surface on the prediction of polymorph stability in molecular crystals using fixed-node DMC. Our calculations are the largest that are currently feasible using the resources of the K computer, and they provide insights into the formidable challenge of predicting such properties from first principles. In particular, we show that finite-size effects can influence the trial nodal surface of a small (1 × 1 × 1) simulation cell considerably. Therefore, we repeated our DMC simulations with a 1 × 3 × 3 simulation cell, which is the largest such calculation to date. We used a density functional theory (DFT) nodal surface generated with the PBE functional, and we accumulated statistical samples with ~6.4 × 10^5 core hours for each polymorph. Our final results predict a polymorph stability that is consistent with experiment, but they also indicate that the results in our previous paper were somewhat fortuitous. We analyze the finite-size errors using model periodic Coulomb (MPC) interactions and kinetic energy corrections, according to the CCMH scheme of Chiesa, Ceperley, Martin, and Holzmann. We investigate the dependence of the finite-size errors on different aspect ratios of the simulation cell (k-mesh convergence) in order to understand how to choose an appropriate ratio for the DMC calculations. Even in the most expensive simulations currently possible, we show that the finite-size errors in the DMC total energies are much larger than the energy difference between the two polymorphs, although error cancellation means that the polymorph prediction is accurate. Finally, we found that the T-move scheme is essential for these massive DMC simulations in order to circumvent population explosions and ... 14. The PARA-suite: PAR-CLIP specific sequence read simulation and processing PubMed Central Kloetgen, Andreas; Borkhardt, Arndt; Hoell, Jessica I. 2016-01-01 15. Spontaneous Emission Between Ortho- and Para-Levels of the Water Ion H_2O^+ Tanaka, Keiichi; Harada, Kensuke; Nanbu, Shinkoh; Oka, Takeshi 2012-06-01 The nuclear spin conversion interaction of the water ion, H_2O^+, has been studied to derive the spontaneous emission lifetime between ortho and para levels. H_2O^+ is a radical ion with a ^2B_1 electronic ground state. Its off-diagonal electron spin-nuclear spin interaction term, Tab(S_a ΔI_b + S_b ΔI_a), connects para and ortho levels, because ΔI = I_1 - I_2 has nonvanishing matrix elements between I = 0 and I = 1. The mixing by this term, with Tab = 72 MHz predicted by ab initio theory at the MRD-CI/Bk level, is many orders of magnitude larger than for closed-shell molecules because of the large magnetic interaction due to the unpaired electron. Using the molecular constants reported by Mürtz et al. from FIR-LMR, we searched for ortho-para coupling channels below 1000 cm-1 with accidental near degeneracy between para and ortho levels. For example, hyperfine components of the 4_2,2 (ortho) and 3_3,0 (para) levels mix by 1.2 × 10^-3 due to their near degeneracy (ΔE = 0.417 cm-1), giving an ortho-para spontaneous emission lifetime of about 0.63 years.
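The quoted mixing fractions are just first-order perturbation theory; a back-of-envelope sketch (omitting the angular and hyperfine factors that reduce the effective matrix element below the bare Tab):

$$c\simeq\frac{\left|\langle\,\text{para}\,|\,T_{ab}(S_a\Delta I_b+S_b\Delta I_a)\,|\,\text{ortho}\,\rangle\right|}{\Delta E},\qquad \tau_{\text{ortho}\to\text{para}}\propto c^{-2}$$

With ΔE = 0.417 cm-1 ≈ 12.5 GHz and Tab = 72 MHz, c comes out of order 10^-3, consistent with the quoted 1.2 × 10^-3; the much larger ΔE of the level pair discussed next suppresses c and stretches the lifetime by roughly two orders of magnitude.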
The most significant low-lying 1_0,1 (para) and 1_1,1 (ortho) levels, on the contrary, mix only by 8.7 × 10^-5 because of their large separation (ΔE = 16.267 cm-1), giving a spontaneous emission lifetime from 1_0,1 (para) to 0_0,0 (ortho) of about 100 years. These results qualitatively help to understand the observed high ortho-to-para H_2O^+ ratio of 4.8 ± 0.5 toward Sgr B2, but they are too slow to compete with conversion by collisions unless the number density of the region is very low (n ~ 1 cm-3) or the radiative temperature is very high (T_r > 100 K). M. Staikova, B. Engels, M. Peric, and S.D. Peyerimhoff, Mol. Phys. 80, 1485 (1993). P. Mürtz, L.R. Zink, K.M. Evenson, and J.M. Brown, J. Chem. Phys. 109, 9744 (1998). P. Schilke, et al., A&A 521, L11 (2010). 16. Vesicular erythema multiforme-like reaction to para-phenylenediamine in a henna tattoo. PubMed Sidwell, Rachel U; Francis, Nick D; Basarab, Tamara; Morar, Nilesh 2008-01-01 An allergic contact dermatitis reaction to a topical "black henna" tattoo is usually described secondary to the organic dye para-phenylenediamine, a derivative of aniline. Allergic contact dermatitis reactions to para-phenylenediamine are well recognized and most commonly involve an eczematous reaction that may become generalized, and an acute angio-edema. Only four previous instances of an erythema multiforme-like reaction to para-phenylenediamine and its derivatives have been reported, including only one mild reaction to a tattoo. A vesicular erythema multiforme-like reaction has not been reported. An erythema multiforme-like reaction to contact allergens is usually caused by potent allergens, including plant quinolones in Compositae and sesquiterpene lactones in exotic woods, and it has also been reported with topical drugs, epoxy resin, metals (particularly nickel), and various chemicals. A generalized vesicular erythema multiforme-like reaction is unusual and rarely reported. We describe a 6-year-old boy who developed a localized eczematous and severe generalized vesicular erythema multiforme-like contact allergy to para-phenylenediamine secondary to a henna tattoo. As henna tattoos are becoming increasingly popular, one should be aware of the possibility of such a reaction. This presentation also highlights calls to ban the use of para-phenylenediamine and its derivatives in dyes. 17. Estimation of Carbon Storage in Para Rubber Plantation in Eastern Thailand Charoenjit, K.; Zuddas, P.; Allemand, P. 2012-12-01 This study aims to estimate the carbon stock and sequestration in Para rubber plantations of eastern Thailand using THAICHOTE (Thailand Earth Observation System) data. For that purpose we identify the area of each stage class of Para rubber plantation by the analysis of different image objects (i.e., rule-based and multiple regression classifications), and we map the carbon stock and sequestration of each Para rubber class using biomass allometric regressions and carbon content equations (sketched below). THAICHOTE data include a multispectral image (4 bands at 15 × 15 m spatial resolution), a panchromatic image (2 × 2 m spatial resolution) and a stereo image, with data acquisition from December 2011 to April 2012. The preliminary investigated area is located in Wangchun (eastern Thailand) and covers about 20 km2.
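The "biomass allometric regressions and carbon content equations" step referred to above usually amounts to the bookkeeping below; a minimal R sketch with invented placeholder coefficients (the study's own coefficients are not reproduced here, though the totals quoted in the record's continuation imply a carbon fraction of about 0.5):

```r
# Hypothetical per-class carbon accounting for a plantation inventory.
# a and b are placeholder allometric coefficients, NOT the study's values;
# dbh is a stand-mean diameter at breast height (cm) per stage class.
biomass_per_tree <- function(dbh, a = 0.118, b = 2.53) a * dbh^b   # kg per tree
stands <- data.frame(class = c("immature", "mature", "old"),
                     dbh   = c(8, 25, 40),
                     trees = c(5000, 12000, 3000))
stands$biomass_t <- stands$trees * biomass_per_tree(stands$dbh) / 1000  # tons
stands$carbon_t  <- 0.5 * stands$biomass_t   # fixed carbon fraction of 0.5
stands
sum(stands$carbon_t)   # total stored carbon (tons)
```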
Calibrating the stage classes by image analysis that integrated edge-based segmentation, reflectance, remote-sensing indices, texture analysis and a canopy height model (CHM), we found that the best classification was obtained by multiple regression (accuracy of 80%) compared with rule-based logical operations (accuracy of 70%), and that either manual 3D stereo measurements or Light Detection And Ranging (LiDAR) can be used to construct the CHM. The results of this study indicate that, for a total Para rubber biomass of 14,651 tons, the amount of stored carbon is 7,326 tons. The mature stage of Para rubber plantations exhibits the highest sequestration capacity, with a global flux of 0.21 tons C/km2/year. 18. The ortho/para ratio of water vapor in Comet Halley NASA Technical Reports Server (NTRS) Mumma, Michael J.; Larson, Harold P.; Weaver, Harold A. 1986-01-01 The ortho/para ratio of H2O is shown to be an invariant in the cometary coma. The dependence of the ortho-para ratio on temperature in thermal equilibrium is given, and the nuclear spin temperature is defined. Its relation to the physical temperature of the cometary ices is discussed, and the prospects for using the observed ortho/para ratio to infer properties of the cometary nucleus are explored. The ortho/para ratio in Halley's comet is derived from high-resolution infrared spectra near 2.7 microns wavelength. On UT December 24.1, 1985 it was 2.73 ± 0.17, and on UT March 22.7, 1986 it was 3.23 ± 0.37. The nuclear spin temperature was 35 K (+9 K, -5 K) pre-perihelion, and less than 40 K post-perihelion, at the 67% confidence limit. Both numbers are consistent with modeled values of the equilibrium temperature of the cometary nucleus at aphelion (47 K). However, at the 95% confidence limit they are also fully consistent with temperatures greater than 50 K, corresponding to an ortho/para ratio of about 3.0. 19. The formaldehyde ortho/para ratio as a probe of dark cloud chemistry and evolution NASA Technical Reports Server (NTRS) Dickens, J. E.; Irvine, W. M. 1999-01-01 We present measurements of the H2CO ortho/para ratio toward four star-forming cores, L723, L1228, L1527, and L43, and one quiescent core, L1498. Combining these data with earlier results by Minh et al., three quiescent cores are found to have ortho/para ratios near 3, the ratio of statistical weights expected for gas-phase formation processes. In contrast, ortho/para ratios are 1.5-2.1 in five star-forming cores, suggesting thermalization at a kinetic temperature of 10 K. We attribute the modification of the ortho/para ratio in the latter cores to formation and/or equilibration of H2CO on grains with subsequent release back into the gas phase due to the increased energy inputs from the forming star and outflow. We see accompanying enhancements in the H2CO abundance relative to H2 to support this idea. The results suggest that the formaldehyde ortho/para ratio can differentiate between quiescent cores and those in which low-mass star formation has occurred. 20. Influence of Molecular Oxygen on Ortho-Para Conversion of Water Molecules Valiev, R. R.; Minaev, B. F. 2017-07-01 The mechanism by which molecular oxygen influences the probability of ortho-para conversion of water molecules, and its relation to water magnetization, is considered within the framework of the concept of paramagnetic spin catalysis.
Matrix elements of the hyperfine ortho-para interaction via the Fermi contact mechanism are calculated, as well as the Mulliken spin densities on the water protons in H2O and O2 collisional complexes. The mechanism of penetration of the electron spin density into the water molecule due to partial spin transfer from paramagnetic oxygen is considered. The probability of ortho-para conversion of the water molecules is estimated by quantum chemistry methods. The results obtained show that effective ortho-para conversion of the water molecules is possible during the existence of water-oxygen dimers. An external magnetic field affects the ortho-para conversion rate, given that the wave functions of the nuclear spin sublevels of the water protons are mixed in the complex with oxygen. 2. Sistemas Correctores de Campo Para EL Telescopio Ritchey-Chretien UNAM212 (Field Corrector Systems for the UNAM212 Ritchey-Chretien Telescope) Cobos, F. J.; Galan, M. J. 1987-05-01 The UNAM212 telescope was inaugurated seven years ago and was conceived to work at the focal ratios f/7.5, f/13.5, f/27 and f/98. The Ritchey-Chretien design corresponds to the f/7.5 focal ratio, and the prime focus (f/2.286) was not considered usable for direct photography. At the Instituto de Astronomía of UNAM, a field corrector system for the f/7.5 focal ratio was designed and built, and it is currently in operation. Within a collaborative program on the design and evaluation of optical systems between the Instituto de Astrofísica de Canarias and the Instituto de Astronomía of UNAM, we decided to attempt the design of a field corrector for the prime focus of the UNAM212 telescope, considering that the problems its installation would imply are not insurmountable, and that it is quite possible that, in the relatively near future, we may have a two-dimensional Mepsicron-type detector whose sensitive area makes the idea of building a direct camera for the prime focus tempting. 3. Modelos Teoricos de Linhas de Recombinacao EM Radio Frequencias Para Regioes H II (Theoretical Models of Radio-Frequency Recombination Lines for H II Regions) Abraham, Z.; Cancoro, A. C. O. 1987-05-01 Models of radio-frequency recombination lines from H II regions were computed for different quantum numbers. These models consider spherically symmetric H II regions with radial variations in electron density and temperature, the effects of inelastic electron collisions (pressure broadening), and departures from local thermodynamic equilibrium.
The objective is to construct the line profile at each point of the cloud and to obtain the mean value resulting from its convolution with an antenna beam of size comparable to the angular size of the cloud, for subsequent comparison with ... 4. Allergic contact dermatitis to para-phenylenediamine in a tattoo: a case report. PubMed Turan, Hakan; Okur, Mesut; Kaya, Ertugrul; Gun, Emrah; Aliagaoglu, Cihangir 2013-06-01 It is highly popular among children and young adults to have temporary henna tattoos on their bodies in different colors and figures. Henna is a greenish natural powder obtained from the flowers and dry leaves of the Lawsonia alba plant, and its allergenicity is very low. Henna is also used in combination with other coloring substances, such as para-phenylenediamine, in order to darken the color and create a permanent tattoo effect. Para-phenylenediamine is a substance with high allergenic potential and may cause serious allergic reactions. Here we aim to draw attention to the potential harms of para-phenylenediamine-containing temporary tattoos by presenting a child patient who developed allergic contact dermatitis after having a scorpion-shaped temporary tattoo on his forearm. 5. The School of Posture as a postural training method for Paraíba Telecommunications Operators. PubMed Cardia, M C; Soares Màsculo, F 2001-01-01 This work presents the experience of posture training carried out in the Paraíba State Telecommunication Company, using the methods of the Back School. The sample was composed of 12 operators, employees of the company, representing 31% of this population. The model applied at TELPA (Paraíba Telecommunication Company, Brazil) was based on the models of Sherbrooke, Canada, and of the School of Posture of the Paraíba Federal University. Of the participants, 58.4% showed a reduction in back pain, 25% improved the quality of their rest, and in 75% of cases the training received was considered sufficient for learning correct postures at work. The whole population approved of the training, and 83.3% considered that it influenced their lives very positively. 6. Sobre la terapia génica para enfermedades de la retina (On Gene Therapy for Retinal Diseases). PubMed Fischer, M Dominik 2017-07-11 Mutations in a large number of genes cause retinal degeneration and blindness, for which there is currently no cure. Over the last few decades, gene therapy for retinal diseases has evolved into a promising new therapeutic paradigm for these rare diseases. This article traces the ideas and concepts leading from basic science to the applicability of gene therapy in the clinical setting. It describes current progress and thinking on the efficacy of ongoing clinical trials, and discusses possible obstacles and solutions for the future of gene therapy for retinal diseases. © 2017 S. Karger AG, Basel. 7. Complications of pelvic and para-aortic lymphadenectomy in patients with endometrial cancer. PubMed Arduino, S; Leo, L; Febo, G; Tessarolo, M; Wierdis, T; Lanza, A 1997-01-01 The International Federation of Gynecology and Obstetrics (FIGO) changed the staging criteria for endometrial cancer in 1988, adopting a surgical-pathological staging that also involves pelvic and/or para-aortic lymphadenectomy.
A total of 236 patients were treated for endometrial adenocarcinoma at Department B of the Gynecologic and Obstetrics Institute, University of Turin, between January 1976 and December 1995. Our protocol for surgical staging always entails pelvic and para-aortic lymphadenectomy and a simple total hysterectomy and bilateral adnexectomy with removal of the upper third of the vagina. The aim of this study was to carry out a retrospective evaluation of the morbidity in patients with endometrial cancer after surgical treatment, either TAH-BSO alone or TAH-BSO with pelvic and para-aortic lymphadenectomy. 8. Production and characterization of para-hydrogen gas for matrix isolation infrared spectroscopy Sundararajan, K.; Sankaran, K.; Ramanathan, N.; Gopi, R. 2016-08-01 Normal hydrogen (n-H2) has a 3:1 ortho/para ratio, and the production of enriched para-hydrogen (p-H2) from normal hydrogen is useful for many applications, including matrix isolation experiments. In this paper, we describe the design, development and fabrication of an ortho-para converter capable of producing enriched p-H2. The p-H2 thus produced was probed using infrared and Raman techniques. Using infrared measurements, the thickness and purity of the p-H2 matrix were determined; the purity of the p-H2 was determined to be >99%. Matrix isolation infrared spectra of trimethylphosphate (TMP) and acetylene (C2H2) were studied in p-H2 and n-H2 matrices, and the results were compared with those in conventional inert matrices. 9. Evidence for disequilibrium of ortho and para hydrogen on Jupiter from Voyager IRIS measurements NASA Technical Reports Server (NTRS) Conrath, B. J.; Gierasch, P. J. 1983-01-01 Preliminary results of an analysis of the ortho state/para state ratio (parallel/antiparallel) for molecular H2 in the Jovian atmosphere using Voyager IR spectrometer (IRIS) data are reported. The study was undertaken to expand the understanding of the thermodynamics of a predominantly H2 atmosphere, which takes about 100 million sec to reach equilibrium. IRIS data provided 4.3/cm resolution in the 300-700/cm spectral range dominated by H2 lines. Approximately 600 spectra were examined to detect any disequilibrium between the hydrogen species. The results indicate that the ortho-para ratio is not in an equilibrium state in the upper Jovian troposphere. A thorough mapping of the para-state molecules in the upper atmosphere could therefore aid in mapping the atmospheric flowfield.
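Both hydrogen records above lean on the equilibrium ortho/para composition of H2 as a function of temperature, which follows from nuclear spin statistics and the rigid-rotor levels E_J = B J(J+1); a small R sketch of that standard calculation (not code from either paper):

```r
# Equilibrium para fraction of H2 versus temperature.
# Even J (para) carries nuclear-spin weight 1, odd J (ortho) weight 3;
# B ~ 59.3 cm^-1 for H2 and k = 0.695 cm^-1/K.
para_fraction <- function(T_K, B = 59.3, k = 0.695, Jmax = 20) {
  J <- 0:Jmax
  w <- ifelse(J %% 2 == 0, 1, 3) * (2 * J + 1) * exp(-B * J * (J + 1) / (k * T_K))
  sum(w[J %% 2 == 0]) / sum(w)
}
round(sapply(c(20, 77, 300), para_fraction), 3)
# ~0.998 at 20 K, ~0.51 at 77 K, ~0.25 at 300 K: converters are run cold to
# enrich para-H2, and a 3:1 ortho:para ratio is the high-temperature limit
# against which the Jovian disequilibrium is judged.
```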
11. Electrical detection of ortho-para conversion in fullerene-encapsulated water Meier, Benno; Mamone, Salvatore; Concistrè, Maria; Alonso-Valdesueiro, Javier; Krachmalnicoff, Andrea; Whitby, Richard J.; Levitt, Malcolm H. 2015-08-01 Water exists in two spin isomers, ortho and para, that have different nuclear spin states. In bulk water, rapid proton exchange and hindered molecular rotation obscure the direct observation of the two spin isomers. The supramolecular endofullerene H2O@C60 provides freely rotating, isolated water molecules even at cryogenic temperatures. Here we show that the bulk dielectric constant of this substance depends on the ortho/para ratio, and changes slowly in time after a sudden temperature jump, due to nuclear spin conversion. The attribution of the effect to ortho-para conversion is validated by comparison with nuclear magnetic resonance and quantum theory. The change in dielectric constant is consistent with an electric dipole moment of 0.51 ± 0.05 Debye for an encapsulated water molecule, indicating partial shielding of the water dipole by the encapsulating cage. The dependence of the bulk dielectric constant on nuclear spin isomer composition appears to be a previously unreported physical phenomenon. 12. Quantitative structure-activity relationship analysis of the pharmacology of para-substituted methcathinone analogues PubMed Central Bonano, J S; Banks, M L; Kolanos, R; Sakloth, F; Barnier, M L; Glennon, R A; Cozzi, N V; Partilla, J S; Baumann, M H; Negus, S S 2015-01-01 Background and Purpose: Methcathinone (MCAT) is a potent monoamine releaser and the parent compound of emerging drugs of abuse, including mephedrone (4-CH3 MCAT), the para-methyl analogue of MCAT. This study examined quantitative structure-activity relationships (QSAR) for MCAT and six para-substituted MCAT analogues on (a) in vitro potency to promote monoamine release via the dopamine and serotonin transporters (DAT and SERT, respectively), and (b) in vivo modulation of intracranial self-stimulation (ICSS), a behavioural procedure used to evaluate abuse potential. Neurochemical and behavioural effects were correlated with the steric (Es), electronic (σp) and lipophilic (πp) parameters of the para substituents. Experimental Approach: For neurochemical studies, drug effects on monoamine release through DAT and SERT were evaluated in rat brain synaptosomes. For behavioural studies, drug effects were tested in male Sprague-Dawley rats implanted with electrodes targeting the medial forebrain bundle and trained to lever-press for electrical brain stimulation. Key Results: MCAT and all six para-substituted analogues increased monoamine release via DAT and SERT and dose- and time-dependently modulated ICSS. In vitro selectivity for DAT versus SERT correlated with in vivo efficacy to produce abuse-related ICSS facilitation. In addition, the Es values of the para substituents correlated with both selectivity for DAT versus SERT and the magnitude of ICSS facilitation. Conclusions and Implications: Selectivity for DAT versus SERT in vitro is a key determinant of abuse-related ICSS facilitation by these MCAT analogues, and steric aspects of the para substituent of the MCAT scaffold (indicated by Es) are key determinants of this selectivity. PMID:25438806
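The Es correlation described above is easy to picture as a toy fit; an R sketch in which the Taft Es values are standard literature numbers but the DAT/SERT selectivity ratios are invented placeholders (the paper's actual data are not reproduced here):

```r
# Toy QSAR fit: log10(DAT/SERT selectivity) against the Taft steric
# parameter Es of the para substituent. Selectivity values are invented
# placeholders for illustration; only the Es values are standard numbers.
qsar <- data.frame(
  substituent  = c("H", "F", "Cl", "Br", "CH3", "OCH3", "CF3"),
  Es           = c(0.00, -0.46, -0.97, -1.16, -1.24, -0.55, -2.40),
  sel_dat_sert = c(400, 120, 15, 7, 3, 30, 0.1)   # hypothetical ratios
)
fit <- lm(log10(sel_dat_sert) ~ Es, data = qsar)
summary(fit)$r.squared   # strength of the steric correlation
plot(qsar$Es, log10(qsar$sel_dat_sert),
     xlab = "Taft Es of para substituent",
     ylab = "log10(DAT/SERT selectivity)")
abline(fit)
```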
13. Calcisponges have a ParaHox gene and dynamic expression of dispersed NK homeobox genes. PubMed Fortunato, Sofia A V; Adamski, Marcin; Ramos, Olivia Mendivil; Leininger, Sven; Liu, Jing; Ferrier, David E K; Adamska, Maja 2014-10-30 Sponges are simple animals with few cell types, but their genomes paradoxically contain a wide variety of developmental transcription factors, including homeobox genes belonging to the Antennapedia (ANTP) class, which in bilaterians encompass Hox, ParaHox and NK genes. In the genome of the demosponge Amphimedon queenslandica, no Hox or ParaHox genes are present, but NK genes are linked in a tight cluster similar to the NK clusters of bilaterians. It has been proposed that Hox and ParaHox genes originated from NK cluster genes after the divergence of sponges from the lineage leading to cnidarians and bilaterians. On the other hand, synteny analysis lends support to the notion that the absence of Hox and ParaHox genes in Amphimedon is a result of secondary loss (the ghost locus hypothesis). Here we analysed the complete suites of ANTP-class homeoboxes in two calcareous sponges, Sycon ciliatum and Leucosolenia complicata. Our phylogenetic analyses demonstrate that these calcisponges possess orthologues of bilaterian NK genes (Hex, Hmx and Msx), a varying number of additional NK genes and one ParaHox gene, Cdx. Despite the generation of scaffolds spanning multiple genes, we find no evidence of clustering of Sycon NK genes. All Sycon ANTP-class genes are developmentally expressed, with patterns suggesting their involvement in cell-type specification in embryos and adults, metamorphosis and body-plan patterning. These results demonstrate that ParaHox genes predate the origin of sponges, thus confirming the ghost locus hypothesis, and highlight the need to analyse the genomes of multiple sponge lineages to obtain a complete picture of the ancestral composition of the first animal genome. 14. Superfluid Effects in Para-H_2 Clusters Probed by CO_2 Rotation-Vibration Transitions Li, Hui; Le Roy, Robert J.; Roy, Pierre-Nicholas; McKellar, A. R. W. 2010-06-01 The prospect of directly observing superfluidity in para-H_2 is a tantalizing but elusive goal. Like ^4He, para-H_2 is a light zero-spin boson. However, H_2-H_2 intermolecular interactions, though weak, are stronger than He-He interactions, and hydrogen is a solid below about 14 K. This makes detection of superfluidity in bulk hydrogen problematical, to say the least. But there are still possibilities for para-H_2 in the form of clusters or in nano-confined environments, and superfluid transition temperatures as high as ~6 K have been predicted. Spectroscopic observations of (para-H_2)_N-CO_2 clusters were at first very difficult to interpret for N > 5. However, with the help of path integral Monte Carlo simulations and an accurate new H_2-CO_2 intermolecular potential surface which explicitly incorporates dependence on the CO_2 ν3 asymmetric stretch, it is now possible to achieve a remarkably consistent picture of (para-H_2)_N-CO_2 clusters in the size range N = 1-20. By combining the experimental spectroscopic measurements and theoretical simulations, we determine the size evolution of the superfluid response of the CO_2-doped para-H_2 clusters, which peaks at the "magic" number N = 12. V. L. Ginzburg and A. A. Sobyanin, JETP Lett. 15, 343 (1972). A. R. W. McKellar, Paper WH04, 63rd OSU International Symposium on Molecular Spectroscopy, June 16-20, 2008. H. Li, P.-N. Roy, and R. J. Le Roy, J. Chem. Phys., submitted. 15. Bulimia nerviosa (Bulimia nervosa) MedlinePlus ... to urinate frequently.
In general, the self-esteem of women with bulimia is closely tied to ... a person's biology, body image and self-esteem, social experiences, family health history and ... 16. Evaluation of an immobilized cell bioreactor for degradation of meta- and para-nitrobenzoate Peretti, Steven W.; Thomas, Stuart M. 1994-01-01 Meta- and para-nitrobenzoic acid are pollutants found in waste streams from metal-stripping processes using cyanide-free solvents. The Kelly AFB Industrial Waste Treatment Plant (IWTP) is currently incapable of removing these compounds from its wastewaters because of the presence of significant quantities of ethylenediamine, a preferred substrate, and because of an upper limit of 4-5 hours on the hydraulic residence time in the IWTP. This report describes the enrichment and preliminary characterization of a microbial consortium capable of utilizing both meta- and para-nitrobenzoate as sole carbon sources. 17. Quantum rotation of ortho and para-water encapsulated in a fullerene cage PubMed Central Beduz, Carlo; Carravetta, Marina; Chen, Judy Y.-C.; Concistrè, Maria; Denning, Mark; Frunzi, Michael; Horsewill, Anthony J.; Johannessen, Ole G.; Lawler, Ronald; Lei, Xuegong; Levitt, Malcolm H.; Li, Yongjun; Mamone, Salvatore; Murata, Yasujiro; Nagel, Urmas; Nishida, Tomoko; Ollivier, Jacques; Rols, Stéphane; Rõõm, Toomas; Sarkar, Riddhiman; Turro, Nicholas J.; Yang, Yifeng 2012-01-01 Inelastic neutron scattering, far-infrared spectroscopy, and cryogenic nuclear magnetic resonance are used to investigate the quantized rotation and ortho-para conversion of single water molecules trapped inside closed fullerene cages. The existence of metastable ortho-water molecules is demonstrated, and the interconversion of ortho- and para-water spin isomers is tracked in real time. Our investigation reveals that the ground state of encapsulated ortho water has a lifted degeneracy, associated with symmetry-breaking of the water environment. PMID:22837402 18. Experiments at Scale with In-Situ Visualization Using ParaView/Catalyst in RAGE SciTech Connect Kares, Robert John 2014-10-31 In this paper I describe some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large-scale 3D ICF simulation on the Cielo supercomputer at Los Alamos. The detailed procedures for the creation of the visualizations using ParaView/Catalyst are discussed, and several image sequences from the ICF simulation problem produced with the in-situ method are presented. My impressions and conclusions concerning the use of the in-situ visualization method in RAGE are discussed.
20. In para totale...una cosa da panico...sulla lingua dei giovani in Italia (In para totale...una cosa da panico...The Language of Young People in Italy). ERIC Educational Resources Information Center Marcato, Carla 1997-01-01 Describes and analyzes the language of young people in Italy today. Particular focus is on the expressions using "para" (e.g., "in para totale" = to be very bored or worried) and the phrase "una cosa da panico" (something terrible or its opposite, something wonderful). (CFM) 2. [Fut1 gene mutation in a para-Bombay blood type individual in Fujian Province of China]. PubMed Huang, Hao-Bou; Fan, Li-Ping; Wai, Shi-Jin; Zeng, Feng; Lin, Hai-Yan; Zhang, Rong 2010-10-01 This study aimed to investigate the molecular mechanisms underlying a para-Bombay blood type individual in Fujian Province of China. The para-Bombay blood type of this individual was identified by routine serological techniques. The full coding region of the alpha(1,2) fucosyltransferase (FUT1) gene of this individual was amplified by polymerase chain reaction (PCR), and the PCR product was cloned into a T vector. The mutation in the coding region of the fut1 gene was identified by TA cloning, so as to explore the molecular mechanisms of the para-Bombay blood type. The results indicated that the full coding region of the fut1 gene was successfully amplified by PCR. An AG deletion at positions 547-552 on both homologous chromosomes was detected by the TA cloning method, leading to a reading frame shift and a premature stop codon. It is concluded that the fut1 mutation in this para-Bombay blood type individual is the homozygous h1h1 type. 3. Performance characteristics of magnesium/para-nitrophenol cells in 2:1 magnesium electrolytes SciTech Connect Kumar, G.; Sivashanugam, A.; Sridharan, R. 1993-11-01 1 V/1 Ah magnesium/para-nitrophenol (PNP) reserve cells were fabricated and their performance was evaluated in different electrolytes [2M aqueous solutions of Mg(ClO4)2, MgCl2, and MgBr2 4. Factor Structure of the "Escala de Autoeficacia para la Depresion en Adolescentes" (EADA) ERIC Educational Resources Information Center Diaz-Santos, Mirella; Cumba-Aviles, Eduardo; Bernal, Guillermo; Rivera-Medina, Carmen 2011-01-01 The current concept and measures of self-efficacy for depression in adolescents do not consider developmental and cultural aspects essential to understand and assess this construct in Latino youth. We examined the factor structure of the "Escala de Autoeficacia para la Depresion en Adolescentes" (EADA), a Spanish instrument designed to…
6. Para-Professionals in Further Education: Changing Roles in Vocational Delivery ERIC Educational Resources Information Center Scott, Gill 2005-01-01 Roles and structures within further education colleges seem to be in constant change and development; roles are becoming blurred, and lecturers are taking on more management tasks. Alongside this has been the development of para-professional roles, using non-lecturers to undertake teaching tasks. This can allow for the greater involvement of… 7. Ortho-para conversion of endohedral water in the fullerene C60 at cryogenic temperatures Shugai, Anna; Nagel, U.; Rõõm, T.; Mamone, S.; Concistrè, M.; Meier, B.; Krachmalnicoff, A.; Whitby, R. J.; Levitt, M. H.; Lei, Xuegong; Li, Yongjun; Turro, N. J. 2015-03-01 Water displays the phenomenon of spin isomerism, in which the two proton spins couple to form either a triplet (ortho water, I = 1) or a singlet nuclear spin state (para water, I = 0). Here we study the interconversion of para and ortho water. The exact mechanism of this process is still not fully understood. In order to minimize interactions between molecules, we use a sample where a single H2O is trapped in the C60 molecular cage (H2O@C60). H2O@C60 has a long-lived ortho state, and the ortho-para conversion kinetics is non-exponential at liquid-helium temperatures. We studied mixtures of H2O@C60, D2O@C60 and C60 using IR absorption, NMR and dielectric measurements. We saw a speeding up of the interconversion with growing H2O@C60 concentration in C60 or when D2O@C60 was added. At some temperatures the kinetics is exponential. Models are discussed in order to explain the temperature and concentration dependence of the ortho-para interconversion kinetics. This work was supported by institutional research funding IUT23-3 of the Estonian Ministry of Education and Research. 8. Autoguía para el telescopio 2,15 mts de CASLEO (Autoguider for the CASLEO 2.15 m Telescope) Aballay, J. A.; Casagrande, A. R.; Pereyra, P. F.; Marún, A. H. An autoguider system is being developed for the 2.15 m telescope. It will be implemented around the Offset Guider: a digital camera (ST4, ST7 or CH250) will be attached to its movable eyepiece to image the object. The equipment will operate as follows: first, given the coordinates of the object to be observed, the telescope coordinates are read so that, through a database, a field of objects usable by the viewing camera is determined; the PC then obtains the offset between the observed star and the star selected as guide, and this value is passed to the motors, which position the eyepiece automatically. Once the star is visible on the camera (PC monitor), the program that guides the telescope automatically is run. 9. UPLC-ESI-MS/MS analysis of Sudan dyes and Para Red in food.
PubMed Li, C; Wu, Y L; Shen, J Z 2010-09-01 An analytical method for the simultaneous determination of Sudan dyes (Sudan Red G, Sudan I, Sudan II, Sudan III, Sudan Red 7B and Sudan IV) and Para Red in food by ultra-performance liquid chromatography-electrospray tandem mass spectrometry (UPLC-ESI-MS/MS) was developed. Samples were extracted with acetonitrile, and water was added to the extract. The supernatant was analysed by UPLC-MS/MS after refrigeration and centrifugation. The sample was separated on an Acquity BEH C18 column and detected by MS/MS in multiple reaction monitoring mode. Matrix calibration was used for quantification (the calibration arithmetic is sketched after this group of records). The linear matrix calibration ranges for the Sudan dyes and Para Red were 2-50 and 10-250 ng/g, respectively, and the regression coefficients were >0.9945. Recoveries were 83.4-112.3%, with good coefficients of variation of 2.0-10.8%. The limits of detection were between 0.3 and 1.4 ng/g for the six Sudan dyes, and between 3.7 and 6.0 ng/g for Para Red. The limits of quantification were between 0.9 and 4.8 ng/g for the six Sudan dyes, and between 12.2 and 19.8 ng/g for Para Red. 10. Can para-aryl-dithiols cross-link two plasmonic noble nanoparticles as monolayer dithiolate spacers? USDA-ARS Scientific Manuscript database Para-aryl-dithiols (PADTs, HS-(C6H4)n-SH, n = 1, 2, and 3) have been used extensively in molecular electronics, surface-enhanced Raman spectroscopy (SERS), and quantum electron tunneling between two gold or silver nanoparticles (AuNPs and AgNPs). One popular belief is that these dithiols cross-link ... 11. An Analysis of Interlanguage Development Over Time: Part 1, "por" and "para". ERIC Educational Resources Information Center Guntermann, Gail 1992-01-01 The first part of a larger planned investigation, this study examines the use of "por" and "para" by nine Peace Corps volunteers in oral interviews at the end of training and roughly one year later, to trace their acquisition over time, in two learning contexts. (24 references) (LB) 12. The Acquisition of Lexical Meaning in a Study Abroad Context: The Spanish Prepositions "por" and "para." ERIC Educational Resources Information Center Lafford, Barbara A.; Ryan, John M. 1995-01-01 Examination of the development of form/function relations of the prepositions "por" and "para" at different levels of proficiency in the interlanguage of study-abroad students in Granada, Spain, revealed "noncanonical" as well as "canonical" uses of these prepositions. The most common noncanonical uses were… 13. Energia Renovable para Centros de Salud Rurales (Renewable Energy for Rural Health Clinics) SciTech Connect Jimenez, T.; Olson, K. 1999-07-28 This is the first in a series of application guides that NREL's Village Power Program is commissioning to couple commercial renewable energy systems with rural applications, including water, rural schools and micro-enterprises. The guide is complemented by the development activities of the NREL Village Power Program, international pilot projects and professional visit programs.
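As flagged in record 9 above, the matrix-calibration arithmetic can be sketched in a few lines of R. The concentrations and peak areas below are invented placeholders, and the 3.3σ/slope and 10σ/slope rules are one common (ICH-style) convention; the paper's own limits may be defined differently (e.g., from signal-to-noise):

```r
# Toy linear matrix calibration for one analyte, with LOD/LOQ estimates.
conc <- c(2, 5, 10, 20, 50)               # spiked levels (ng/g), hypothetical
area <- c(410, 1030, 2110, 4190, 10550)   # instrument response, hypothetical
cal  <- lm(area ~ conc)
summary(cal)$r.squared                    # linearity check (>0.99 expected)
sigma <- summary(cal)$sigma               # residual standard deviation
slope <- unname(coef(cal)["conc"])
c(LOD = 3.3 * sigma / slope,              # ICH-style detection limit (ng/g)
  LOQ = 10  * sigma / slope)              # ICH-style quantification limit
```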
10. Can para-aryl-dithiols cross-link two plasmonic noble nanoparticles as monolayer dithiolate spacers USDA-ARS's Scientific Manuscript database Para-aryl-dithiols (PADTs, HS-(C6H4)n-SH, n = 1, 2, and 3) have been used extensively in molecular electronics, surface-enhanced Raman spectroscopy (SERS), and quantum electron tunneling between two gold or silver nanoparticles (AuNPs and AgNPs). One popular belief is that these dithiols cross-link ...

11. An Analysis of Interlanguage Development Over Time: Part 1, "por" and "para". ERIC Educational Resources Information Center Guntermann, Gail 1992-01-01 The first part of a larger planned investigation, this study examines the use of "por" and "para" by nine Peace Corps volunteers in oral interviews at the end of training and roughly one year later, to trace their acquisition over time, in two learning contexts. (24 references) (LB)

12. The Acquisition of Lexical Meaning in a Study Abroad Context: The Spanish Prepositions "por" and "para." ERIC Educational Resources Information Center Lafford, Barbara A.; Ryan, John M. 1995-01-01 Examination of the development of form/function relations of the prepositions "por" and "para" at different levels of proficiency in the interlanguage of study-abroad students in Granada, Spain, revealed "noncanonical" as well as "canonical" uses of these prepositions. The most common noncanonical uses were…

13. Energia Renovable para Centros de Salud Rurales (Renewable Energy for Rural Health Clinics) SciTech Connect Jimenez, T.; Olson, K. 1999-07-28 This is the first in a series of application guides that NREL's Village Power Program is commissioning to pair commercial renewable-energy systems with rural applications, including water supply, rural schools, and micro-enterprises. The guide is complemented by the development activities of NREL's Village Power Program, international pilot projects, and professional exchange programs.

14. Interrupting Commemoration: Thinking with Art, Thinking through the Strictures of Argentina's "Espacio para la memoria" ERIC Educational Resources Information Center Paolantonio, Mario Di 2011-01-01 Recently, a few buildings within the "Espacio para la memoria" in Buenos Aires have been designated as a UNESCO Centre where, amongst other educational activities, evidentiary materials of the past repression are to be stored and displayed. Another building in the complex houses a Community Centre operated by the Mothers of the Plaza de…

15. Irradiation of para-aortic lymph node metastases from carcinoma of the cervix or endometrium SciTech Connect Komaki, R.; Mattingly, R.F.; Hoffman, R.G.; Barber, S.W.; Satre, R.; Greenberg, M. 1983-04-01 Twenty-two patients with biopsy-proved para-aortic lymph node metastases from carcinoma of the cervix (15 patients) or endometrium (7 patients) received a median dose of 5,000 rad/25 fractions. Para-aortic nodal metastases were controlled in 77% of cases. Control was significantly lower after radical retroperitoneal lymph node dissection than after less extensive sampling procedures. Obstruction of the small bowel developed in 3 patients with tumor recurrence in the para-aortic region. Eight of the 10 patients who were disease-free at 2 years received >5,000 rad. Three patients were still alive without disease at 129, 63, and 60 months, respectively. The 5-year disease-free survival rate was 40% for cervical cancer and 60% for endometrial cancer; in the former group, it differed significantly depending on whether the para-aortic nodes were irradiated (40%) or not (0%). The authors suggest that 5,000-5,500 rad in 5-5.5 weeks is well tolerated and can control aortic nodal metastases in cervical and possibly endometrial cancer.

17. Anuncios de servicio público para proteger a los trabajadores de plaguicidas EPA Pesticide Factsheets These public service announcement files can be freely downloaded for use in training, audio broadcasts, etc.

18. Antimicrobial effect of para-alkoxyphenylcarbamic acid esters containing substituted N-phenylpiperazine moiety PubMed Central Malík, Ivan; Bukovský, Marián; Andriamainty, Fils; Gališinová, Jana 2013-01-01 In the current research, nine basic esters of para-alkoxyphenylcarbamic acid incorporating a 4-(4-fluoro-/3-trifluoromethylphenyl)piperazin-1-yl fragment, 6i-6m and 8f-8i, were screened for their in vitro antimicrobial activity against Candida albicans, Staphylococcus aureus and Escherichia coli, respectively. In the minimum inhibitory concentration (MIC) assay, the most active compound against the yeast was 8i (MIC = 0.20 mg/mL), the most lipophilic structure, containing para-butoxy and trifluoromethyl substituents.
When the efficiency of the compounds bearing only a single fluorine atom and an appropriate para-alkoxy side chain was investigated against Candida albicans, a cut-off effect was observed: within the homologous series evaluated, effectiveness peaked at structure 6k (MIC = 0.39 mg/mL), which contains a para-propoxy group attached to the phenylcarbamoyloxy fragment, and the compounds beyond it ceased to be active. By contrast, all tested molecules were practically inactive against Staphylococcus aureus and Escherichia coli (MICs > 1.00 mg/mL). PMID:24294237

19. Fabrication and Evaluation of New Resins. Volume 1. Synthesis of Para-Ordered Aromatic Polymers DTIC Science & Technology 1978-04-01 Keywords: para-ordered polymers, polybenzobisthiazoles, poly(diphenylbenzobisimidazoles), polybenzobisoxazoles, thermally stable polymers. ...linear polybenzobisoxazole (PBO), but with improved solubility, higher molecular weight, and increased thermooxidative stability. PBO is soluble to... In order to develop high strength in the oriented film or fiber, this molecular weight may have to be increased. Although the thermooxidative stability of...

20. Development of High-Activity Para- to Ortho-Hydrogen Conversion Catalysts. Volume 2 DTIC Science & Technology 1989-09-28

1. Conformational Explosion: Understanding the Complexity of the Para-Dialkylbenzene Potential Energy Surfaces Mishra, Piyush; Hewett, Daniel M.; Zwier, Timothy S. 2017-06-01 This talk focuses on the single-conformation spectroscopy of small-chain para-dialkylbenzenes. This work builds on previous studies from our group on long-chain n-alkylbenzenes that identified the first folded structure in octylbenzene. The dialkylbenzenes are representative of a class of molecules that are common components of coal and aviation fuel and are known to be present in vehicle exhaust. We bring the molecules para-diethylbenzene, para-dipropylbenzene and para-dibutylbenzene into the gas phase and cool them in a supersonic expansion. The jet-cooled molecules are then interrogated using laser-induced fluorescence excitation, fluorescence dip IR spectroscopy (FDIRS) and dispersed fluorescence. The LIF spectra in the S0-S1 origin region show dramatic increases in the number of resolved transitions with increasing alkyl chain length, reflecting an explosion in the number of unique low-energy conformations formed when two independent alkyl chains are present. Since the barriers to isomerization of the alkyl chains are similar in size, the potential energy surface takes on an 'egg carton' shape. We use a combination of electronic frequency shifts and alkyl CH stretch infrared spectra to generate a consistent set of conformational assignments.

2. Vínculos observacionais para o processo-S em estrelas gigantes de Bário Smiljanic, R. H. S.; Porto de Mello, G. F.; da Silva, L. 2003-08-01

3. EVALUATION OF PARA-DICHLOROBENZENE EMISSIONS FROM SOLID MOTH REPELLANT AS A SOURCE OF INDOOR AIR POLLUTION EPA Science Inventory Mothcakes made of para-dichlorobenzene have been widely available for the general population to be used as a moth repellant to protect garments from insect damage.
Usually, a mothcake is expected to last for weeks or even months, during which the para-dichlorobenzene emits slowly ...

5. Hox and ParaHox gene expression in early body plan patterning of polyplacophoran mollusks PubMed Central Fritsch, Martin; Wollesen, Tim 2016-01-01 ABSTRACT Molecular developmental studies of various bilaterians have shown that the identity of the anteroposterior body axis is controlled by Hox and ParaHox genes. Detailed Hox and ParaHox gene expression data are available for conchiferan mollusks, such as gastropods (snails and slugs) and cephalopods (squids and octopuses), whereas information on the putative conchiferan sister group, Aculifera, is still scarce (but see Fritsch et al., 2015 on Hox gene expression in the polyplacophoran Acanthochitona crinita). In contrast to gastropods and cephalopods, the Hox genes in polyplacophorans are expressed in an anteroposterior sequence similar to the condition in annelids and other bilaterians. Here, we present the expression patterns of the Hox genes Lox5, Lox4, and Lox2, together with the ParaHox gene caudal (Cdx) in the polyplacophoran A. crinita. To localize Hox and ParaHox gene transcription products, we also investigated the expression patterns of the genes FMRF and Elav, and the development of the nervous system. Similar to the other Hox genes, all three Acr-Lox genes are expressed in an anteroposterior sequence. Transcripts of Acr-Cdx are seemingly present in the forming hindgut at the posterior end. The expression patterns of both the central class Acr-Lox genes and the Acr-Cdx gene are strikingly similar to those in annelids and nemerteans. In Polyplacophora, the expression patterns of the Hox and ParaHox genes seem to be evolutionarily highly conserved, while in conchiferan mollusks these genes are co-opted into novel functions that might have led to evolutionary novelties, at least in gastropods and cephalopods. PMID:27098677

7. Ancient origins of axial patterning genes: Hox genes and ParaHox genes in the Cnidaria. PubMed Finnerty, J R; Martindale, M Q 1999-01-01 Among the bilaterally symmetrical, triploblastic animals (the Bilateria), a conserved set of developmental regulatory genes are known to function in patterning the anterior-posterior (AP) axis. This set includes the well-studied Hox cluster genes, and the recently described genes of the ParaHox cluster, which is believed to be the evolutionary sister of the Hox cluster (Brooke et al. 1998). The conserved role of these axial patterning genes in animals as diverse as frogs and flies is believed to reflect an underlying homology (i.e., all bilaterians derive from a common ancestor which possessed an AP axis and the developmental mechanisms responsible for patterning the axis). However, the origin and early evolution of Hox genes and ParaHox genes remain obscure. Repeated attempts have been made to reconstruct the early evolution of Hox genes by analyzing data from the triploblastic animals, the Bilateria (Schubert et al. 1993; Zhang and Nei 1996). A more precise dating of Hox origins has been elusive due to a lack of sufficient information from outgroup taxa such as the phylum Cnidaria (corals, hydras, jellyfishes, and sea anemones). In combination with outgroup taxa, another potential source of information about Hox origins is outgroup genes (e.g., the genes of the ParaHox cluster). In this article, we present cDNA sequences of two Hox-like genes (anthox2 and anthox6) from the sea anemone, Nematostella vectensis. Phylogenetic analysis indicates that anthox2 (= Cnox2) is homologous to the GSX class of ParaHox genes, and anthox6 is homologous to the anterior class of Hox genes. Therefore, the origin of Hox genes and ParaHox genes occurred prior to the evolutionary split between the Cnidaria and the Bilateria and predated the evolution of the anterior-posterior axis of bilaterian animals. Our analysis also suggests that the central Hox class was invented in the bilaterian lineage, subsequent to their split from the Cnidaria.

8. Rotational excitation of HCN by para- and ortho-H2 SciTech Connect Vera, Mario Hernández; Kalugina, Yulia; Denis-Alpizar, Otoniel; Stoecklin, Thierry; Lique, François 2014-06-14 Rotational excitation of the hydrogen cyanide (HCN) molecule by collisions with para-H2 (j = 0, 2) and ortho-H2 (j = 1) is investigated at low temperatures using a quantum time-independent approach. Both molecules are treated as rigid rotors. The scattering calculations are based on a highly correlated ab initio four-dimensional (4D) potential energy surface recently published. Rotationally inelastic cross sections among the first 13 rotational levels of HCN were obtained using a pure quantum close coupling approach for total energies up to 1200 cm⁻¹.
The corresponding thermal rate coefficients were computed for temperatures ranging from 5 to 100 K. The HCN rate coefficients are strongly dependent on the rotational level of the H2 molecule. In particular, the rate coefficients for collisions with para-H2 (j = 0) are significantly lower than those for collisions with ortho-H2 (j = 1) and para-H2 (j = 2). Propensity rules in favor of even Δj transitions were found for HCN in collisions with para-H2 (j = 0), whereas propensity rules in favor of odd Δj transitions were found for HCN in collisions with H2 (j ≥ 1). The new rate coefficients were compared with previously published HCN-para-H2 (j = 0) rate coefficients. Significant differences were found due to the inclusion of the H2 rotational structure in the scattering calculations. These new rate coefficients will be crucial for improving estimates of the HCN abundance in the interstellar medium.

9. Conversion rate of para-hydrogen to ortho-hydrogen by oxygen: implications for PHIP gas storage and utilization. PubMed Wagner, Shawn 2014-06-01 To determine the storability of para-hydrogen before reestablishment of the room-temperature thermal equilibrium mixture, para-hydrogen was produced at near 100% purity and mixed with different quantities of oxygen, and the rate of conversion to the thermal equilibrium mixture of 75:25% (ortho:para) was determined by detecting the ortho-hydrogen ¹H nuclear magnetic resonance signal with a 9.4 T imager. The para-hydrogen to ortho-hydrogen velocity constant, k, near room temperature (292 K) was determined to be 8.27 ± 1.30 L mol⁻¹ min⁻¹. This value was calculated using four different oxygen fractions. Para-hydrogen conversion to ortho-hydrogen by oxygen can thus be minimized for long-term storage by judicious removal of oxygen contamination. Previously calculated velocity rates were confirmed, demonstrating a dependence on the oxygen concentration only.
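Given the second-order rate law implied by this entry (conversion proportional to both the oxygen and para-hydrogen concentrations), storage lifetimes are easy to estimate. A minimal R sketch; only k is taken from the abstract, and the oxygen level is an assumed contamination chosen for illustration:

```r
# Pseudo-first-order decay of the para-H2 excess at a fixed O2 level,
# assuming rate = k * [O2] * [para-H2] as described in the entry.
k    <- 8.27   # L mol^-1 min^-1, reported velocity constant at 292 K
O2   <- 1e-5   # mol/L of residual O2, an assumed contamination level
kobs <- k * O2 # effective first-order rate constant, min^-1
log(2) / kobs / (60 * 24)  # half-life of the para enrichment, in days (~5.8)
```

The practical point matches the abstract's conclusion: halving the oxygen contamination doubles the usable storage time.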
10. Vibration and Vibration-Torsion Levels of the S1 and Ground Cationic D0+ States of Para-Fluorotoluene and Para-Xylene Below 1000 cm⁻¹ Tuttle, William Duncan; Gardner, Adrian M.; Whalley, Laura E.; Wright, Timothy G. 2017-06-01 We have employed resonance-enhanced multiphoton ionisation (REMPI) spectroscopy and zero-kinetic-energy (ZEKE) spectroscopy to investigate the first excited electronic singlet (S1) state and the cationic ground state (D0+) of para-fluorotoluene (pFT) and para-xylene (pXyl). Spectra have been recorded via a large number of selected intermediate levels, to support assignment of the vibration and vibration-torsion levels in these molecules and to investigate possible couplings. The study of levels in this region builds upon previous work on the lower-energy regions of pFT and pXyl; here we are interested in how vibration-torsion (vibtor) levels might combine and interact with vibrational ones, and so we consider the possible couplings which occur. Comparisons between the spectra of the two molecules show a close correspondence, and the influence of the second methyl rotor in para-xylene on the onset of intramolecular vibrational redistribution (IVR) in the S1 state is a point of interest. This has bearing on future work, which will need to consider the role of both more flexible side chains of substituted benzene molecules and multiple side chains. References: A. M. Gardner, W. D. Tuttle, L. Whalley, A. Claydon, J. H. Carter and T. G. Wright, J. Chem. Phys. 145, 124307 (2016); A. M. Gardner, W. D. Tuttle, P. Groner and T. G. Wright, J. Chem. Phys. (2017, in press); W. D. Tuttle, A. M. Gardner, K. O'Regan, W. Malewicz and T. G. Wright, J. Chem. Phys. (2017, in press).

11. Anatomic Distribution of Fluorodeoxyglucose-Avid Para-aortic Lymph Nodes in Patients With Cervical Cancer SciTech Connect Takiar, Vinita; Fontanilla, Hiral P.; Eifel, Patricia J.; Jhingran, Anuja; Kelly, Patrick; Iyer, Revathy B.; Levenback, Charles F.; Zhang, Yongbin; Dong, Lei; Klopp, Ann 2013-03-15 Purpose: Conformal treatment of para-aortic lymph nodes (PAN) in cervical cancer allows dose escalation and reduces normal tissue toxicity. Currently, data documenting the precise location of involved PAN are lacking. We define the spatial distribution of this high-risk nodal volume by analyzing fluorodeoxyglucose (FDG)-avid lymph nodes (LNs) on positron emission tomography/computed tomography (PET/CT) scans in patients with cervical cancer. Methods and Materials: We identified 72 PANs on pretreatment PET/CT of 30 patients with newly diagnosed stage IB-IVA cervical cancer treated with definitive chemoradiation. LNs were classified as left-lateral para-aortic (LPA), aortocaval (AC), or right paracaval (RPC). Distances from the LN center to the closest vessel and adjacent vertebral body were calculated. Using deformable image registration, nodes were mapped to a template computed tomogram to provide a visual impression of nodal frequencies and anatomic distribution. Results: We identified 72 PET-positive para-aortic lymph nodes (37 LPA, 32 AC, 3 RPC). All RPC lymph nodes were in the inferior third of the para-aortic region. The mean distance from the aorta for all lymph nodes was 8.3 mm (range, 3-17 mm), and from the inferior vena cava was 5.6 mm (range, 2-10 mm). Of the 72 lymph nodes, 60% were in the inferior third, 36% in the middle third, and 4% in the upper third of the para-aortic region. In all, 29 of 30 patients also had FDG-avid pelvic lymph nodes. Conclusions: A total of 96% of PET-positive nodes were adjacent to the aorta; PET-positive nodes to the right of the IVC were rare and were all located distally, within 3 cm of the aortic bifurcation. Our findings suggest that circumferential margins around the vessels do not accurately define the nodal region at risk. Instead, the anatomical extent of the nodal basin should be contoured on each axial image to provide optimal coverage of the para-aortic nodal compartment.

12. Saturn's Tropospheric Temperatures and Para-Hydrogen Distribution from Ten Years of Cassini Observations Fletcher, Leigh N.; Irwin, Patrick G.; Sinclair, James; Giles, Rohini; Barstow, Joanna; Achterberg, Richard K.; Orton, Glenn S. 2014-11-01 Cassini/CIRS observations of Saturn's 10-1400 cm⁻¹ spectrum have been inverted to construct a global record of tropospheric temperature and para-hydrogen variability over the ten-year span of the Cassini mission. The data record the slow reversal of seasonal asymmetries in tropospheric conditions from northern winter (2004, Ls=293), through northern spring equinox (2009, Ls=0) to the present day (2014, Ls=60). Mid-latitude tropospheric temperatures have cooled by approximately 4-6 K in the south and warmed by 2-4 K in the north, with the seasonal contrast decreasing with depth. CIRS detected the north polar minimum 100-mbar temperatures 6-8 years after winter solstice, whereas the south polar maximum occurred 1-2 years after summer solstice, consistent with the lag times predicted by radiative equilibrium models.
Warm polar cyclones and the northern hexagon persist throughout the mission, suggesting that they are permanent features of Saturn's tropospheric circulation. The 200-mbar thermal enhancement ("knee") that was strongest in the summer but weak or absent in winter in 2004-2006 (Fletcher et al., 2007, Icarus 189, p. 457-478) has now shifted northward and is present globally in 2014, suggestive of radiative heating in Saturn's tropospheric haze layer. Saturn's para-H2 fraction, which serves as a tracer of both tropospheric mixing and the efficiency of re-equilibration between the ortho- and para-hydrogen states, is slowly altering: the super-equilibrium conditions (para-H2 fraction exceeding equilibrium expectations, suggestive of subsiding air masses) that dominated the southern summer hemisphere are now weakening, whereas the sub-equilibrium conditions (suggestive of uplift) of the northern winter are being replaced by equilibrium or super-equilibrium conditions in spring. The thermal 'knee' and the para-H2 distribution are tracking both the increased spring illumination and the increasing tropospheric haze opacity of the springtime hemisphere.
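The "equilibrium expectations" for the para-H2 fraction follow from the Boltzmann populations of even-J (para, nuclear spin weight 1) and odd-J (ortho, weight 3) rotational levels of H2. A minimal R sketch of the equilibrium fraction versus temperature, assuming a rigid rotor with the standard ground-state rotational constant B ≈ 59.3 cm⁻¹ (an illustrative assumption, not a value from the abstract):

```r
# Equilibrium para-H2 fraction for a rigid rotor: even J -> para (weight 1),
# odd J -> ortho (weight 3); rotational energies E_J = B * J * (J + 1).
para_fraction <- function(T, B = 59.3, kB = 0.695) {  # B in cm^-1, kB in cm^-1/K
  J     <- 0:30
  pop   <- (2 * J + 1) * exp(-B * J * (J + 1) / (kB * T))
  para  <- sum(pop[J %% 2 == 0])
  ortho <- 3 * sum(pop[J %% 2 == 1])
  para / (para + ortho)
}
sapply(c(77, 120, 300), para_fraction)  # ~0.51, ~0.33, ~0.25
```

At Saturn's upper-tropospheric temperatures (roughly 80-140 K) this equilibrium fraction is steeply temperature dependent, which is what makes measured departures from it a useful tracer of vertical mixing.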
13. Astronomia para/com crianças carentes em Limeira Bretones, P. S.; Oliveira, V. C. 2003-08-01

14. Two prevalent h alleles in para-Bombay haplotypes among 250,000 Taiwanese. PubMed Chen, Ding-Ping; Tseng, Ching-Ping; Wang, Wei-Ting; Peng, Chien-Ting; Tsao, Kuo-Chien; Wu, Tsu-Lan; Lin, Kuan-Tsou; Sun, Chien-Feng 2004-01-01 Alpha(1,2)-fucosyltransferase catalyzes the transfer of fucose to the C-2 position of galactose on the type II precursor substrate Gal beta1-4GlcNAc beta1-R. It plays an important biological role in the formation of H antigen, a precursor oligosaccharide for both A and B antigens on red blood cells. Aberration of alpha(1,2)-fucosyltransferase activity by gene mutations results in decreased synthesis of H antigen, leading to the para-Bombay phenotype. In this study, we collected about 250,000 blood samples in Taiwan over 5 yr and identified the subjects with para-Bombay phenotype. We then analyzed the sequence of the alpha(1,2)-fucosyltransferase gene by direct sequencing and gene cloning, using the blood samples of 30 para-Bombay individuals and 30 randomly selected control subjects. The goals of this study were to search for new h alleles, to determine the h allele frequencies, and to test whether the sporadic theory is applicable in Taiwan. Six different h alleles (ha, 547-548 AG-del; hb, 880-881 TT-del; hc, R220C; hd, R220H; he, F174L; and hf, N327T) were observed. Two h alleles, he and hf, were newly discovered in Taiwan. The he allele has a nucleotide 522C>A point mutation, predicting the amino acid 174 substitution of Phe to Leu; the hf allele has a missense mutation of nucleotide 980A>C, predicting the amino acid 327 substitution of Asn to Thr. The frequencies of the six alleles are ha 46.67%, hb 38.33%, hc 5.00%, hd 1.67%, he 3.33%, and hf 5.00%, respectively. These findings in the Taiwanese population confirm previous observations in other populations that the Bombay and para-Bombay phenotypes are due to diverse, sporadic, nonfunctional alleles, predominantly ha and hb, leading to H deficiency of red blood cells. In contrast to previous reports of non-prevalent associations of h alleles with the para-Bombay phenotype, our results suggest a regional allele preference associated with para-Bombay individuals in Taiwan.

15. Functional expression of Drosophila para sodium channels. Modulation by the membrane protein TipE and toxin pharmacology. PubMed Warmke, J W; Reenan, R A; Wang, P; Qian, S; Arena, J P; Wang, J; Wunderler, D; Liu, K; Kaczorowski, G J; Van der Ploeg, L H; Ganetzky, B; Cohen, C J 1997-08-01 The Drosophila para sodium channel alpha subunit was expressed in Xenopus oocytes alone and in combination with tipE, a putative Drosophila sodium channel accessory subunit. Coexpression of tipE with para results in elevated levels of sodium currents and accelerated current decay. Para/TipE sodium channels have biophysical and pharmacological properties similar to those of native channels. However, the pharmacology of these channels differs from that of vertebrate sodium channels: (a) toxin II from Anemonia sulcata, which slows inactivation, binds to Para and some mammalian sodium channels with similar affinity (Kd ≈ 10 nM), but this toxin causes a 100-fold greater decrease in the rate of inactivation of Para/TipE than of mammalian channels; (b) Para sodium channels are >10-fold more sensitive to block by tetrodotoxin; and (c) modification by the pyrethroid insecticide permethrin is >100-fold more potent for Para than for rat brain type IIA sodium channels. Our results suggest that the selective toxicity of pyrethroid insecticides is due at least in part to the greater affinity of pyrethroids for insect sodium channels than for mammalian sodium channels.

16. Do cnidarians have a ParaHox cluster? Analysis of synteny around a Nematostella homeobox gene cluster. PubMed Hui, Jerome H L; Holland, Peter W H; Ferrier, David E K 2008-01-01 The Hox gene cluster is renowned for its role in developmental patterning of embryogenesis along the anterior-posterior axis of bilaterians. Its supposed evolutionary sister or paralog, the ParaHox cluster, is composed of Gsx, Xlox, and Cdx, and also has important roles in anterior-posterior development. There is a debate as to whether the cnidarians, as an outgroup to bilaterians, contain true Hox and ParaHox genes, or whether instead the Hox-like gene complement of cnidarians arose from duplications independent of those that generated the genes of the bilaterian Hox and ParaHox clusters. A recent whole-genome analysis of the cnidarian Nematostella vectensis found conserved synteny between this cnidarian and vertebrates, including a region of synteny between the putative Hox cluster of N. vectensis and the Hox clusters of vertebrates. No syntenic region was identified around a potential cnidarian ParaHox cluster. Here we use different approaches to identify a genomic region in N. vectensis that is syntenic with the bilaterian ParaHox cluster. This proves that the duplication that gave rise to the Hox and ParaHox regions of bilaterians occurred before the origin of cnidarians, and that the cnidarian N. vectensis has bona fide Hox and ParaHox loci.

17. ParA and ParB coordinate chromosome segregation with cell elongation and division during Streptomyces sporulation. PubMed Donczew, Magdalena; Mackiewicz, Paweł; Wróbel, Agnieszka; Flärdh, Klas; Zakrzewska-Czerwińska, Jolanta; Jakimowicz, Dagmara 2016-04-01 In unicellular bacteria, the ParA and ParB proteins segregate chromosomes and coordinate this process with cell division and chromosome replication. During sporulation of mycelial Streptomyces, ParA and ParB uniformly distribute multiple chromosomes along the filamentous sporogenic hyphal compartment, which then differentiates into a chain of unigenomic spores.
However, chromosome segregation must be coordinated with cell elongation and multiple divisions. Here, we addressed the question of whether ParA and ParB are involved in the synchronization of cell-cycle processes during sporulation in Streptomyces. To answer this question, we used time-lapse microscopy, which allows the monitoring of growth and division of single sporogenic hyphae. We showed that sporogenic hyphae stop extending at the time of ParA accumulation and Z-ring formation. We demonstrated that both ParA and ParB affect the rate of hyphal extension. Additionally, we showed that ParA promotes the formation of massive nucleoprotein complexes by ParB. We also showed that FtsZ ring assembly is affected by the ParB protein and/or unsegregated DNA. Our results indicate the existence of a checkpoint between the extension and septation of sporogenic hyphae that involves the ParA and ParB proteins.

18. Differential regulation of ParaHox genes by retinoic acid in the invertebrate chordate amphioxus (Branchiostoma floridae). PubMed Osborne, Peter W; Benoit, Gérard; Laudet, Vincent; Schubert, Michael; Ferrier, David E K 2009-03-01 The ParaHox cluster is the evolutionary sister to the Hox cluster. Like the Hox cluster, the ParaHox cluster displays spatial and temporal regulation of the component genes along the anterior/posterior axis in a manner that correlates with the gene positions within the cluster (a feature called collinearity). The ParaHox cluster is however a simpler system to study because it is composed of only three genes. We provide a detailed analysis of the amphioxus ParaHox cluster and, for the first time in a single species, examine the regulation of the cluster in response to a single developmental signalling molecule, retinoic acid (RA). Embryos treated with either RA or an RA antagonist display altered ParaHox gene expression: AmphiGsx expression shifts in the neural tube, and the endodermal boundary between AmphiXlox and AmphiCdx shifts its anterior/posterior position. We identified several putative retinoic acid response elements, and in vitro assays suggest some may participate in RA regulation of the ParaHox genes. By comparison to vertebrate ParaHox gene regulation we explore the evolutionary implications. This work highlights how insights into the regulation and evolution of more complex vertebrate arrangements can be obtained through studies of a simpler, unduplicated amphioxus gene cluster.

20. ParA encoded on chromosome II of Deinococcus radiodurans binds to nucleoid and inhibits cell division in Escherichia coli. PubMed Charaka, Vijaya Kumar; Mehta, Kruti P; Misra, H S 2013-09-01 Bacterial genome segregation and cell division have been studied mostly in bacteria harbouring a single circular chromosome and low-copy plasmids. Deinococcus radiodurans, a radiation-resistant bacterium, harbours a multipartite genome system. Chromosome I encodes the majority of the functions required for normal growth, while the other replicons encode mostly proteins involved in secondary functions. Here, we report the characterization of a putative P-loop ATPase (ParA2) encoded on chromosome II of D. radiodurans. Recombinant ParA2 was found to be a DNA-binding ATPase. E. coli cells expressing ParA2 showed cell division inhibition and mislocalization of FtsZ-YFP, and those expressing ParA2-CFP showed formation of multiple CFP foci on the nucleoid. Although in trans expression of ParA2 failed to complement SlmA loss per se, it could induce unequal cell division in a slmA minCDE double mutant. These results suggest that ParA2 is a nucleoid-binding protein that can inhibit cell division in E. coli by affecting the correct localization of FtsZ and thereby cytokinesis. Its helping the slmA minCDE mutant to produce minicells, a phenotype associated with mutations in the 'Min' proteins, further indicates that ParA2 may regulate cell division by bringing about nucleoid compaction in the vicinity of the growing septum.

1. Patch tests with commercial hair dye products in patients with allergic contact dermatitis to para-phenylenediamine. PubMed Lee, Hyun-Joo; Kim, Won-Jeong; Kim, Jun-Young; Kim, Hoon-Soo; Kim, Byung-Soo; Kim, Moon-Bum; Ko, Hyun-Chang 2016-01-01 Hair dye is one of the most common causes of allergic contact dermatitis, and the main allergen has been identified as para-phenylenediamine. To prevent the recurrence of contact dermatitis to para-phenylenediamine, patients should discontinue the use of para-phenylenediamine-containing hair dye products. However, many patients are unable to discontinue their use for cosmetic or social reasons, and some continue to have symptoms even after switching to so-called "less allergenic" hair dyes. To evaluate the safety of 15 commercially available hair dye products in patients with allergic contact dermatitis due to para-phenylenediamine, we performed patch tests using 15 hair dyes that were advertised as "hypoallergenic," "no para-phenylenediamine" or "non-allergenic" products in the market. Twenty-three patients completed the study, and 20 (87.0%) had a positive patch test reaction to at least one product. While four (26.7%) of the hair dye products contained para-phenylenediamine, 10 (66.7%) of the 15 contained m-aminophenol and 7 (46.7%) contained toluene-2,5-diamine sulfate. Only one product did not elicit a positive reaction in any patient. Limitations: small sample size and the possibility of false-positive reactions.
Dermatologists should educate patients with allergic contact dermatitis to para-phenylenediamine about the importance of performing sensitivity testing prior to the actual use of any hair dye product, irrespective of how it is advertised or labelled.

2. A sensitive and selective enzyme-linked immunosorbent assay for the analysis of Para red in foods. PubMed Wang, Jia; Wei, Keyi; Li, Hao; Li, Qing X; Li, Ji; Xu, Ting 2012-05-07 Para red is a synthetic dye and a potential genotoxic carcinogen. A hapten mimicking the Para red structure was synthesized by introducing a carboxyl group onto the naphthol part of Para red, and it was coupled to a carrier protein to form an immunogen for the production of specific antibodies. A sensitive and selective enzyme-linked immunosorbent assay (ELISA) was developed for the detection of Para red in food samples. The limit of detection and half-maximal inhibition concentration (IC50) of Para red in phosphate-buffered saline with 10% methanol were 0.06 and 2.2 ng mL⁻¹, respectively. Cross-reactivity values of the ELISA with the tested compounds, including Sudan red I, II, III, IV, and G, sunset yellow, 2-naphthol, and 4-nitroaniline, were ≤0.2%. The assay was used to determine Para red in tomato sauce, chilli sauce, chilli powder and sausage samples after ultrasonic extraction, cleanup and concentration steps. The average recoveries, repeatability (intraday extractions and analysis), and intra-laboratory reproducibility (interday extractions and analysis) were in the ranges 90-108%, 4-12% and 8-17%, respectively. The assay was compared to a high-performance liquid chromatographic method for 28 samples, displaying a good correlation (R² = 0.95). Para red residues in 53 real-world samples determined by ELISA were below the limit of detection.
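Competitive ELISA standard curves of this kind are conventionally fitted with a four-parameter logistic (4PL) model, in which the quoted IC50 appears as one of the parameters. A minimal R sketch with invented absorbance readings; the 4PL form and all numbers except the ~2.2 ng/mL midpoint are illustrative assumptions, not the paper's data:

```r
# Four-parameter logistic (4PL) fit of a competitive ELISA standard curve.
conc <- c(0.1, 0.5, 1, 2.2, 5, 10, 50)                # standards, ng/mL
ab   <- c(1.95, 1.82, 1.58, 1.05, 0.58, 0.35, 0.12)   # absorbances (invented)
fit  <- nls(ab ~ d + (a - d) / (1 + (conc / c50)^b),
            start = list(a = 2, b = 1, c50 = 2, d = 0.1))
coef(fit)["c50"]            # fitted IC50, close to the quoted 2.2 ng/mL

# Back-calculate an unknown sample concentration from its absorbance:
p <- as.list(coef(fit))
A <- 0.8
p$c50 * ((p$a - p$d) / (A - p$d) - 1)^(1 / p$b)       # concentration, ng/mL
```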
3. Growth rate retardation and inhibitory effect of para-JEM® BLUE on Mycobacterium avium subspecies paratuberculosis. PubMed Okwumabua, Ogi; Moua, Tou Vue; Danz, Tonya; Quinn, Joe; O'Connor, Mike; Gibbons-Burgener, Suzanne 2010-09-01 The effect of para-JEM® BLUE on Mycobacterium avium subspecies paratuberculosis (MAP) inoculated into broth-based culture media was evaluated using 84 fecal samples of known MAP status. Growth of the organism in samples inoculated into broth without para-JEM BLUE was detectable 1-35 days (average, 6 days) earlier in 35 of the samples (42%) than in the same samples inoculated into broth with para-JEM BLUE. Four additional samples (5%) that were MAP positive in the culture broth lacking para-JEM BLUE gave negative results when the reagent was included. Of the remaining 45 samples, growth of MAP was detected 1-4 days (average, 3 days) earlier in 4 of the samples (5%) inoculated into broth with para-JEM BLUE than in the same samples inoculated without it, whereas 41 samples (49%) yielded equivalent results with respect to time-to-growth detection and negative growth, regardless of whether para-JEM BLUE was present in the culture broth. However, exclusion of para-JEM BLUE from the broth increased the number of samples that produced false-positive instrument signals compared with the number produced when the reagent was added. Modification of the sample processing step had no measurable effect. Observations indicated that, although elimination of para-JEM BLUE from the broth increased false-positive instrument signals, its inclusion has an adverse effect on the growth of certain MAP, which suggests that its elimination from broth cultures may increase sensitivity.

4. Regioselective Enzymatic β-Carboxylation of para-Hydroxystyrene Derivatives Catalyzed by Phenolic Acid Decarboxylases PubMed Central Wuensch, Christiane; Pavkov-Keller, Tea; Steinkellner, Georg; Gross, Johannes; Fuchs, Michael; Hromic, Altijana; Lyskowski, Andrzej; Fauland, Kerstin; Gruber, Karl; Glueck, Silvia M; Faber, Kurt 2015-01-01 We report on a 'green' method for the utilization of carbon dioxide as a C1 unit in the regioselective synthesis of (E)-cinnamic acids via regioselective enzymatic carboxylation of para-hydroxystyrenes. Phenolic acid decarboxylases from bacterial sources catalyzed the β-carboxylation of para-hydroxystyrene derivatives with excellent regio- and (E/Z)-stereoselectivity by acting exclusively at the β-carbon atom of the C=C side chain to furnish the corresponding (E)-cinnamic acid derivatives in up to 40% conversion, at the expense of bicarbonate as the carbon dioxide source. Studies on the substrate scope of this strategy are presented, and a catalytic mechanism is proposed based on molecular modelling studies supported by mutagenesis of amino acid residues in the active site. PMID:26190963

5. Sludge reduction by uncoupling metabolism: SBR tests with para-nitrophenol and a commercial uncoupler. PubMed Zuriaga-Agustí, E; Mendoza-Roca, J A; Bes-Piá, A; Alonso-Molina, J L; Amorós-Muñoz, I 2016-11-01 Cost reduction is an important issue in wastewater treatment plants, and one approach is to minimize sludge production. Microorganisms break down organic matter into inorganic compounds through catabolism; uncoupling metabolism is a method that promotes catabolic reactions over anabolic ones by inhibiting adenosine triphosphate synthesis. In this work, the influence of adding para-nitrophenol and a commercial reagent to a sequencing batch reactor (SBR) on sludge production and process performance was analyzed. Three laboratory SBRs were operated in parallel to compare the effect of both reagents with a control reactor. The SBRs were fed with synthetic wastewater and operated under the same conditions. Results showed that sludge production was slightly reduced at the tested para-nitrophenol concentrations (20 and 25 mg/L) and at a LODOred dose of 1 mL/day. Biological process performance was not affected, and high COD removals were achieved.

6. Development of an integrated remote monitoring technique and its application to para-stressing bridge system Miyamoto, Ayaho; Motoshita, Minoru; Casas, Joan R. 2013-12-01 Bridge monitoring via information technology can provide more accurate knowledge of bridge performance characteristics than traditional strategies. This paper describes an integrated Internet monitoring system consisting of a stand-alone monitoring system (SMS) and a Web-based Internet monitoring system (IMS) for bridge maintenance, as well as its application to a para-stressing bridge system as an intelligent structure. The IMS, as a Web-based system, supports remote monitoring by feeding measurement information from the SMS into the system through the Internet or an intranet, connected by either PHS or LAN.
Moreover, the key functions of the IMS, such as the data management system, condition assessment, and decision making, are also introduced in this paper. Another goal of this study is to establish the framework of a para-stressing bridge system, an intelligent bridge that integrates the monitoring information into the system to control bridge performance automatically.

7. Renormalization-group approach for para-hydrogen adsorbed on exfoliated graphite Mello, E. V. L.; Carneiro, G. M. 1986-04-01 Heat-capacity measurements of para-hydrogen adsorbed on graphite were performed recently and revealed an interesting phase diagram similar to that of 4He. We report a renormalization-group study based on a three-state Potts model with vacancies which approximates the experimental situation. The resulting global phase diagram lies in a three-parameter space of pair-interaction constants and chemical potential, as studied by Berker, Ostlund and Putnam. The Lennard-Jones or other effective potential between the adsorbed para-H2 molecules determines the subspace relevant to this adsorbate. A method to calculate thermodynamic densities is discussed, and the resulting temperature versus density diagram agrees well with the experiment.

8. Effect of current-loop sizes on the para-Meissner effect in superconductors Krishna, N. Murali; Lingam, Lydia S.; Ghosh, P. K.; Shrivastava, Keshav N. 1998-01-01 We find that there is a range of current-loop sizes and a range of temperatures under which the para-Meissner effect is predicted. When the phase φ/φ0 of the Josephson Hamiltonian varies in a certain range, the magnetization becomes positive. In general, the magnetization can be both positive and negative with zero resistivity in all phases. The susceptibility as a function of temperature at small magnetic fields is explained on the basis of the Josephson interaction. The transition temperature of the para-Meissner effect, TpM, is different from that of the Meissner effect, with Tc > TpM. Experimental measurements of the magnetization of Tl2CaBa2Cu2O8 at low fields are in agreement with the theoretical predictions.

9. Molecular Characterization of Neurally Expressing Genes in the Para Sodium Channel Gene Cluster of Drosophila PubMed Central Hong, C. S.; Ganetzky, B. 1996-01-01 To elucidate the mechanisms regulating expression of para, which encodes the major class of sodium channels in the Drosophila nervous system, we have tried to locate upstream cis-acting regulatory elements by mapping the transcriptional start site and analyzing the region immediately upstream of para in region 14D of the polytene chromosomes. From these studies, we have discovered that the region contains a cluster of neurally expressing genes. Here we report the molecular characterization of the genomic organization of the 14D region and the genes within this region, which are: calnexin (Cnx), actin related protein 14D (Arp14D), calcineurin A 14D (CnnA14D), and chromosome associated protein (Cap). The tight clustering of these genes, their neuronal expression patterns, and their potential functions related to expression, modulation, or regulation of sodium channels raise the possibility that these genes represent a functionally related group sharing some coordinate regulatory mechanism. PMID:8849894

10. Kyste géant para-urétral féminin PubMed Central 2014-01-01 The infected giant female para-urethral cyst is rarely reported in the literature.
This cyst differs from the suburethral diverticulum in its clinical, diagnostic and therapeutic aspects, and its pathogenesis overlaps with that of suburethral diverticula. Given its rarity, its treatment is not well codified. We report an atypical case of an infected giant para-urethral cyst in a 26-year-old woman; the cyst was symptomatic and the patient underwent surgical treatment. We discuss the clinical, diagnostic and therapeutic aspects of this rare entity through a review of the literature.

11. Potential of Brachiaria mutica (Para grass) for bioethanol production from Loktak Lake. PubMed Sahoo, Dinabandhu; Ummalyma, Sabeela Beevi; Okram, Aswini Kumar; Sukumaran, Rajeev K; George, Emrin; Pandey, Ashok 2017-10-01

12. A Density Functional Approach to Para-hydrogen at Zero Temperature Ancilotto, Francesco; Barranco, Manuel; Navarro, Jesús; Pi, Marti 2016-10-01 We have developed a density functional (DF) built so as to reproduce either the metastable liquid or the solid equation of state of bulk para-hydrogen, as derived from quantum Monte Carlo zero-temperature calculations. As an application, we have used it to study the structure and energetics of small para-hydrogen clusters made of up to N = 40 molecules. We compare our results for liquid clusters with diffusion Monte Carlo (DMC) calculations and find fair agreement between them. In particular, the transition found within DMC between hollow-core structures at small N and center-filled structures at higher N is reproduced. The present DF approach yields results for (p-H2)N clusters indicating that a liquid-like character of the clusters prevails at small N, while solid-like clusters are energetically favored for N ≥ 15.

13. Una propuesta para el desarrollo de un arreglo de síntesis de apertura Arnal, E. M. Studies carried out in the neutral hydrogen transition at λ ~ 21 cm have contributed to increasing our knowledge of the global properties of the interstellar medium, whether galactic or extragalactic. Advances in this field have often been driven by the commissioning of radio telescopes of higher angular resolution. Here we present a proposal to develop a new instrument, an interferometer, which will open up new lines of research. This instrument will combine the aperture synthesis technique with digital correlation spectroscopy to reach an angular resolution of 1' and a field of view of ~1°.7.

14. Photooxidation of Trimethyl Phosphite in Nitrogen, Oxygen, and para-Hydrogen Matrixes at Low Temperatures. PubMed Ramanathan, N; Sundararajan, K; Gopi, R; Sankaran, K 2017-03-16 Trimethyl phosphite (TMPhite) was photooxidized to trimethyl phosphate (TMP) in N2, O2, and para-H2 matrixes at low temperatures to correlate the conformational landscapes of the two molecules. The photooxidation enriched the trans (TGG) conformer relative to the ground-state gauche (GGG) conformer of TMP in the N2 and O2 matrixes, diverging from the conformational composition of freshly deposited pure TMP in low-temperature matrixes. The enrichment of the trans conformer in preference to the gauche conformer of TMP during photooxidation is due to the TMPhite precursor, which exists exclusively as the trans conformer.
Interestingly, whereas the photooxidized TMP molecule suffers site effects, possibly due to local asymmetry in the N2 and O2 matrixes, in the para-H2 matrix the site effects were observed to be self-repaired owing to its quantum-crystal nature.

15. H2CS abundances and ortho-to-para ratios in interstellar clouds NASA Technical Reports Server (NTRS) Minh, Y. C.; Irvine, W. M.; Brewer, M. K. 1991-01-01 Several H2CS ortho and para transitions have been observed toward interstellar molecular clouds, including cold, dark clouds and star-forming regions. H2CS fractional abundances f(H2CS) of about (1-2) × 10⁻⁹ relative to molecular hydrogen toward TMC-1, Orion A, and NGC 7538, and about 5 × 10⁻¹⁰ for L134N, are derived. The H2CS ortho-to-para ratios in TMC-1 are about 1.8 toward the cyanopolyyne peak and the ammonia peak, which may indicate thermalization of H2CS on 10 K grains. A ratio of about 3, the statistical value, is derived for Orion (3N, 1E) and NGC 7538, while a value of about 2 is found for Orion (KL).

16. Control of Photoluminescence of Carbon Nanodots via Surface Functionalization using Para-substituted Anilines PubMed Central Kwon, Woosung; Do, Sungan; Kim, Ji-Hee; Seok Jeong, Mun; Rhee, Shi-Woo 2015-01-01 Carbon nanodots (C-dots) are a kind of fluorescent carbon nanomaterial, composed of polyaromatic carbon domains surrounded by amorphous carbon frames, and have attracted a great deal of attention because of their interesting properties. There are still, however, challenges ahead, such as blue-biased photoluminescence, spectral broadness and undefined energy gaps. In this report, we chemically modify the surface of C-dots with a series of para-substituted anilines to control their photoluminescence. Our surface functionalization endows the C-dots with new energy levels, giving long-wavelength (up to 650 nm) photoluminescence of very narrow spectral width. The roles of the para-substituted anilines and their substituents in developing these energy levels are studied in detail using transient absorption spectroscopy. We finally demonstrate light-emitting devices exploiting our C-dots as a phosphor, converting UV light to a variety of colors with internal quantum yields of ca. 20%. PMID:26218869

17. Jupiter's Tropospheric Dynamics from SOFIA Mapping of Temperature, Para-Hydrogen, and Aerosols de Pater, Imke We request time with FORCAST to observe Jupiter at mid-infrared wavelengths using 8-37 micron grism spectroscopy of the collisionally induced H2-He continuum to derive the zonal-mean tropospheric temperatures and para-H2 distribution. In addition, we request imaging in discrete filters between 5 and 37 micron to provide spatial context for the spectroscopy. This proposal is a follow-up of our successful observations in May 2014, in which we confirmed the N-S polar asymmetry in the para-H2 fraction detected by Voyager 1, also during late summer in Jupiter's northern hemisphere. In spring 2017, during a world-wide campaign in support of the Juno mission, Jupiter approaches southern summer solstice. This timing is ideal for assessing seasonal variability on the planet.

18. Charge Transfer Directed Radical Substitution Enables para-Selective C–H Functionalization PubMed Central Boursalian, Gregory B.; Ham, Won Seok; Mazzotti, Anthony R.; Ritter, Tobias 2016-01-01 Efficient C–H functionalization requires selectivity for specific C–H bonds.
Progress has been made for directed aromatic substitution reactions to achieve ortho and meta selectivity, but a general strategy for para-selective C–H functionalization has remained elusive. Herein, we introduce a previously unappreciated concept which enables nearly complete para selectivity. We propose that radicals with high electron affinity elicit arene-to-radical charge transfer in the transition state of radical addition, which is the factor primarily responsible for the high positional selectivity. We demonstrate that the selectivity is predictable by a simple theoretical tool and show the utility of the concept through a direct synthesis of aryl piperazines. Our results contradict the notion, widely held by organic chemists, that radical aromatic substitution reactions are inherently unselective. The concept of charge transfer directed radical substitution could serve as the basis for the development of new, highly selective C–H functionalization reactions. PMID:27442288

20. ROTATIONAL SPECTROSCOPY OF THE CO-PARA-H2 MOLECULAR COMPLEX SciTech Connect Potapov, A. V.; Surin, L. A.; Giesen, T. F.; Schlemmer, S.; Panfilov, V. A.; Dumesh, B. S.; Raston, P. L.; Jaeger, W. 2009-10-01 The rotational spectrum of the CO-para-H2 van der Waals complex, produced in a molecular jet expansion, was observed with two different techniques: OROTRON intracavity millimeter-wave spectroscopy and pulsed Fourier transform microwave spectroscopy. Thirteen transitions in the frequency range from 80 to 130 GHz and two transitions in the 14 GHz region were measured and assigned, allowing a precise determination of the corresponding energy level positions of CO-para-H2. The data obtained enable further radio astronomical searches for this molecular complex and provide a sensitive test of the currently best available intermolecular potential energy surface for the CO-H2 system.

1. La Experiencia Mexicana (The Mexican Experience). Volumes I and II. ERIC Educational Resources Information Center Finer, Neal B.
Designed to be used as part of a comprehensive social studies program on Mexican culture, this two-volume manual, written in Spanish, offers an instructional package on Mexican culture, stressing an art-architecture perspective, which can be used at the secondary, college and adult levels. The teacher's guide, Volume I, includes a discussion of a…

2. Adult Latino College Students: Experiencias y la Educacion ERIC Educational Resources Information Center Garza, Ana Lisa 2011-01-01 The study aimed to gain a better understanding of the learning experiences of adult Latino college students, as described directly in their own voices. The study was guided by two research questions: RQ1: "How do adult Latinos describe their undergraduate college learning experiences?" and RQ2: "How do culture, gender, and ethnic…

3. Nuestros Sentimientos Son Iguales, La Diferencia Es En La Experiencia ERIC Educational Resources Information Center Palomares, Uvaldo H. 1971-01-01 The author concludes that counselors may be the prime cause of miscommunication and prejudicial evaluation in relations with persons from divergent racial and ethnic groups. Counselors must recognize and value the ethnicity of other persons if they are to foster an open, trusting, and productive counseling relationship with them. (Author/BY)

4. Expresiones de Desarrollo Profesional en Educadoras Principiantes y con Experiencia. ERIC Educational Resources Information Center Betsalel-Presser, Raquel 1986-01-01 Attempts to identify the conditions which affect the professional development of both experienced and inexperienced teachers. After presenting the theoretical and conceptual framework of the problem, discusses professional development needs identified by preschool and primary school teachers in an exploratory study done in Montreal, Quebec…
9. Complexation and determination of palladium (II) ion with para-Cl-phenylazo-R-acid spectrophotometrically. PubMed Hanna, W G 1999-11-15 The complexation of para-Cl-phenylazo-R-acid azo dye with Pd(II) has been studied spectrophotometrically. The protonation constant (pKa) of the ligand has been calculated, and the conditional stability constants of the para-Cl-phenylazo-R-acid ligand with palladium ion have been determined at a constant temperature (25.0 °C), where the molar ratio of this complex is 1:1 (metal:ligand) with log β1 = 3.75, and 1:2 with log β2 = 8.55. A solid complex of para-Cl-phenylazo-R-acid has been prepared and characterized on the basis of elemental analysis and FTIR spectral data. A procedure for the spectrophotometric determination of Pd(II) using para-Cl-phenylazo-R-acid as a new azo chromophore is proposed; it is rapid, sensitive and highly specific. Beer's law was obeyed in the range 0.50-10.00 ppm at pH 5.0-6.0 to form a violet-red complex (ε = 7.7 × 10⁴ L mol⁻¹ cm⁻¹ at λmax = 560 nm). Metal ions such as Cu(II), Cr(III), La(III), Yb(III), Y(III), and Rh(III) interfere with the complex. The ammonium salt of trimellitic acid is used to precipitate some of the interfering ions, and a scheme for separation of Pd(II) from a synthetic mixture similar in composition to platinum ore or deposit was devised.

10. Role of pelvic and para-aortic lymphadenectomy in abandoned radical hysterectomy in cervical cancer. PubMed Barquet-Muñoz, Salim Abraham; Rendón-Pereira, Gabriel Jaime; Acuña-González, Denise; Peñate, Monica Vanessa Heymann; Herrera-Montalvo, Luis Alonso; Gallardo-Alvarado, Lenny Nadia; Cantú-de León, David Francisco; Pareja, René 2017-01-14

11. ParaDiS on Blue Gene/L: stepping up to the challenge SciTech Connect Hommes, G; Arsenlis, A; Bulatov, V; Cai, W; Cook, R; Hiratani, M; Oppestrup, T; Rhee, M; Tang, M 2006-06-09 This paper reports on the efforts to enable fully scalable simulations of Dislocation Line Dynamics (DLD) for direct calculations of the strength of crystalline materials. DLD simulations are challenging and do not lend themselves naturally to parallel computing. Through a combination of novel physical approaches, mathematical algorithms and computational science developments, a new DLD code, ParaDiS, is shown to take meaningful advantage of BG/L and, by doing so, to enable discovery-class science by computation.

12. Parallel Visualization and Analysis with ParaView on a Cray XT4 SciTech Connect Patchett, John; Ahrens, James; Ahern, Sean; Pugmire, Dave 2009-01-01 Scientific data sets produced by modern supercomputers like ORNL's Cray XT4, Jaguar, can be extremely large, making visualization and analysis more difficult, as moving large resultant data to dedicated analysis systems can be prohibitively expensive. We share our continuing work of integrating a parallel visualization system, ParaView, on ORNL's Jaguar system and our efforts to enable extreme scale interactive data visualization and analysis. We will discuss porting challenges and present performance numbers.
13. 16 μm para-H2 stimulated Raman laser SciTech Connect Jin Chunzhi; Lin Taiji; Wu Xuhua; Niu Zhenya; Ding Yishan; Li Dianjun; Zhu Youxin; Li Yulan; Yang Jinfeng; Wang Naihong; and others 1988-08-01 A para-H2 stimulated Raman laser pumped by a TEA CO2 laser was developed. The main factors affecting Raman conversion efficiency are discussed. At a working temperature of 100 K, the maximum output energy of the Stokes beam at 16 μm was 536 mJ, corresponding to an energy conversion efficiency of 13% and a quantum conversion efficiency greater than 20%.

14. Synthesis and Properties of Rodlike Aromatic Heterocyclic Polymers: Phenylated Para-Terphenylene-Polybenzobisoxazoles DTIC Science & Technology Wolfe, J. F.; Arnold, F. E. 1978-12-01 Keywords: para-ordered polymers; polybenzobisoxazoles; polyphenylated terphenylene. …poly(amide hydrazide) fibers 1-4 recently described in the literature meet the first two of these criteria. The overriding structural feature of…

15. ParaDiS on BlueGene/L: scalable line dynamics SciTech Connect Bulatov, V; Cai, W; Fier, J; Hiratani, M; Pierce, T; Tang, M; Rhee, M; Yates, R K; Arsenlis, A 2004-04-29 We describe an innovative highly parallel application program, ParaDiS, which computes the plastic strength of materials by tracing the evolution of dislocation lines over time. We discuss the issues of scaling the code to tens of thousands of processors, and present early scaling results of the code run on a prototype of the BlueGene/L supercomputer being developed by IBM in partnership with the US DOE's ASC program.

16. Aplicación del Teorema de Nekhorochev para tiempos de estabilidad en Mecánica Celeste Miloni, O.; Núñez, J.; Brunini, A. In Celestial Mechanics, one of the central problems is the determination of stability times. Nekhorochev's theorem provides a method for this study, for a system described by a Hamiltonian written in action-angle variables. The work consists in bounding both the perturbing potential and the Hessian matrix of the integrable Hamiltonian in order to then determine the stability time of the system, where stability is understood as the separation, in the sup norm, in the space of the actions.
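For reference, the stability estimate behind the theorem invoked above has the following canonical form (a textbook statement for a near-integrable Hamiltonian H(I, φ) = H0(I) + εH1(I, φ) with steep H0; the constants a, b, c, C, T0 below are generic, not the specific bounds derived in this work):

$$\|I(t) - I(0)\|_{\infty} \;\le\; C\,\varepsilon^{b} \qquad \text{for all } |t| \le T_{0}\,\exp\!\left(c\,\varepsilon^{-a}\right),$$

which is exactly the sup-norm separation in action space that the abstract refers to.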
17. Beyond Hox: the role of ParaHox genes in normal and malignant hematopoiesis. PubMed Rawat, Vijay P S; Humphries, R Keith; Buske, Christian 2012-07-19 During the past decade it was recognized that homeobox gene families such as the clustered Hox genes play pivotal roles both in normal and malignant hematopoiesis. More recently, similar roles have also become apparent for members of the ParaHox gene cluster, evolutionarily closely related to the Hox gene cluster. This is in particular found for the caudal-type homeobox (Cdx) genes, known to act as upstream regulators of Hox genes. The CDX gene family member CDX2 is among the most frequently aberrantly expressed proto-oncogenes in human acute leukemias and is highly leukemogenic in experimental models. Correlative studies indicate that CDX2 functions as master regulator of perturbed HOX gene expression in human acute myeloid leukemia, locating this ParaHox gene at a central position for initiating and maintaining HOX gene dysregulation as a driving leukemogenic force. There are still few data about potential upstream regulators initiating aberrant CDX2 expression in human leukemias or about critical downstream targets of CDX2 in leukemic cells. Characterizing this network will hopefully open the way to therapeutic approaches that target deregulated ParaHox genes in human leukemia.

18. ParaView visualization of Abaqus output on the mechanical deformation of complex microstructures Liu, Qingbin; Li, Jiang; Liu, Jie 2017-02-01 Abaqus® is a popular software suite for finite element analysis. It delivers linear and nonlinear analyses of mechanics and fluid dynamics, and includes multi-body system and multi-physics coupling. However, the visualization capability of Abaqus through its CAE module is limited. Models from microtomography have extremely complicated structures, and datasets of Abaqus output are huge, requiring a visualization tool more powerful than Abaqus/CAE. We convert Abaqus output into the XML-based VTK format by developing a Python script and then use ParaView to visualize the results. Capabilities such as volume rendering, tensor glyphs, superior animation and other filters allow ParaView to offer excellent visualizations. ParaView's parallel visualization makes it possible to visualize very big data. To support fully parallel visualization, the Python script achieves data partitioning by reorganizing all nodes, elements and the corresponding results on those nodes and elements. The data partition scheme minimizes data redundancy and works efficiently. Given its good readability and extendibility, the script can be extended to the processing of more problems in Abaqus. We share the script with Abaqus users on GitHub.

19. The "drinking-buddy" scale as a measure of para-social behavior. PubMed Powell, Larry; Richmond, Virginia P; Cantrell-Williams, Glenda 2012-06-01 Para-social behavior is a form of quasi-interpersonal behavior that results when audience members develop bonds with media personalities that can resemble interpersonal social interaction, but it is not usually applied to political communication. This study tested whether the "Drinking-Buddy" Scale, a simple question frequently used in political communication, could be interpreted as a single-item measure of para-social behavior with respect to political candidates, in terms of image judgments related to interpersonal attraction and perceived similarity to self. The participants were college students who had voted in the 2008 election. They rated the candidates, Obama or McCain, as drinking buddies and then rated the candidates' perceived similarity to themselves in attitude and background, as well as their social and task attraction to the candidate. If the drinking-buddy rating serves as a proxy measure for para-social behavior, participants' ratings for all four kinds of similarity to and attraction toward a candidate would be expected to be higher for the candidate they chose as a drinking buddy. The directional hypotheses were supported for interpersonal attraction, but not for perceived similarity. These results indicate that the drinking-buddy scale predicts ratings of interpersonal attraction, while voters may view perceived similarity as an important but not essential factor in their candidate preference.

20. Atlas de aves: Un metodo para documentar distribucion y seguir poblaciones USGS Publications Warehouse Robbins, C.S.; Dowell, B.A.; Dawson, D.K.; Alvarez-Lopez, Humberto; Kattan, Gustavo; Murcia, Carolina 1988-01-01
1. ParaDock: a flexible non-specific DNA–rigid protein docking algorithm. PubMed Banitt, Itamar; Wolfson, Haim J 2011-11-01 Accurate prediction of protein–DNA complexes could provide an important stepping stone towards a thorough comprehension of vital intracellular processes. Few attempts have been made to tackle this issue, focusing on binding patch prediction, protein function classification and distance constraints-based docking. We introduce ParaDock: a novel ab initio protein–DNA docking algorithm. ParaDock combines short DNA fragments, which have been rigidly docked to the protein based on geometric complementarity, to create bent planar DNA molecules of arbitrary sequence. Our algorithm was tested on the bound and unbound targets of a protein–DNA benchmark comprising 47 complexes. With neither addressing protein flexibility, nor applying any refinement procedure, CAPRI-acceptable solutions were obtained among the 10 top ranked hypotheses in 83% of the bound complexes, and 70% of the unbound. Without requiring prior knowledge of DNA length and sequence, and within <2 h per target on a standard 2.0 GHz single processor CPU, ParaDock offers a fast ab initio docking solution.

2. The ortho:para-H2 ratio in C- and J-type shocks Wilgenbus, D.; Cabrit, S.; Pineau des Forêts, G.; Flower, D. R. 2000-04-01 We have computed extensive grids of models of both C- and J-type planar shock waves, propagating in dark, cold molecular clouds, in order to study systematically the behaviour of the ortho:para-H2 ratio. Careful attention was paid to both macroscopic (dynamical) and microscopic (chemical reactions and collisional population transfer in H2) aspects. We relate the predictions of the models to observational determinations of the ortho:para-H2 ratio using both pure rotational lines and rovibrational lines. As an illustration, we consider ISO and ground-based H2 observations of HH 54. Neither planar C-type nor planar J-type shocks appear able to account fully for these observations. Given the additional constraints provided by the observed ortho:para-H2 ratios, a C-type bowshock, or a C-type precursor followed by a J-type shock, remain as plausible models. Tables 2a-f and 4a-f are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html
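The equilibrium quantity these models track follows directly from nuclear-spin statistics and the rotational ladder of H2. A minimal R sketch, assuming a rigid rotor with rotational constant B0/k ≈ 85.3 K (the constant and the cutoff Jmax are my own inputs, not values from the paper):

```r
# Equilibrium (LTE) ortho:para ratio of H2 as a function of temperature.
# Rigid-rotor sketch: E_J = B*J*(J+1) with B in kelvin; odd J levels are
# ortho (nuclear-spin weight 3), even J levels are para (weight 1).
opr <- function(Temp, B = 85.3, Jmax = 30) {
  J <- 0:Jmax
  w <- (2 * J + 1) * exp(-B * J * (J + 1) / Temp)
  3 * sum(w[J %% 2 == 1]) / sum(w[J %% 2 == 0])
}
sapply(c(10, 20, 50, 100, 300), opr)
# -> essentially 0 in cold gas, rising towards the high-temperature value of 3
```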
3. Theoretical study of the design of a catalyst for para to ortho hydrogen conversion NASA Technical Reports Server (NTRS) Coffman, Robert E. 1992-01-01 The theory of Petzinger and Scalapino (1973) was thoroughly reviewed, and all of the basic equations for paramagnetic para to ortho hydrogen catalysis re-derived. There are only a few minor phase errors and errors of omission in the description of the theory. Three models (described by Petzinger and Scalapino) for the rate of para to ortho H2 catalysis were worked out, and uniform agreement obtained to within a constant factor of 2π. The analytical methods developed in the course of this study were then extended to two new models, which more adequately describe the process of surface catalysis, including transfer of hydrogen molecules onto and off of the surface. All five equations for the para to ortho catalytic rate of conversion are described. The two new equations describe the catalytic rate for these models: H2 on the surface is a 2-D gas with lifetime τ; and H2 on the surface is a 2-D liquid undergoing Brownian motion (diffusion) with surface lifetime τ.

4. ParaSAM: a parallelized version of the significance analysis of microarrays algorithm PubMed Central Sharma, Ashok; Zhao, Jieping; Podolsky, Robert; McIndoe, Richard A. 2010-01-01 Motivation: Significance analysis of microarrays (SAM) is a widely used permutation-based approach to identifying differentially expressed genes in microarray datasets. While SAM is freely available as an Excel plug-in and as an R package, analyses are often limited for large datasets due to very high memory requirements. Summary: We have developed a parallelized version of the SAM algorithm called ParaSAM to overcome the memory limitations. This high-performance multithreaded application provides the scientific community with an easy and manageable client-server Windows application with a graphical user interface and does not require programming experience to run. The parallel nature of the application comes from the use of web services to perform the permutations. Our results indicate that ParaSAM is not only faster than the serial version, but also can analyze extremely large datasets that cannot be processed using existing implementations. Availability: A web version open to the public is available at http://bioanalysis.genomics.mcg.edu/parasam. For local installations, both the Windows and web implementations of ParaSAM are available for free at http://www.amdcc.org/bioinformatics/software/parasam.aspx Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20400455
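ParaSAM distributes its permutations over web services; the core idea, farming SAM's permutation loop out to concurrent workers, can be sketched in a few lines of R with toy data and a deliberately simplified d-statistic (an illustration of the approach, not the ParaSAM code):

```r
# Toy SAM-style permutation null, computed in parallel with base R's
# 'parallel' package (ParaSAM itself uses web-service workers instead).
library(parallel)
set.seed(1)
X   <- matrix(rnorm(1000 * 20), nrow = 1000)   # 1000 genes x 20 arrays
grp <- rep(0:1, each = 10)                     # two groups of 10 arrays
dstat <- function(x, g, s0 = 0.05) {           # simplified SAM d-statistic
  m <- tapply(x, g, mean)
  s <- sqrt(sum(tapply(x, g, var) / tabulate(factor(g))))
  (m["1"] - m["0"]) / (s + s0)                 # s0 is the usual fudge factor
}
d.obs  <- apply(X, 1, dstat, g = grp)
d.null <- mclapply(1:200, function(i) apply(X, 1, dstat, g = sample(grp)),
                   mc.cores = 2)               # fork-based; use parLapply() on Windows
```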
5. Base de linhas moleculares para síntese espectral estelar Milone, A.; Sanzovo, G. 2003-08-01 The analysis of photospheric chemical abundances in solar-type or late-type stars, through the theoretical computation of their spectra, employs high-resolution spectroscopy and requires a representative database of atomic and molecular lines with well-determined constants. In this work we took as a starting point the extensive lists of spectral lines of electronic systems of some diatomic molecules compiled by Kurucz to build a molecular line database for stellar spectral synthesis. We revised the determination of the Hönl-London rotational factors of the oscillator strengths of the molecular lines, for each vibrational band of selected electronic systems, following the usual normalization rule. We adopted the electronic oscillator strengths from the literature. The Franck-Condon vibrational factors of each band were specially recalculated using new molecular constants. We successfully reproduced the spectral absorption of selected electronic-vibrational bands of the molecular species C12C12, C12N14 and Mg24H in spectra of reference stars such as the Sun and Arcturus.

6. Madres para la Salud: design of a theory-based intervention for postpartum Latinas. PubMed Keller, Colleen; Records, Kathie; Ainsworth, Barbara; Belyea, Michael; Permana, Paska; Coonrod, Dean; Vega-López, Sonia; Nagle-Williams, Allison 2011-05-01 Weight gain in young women suggests that childbearing may be an important contributor to the development of obesity in women. Depressive symptoms can interfere with resumption of normal activity levels following childbirth or with the initiation of or adherence to physical activity programs essential for losing pregnancy weight. Depression symptoms may function directly to promote weight gain through a physiologic mechanism. Obesity and its related insulin resistance may contribute to depressed mood physiologically. Although physical activity has well-established beneficial effects on weight management and depression, women tend to under-participate in physical activity during the childbearing years. Further, the mechanisms underpinning the interplay of overweight, obesity, physical activity, depression, and inflammatory processes are not clearly explained. This report describes the theoretical rationale, design considerations, and cultural relevance for "Madres para la Salud" [Mothers for Health]. Madres para la Salud is a 12-month prospective, randomized controlled trial exploring the effectiveness of a culturally specific intervention using "bouts" of physical activity to effect changes in body fat, systemic and fat tissue inflammation, and postpartum depression symptoms in sedentary postpartum Latinas. The significance and innovation of Madres para la Salud includes use of a theory-driven approach to intervention, specification and cultural relevance of a social support intervention, use of a Promotora model to incorporate cultural approaches, use of objective measures of physical activity in postpartum Latina women, and the examination of biomarkers indicative of cardiovascular risk related to physical activity behaviors in postpartum Latinas. Copyright © 2011 Elsevier Inc. All rights reserved.

7. Observing para-H2D+ absorption towards Serpens SMM1 Schlemmer, Stephan 2013-10-01 We propose to observe the ground-state rotational line of para-H2D+ at 1370 GHz (λ = 218 μm) with SOFIA/GREAT. The line is predicted to be detectable in absorption towards the luminous low-mass Class 0 protostar Serpens SMM1. This northern source consists of a warm, edge-on dust disk with strong FIR continuum emission (280 Jy at 218 μm) surrounded by a massive, cool envelope. According to our estimates the para-H2D+ line is optically thick (τ = 1) and will absorb about 60% of the continuum emission. The expected line absorption signal is at least 0.2 K in T_A*, but only ~1 km/s wide. Even with SOFIA/GREAT the proposed project is challenging and needs deep integration, but it also renders possible an unambiguous detection of para-H2D+. As the object is particularly well studied in molecular lines and dust continuum, the proposed observations will allow us to test our understanding of the deuterium chemistry.
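The quoted absorption depth is just the optical-depth relation evaluated at the stated τ; as a one-line check (not a number taken from the proposal):

$$\frac{\Delta I}{I_{\mathrm{cont}}} \;=\; 1 - e^{-\tau} \;=\; 1 - e^{-1} \;\approx\; 0.63,$$

consistent with the "about 60%" absorption quoted above.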
8. [Molecular genetic basis for para-Bombay phenotypes in two cases]. PubMed He, Yang-Ming; Xu, Xian-Guo; Zhu, Fa-Ming; Yan, Li-Xing 2007-06-01 This study was purposed to investigate the molecular genetic basis of the para-Bombay phenotype. The para-Bombay phenotype of two probands was identified by routine serological techniques. The full coding region of the α(1,2)-fucosyltransferase genes (FUT1 and FUT2) of the probands was amplified by polymerase chain reaction and the amplified fragments were directly sequenced; the mutations of FUT1 were also identified by TOPO TA cloning sequencing. The results indicated that two heterozygous mutations were detected by direct sequencing in the two probands: an AG deletion at position 547-552 and a C to T mutation at position 658. The two different mutations were confirmed to be true compound heterozygotes, with each mutation on a separate homologous chromosome, by TOPO TA cloning sequencing. The AG deletion at position 547-552 caused a reading frame shift and a premature stop codon. The C658T mutation resulted in Arg→Cys at amino acid position 220. It is suggested that the FUT1 mutations of the two probands are compound heterozygous mutations on different chromosomes, which are named h1h3 and may be the genetic basis of their para-Bombay phenotype.

9. [Formation of para-Bombay phenotype caused by homozygous or heterozygous mutation of FUT1 gene]. PubMed Zhang, Jin-Ping; Zheng, Yan; Sun, Dong-Ni 2014-02-01 This study was aimed to explore the molecular mechanisms of para-Bombay phenotype formation. The H antigen of these individuals was identified by serological techniques. The full coding region of the α(1,2)-fucosyltransferase (FUT1) gene of these individuals was amplified by high-fidelity polymerase chain reaction (PCR). The PCR product was identified by TOPO cloning sequencing. Analysis and comparison were used to explore the mechanisms of para-Bombay phenotype formation in these individuals. The results indicated that the full coding region of FUT1 DNA was successfully amplified by PCR and gel electrophoresis. DNA sequencing and analysis found that h1 (547-552delAG) existed on one chromosome and h4 (35C>T) existed on the other chromosome of individual No. 1. Meanwhile, h1 (547-552delAG) was found on both chromosomes of individuals No. 2 and No. 3. This means that the FUT1 gene of individual No. 1 was an h1h4 heterozygote, and the FUT1 genes of individuals No. 2 and No. 3 were h1h1 homozygotes. It is concluded that homozygous and heterozygous mutations of the FUT1 gene can lead to the formation of the para-Bombay phenotype.

10. [Para-Bombay phenotype caused by combined heterozygote of two bases deletion on fut1 alleles]. PubMed Ma, Kan-Rong; Tao, Shu-Dan; Lan, Xiao-Fei; Hong, Xiao-Zhen; Xu, Xian-Guo; Zhu, Fa-Ming; Lü, Hang-Jun; Yan, Li-Xing 2011-02-01 This study was purposed to investigate the molecular basis of a para-Bombay phenotype for the screening and identification of rare blood groups. The ABO and H phenotypes of the proband were identified by serological techniques. Exons 6 to 7 of the ABO gene and the full coding region of the α-1,2-fucosyltransferase (fut1) gene of the proband were analyzed by polymerase chain reaction and direct sequencing of the amplified fragments. The haplotype of the compound heterozygote of fut1 was also identified by cloning sequencing. The results indicated that a rare para-Bombay phenotype was confirmed by serological techniques. Two deletion or insertion variant sites near nucleotides 547 and 880 were detected in the fut1 gene. The results of cloning sequencing showed that one haplotype of the fut1 gene was a two-base deletion at 547-552 (AGAGAG→AGAG), and the other was a two-base deletion at position 880-882 (TTT→T). Both variants caused a reading frame shift and a premature stop codon. It is concluded that a rare para-Bombay phenotype is found and confirmed in the blood donor population. The molecular basis in this individual is a compound heterozygote of two-base deletions in the fut1 gene, which weakens the activity of α-1,2-fucosyltransferase.

11. Fragrance material review on 1-(para-menthen-6-yl)-1-propanone. PubMed Scognamiglio, J; Letizia, C S; Api, A M 2013-12-01 A toxicologic and dermatologic review of 1-(para-menthen-6-yl)-1-propanone when used as a fragrance ingredient is presented. 1-(para-Menthen-6-yl)-1-propanone is a member of the fragrance structural group Alkyl Cyclic Ketones. These fragrances can be described as being composed of an alkyl group, R1, and various substituted and bicyclic saturated or unsaturated cyclic hydrocarbons, R2, in which one of the rings may include up to 12 carbons.
Alternatively, R2 may be a carbon bridge of C2-C4 chain length between the ketone and the cyclic hydrocarbon. This review contains a detailed summary of all available toxicology and dermatology papers that are related to this individual fragrance ingredient and is not intended as a stand-alone document. Available data for 1-(para-menthen-6-yl)-1-propanone were evaluated and summarized, and include physical properties, acute toxicity, skin irritation, and skin sensitization data. A safety assessment of the entire Alkyl Cyclic Ketones group will be published simultaneously with this document; please refer to Belsito et al. (2013) [Belsito, D., Bickers, D., Bruze, M., Calow, P., Dagli, M., Fryer, A.D., Greim, H., Miyachi, Y., Saurat, J.H., Sipes, I.G., 2013. A Toxicologic and Dermatologic Assessment of Alkyl Cyclic Ketones When Used as Fragrance Ingredients. Submitted with this manuscript.] for an overall assessment of the safe use of this material and all Alkyl Cyclic Ketones in fragrances. Copyright © 2013. Published by Elsevier Ltd.

12. Pelvic and para-aortic lymphadenectomy for surgical staging of endometrial cancer: morbidity and mortality. PubMed Larson, D M; Johnson, K; Olson, K A 1992-06-01 This analysis compared retrospectively the morbidity and mortality of patients with endometrial cancer who had total abdominal hysterectomy with bilateral salpingo-oophorectomy (TAH/BSO) alone or with pelvic and para-aortic lymphadenectomy performed by the same surgeon at one private institution. Between August 1987 and March 1991, 77 women with endometrial cancer were staged surgically by a standard protocol without preoperative radiotherapy. Thirty-five patients (45%) had TAH/BSO alone and 42 (55%) had TAH/BSO with pelvic and para-aortic lymphadenectomy. The median number of lymph nodes removed was 18. Patients having lymphadenectomy had an increased mean (± standard deviation) operative time (129 ± 29 versus 87 ± 26 minutes; P < .0001), increased mean estimated blood loss (391 ± 192 versus 272 ± 219 mL; P = .013), and a longer postoperative hospital stay (P = .017) compared with patients having TAH/BSO alone. However, there was no difference in transfusion rate, febrile morbidity, postoperative complications, or mortality. We conclude that pelvic and para-aortic lymphadenectomy can be added to TAH/BSO in patients with endometrial cancer without a clinically significant increase in morbidity or mortality.
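The reported operative-time difference can be reproduced from the summary statistics alone. A Welch-style recomputation in R, using only the means, SDs and group sizes quoted above (the paper's own test may have differed):

```r
# Welch t from published summary statistics: 129 +/- 29 min (n = 42) with
# lymphadenectomy versus 87 +/- 26 min (n = 35) without.
m <- c(129, 87); s <- c(29, 26); n <- c(42, 35)
tval  <- (m[1] - m[2]) / sqrt(sum(s^2 / n))             # about 6.7
dfree <- sum(s^2 / n)^2 / sum((s^2 / n)^2 / (n - 1))    # Welch-Satterthwaite df
2 * pt(-abs(tval), dfree)                               # << .0001, as reported
```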
13. El diseño final del espectrógrafo de banco (EBASIM) para CASLEO Simmons, J.; Levato, H. Using the ACCOS V optics code, the design of the bench spectrograph for CASLEO has been completed. In an earlier communication we had indicated that we would use a 150 mm diameter collimator with a radius of curvature of 1540 mm. For the camera mirror, which has a diameter of 200 mm, the radius of curvature is 1200 mm, both radii with a tolerance of no more than 3 mm. Here we report the final details of the spectrograph computation, which includes calculations for 5 different wavelengths and about 100 rays. In all cases 75% of the energy falls within a diameter of 13 microns. The design has been tested from 3500 Å to 9000 Å with satisfactory results.

15. The ortho:para ratio of H3+ in laboratory and astrophysical plasmas SciTech Connect Crabtree, Kyle N.; Indriolo, Nick; Kreckel, Holger; McCall, Benjamin J. 2015-01-22 The discovery of H3+ in the diffuse interstellar medium has dramatically changed our view of the cosmic-ray ionization rate in diffuse molecular clouds. However, another surprise has been that the ortho:para ratio of H3+ in these clouds is inconsistent with the temperature derived from the excitation of H2, the dominant species in these clouds. In an effort to understand this discrepancy, we have embarked on an experimental program to measure the nuclear spin dependence of the dissociative electron recombination rate of H3+ using the CRYRING and TSR ion storage rings. We have also performed the first measurements of the reaction H3+ + H2 → H2 + H3+ below room temperature. This reaction is likely the most common bimolecular reaction in the universe, and plays an important role in interconverting ortho- and para-H3+. Finally, we have constructed a steady-state chemical model for diffuse clouds, which takes into account the spin dependence of the formation of H3+, its electron recombination, and its reaction with H2. We find that the ortho:para ratio of H3+ in diffuse clouds is likely governed by a competition between dissociative recombination and thermalization by reactive collisions.

16. Electrical detection of ortho–para conversion in fullerene-encapsulated water PubMed Central Meier, Benno; Mamone, Salvatore; Concistrè, Maria; Alonso-Valdesueiro, Javier; Krachmalnicoff, Andrea; Whitby, Richard J.; Levitt, Malcolm H. 2015-01-01 Water exists in two spin isomers, ortho and para, that have different nuclear spin states. In bulk water, rapid proton exchange and hindered molecular rotation obscure the direct observation of the two spin isomers. The supramolecular endofullerene H2O@C60 provides freely rotating, isolated water molecules even at cryogenic temperatures. Here we show that the bulk dielectric constant of this substance depends on the ortho/para ratio, and changes slowly in time after a sudden temperature jump, due to nuclear spin conversion.
The attribution of the effect to ortho–para conversion is validated by comparison with nuclear magnetic resonance and quantum theory. The change in dielectric constant is consistent with an electric dipole moment of 0.51 ± 0.05 debye for an encapsulated water molecule, indicating partial shielding of the water dipole by the encapsulating cage. The dependence of the bulk dielectric constant on nuclear spin isomer composition appears to be a previously unreported physical phenomenon. PMID:26299447

17. The ParaShield entry vehicle concept - Basic theory and flight test development Akin, David L. The ParaShield concept of the Space Systems Laboratory is an ultra-low ballistic coefficient entry vehicle, created to meet the need for entry vehicle technology to return mass from low earth orbit. The concept involves decoupling the ballistic coefficient from the launch vehicle parameters, in order to pick the value of β which optimizes the desired entry vehicle characteristics. Trajectory simulations show that, as the ballistic coefficient is lowered to the range of 100-150 Pa, the total heat load and peak heating flux drop markedly, due to primary deceleration occurring in regions of extremely low dynamic pressure. These same low values of β also result in a low terminal velocity, allowing the use of simple impact attenuation to provide a soft landing on water or dry land. Because the deployable fabric framework serves the functions of both heat shield and parachute, it is referred to as a ParaShield. The experience gained from the design, construction, and integration of a ParaShield test vehicle is discussed.

18. Para-GMRF: parallel algorithm for anomaly detection of hyperspectral image Dong, Chao; Zhao, Huijie; Li, Na; Wang, Wei 2007-12-01 The hyperspectral imager is capable of collecting hundreds of images corresponding to different wavelength channels for the observed area simultaneously, which makes it possible to discriminate man-made objects from the natural background. However, the price paid for this wealth of information is an enormous amount of data, usually hundreds of gigabytes per day. Turning the huge volume of data into useful information and knowledge in real time is critical for geoscientists. In this paper, the proposed parallel Gaussian-Markov random field (Para-GMRF) anomaly detection algorithm is an attempt to apply parallel computing technology to this problem. Based on the locality of the GMRF algorithm, we partition the 3-D hyperspectral image cube in the spatial domain and distribute data blocks to multiple computers for concurrent detection. Meanwhile, to achieve load balance, a work pool scheduler is designed for task assignment. The Para-GMRF algorithm is organized in a master-slave architecture, coded in the C programming language using the message passing interface (MPI) library, and tested on a Beowulf cluster. Experimental results show that Para-GMRF successfully conquers the challenge and can be used in time sensitive areas, such as environmental monitoring and battlefield reconnaissance.
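The spatial-partitioning idea is independent of the particular detector, so the master-worker layout can be sketched compactly in R. In the sketch below a per-pixel RX (Mahalanobis-distance) detector stands in for the paper's GMRF model, and the data are synthetic:

```r
# Tile a hyperspectral cube in the spatial domain and score tiles concurrently.
# RX detector (per-pixel Mahalanobis distance) is a stand-in for Para-GMRF's
# Gaussian-Markov random field model; the cube is toy data.
library(parallel)
cube  <- array(rnorm(64 * 64 * 50), c(64, 64, 50))  # 64x64 pixels, 50 bands
tiles <- expand.grid(i = 0:1, j = 0:1)              # 2x2 spatial partition
score_tile <- function(k) {
  ix <- tiles$i[k] * 32 + 1:32
  iy <- tiles$j[k] * 32 + 1:32
  X  <- matrix(cube[ix, iy, ], ncol = dim(cube)[3]) # rows = pixels, cols = bands
  mahalanobis(X, colMeans(X), cov(X))               # anomaly score per pixel
}
scores <- mclapply(seq_len(nrow(tiles)), score_tile, mc.cores = 2)
```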
19. Quantum chemical and experimental study of 1,2,4-trihydroxy-para-menthane Rottmannová, Lenka; Lukeš, Vladimír; Ilčin, Michal; Fodran, Peter; Herich, Peter; Kožíšek, Jozef; Liptaj, Tibor; Klein, Erik 2013-10-01 The conformational analysis of para-menthane (PM) and 1,2,4-trihydroxy-para-menthane (TPM) is performed using quantum chemical density functional theory (DFT) and ab initio Møller-Plesset perturbation theory up to second order (MP2). In TPM, the three hydroxyl groups generate eight stereoisomers, compared to the four para-menthane stereoisomers. From the thermodynamic point of view, the most preferred conformations show the chair-shaped configuration of the cyclohexane ring. The obtained energy barriers for the isopropyl group rotation in the chair-shaped stereoisomers are between 35 and 45 kJ mol⁻¹. The crystal structure as well as the solvated TPM stereoisomer isolated from Tea tree oil, Melaleuca alternifolia (Maiden & Betche) Cheel, were investigated experimentally. The isolated stereoisomer corresponds to the most energetically preferred conformation, and the calculated structural data agree very well with the results from the X-ray and nuclear magnetic resonance measurements. Finally, the influence of the conformation and the presence of intramolecular hydrogen bonds on the homolytic O–H bond dissociation enthalpies and proton affinities is also discussed with respect to simple alcohols (methanol, iso-propanol, iso-pentanol, tert-butanol, cyclohexanol) and phenol.

20. ParaDyn Implementation in the US Navy's DYSMAS Simulation System: FY08 Progress Report SciTech Connect Ferencz, R M; DeGroot, A J; Lin, J I; Zywicz, E; Durrenberger, J K; Sherwood, R J; Corey, I R 2008-07-29 The goal of this project is to increase the computational efficiency and capacity of the Navy's DYSMAS simulation system for full ship shock response to underwater explosion. Specifically, this project initiates migration to a parallel processing capability for the structural portion of the overall fluid-structure interaction model. The capstone objective for the first phase is to demonstrate operation of the DYSMAS simulation engine with a production model on a Naval Surface Warfare Center (IHD) parallel platform, using the ParaDyn code for parallel processing of the structural dynamics. This year saw a successful launch of the effort to integrate ParaDyn, the highly parallel structural dynamics code from Lawrence Livermore National Laboratory (LLNL), into the DYSMAS system for simulating the response of ship structures to underwater explosion (UNDEX). The current LLNL version of DYNA3D, representing ten years of general development beyond the source branch used to initiate DYNA-N customization for DYSMAS, was first connected to the GEMINI flow code through the DYSMAS Standard Coupler Interface (SCI). This permitted an early "sanity check" by Naval Surface Warfare Center, Indian Head Division (NSWC-IHD) personnel that equivalent results were generated for their standard UNDEX test problems, thus ensuring the Verification & Validation pedigree they have developed remains intact. The ParaDyn code was then joined to the SCI in a manner requiring no changes to GEMINI. Three NSWC-IHD engineers were twice hosted at LLNL to become familiar with LLNL computer systems and the execution of the prototype software system, and to begin assessment of its accuracy and performance. Scaling data for the flow solver GEMINI was attained up to a one-billion-cell, 1000-processor run. The NSWC-IHD engineers were granted privileges to continue their evaluations through remote connections to LLNL's Open Computing Facility. Finally, the prototype changes were integrated into the mainline ParaDyn source.

1. Implementação de um algoritmo para a limpeza de mapas da RCFM Souza, C. L.; Wuensche, C. A. 2003-08-01

2. El uso de la neuromodulación para el tratamiento del temblor PubMed Central Bendersky, Damián; Ajler, Pablo; Yampolsky, Claudio 2014-01-01
3. Melhoramentos no código Wilson-Devinney para binárias eclipsantes Vieira, L. A.; Vaz, L. P. R. 2003-08-01

4. Pre-treatment surgical para-aortic lymph node assessment in locally advanced cervical cancer PubMed Central Brockbank, Elly; Kokka, Fani; Bryant, Andrew; Pomel, Christophe; Reynolds, Karina 2014-01-01 Background: Cervical cancer is the most common cause of death from gynaecological cancers worldwide. Locally advanced cervical cancer, FIGO stage IB1 or greater, is treated with chemotherapy and external beam radiotherapy followed by brachytherapy. If there is metastatic para-aortic nodal disease, radiotherapy is extended to additionally cover this area. Because of the increased morbidity, extended-field radiotherapy is ideally given only when para-aortic nodal disease is proven. Therefore accurate assessment of the extent of the disease is very important for planning the most appropriate treatment. Objectives: To evaluate the effectiveness and safety of pre-treatment surgical para-aortic lymph node assessment for women with locally advanced cervical cancer (FIGO stage IB2 to IVA). Search methods: We searched the Cochrane Gynaecological Cancer Group Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2011, Issue 1), MEDLINE and EMBASE (up to January 2011). We also searched registers of clinical trials, abstracts of scientific meetings and reference lists of included studies, and contacted experts in the field. Selection criteria: Randomised controlled trials (RCTs) that compared surgical para-aortic lymph node assessment and dissection with radiological staging techniques, in adult women diagnosed with locally advanced cervical cancer. Data collection and analysis: Two reviewers independently assessed whether potentially relevant trials met the inclusion criteria, abstracted data and assessed risk of bias. One RCT was identified, so no meta-analyses were performed. Main results: We found only one trial, which included 61 women, that met our inclusion criteria. This trial reported data on surgical versus clinical staging and an assessment of the two surgical staging techniques: laparoscopic (LAP) versus extraperitoneal (EXP) surgical staging. The clinical staging was either a contrast-enhanced CT scan or an MRI scan of the abdomen and

5. An Experimental Investigation of Collisions of NH3 with Para-H2 at the Temperatures of Cold Molecular Clouds Willey, D. R.; Timlin, R. E., Jr.; Merlin, J. M.; Sowa, M. M.; Wesolek, D. M. 2002-03-01 Experimentally measured cross sections for collisional broadening of ammonia by J=0 para-H2 are presented for the (J,K) = (1,1), (2,2), and (3,3) NH3 inversion transitions at temperatures from 12.5 to 40 K. The cross sections were obtained in a quasi-equilibrium environment utilizing the collisional cooling technique. These are the first laboratory measurements of interactions between NH3 and J=0 para-H2 at the temperatures of cold molecular clouds. The measured cross sections are compared to theoretical predictions using two existing ab initio NH3-H2 potential surfaces and to previously measured He broadening cross sections. In comparison to He broadening cross sections at these temperatures, the para-H2 results are up to 4 times larger. This is in contrast to 300 K experimental results, where para-H2 cross sections are only 50% larger than He.
6. Conformational isomerism in the solid-state structures of tetracaine and tamoxifen with para-sulphonato-calix[4]arene Danylyuk, Oksana; Monachino, Melany; Lazar, Adina N.; Suwinska, Kinga; Coleman, Anthony W. 2010-02-01 The solid-state complexes between para-sulphonato-calix[4]arene and the drugs tamoxifen and tetracaine show an unusual 4:1 guest-host stoichiometry, with formation of a hydrophobic layer of drug molecules held between bilayers of para-sulphonato-calix[4]arene. In both structures each of the four independent drug molecules adopts a different conformation due to the different modes of interaction with the anionic host, the neighbouring drug cations and water molecules.

7. Effects of para-substituents of styrene derivatives on their chemical reactivity on platinum nanoparticle surfaces Hu, Peiguang; Chen, Limei; Deming, Christopher P.; Lu, Jia-En; Bonny, Lewis W.; Chen, Shaowei 2016-06-01 Stable platinum nanoparticles were successfully prepared by the self-assembly of para-substituted styrene derivatives onto the platinum surfaces as a result of platinum-catalyzed dehydrogenation and transformation of the vinyl groups to acetylene ones, forming platinum-vinylidene/-acetylide interfacial bonds. Transmission electron microscopic measurements showed that the nanoparticles were well dispersed without apparent aggregation, suggesting sufficient protection of the nanoparticles by the organic capping ligands, and the average core diameter was estimated to be 2.0 ± 0.3 nm, 1.3 ± 0.2 nm, and 1.1 ± 0.2 nm for the nanoparticles capped with 4-tert-butylstyrene, 4-methoxystyrene, and 4-(trifluoromethyl)styrene, respectively, as a result of the decreasing rate of dehydrogenation with the increasing Taft (polar) constant of the para-substituents. Importantly, the resulting nanoparticles exhibited unique photoluminescence, where an increase of the Hammett constant of the para-substituents corresponded to a blue shift of the photoluminescence emission, suggesting an enlargement of the HOMO-LUMO band gap of the nanoparticle-bound acetylene moieties. Furthermore, the resulting nanoparticles exhibited apparent electrocatalytic activity towards oxygen reduction in acidic media, with the best performance among the series of samples observed for the 4-tert-butylstyrene-capped nanoparticles, due to an optimal combination of the nanoparticle core size and ligand effects on the bonding interactions between platinum and oxygen species.
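The reported substituent trend is a Hammett-type correlation, which a one-line linear fit makes concrete. In the R sketch below, the σp values are the standard Hammett para constants for the three substituents; the emission energies are placeholders chosen only to illustrate the fit, not data from the paper:

```r
# Hammett-style correlation: emission maximum versus substituent constant.
# sigma_p: standard Hammett para constants; E_eV: hypothetical emission
# energies, used only to illustrate the fitting step.
sigma_p <- c(tBu = -0.20, OMe = -0.27, CF3 = 0.54)
E_eV    <- c(tBu =  2.10, OMe =  2.05, CF3 = 2.35)   # placeholder values
fit <- lm(E_eV ~ sigma_p)
coef(fit)   # a positive slope reproduces the reported blue shift
```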
9. Supported transition metal catalysts for para- to ortho-hydrogen conversion NASA Technical Reports Server (NTRS) Brooks, Christopher J.; Wang, Wei; Eyman, Darrell P. 1994-01-01 The main goal of this study was to develop and improve on existing catalysts for the conversion of ortho- to para-hydrogen. Starting with a commercially available Air Products nickel silicate, which had a beta value of 20, we tried to synthesize catalysts that would be an improvement over AP. This was accomplished by preparing silicates with various metals as well as by different preparation methods. We also prepared supported ruthenium catalysts by various techniques using several metal precursors to improve present technology. It was also found that the activation conditions prior to catalytic testing were highly important for both the silicates and the supported ruthenium catalysts. While not the initial focus of the research, we made some interesting observations on the adsorption of H2 on ruthenium. This helped us to get a better understanding of how ortho- to para-H2 conversion takes place, and what features in a catalyst are important to optimize activity. Reactor design was the final area in which some interesting conclusions were drawn. As discussed earlier, the reactor catalyst bed must be constructed using straight 1/8 in. OD stainless steel tubing. It was determined that the use of 1/4 in. OD tubing caused two problems. First, the radius from the center of the bed to the wall was too great for thermal equilibrium. Since the reaction of ortho- to para-H2 is exothermic, the catalyst bed center was warmer than the edges. Second, the catalyst bed was too shallow using a 1/4 in. tube. This caused reactant blow-by, which was thought to decrease the measured activity when the flow rate was increased. The 1/8 in. tube corrected both of these concerns.

10. Molecular basis for H blood group deficiency in Bombay (Oh) and para-Bombay individuals.
PubMed Central Kelly, R J; Ernst, L K; Larsen, R D; Bryant, J G; Robinson, J S; Lowe, J B 1994-01-01 The penultimate step in the biosynthesis of the human ABO blood group oligosaccharide antigens is catalyzed by α(1,2)-fucosyltransferase(s) (GDP-L-fucose: β-D-galactoside 2-α-L-fucosyltransferase, EC 2.4.1.69), whose expression is determined by the H and Secretor (SE) blood group loci (also known as FUT1 and FUT2, respectively). These enzymes construct Fucα1→2Galβ linkages, known as H determinants, which are essential precursors to the A and B antigens. Erythrocytes from individuals with the rare Bombay and para-Bombay blood group phenotypes are deficient in H determinants, and thus A and B determinants, as a consequence of apparent homozygosity for null alleles at the H locus. We report a molecular analysis of a human α(1,2)-fucosyltransferase gene, thought to correspond to the H blood group locus, in a Bombay pedigree and a para-Bombay pedigree. We find inactivating point mutations in the coding regions of both alleles of this gene in each H-deficient individual. These results define the molecular basis for H blood group antigen deficiency in Bombay and para-Bombay phenotypes, provide compelling evidence that this gene represents the human H blood group locus, and strongly support a hypothesis that the H and SE loci represent distinct α(1,2)-fucosyltransferase genes. Candidate sequences for the human SE locus are identified by low-stringency Southern blot hybridization analyses, using a probe derived from the H α(1,2)-fucosyltransferase gene. PMID:7912436

12. ParaSight-F rapid manual diagnostic test of Plasmodium falciparum infection. PubMed Central Uguen, C.; Rabodonirina, M.; De Pina, J. J.; Vigier, J. P.; Martet, G.; Maret, M.; Peyron, F.
1995-01-01 The ParaSight®-F test is a qualitative diagnostic test for Plasmodium falciparum, based on the detection by a monoclonal antibody of a species-specific soluble antigen (histidine-rich protein II, HRP-II) in whole blood, and it can be performed without special equipment. A visual reading is given by a polyclonal antibody coupled with dye-loaded liposomes; when positive, a pink line appears. The test has been compared with microscopic examination of thin blood smears and with the Quantitative Buffy Coat (QBC®) malaria test in a single-blind study. A total of 358 patients who had returned to France from malarial areas and consulted their doctor with symptoms or for a routine examination were enrolled in the study; 33 of them were found to have a falciparum malaria infection by the diagnostic test. On the day of consultation, the specificity of the ParaSight®-F test was 99% and its sensitivity 94%. The follow-up of infected patients after treatment showed that the test became negative later than the other reference tests. There was no correlation between antigen persistence and the intensity of the ParaSight®-F signal or circulating parasitaemia. No cross-reaction was noted for seven malaria cases due to other Plasmodium species. The test was performed quickly (10 tests in 20 minutes), was easy to read, and required minimal space. For cases of imported malaria, the test's specificity and low detection threshold could make it a valuable adjunct test. However, in its present form, it cannot replace microscopic techniques, which are species-specific and quantitative. In endemic areas, the test seems very promising in its results and ease of use, according to published field studies. PMID:8846490

13. Ortho/para ratio of H2O+ toward Sagittarius B2(M) revisited. PubMed Schilke, Peter; Lis, Dariusz C; Bergin, Edwin A; Higgins, Ronan; Comito, Claudia 2013-10-03 The HIFI instrument aboard the Herschel satellite has allowed the observation and characterization of light hydrides, the building blocks of interstellar chemistry. In this article, we revisit the ortho/para ratio of H2O+ toward the Sgr B2(M) cloud core. The line of sight toward this star-forming region passes through several spiral arms and the gas in the Bar potential in the inner Galaxy. In contrast to earlier findings, which used fewer lines to constrain the ratio, we find a ratio of 3, which is uniformly consistent with high-temperature formation of the species. In view of the reactivity of this ion, this matches expectations.

14. Salud Para Su Corazon (Health for Your Heart) Community Health Worker Model PubMed Central Balcazar, H.; Alvarado, M.; Ortiz, G. 2012-01-01 This article describes the 6 Salud Para Su Corazon (SPSC) family of programs that have addressed cardiovascular disease risk reduction in Hispanic communities, facilitated by community health workers (CHWs) or Promotores de Salud (PS). A synopsis of the programs illustrates the designs and methodological approaches that combine community-based participatory research for 2 types of settings: community and clinical. Examples are provided as to how CHWs can serve as agents of change in these settings. A description is presented of a sustainability framework for the SPSC family of programs. Finally, implications are summarized for utilizing the SPSC CHW/PS model to inform ambulatory care management and policy. PMID:21914992
15. Manual de reforestación para América Tropical Treesearch Blanca I. Ruiz 2002-01-01 Even when communities have a pressing need to plant trees and are aware of it, the task they face is neither simple nor cheap (chapter 5). Nevertheless, the benefits that trees provide are countless (chapter 2). In fact, without them our civilization could not exist as we know it. This manual was written to help…

16. Summary of Documentation for DYNA3D-ParaDyn's Software Quality Assurance Regression Test Problems SciTech Connect Zywicz, Edward 2016-08-18

17. Synthesis of icariin from kaempferol through regioselective methylation and para-Claisen–Cope rearrangement PubMed Central Mei, Qinggang; Wang, Chun; Zhao, Zhigang; Yuan, Weicheng 2015-01-01 Summary: The hemisynthesis of the naturally occurring bioactive flavonoid glycoside icariin (1) has been accomplished in eleven steps with 7% overall yield from kaempferol. The key steps are the 4′-OH methylation of kaempferol, the 8-prenylation of 3-O-methoxymethyl-4′-O-methyl-5-O-prenyl-7-O-benzylkaempferol (8) via para-Claisen–Cope rearrangement catalyzed by Eu(fod)3 in the presence of NaHCO3, and the glycosylation of icaritin (3). PMID:26425179

18. Electrically conducting poly(para-phenylene sulfide) prepared by doping with nitrosyl salts from solution Rubner, Michael; Cukor, Peter; Jopson, Harriet; Deits, Walter 1982-03-01 Poly(para-phenylene sulfide) may be doped spontaneously and rapidly with nitrosyl salts (NOPF6, NOSbF6) from solution to yield an electrically conducting material (10⁻¹ ohm⁻¹ cm⁻¹). The level of conductivity is primarily dependent on the extent of dopant incorporation, which in turn is determined by the polymer's crystallinity; the more amorphous the polymer, the more dopant it takes up and the more conductive it becomes. The incorporation of dopants produces irreversible chemical changes in the polymer, resulting in the deterioration of its mechanical properties.

19. Manual de métodos de campo para el monitoreo de aves terrestres Treesearch C. John Ralph; Geoffrey R. Geupel; Peter Pyle; Thomas E. Martin; David F. DeSante; Borja Milá 1996-01-01 This manual is a compilation of field methods for determining abundance indices and demographic data for landbird populations in a wide variety of habitats. It is intended for biologists, field technicians, and researchers anywhere in the Americas. The methods described include four types of censuses…

20. Salud Para Su Corazón--a Latino promotora-led cardiovascular health education program. PubMed 2012-01-01 Salud Para Su Corazón is a culturally sensitive, community-based program to increase heart-healthy knowledge and behaviors among Latinos. Promotoras were trained using a 10-session manual to teach participants from 7 communities about heart disease risk factors and skills to achieve heart-healthy behaviors. In 435 participants with pre-to-post self-reported data, there were increases in physical activity outside of work (57%-78%), heart health knowledge (49%-76%), and confidence in preparing heart-healthy meals (66%-81%) (all Ps < .001). Results suggest that promotoras can provide effective health education to improve heart health risk behaviors in select Latino communities.
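Those pre-to-post percentages are large enough that significance survives even a deliberately crude check. The R snippet below treats the paired pre/post counts as independent proportions, which understates the evidence (the study's own paired analysis would be stronger):

```r
# Conservative check of one reported change: 57% -> 78% of n = 435.
pre  <- round(0.57 * 435)   # participants active before the program
post <- round(0.78 * 435)   # participants active after the program
prop.test(c(post, pre), c(435, 435))$p.value   # well below .001
```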
PubMed Guo, Zhong-hui; Xiang, Dong; Zhu, Zi-yan; Wang, Jian-lian; Zhang, Jia-min; Liu, Xi; Shen, Wei; Chen, He-ping 2004-10-01 This is a study on the allele composing of ABO, FUT1 and FUT2 gene loci of 10 para-Bombay individuals in China. Ten samples coming from different districts of China were suspected of para-Bombay phenotype by primary serology tests. Routine and absorb-elution tests were conducted to identify their ABO type, and duplex polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) was applied to getting their ABO genotype. Most of them were submitted to a test of their Lewis type as well. Then through direct DNA sequencing with PCR products of FUT1 and FUT2 genes, the genotypes of their H and SE gene loci were analyzed. It can be confirmed that the 10 samples are para-Bombay. All of their ABO genotypes are consistent with the serological absorb-elution results and the substances detected results in saliva. Seven out of 10 have recessive homozygous gene at their H locus. Each phenotype of h1h1 (nt547-552Deltaag), h2h2 (nt880-882Deltatt) and h4h4 (nt35 t-->c) are ascertained in 2 individuals; moreover, h3h3 (nt 658 c-->t) is identified in one individual. The rest are hh heterozygous individuals: one is h3/h(new-1); the other is h2/h(new-2); the last one is h1/h2. The h(new-1) (nt586 c-->t) allele has a point mutation at nt 586 C to T, which leads a nonsense mutation Gln(CAG) to stop (TAG).The second h (new-2) (nt328 g-->a) has an nt328 G to A missense mutation,which leads Ala (GCC),was replaced by Thr (ACC) at 110 amino acid position. All the 10 samples have Se (nt357 c-->t) synonymous mutation. One Bm(h) (B/O) individual with h4h4 phenotype has a Se(w)(nt357 c-->t; nt385 a-->t) allele, whose Lewis type is Le(a+b+). Moreover, the authors detected a (nt716 g-->a) mutation in two samples' Se gene. Four kinds of known h alleles (h1-h4), 2 kinds of novel non-functional FUT1 alleles, a Se(w) allele, and a novel SeG716A polymorphism in Chinese para-Bombay individuals were detected. At the same time, the authors noticed that all the 10 samples have the nt357 c 2. ParaCEST MRI contrast agents capable of derivatization via"click" chemistry. PubMed Milne, Mark; Chicas, Kirby; Li, Alex; Bartha, Robert; Hudson, Robert H E 2012-01-14 A comprehensive series of lanthanide chelates has been prepared with a tetrapropargyl DOTAM type ligand. The complexes have been characterized by a combination of (1)H NMR, single-crystal X-ray crystallography, CEST and relaxation studies and have also been evaluated for potential use as paramagnetic chemical exchange saturation transfer (ParaCEST) contrast agents in magnetic resonance imaging (MRI). We demonstrate the functionalization of several chelates by means of alkyne-azide "click" chemistry in which a glucosyl azide is used to produce a tetra-substituted carbohydrate-decorated lanthanide complex. The carbohydrate periphery of the chelates has a potent influence on the CEST properties as described herein. 3. NMR at earth's magnetic field using para-hydrogen induced polarization. PubMed Hamans, Bob C; Andreychenko, Anna; Heerschap, Arend; Wijmenga, Sybren S; Tessari, Marco 2011-09-01 A method to achieve NMR of dilute samples in the earth's magnetic field by applying para-hydrogen induced polarization is presented. Maximum achievable polarization enhancements were calculated by numerically simulating the experiment and compared to the experimental results and to the thermal equilibrium in the earth's magnetic field. 
Simultaneous 19F and 1H NMR detection on a sub-milliliter sample of a fluorinated alkyne at millimolar concentration (∼10(18) nuclear spins) was realized with just one single scan. A highly resolved spectrum with a signal/noise ratio higher than 50:1 was obtained without using an auxiliary magnet or any form of radio frequency shielding. 4. Three-dimensional treatment planning for para-aortic node irradiation in patients with cervical cancer SciTech Connect Munzenrider, J.E.; Doppke, K.P.; Brown, A.P.; Burman, C.; Cheng, E.; Chu, J.; Chui, C.; Drzymala, R.E.; Goitein, M.; Manolis, J.M. ) 1991-05-15 Three-dimensional treatment planning has been used by four cooperating centers to prepare and analyze multiple treatment plans on two cervix cancer patients. One patient had biopsy-proven and CT-demonstrable metastasis to the para-aortic nodes, while the other was at high risk for metastatic involvement of para-aortic nodes. Volume dose distributions were analyzed, and an attempt was made to define the role of 3-D treatment planning to the para-aortic region, where moderate to high doses (50-66 Gy) are required to sterilize microscopic and gross metastasis. Plans were prepared using the 3-D capabilities for tailoring fields to the target volumes, but using standard field arrangements (3-D standard), and with full utilization of the 3-D capabilities (3-D unconstrained). In some but not all 3-D unconstrained plans, higher doses were delivered to the large nodal volume and to the volume containing gross nodal disease than in plans analyzed but not prepared with full 3-D capability (3-D standard). The small bowel was the major dose limiting organ. Its tolerance would have been exceeded in all plans which prescribed 66 Gy to the gross nodal mass, although some reduction in small bowel near-maximum dose was achieved in the 3-D unconstrained plans. All plans were able to limit doses to other normal organs to tolerance levels or less, with significant reductions seen in doses to spinal cord, kidneys, and large bowel in the 3-D unconstrained plans, as compared to the 3-D standard plans. A high probability of small bowel injury was detected in one of four 3-D standard plans prescribed to receive 50 Gy to the large para-aortic nodal volume; the small bowel dose was reduced to an acceptable level in the corresponding 3-D unconstrained plan. An optimum beam energy for treating this site was not identified, with plans using 4, 6, 10, 15, 18, and 25 MV photons all being equally acceptable. (Abstract Truncated) 5. Translation and adaptation of the Competencias Esenciales en Salud Pública para los recursos humanos en salud. PubMed Almeida, Maria de Lourdes de; Peres, Aida Maris; Ferreira, Maria Manuela Frederico; Mantovani, Maria de Fátima 2017-06-05 6. Path integral molecular dynamics simulation of solid para-hydrogen with an aluminum impurity Mirijanian, Dina T.; Alexander, Millard H.; Voth, Gregory A. 2002-11-01 The equilibrium properties of an aluminum impurity trapped in solid para-hydrogen have been studied. The results were compared to those of a previous study by Krumrine et al. [J. Chem. Phys. 113 (2000) 9079] with an atomic boron. In the presence of vacancy defect, when the orientation-dependent Al- pH 2 potential is used, the Al atom is displaced to a position half way between its original substituted site and the vacancy site. Thermodynamic results also indicate that the presence of a neighboring vacancy helps to stabilize the Al impurity to a far greater extent than in the case of the B impurity. 7. 
7. [The role of experience in the neurology of facial expression of emotions]. PubMed. Gordillo, Fernando; Pérez, Miguel A; Arana, José M; Mestas, Lilia; López, Rafael M. 2015-04-01.
[Translated from Spanish:] Introduction. The facial expression of emotions has an important social function that facilitates interaction between people. This process has a neurological basis that is not isolated from context, nor from the experience accumulated through interaction between people in that context. However, to date, the effects of experience on the perception of emotions are not clearly understood. Aims. To discuss the role that experience plays in recognizing the facial expression of emotions, and to analyze the biases that negative and positive experiences may exert on emotional perception. Development. The maturation of the structures that support the capacity to recognize emotion passes through a sensitive period during adolescence, when acquired experience can have the greatest impact on emotional recognition. Experiences of abuse, maltreatment, neglect, war, or stress generate a bias toward expressions of anger and sadness. Likewise, positive experiences give rise to a bias toward the expression of joy. Conclusions. Only when people are able to use the facial expression of emotions as a channel of understanding and expression will they interact adequately with their environment. This environment, in turn, gives rise to experiences that modulate this capacity. It is therefore a self-regulating process that can be steered through the implementation of intervention programs addressing emotional aspects.

8. Where does the electron go? The nature of ortho/para and meta group directing in electrophilic aromatic substitution. SciTech Connect. Liu, Shubin. 2014-11-21.
Electrophilic aromatic substitution, one of the most fundamental chemical processes, is affected by atoms or groups already attached to the aromatic ring. The groups that promote substitution at the ortho/para or meta positions are, respectively, called ortho/para and meta directing groups, which are often characterized by their capability to donate electrons to or withdraw electrons from the ring. Though resonance and inductive effects have been employed in textbooks to explain this phenomenon, no satisfactory quantitative interpretation is available in the literature. Here, based on the theoretical framework we recently established in density functional reactivity theory (DFRT), in which electrophilicity and nucleophilicity are simultaneously quantified by the Hirshfeld charge, the nature of ortho/para and meta group directing is systematically investigated for a total of 85 systems. We find that the regioselectivity of electrophilic attack is determined by the Hirshfeld charge distribution on the aromatic ring. Ortho/para directing groups have the most negative charges on the ortho/para positions, while meta directing groups often possess the largest negative charge on the meta position. Our results do not support the view that ortho/para directing groups are electron donors and meta directing groups are electron acceptors: most neutral species we studied here are electron-withdrawing in nature, anionic systems are always electron donors, and there are also electron donors serving as meta directing groups. We predicted ortho/para and meta group directing behaviors for a list of groups whose regioselectivity was previously unknown. In addition, strong linear correlations between the Hirshfeld charge and the highest occupied molecular orbital have been observed, providing the first link between frontier molecular orbital theory and DFRT.

9. Where does the electron go? The nature of ortho/para and meta group directing in electrophilic aromatic substitution. PubMed. Liu, Shubin. 2014-11-21. (Abstract identical to entry 8 above.)

10. Modelización, control e implementación de un procesador energético paralelo para aplicación en sistemas multisalida [Modeling, control and implementation of a parallel power processor for application in multiple-output systems]. Ferreres Sabater, Agustín.

11. Fire safety improvement of para-aramid fiber in thermoplastic polyurethane elastomer. PubMed. Chen, Xilei; Wang, Wenduo; Li, Shaoxiang; Jiao, Chuanmei. 2017-02-15.
This article studied the fire safety effects of para-aramid fiber (AF) in thermoplastic polyurethane (TPU). The TPU/AF composites were prepared by a molten blending method, and the fire safety effects of all TPU composites were tested using the cone calorimeter test (CCT), microscale combustion calorimeter test (MCC), smoke density test (SDT), and thermogravimetry/Fourier transform infrared spectroscopy (TG-IR). The CCT showed that AF could improve the fire safety of TPU. Remarkably, the peak value of heat release rate (pHRR) and the peak value of smoke production rate (pSPR) for the sample with 1.0 wt% AF were decreased by 52.0% and 40.5%, respectively, compared with pure TPU. The MCC test showed that the HRR value of AF-2 decreased by 27.6% compared with pure TPU. The TG test showed that AF promoted char formation in the degradation process of TPU, increasing the residual carbon. The TG-IR test revealed that AF increased the initial thermal stability of TPU and reduced the release of CO2 as decomposition proceeded. The analysis of these results should strongly inform the study of para-aramid fiber for the fire safety of polymers. Copyright © 2016 Elsevier B.V. All rights reserved.

12. Effects of para-fluorine substituent of polystyrene on gradient-index fiber-optic properties. Koike, Kotaro; Suzuki, Akifumi; Makino, Kenji; Koike, Yasuhiro. 2015-01-01.
To study the effects of the fluorine substituent of polystyrene (PSt) on gradient-index fiber-optic properties, a poly(para-fluorostyrene) (P(p-FSt))-based graded-index plastic optical fiber (GI POF) was fabricated and its properties compared with those of a PSt-based GI POF. The para-fluorine substitution positively affects the glass transition temperature (Tg) of the core, the wavelength dispersion of the optimum refractive index profile, the bandwidth, and the attenuation. The core Tg of the P(p-FSt)-based GI POF is 88 °C, higher than that of the PSt-based GI POF by 9 °C when both fibers have an identical numerical aperture (NA = 0.2). The optimum refractive index profile coefficient for the P(p-FSt)-based GI POF varies from 2.2 to 2.1 in the 600-800 nm range, whereas that for the PSt-based GI POF varies from 2.6 to 2.3 in the same wavelength region. The bandwidth of the P(p-FSt)-based GI POF is intrinsically higher than that of the PSt-based GI POF. Moreover, the fiber attenuation of the P(p-FSt)-based GI POF was significantly smaller than that of the PSt-based GI POF over the source wavelength range. Our study demonstrates that P(p-FSt) has favorable properties as a GI POF base material.

13. Rotational relaxation of CS by collision with ortho- and para-H2 molecules. SciTech Connect. Denis-Alpizar, Otoniel; Stoecklin, Thierry; Halvick, Philippe; Dubernet, Marie-Lise. 2013-11-28.
A quantum mechanical investigation of the rotationally inelastic collisions of CS with ortho- and para-H2 molecules is reported, using the new global four-dimensional potential energy surface presented in our recent work. Close coupling scattering calculations are performed in the rigid rotor approximation for ortho- and para-H2 colliding with CS in the j = 0-15 rotational levels, for collision energies ranging from 10^-2 to 10^3 cm^-1. The cross sections and rate coefficients for selected rotational transitions of CS are compared with those previously reported for the collision of CS with He. The largest discrepancies are observed at low collision energy, below 1 cm^-1. Above 10 cm^-1, the approximation using the square root of the relative mass of the colliders to calculate the cross sections between a molecule and H2 from the data available with 4He is found to be a good qualitative approximation. The rate coefficients calculated with the electron gas model for the He-CS system show more discrepancy with our accurate results; however, scaling up these rates by a factor of 2 gives qualitative agreement.
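The "square root of the relative mass" rule mentioned above is, presumably, the usual reduced-mass scaling for estimating H2 collision rates from He data: the cross sections are taken to be equal, so the rate coefficients differ only through the mean collision velocity, which scales as the inverse square root of the reduced mass of the collision pair. A sketch with round numbers for CS (mass 44 u):

$$k_{\mathrm{H_2}}(T)\;\approx\;k_{\mathrm{He}}(T)\,\sqrt{\frac{\mu_{\mathrm{CS\text{-}He}}}{\mu_{\mathrm{CS\text{-}H_2}}}},\qquad \sqrt{\frac{44\cdot 4/48}{44\cdot 2/46}}\approx 1.4$$

So He-based rates are scaled up by roughly 1.4; as the abstract notes, this is only a qualitative approximation.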
14. Ortho-to-para ratio in interstellar water on the sightline toward Sagittarius B2(N). PubMed. Lis, Dariusz C; Bergin, Edwin A; Schilke, Peter; van Dishoeck, Ewine F. 2013-10-03.
The determination of the water ortho-to-para ratio (OPR) is of great interest for studies of the formation and thermal history of water ices in the interstellar medium and protoplanetary disk environments. We present new Herschel observations of the fundamental rotational transitions of ortho- and para-water on the sightline toward Sagittarius B2(N), which allow improved estimates of the measurement uncertainties due to instrumental effects and assumptions about the excitation of water molecules. These new measurements, suggesting a spin temperature of 24-32 K, confirm the earlier findings of an OPR below the high-temperature value on the nearby sightline toward Sagittarius B2(M). The exact implications of the low OPR in the galactic center molecular gas remain unclear and will greatly benefit from future laboratory measurements involving water freeze-out and evaporation processes under low-temperature conditions, similar to those present in the galactic interstellar medium. Given the specific conditions in the central region of the Milky Way, akin to those encountered in active galactic nuclei, gas-phase processes under the influence of strong X-ray and cosmic-ray ionization also have to be carefully considered. We summarize some of the latest laboratory measurements and their implications here.

15. Effects of acoustic feedback training in elite-standard Para-Rowing. PubMed. Schaffert, Nina; Mattes, Klaus. 2015-01-01.
Assessment and feedback devices have been regularly used in technique training in high-performance sports. Biomechanical analysis is mainly visually based and so can exclude athletes with visual impairments. The aim of this study was to examine the effects of auditory feedback on mean boat speed during on-water training of visually impaired athletes. The German National Para-Rowing team (six athletes, mean ± s, age 34.8 ± 10.6 years, body mass 76.5 ± 13.5 kg, stature 179.3 ± 8.6 cm) participated in the study. Kinematics included boat acceleration and distance travelled, collected with Sofirow at two intensities of training. The boat acceleration-time traces were converted online into acoustic feedback and presented via speakers during rowing (sections with and without feedback, alternately). Repeated-measures within-participant factorial ANOVA showed greater boat speed with acoustic feedback than at baseline (0.08 ± 0.01 m·s^-1). The time structure of rowing cycles was improved (extended time of positive acceleration). Questioning of the athletes showed acoustic feedback to be a supportive training aid, as it provided important functional information about the boat motion independent of vision. It gave visually impaired athletes access to biomechanical analysis via auditory information. The concept for adaptive athletes has been successfully integrated into the preparation for the Para-Rowing World Championships and Paralympics.

16. The rennet-induced clotting of para-kappa-casein revisited: inhibition experiments with pepstatin A. PubMed. Brinkhuis, J; Payens, T A. 1985-12-20.
The proteolysis of micellar kappa-casein by rennet was followed by SDS-polyacrylamide gel electrophoresis, and the clotting of the para-kappa-casein formed by absorbance measurements. Up to a degree of proteolysis of about 0.4, the enzyme inhibitor pepstatin A proved able to instantaneously stop the clotting. This effect is explained by the rapid condensation of monofunctional, monomeric and polymeric particles of para-kappa-casein. At higher degrees of proteolysis, pepstatin was no longer able to completely block the polymerization, which is explained by the retardation of the condensation of the monofunctionals as their size grows larger. A kinetic analysis of the enzyme-controlled stage of the clotting process predicts that the system should gel at an early degree of proteolysis of about 0.07. The actual gel points occur at considerably higher degrees of proteolysis. This suggests that the enzymic attack of the polymeric inert kappa-casein particles is not completely random. Primary micelles of kappa-casein, however, are degraded by random attack rather than by a 'catch-and-razor' mechanism.

17. [Study on the molecular genetics basis for one para-Bombay phenotype]. PubMed. Hong, Xiao-Zhen; Shao, Xiao-Chun; Xu, Xian-Guo; Hu, Qing-Fa; Wu, Jun-Jie; Zhu, Fa-Ming; Fu, Qi-Hua; Yan, Li-Xing. 2005-12-01.
To investigate the molecular genetic basis of one para-Bombay phenotype, the red blood cell phenotype of the proband was characterized by standard serological techniques. Exons 6 and 7 of the ABO gene and the entire coding regions of the FUT1 and FUT2 genes were amplified by polymerase chain reaction from genomic DNA of the proband. The PCR products were purified on agarose gels and directly sequenced. PCR-SSP and GeneScan analyses were performed to confirm the mutations detected by sequencing. The results showed that the proband's ABO genotype was A(102)A(102). Two heterozygous mutations of the FUT1 gene, an A-to-G transition at position 682 and an AG deletion at positions 547-552, were detected in the proband. A682G causes a Met->Val substitution at amino acid position 228; the AG deletion at positions 547-552 causes a reading frame shift and a premature stop codon. The FUT2 genotype was heterozygous for a functional allele Se(357) and a weakly functional allele Se(357, 385) (T/T homozygous at position 357 and A/T heterozygous at position 385). It is concluded that the compound heterozygous mutation--a novel A682G missense mutation and a 547-552delAG--is the molecular mechanism of this para-Bombay phenotype.

18. Theoretical study of the preferential solvation effect on the solvatochromic shifts of para-nitroaniline. PubMed. Frutos-Puerto, Samuel; Aguilar, Manuel A; Fdez Galván, Ignacio. 2013-02-28.
The origin of the nonlinear solvatochromic shift of para-nitroaniline was investigated using a mean-field sequential QM/MM method, with electron transitions computed at the CASPT2/cc-pVDZ level. Experimental data show that the solvatochromic shift has a strongly nonlinear behavior in certain solvent mixtures. We studied the case of cyclohexane-triethylamine mixtures. The results are in good agreement with the experiments and correctly reproduce the nonlinear variation of the solvent shift. Preferential solvation is clearly observed: the local solvent composition in the neighborhood of the solute is significantly different from the bulk. It is found that even at low triethylamine concentrations a strong hydrogen bond is formed between para-nitroaniline and triethylamine, and cyclohexane is practically absent from the first solvation layer already at a molar fraction of 0.6 in triethylamine. The hydrogen bond formed is sufficiently long-lived to determine an asymmetric environment around the solute molecule. The resulting nonlinear solvent effect is mainly due to this hydrogen bond influence, although there is also a small contribution from dielectric enrichment.

19. Ortho-to-Para Ratio in Interstellar Water on the Sightline toward Sagittarius B2(N). Lis, Dariusz C.; Bergin, Edwin A.; Schilke, Peter; van Dishoeck, Ewine F. 2013-10-01. (Abstract identical to entry 14 above.)

20. The rapid manual ParaSight-F test. A new diagnostic tool for Plasmodium falciparum infection. PubMed. Shiff, C J; Premji, Z; Minjas, J N. 1993-01-01.
A rapid manual test for Plasmodium falciparum, the ParaSight-F test, has been used on a series of patients in a holoendemic malaria area of coastal Tanzania. The test, an antigen-capture test detecting trophozoite-derived histidine-rich protein II, is simple to perform and provides a definitive answer in about 10 min. It requires no special equipment and is read using a single drop of blood. When compared with 272 thick blood films examined microscopically by 2 observers and confirmed by the QBC malaria test, the ParaSight-F test had 88.9% sensitivity and 87.5% specificity. Detectable antigenaemia in a group of 40 people declined following treatment with Fansidar, and by 10 d after treatment all but 4 individuals were antigen free. The remaining 4, although clear of peripheral parasitaemia, remained antigenaemic for 14 d. The test shows great promise for rapid, effective diagnosis of P. falciparum in clinics and village health centres where there is no facility for microscopy. Because of its accuracy and rapid action it may even obviate the need for microscopical examination of blood films to diagnose P. falciparum malaria.
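Sensitivity and specificity figures like the 88.9%/87.5% above come straight from a 2x2 table of test result against the microscopy reference. A minimal sketch in R; the cell counts below are hypothetical, chosen only to be consistent with the reported totals (272 films) and percentages, since the abstract does not give the table itself:

```
# Hypothetical 2x2 table: rows = ParaSight-F result,
# columns = microscopy reference (illustrative counts only).
tab <- matrix(c(120, 17,    # test positive: 120 TP, 17 FP
                 15, 120),  # test negative:  15 FN, 120 TN
              nrow = 2, byrow = TRUE,
              dimnames = list(test  = c("pos", "neg"),
                              smear = c("pos", "neg")))
sensitivity <- tab["pos", "pos"] / sum(tab[, "pos"])  # TP / (TP + FN)
specificity <- tab["neg", "neg"] / sum(tab[, "neg"])  # TN / (TN + FP)
round(100 * c(sensitivity = sensitivity, specificity = specificity), 1)
# gives ~88.9 and ~87.6 with these illustrative counts
```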
1. Para-position derivatives of fungal anthelmintic cyclodepsipeptides engineered with Streptomyces venezuelae antibiotic biosynthetic genes. PubMed. Yanai, Koji; Sumida, Naomi; Okakura, Kaoru; Moriya, Tatsuki; Watanabe, Manabu; Murakami, Takeshi. 2004-07-01.
PF1022A, a cyclooctadepsipeptide possessing strong anthelmintic properties and produced by the filamentous fungus Rosellinia sp. PF1022, consists of four alternating residues of N-methyl-L-leucine and four residues of D-lactate or D-phenyllactate. PF1022A derivatives obtained through modification of the benzene ring at the para-position with nitro or amino groups are valuable starting materials for the synthesis of compounds with improved anthelmintic activities. Here we describe the production of such derivatives by fermentation, through metabolic engineering of the PF1022A biosynthetic pathway in Rosellinia sp. PF1022. Three genes cloned from Streptomyces venezuelae, required for the biosynthesis of p-aminophenylpyruvate from chorismate in the chloramphenicol biosynthetic pathway, were expressed in a chorismate mutase-deficient strain derived from Rosellinia sp. PF1022. Liquid chromatography-mass spectrometry and NMR analyses confirmed that this approach facilitated the production of PF1022A derivatives specifically modified at the para-position. This fermentation method is environmentally safe and can be used for industrial-scale production of PF1022A derivatives.

2. Asymmetric Top Rotors in Superfluid Para-Hydrogen Nano-Clusters. Zeng, Tao; Li, Hui; Roy, Pierre-Nicholas. 2012-06-01.
We present the first simulation study of bosonic clusters doped with an asymmetric top molecule. A variation of the path-integral Monte Carlo method is developed to study a para-water (pH2O) impurity in para-hydrogen (pH2) clusters. The growth pattern of the doped clusters is similar in nature to that of the pure clusters. The pH2O molecule appears to rotate freely in the cluster owing to its large rotational constants and the lack of adiabatic following. The presence of pH2O substantially quenches the superfluid response of pH2 with respect to the space-fixed frame. We also study the behaviour of a sulphur dioxide (32S16O2) dopant in the pH2 clusters. For such a heavy rotor, adiabatic following of the pH2 molecules is established and the superfluid renormalization of the rotational constants is observed. The rotational structure of the SO2-(pH2)N clusters' ro-vibrational spectra is predicted. The connection between the superfluid response with respect to the external boundary rotation and the dopant rotation is discussed.

3. Comparing the Well-Being of Para and Olympic Sport Athletes: A Systematic Review. PubMed. Macdougall, Hannah; O'Halloran, Paul; Shields, Nora; Sherry, Emma. 2015-07-01.
This systematic review included 12 studies that compared the well-being of Para and Olympic sport athletes. Meta-analyses revealed that Para athletes, compared with Olympic sport athletes, had lower levels of self-acceptance, indicated by athletic identity, d = 0.47, 95% confidence interval (CI) [0.77, 0.16], and body-image perceptions, d = 0.33, 95% CI [0.59, 0.07], and differed from Olympic sport athletes in terms of their motivation, indicated by a greater mastery-oriented climate, d = 0.74, 95% CI [0.46, 1.03]. Given an inability to pool the remaining data for meta-analysis, individual standardized mean differences were calculated for other dimensions of psychological and subjective well-being. The results have implications for professionals and coaches aiming to facilitate the well-being needs of athletes under their care. Future research would benefit from incorporating established models of well-being, based on theoretical rationale, combined with rigorous study designs.
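The d values pooled above are standardized mean differences (Cohen's d). For readers unfamiliar with the statistic, here is a minimal R sketch computing d and a large-sample 95% CI from summary statistics; the means, SDs, and group sizes are invented for illustration and are not taken from any of the reviewed studies:

```
# Cohen's d from summary statistics, with an approximate 95% CI.
# All inputs below are hypothetical illustration values.
cohens_d <- function(m1, m2, sd1, sd2, n1, n2) {
  sp <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  d  <- (m1 - m2) / sp                      # standardized mean difference
  se <- sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))
  round(c(d = d, lo = d - 1.96 * se, hi = d + 1.96 * se), 2)
}
cohens_d(m1 = 3.2, m2 = 3.6, sd1 = 0.8, sd2 = 0.9, n1 = 60, n2 = 75)
```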
4. WormBase ParaSite - a comprehensive resource for helminth genomics. PubMed. Howe, Kevin L; Bolt, Bruce J; Shafie, Myriam; Kersey, Paul; Berriman, Matthew. 2016-11-27.
The number of publicly available parasitic worm genome sequences has increased dramatically in the past three years, and research interest in helminth functional genomics is now quickly gathering pace in response to the foundation laid by these collective efforts. A systematic approach to the organisation, curation, analysis and presentation of these data is clearly vital for maximising their utility to researchers. We have developed a portal called WormBase ParaSite (http://parasite.wormbase.org) for interrogating helminth genomes on a large scale. Data from over 100 nematode and platyhelminth species are integrated, adding value by way of systematic and consistent functional annotation (e.g. protein domains and Gene Ontology terms), gene expression analysis (e.g. alignment of life-stage-specific transcriptome data sets), and comparative analysis (e.g. orthologues and paralogues). We provide several ways of exploring the data, including genome browsers, genome and gene summary pages, text search, sequence search, a query wizard, bulk downloads, and programmatic interfaces. In this review, we provide an overview of the back-end infrastructure and analysis behind WormBase ParaSite, and the displays and tools available to users for interrogating helminth genomic data.

5. In-Situ Visualization Experiments with ParaView Cinema in RAGE. SciTech Connect. Kares, Robert John. 2015-10-15.
A previous paper described some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large-scale 3D ICF simulation. One challenge of the in-situ approach apparent in these experiments was the difficulty of choosing parameters like isosurface values for the visualizations to be produced from the running simulation without the benefit of prior knowledge of the simulation results, and the resultant cost of recomputing in-situ generated images when parameters are chosen suboptimally. A proposed method of addressing this difficulty is to simply render multiple images at runtime with a range of possible parameter values, producing a large database of images, and to provide the user with a tool for managing the resulting database of imagery. Recently, ParaView/Catalyst has been extended to include such a capability via the so-called Cinema framework. Here I describe some initial experiments with the first delivery of Cinema and make some recommendations for future extensions of Cinema's capabilities.

6. Semantics-based distributed I/O with the ParaMEDIC framework. SciTech Connect. Balaji, P.; Feng, W.; Lin, H. (Mathematics and Computer Science; Virginia Tech; North Carolina State Univ.). 2008-01-01.
Many large-scale applications simultaneously rely on multiple resources for efficient execution. For example, such applications may require both large compute and storage resources; however, very few supercomputing centers can provide large quantities of both. Thus, data generated at the compute site oftentimes has to be moved to a remote storage site for either storage or visualization and analysis. Clearly, this is not an efficient model, especially when the two sites are separated by a wide-area network. We therefore present a framework called 'ParaMEDIC: Parallel Metadata Environment for Distributed I/O and Computing', which uses application-specific semantic information to convert the generated data to orders-of-magnitude smaller metadata at the compute site, transfer the metadata to the storage site, and re-process the metadata at the storage site to regenerate the output. Specifically, ParaMEDIC trades a small amount of additional computation (in the form of data post-processing) for a potentially significant reduction in the data that needs to be transferred in distributed environments.
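The ParaMEDIC idea, shipping tiny semantically chosen metadata and recomputing the bulky output at the far end, can be illustrated in miniature. A toy sketch in R, assuming (purely for illustration) that the "metadata" is just an RNG seed plus dimensions from which the output is deterministically regenerable; the real framework derives its metadata from application-specific semantics, not from a seed:

```
# Toy stand-in for ParaMEDIC's compute/storage split: instead of
# transferring the full matrix (~8 MB), transfer a few bytes of
# metadata and regenerate the output at the storage site.
generate <- function(meta) {
  set.seed(meta$seed)                 # deterministic regeneration
  matrix(rnorm(meta$nrow * meta$ncol), meta$nrow, meta$ncol)
}
meta <- list(seed = 42, nrow = 1000, ncol = 1000)   # the "metadata"
out_compute <- generate(meta)         # produced at the compute site
out_storage <- generate(meta)         # regenerated at the storage site
identical(out_compute, out_storage)   # TRUE: same output, tiny transfer
```

The design trade-off is exactly the one the abstract names: a second round of computation at the storage site buys an orders-of-magnitude reduction in wide-area data movement.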
7. Herschel/SPIRE observations of water production rates and ortho-to-para ratios in comets. Wilson, Thomas G.; Rawlings, Jonathan M. C.; Swinyard, Bruce M. 2017-04-01.
This paper presents Herschel/SPIRE (Spectral and Photometric Imaging Receiver) spectroscopic observations of several fundamental rotational ortho- and para-water transitions seen in three Jupiter-family comets and one Oort-cloud comet. Radiative transfer models that include excitation by collisions with neutrals and electrons, and by solar infrared radiation, were used to produce synthetic emission line profiles originating in the cometary coma. Ortho-to-para ratios (OPRs) were determined and used to derive water production rates for all comets. Comparisons are made with the water production rates derived using an OPR of 3. The OPRs of three of the comets in this study are much lower than the statistical equilibrium value of 3; however, they agree with observations of comets 1P/Halley and C/2001 A2 (LINEAR), and of the protoplanetary disc TW Hydrae. These results provide evidence suggesting that OPR variation is caused by post-sublimation gas-phase nuclear-spin conversion processes. The water production rates of all comets agree with previous work and, in general, decrease with increasing nucleocentric offset. This could be due to a temperature profile, an additional water source or OPR variation in the comae, or model inaccuracies.

8. Genetic evidence that the degradation of para-cresol by Geobacter metallireducens is catalyzed by the periplasmic para-cresol methylhydroxylase. PubMed. Chaurasia, Akhilesh Kumar; Tremblay, Pier-Luc; Holmes, Dawn E; Zhang, Tian. 2015-10-01.

9. Association of plasma ortho-tyrosine/para-tyrosine ratio with responsiveness of erythropoiesis-stimulating agent in dialyzed patients. PubMed. Kun, Szilárd; Mikolás, Esztella; Molnár, Gergo A; Sélley, Eszter; Laczy, Boglárka; Csiky, Botond; Kovács, Tibor; Wittmann, István. 2014-09-01.
Objectives: Patients with end-stage renal failure (ESRF) treated with erythropoiesis-stimulating agents (ESAs) are often ESA-hyporesponsive, in association with free radical production. The hydroxyl free radical converts phenylalanine into ortho-tyrosine, while the physiological isomer para-tyrosine is formed enzymatically, mainly in the kidney. Production of para-tyrosine is decreased in ESRF, and it can be replaced by ortho-tyrosine in proteins. Our aim was to study the role of tyrosines in ESA-responsiveness. Methods: Four groups of volunteers were involved in our cross-sectional study: healthy volunteers (CONTR; n = 16), patients on hemodialysis without ESA treatment (non-ESA-HD; n = 8), hemodialyzed patients with ESA treatment (ESA-HD; n = 40), and patients on continuous peritoneal dialysis (CAPD; n = 21). Plasma ortho-tyrosine, para-tyrosine, and phenylalanine levels were detected using a high-performance liquid chromatography (HPLC) method. ESA demand was expressed by ESA dose, ESA dose/body weight, and erythropoietin resistance index1 (ERI1, weekly ESA dose/body weight/hemoglobin). Results: We found significantly lower para-tyrosine levels in all groups of dialyzed patients when compared with control subjects, while in contrast ortho-tyrosine levels and the ortho-tyrosine/para-tyrosine ratio were significantly higher in dialyzed patients. Among the groups of dialyzed patients, the ortho-tyrosine level and the ortho-tyrosine/para-tyrosine ratio were significantly higher in ESA-HD than in the non-ESA-HD and CAPD groups. There was a correlation between weekly ESA dose/body weight, ERI1, and the ortho-tyrosine/para-tyrosine ratio (r = 0.441, P = 0.001; r = 0.434, P = 0.001, respectively). Our most important finding was that the ortho-tyrosine/para-tyrosine ratio proved to be an independent predictor of ERI1 (beta = 0.330, P = 0.016). Most of the known predictors of ESA-hyporesponsiveness were included in these multivariate regression models. Discussion: Our findings may
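The resistance index used above is a simple ratio, ERI1 = weekly ESA dose / body weight / hemoglobin. A one-line R sketch with the conventional units; the input numbers are hypothetical and do not come from the study:

```
# ERI1 = weekly ESA dose (IU) / body weight (kg) / hemoglobin (g/dL).
# Inputs are hypothetical, for illustration only.
eri1 <- function(dose_iu_week, weight_kg, hb_g_dl)
  dose_iu_week / weight_kg / hb_g_dl
eri1(8000, 70, 11)   # ~10.4 IU per kg per g/dL per week
```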
10. Um enfoque antropológico para o ensino de astronomia no nível médio [An anthropological approach to astronomy teaching at the secondary level]. Costa, G. B.; Jafelice, L. C. 2003-08-01.

11. Ordered expression pattern of Hox and ParaHox genes along the alimentary canal in the ascidian juvenile. PubMed. Nakayama, Satoshi; Satou, Kunihiro; Orito, Wataru; Ogasawara, Michio. 2016-07-01.
The Hox and ParaHox genes of bilateria share a similar expression pattern along the body axis and are known to be associated with anterior-posterior patterning. In vertebrates, the Hox genes are also expressed in presomitic mesoderm and gut endoderm, and the ParaHox genes show a restricted expression pattern in the gut-related derivatives. Regional expression patterns in the embryonic central nervous system of the basal chordates amphioxus and ascidian have been reported; however, little is known about their endodermal expression in the alimentary canal. We focus on the Hox and ParaHox genes in the ascidian Ciona intestinalis and investigate the gene expression patterns in the juvenile, which shows morphological regionality in the alimentary canal. Gene expression analyses using whole-mount in situ hybridization reveal that all Hox genes have a regional expression pattern along the alimentary canal. Expression of Hox1 to Hox4 is restricted to the posterior region of pharyngeal derivatives. Hox5 to Hox13 show an ordered expression pattern correlated with each Hox gene number along the postpharyngeal digestive tract. This expression pattern along the anterior-posterior axis has also been observed for the Ciona ParaHox genes. Our observations suggest that the ascidian Hox and ParaHox clusters are dispersed; however, the ordered expression patterns along the alimentary canal appear to be conserved among chordates.

12. Comparing para-rowing set-ups on an ergometer using kinematic movement patterns of able-bodied rowers. PubMed. Cutler, B; Eger, T; Merritt, T; Godwin, A. 2017-04-01.
While numerous studies have investigated the biomechanics of able-bodied rowing, few studies have been completed with para-rowing set-ups. The purpose of this research was to provide benchmark data for handle kinetics and joint kinematics for able-bodied athletes rowing in para-rowing set-ups on an indoor ergometer. Able-bodied varsity rowers performed maximal trials in three para-rowing set-ups: Legs, Trunk and Arms (LTA), Trunk and Arms (TA), and Arms and Shoulders (AS) rowing. The handle force kinetics of the LTA stroke were comparable to the values in the able-bodied literature. Lumbar flexion at the catch, extension at the finish, and total range of motion were, however, greater than values in the literature for able-bodied athletes in the LTA set-up. Additionally, rowers in the TA and AS set-ups utilised more extreme ranges of motion for lumbar flexion, elbow flexion and shoulder abduction than in the LTA set-up. This study provides the first biomechanical values of the para-rowing strokes for researchers, coaches and athletes to use while promoting the safest training programmes possible for para-rowing.

13. Determination of the ortho to para ratio of H2Cl+ and H2O+ from submillimeter observations. PubMed. Gerin, Maryvonne; de Luca, Massimo; Lis, Dariusz C; Kramer, Carsten; Navarro, Santiago; Neufeld, David; Indriolo, Nick; Godard, Benjamin; Le Petit, Franck; Peng, Ruisheng; Phillips, Thomas G; Roueff, Evelyne. 2013-10-03.
The opening of the submillimeter sky with the Herschel Space Observatory has led to the detection of new interstellar molecular ions, H2O(+), H2Cl(+), and HCl(+), which are important intermediates in the synthesis of water vapor and hydrogen chloride. In this paper, we report new observations of H2O(+) and H2Cl(+) performed with both Herschel and ground-based telescopes, to determine the abundances of their ortho and para forms separately and derive the ortho-to-para ratio. At the achieved signal-to-noise ratio, the observations are consistent with an ortho-to-para ratio of 3 for both H2O(+) and H2Cl(+), in all velocity components detected along the lines of sight to the massive star-forming regions W31C and W49N. We discuss the mechanisms that contribute to establishing the observed ortho-to-para ratio and point to the need for a better understanding of the chemical reactions that are important for establishing the H2O(+) and H2Cl(+) ortho-to-para ratios.

14. Instrumentation for Astronomy Teaching: Projecting the Sun Image. (Spanish title: Instrumentación Para la Enseñanza de Astronomía: Proyectando la Imagen del Sol. Portuguese title: Instrumentação Para O Ensino de Astronomia: Projetando a Imagem do Sol.) Catelli, Francisco; Giovan, Odilon; Balen, Osvaldo; Siqueira da Silva, Fernando. 2009-07-01.
In this work we describe a simple optical device to project the Sun's image, which is useful for observing solar eclipses and for estimating the size of sunspots. (The abstract is also given in Spanish and Portuguese in the source record.)

15. Amine vs. carboxylic acid protonation in ortho-, meta-, and para-aminobenzoic acid: An IRMPD spectroscopy study. Cismesia, Adam P.; Nicholls, Georgina R.; Polfer, Nicolas C. 2017-02-01.
Infrared multiple photon dissociation (IRMPD) spectroscopy and computational chemistry are applied to the ortho-, meta-, and para- positional isomers of aminobenzoic acid to investigate whether the amine or the carboxylic acid is the favored site of proton attachment in the gas phase. The NH and OH stretching modes yield distinct patterns that establish the carboxylic acid as the site of protonation in para-aminobenzoic acid, as opposed to the amine group in ortho- and meta-aminobenzoic acid, in agreement with computed thermochemistries. The trends for para- and meta-substitution can be rationalized simplistically by inductive effects and resonance stabilization, and are discussed in light of computed charge distributions based on electrostatic potentials. In ortho-aminobenzoic acid, the close proximity of the amine and acid groups allows a simultaneous interaction of the proton with both groups, thus stabilizing and delocalizing the charge more effectively and compensating for some of the resonance stabilization effects.

16. Molecular hydrogen in the vicinity of NGC 7538 IRS 1 and IRS 2 - Temperature and ortho-to-para ratio. NASA Technical Reports Server (NTRS). Hoban, Susan; Reuter, Dennis C.; Mumma, Michael J.; Storrs, Alex D. 1991-01-01.
Near-infrared spectroscopic observations of the active star-forming region near NGC 7538 IRS 1 and IRS 2 were made. The relative intensities of the v = 1-0 Q(1), Q(3), and Q(5) lines of molecular hydrogen are used to calculate a rotational excitation temperature. Comparison of the measured intensity of the Q(2) transition relative to the intensities of Q(1) and Q(3) permitted the retrieval of the ratio of ortho- to para-hydrogen. It is found that an ortho-to-para ratio of between 1.6 and 2.35 is needed to explain the Q-branch line intensity ratios, depending on the excitation model used. This range of ortho-to-para ratios implies a range of molecular hydrogen formation temperatures of approximately 105 K to 140 K.

17. A Case of Advanced Gastric Cancer with Para-Aortic Lymph Node Metastasis from Co-Occurring Prostate Cancer. PubMed Central. Park, Miyeong; Lee, Young-Joon; Park, Ji-Ho; Choi, Sang-Kyung; Hong, Soon-Chan; Jung, Eun-Jung; Ju, Young-tae; Jeong, Chi-Young; Lee, Jeong-Hee; Ha, Woo-Song. 2017-01-01.
An 84-year-old man was diagnosed with two synchronous adenocarcinomas: a Borrmann type IV advanced gastric adenocarcinoma in the antrum and a well-differentiated Borrmann type I carcinoma on the anterior wall of the higher body of the stomach. Pre-operatively, computed tomography of the abdomen revealed the presence of advanced gastric cancer with peri-gastric and para-aortic lymph node (LN) metastasis. Palliative total gastrectomy was planned owing to the risk of obstruction by the antral lesion. We performed a frozen biopsy of a para-aortic LN during surgery and found that the origin of the para-aortic LN metastasis was undiagnosed prostate cancer. Thus, we performed radical total gastrectomy and D2 LN dissection. Post-operatively, his total prostate-specific antigen level was high (227 ng/mL), and he was discharged 8 days after surgery without any complications. PMID:28337367

18. Para-axillary subcutaneous endoscopic approach in torticollis: tips and tricks in the surgical technique. PubMed. Tokar, Baran; Karacay, Safak; Arda, Surhan; Alici, Umut. 2015-04-01.
An obvious scar on the neck may appear following open surgery for congenital muscular torticollis (CMT), and the cosmetic result may displease the patient and the family. In this study, we describe a minimally invasive technique, the para-axillary subcutaneous endoscopic approach (PASEA), in CMT. A total of 11 children (seven girls and four boys, aged between 1 and 15 years) were operated on for torticollis by PASEA. All patients had facial asymmetry and head and neck postural abnormality. Following an incision at the ipsilateral para-axillary region, a subcutaneous cavernous working space is formed toward the sternocleidomastoid (SCM) muscle. The muscle and fascia are cut by cautery under endoscopic vision. The patients had postoperative 2nd-week and 3rd-month visits. The incision scar, the inspection and palpation findings of the region, and the head posture and shoulder position of the affected side were considered in evaluating the cosmetic outcome. Preoperative and postoperative ranges of motion of the head and neck were compared for the functional outcome. We preferred single-incision surgery in our last two patients; the rest had a double para-axillary incision for port insertion. Incomplete transection of the muscle was not observed, and there were no serious complications. Postoperatively, head posture and shoulder elevation were corrected significantly, and the range of motion of the head improved: all patients had a rotation capacity of more than 30 degrees, and the range of postoperative flexion and extension movements was between 45 and 60 degrees. The open surgical techniques for CMT cause a visible lifelong incision scar on the neck, whereas PASEA leaves a cosmetically hidden scar in the axillary region; single-incision surgery is also possible. A well-formed cavernous working space is needed. External manual palpation, delicate dissection, and cutting of the SCM muscle with cautery are the important components of the procedure. Surgeons having experience in pediatric

19. Metabolism of Doubly para-Substituted Hydroxychlorobiphenyls by Bacterial Biphenyl Dioxygenases. PubMed Central. Pham, Thi Thanh My; Sondossi, Mohammad. 2015-01-01.
In this work, we examined the profile of metabolites produced from the doubly para-substituted biphenyl analogs 4,4'-dihydroxybiphenyl, 4-hydroxy-4'-chlorobiphenyl, 3-hydroxy-4,4'-dichlorobiphenyl, and 3,3'-dihydroxy-4,4'-chlorobiphenyl by biphenyl-induced Pandoraea pnomenusa B356 and by its biphenyl dioxygenase (BPDO). 4-Hydroxy-4'-chlorobiphenyl was hydroxylated principally through a 2,3-dioxygenation of the hydroxylated ring to generate 2,3-dihydro-2,3,4-trihydroxy-4'-chlorobiphenyl and, after the removal of water, 3,4-dihydroxy-4'-chlorobiphenyl. The former was further oxidized by the biphenyl dioxygenase to ultimately produce 3,4,5-trihydroxy-4'-chlorobiphenyl, a dead-end metabolite. 3-Hydroxy-4,4'-dichlorobiphenyl was oxygenated on both rings. Hydroxylation of the nonhydroxylated ring generated 2,3,3'-trihydroxy-4'-chlorobiphenyl with concomitant dechlorination, and 2,3,3'-trihydroxy-4'-chlorobiphenyl was ultimately metabolized to 2-hydroxy-4-chlorobenzoate, but hydroxylation of the hydroxylated ring generated dead-end metabolites. 3,3'-Dihydroxy-4,4'-dichlorobiphenyl was principally metabolized through a 2,3-dioxygenation to generate 2,3-dihydro-2,3,3'-trihydroxy-4,4'-dichlorobiphenyl, which was ultimately converted to 3-hydroxy-4-chlorobenzoate. Similar metabolites were produced when the biphenyl dioxygenase of Burkholderia xenovorans LB400 was used to catalyze the reactions, except that for the three substrates used, the BPDO of LB400 was less efficient than that of B356 and, unlike that of B356, was unable to further oxidize the initial reaction products. Together the data show that BPDO oxidation of doubly para-substituted hydroxychlorobiphenyls may generate nonnegligible amounts of dead-end metabolites. Therefore, biphenyl dioxygenase could produce metabolites other than those expected, corresponding to dihydrodihydroxy metabolites from initial doubly para-substituted substrates. This finding shows that a clear

20. Evolution of Antp-class genes and differential expression of Hydra Hox/paraHox genes in anterior patterning. PubMed Central. Gauchat, Dominique; Mazet, Françoise; Berney, Cédric; Schummer, Michèl; Kreger, Sylvia; Pawlowski, Jan; Galliot, Brigitte. 2000-01-01.
The conservation of developmental functions exerted by Antp-class homeoproteins in protostomes and deuterostomes suggested that homologs with related functions are present in diploblastic animals. Our phylogenetic analyses showed that Antp-class homeodomains belong either to non-Hox or to Hox/paraHox families. Among the 13 non-Hox families, 9 have diploblastic homologs: Msx, Emx, Barx, Evx, Tlx, NK-2, and Prh/Hex, Not, and Dlx, reported here. Among the Hox/paraHox families, poriferan sequences were not found, and the cnidarian sequences formed at least five distinct cnox families. Two are significantly related to the paraHox Gsx (cnox-2) and mox (cnox-5) sequences, whereas three display some relatedness to the Hox paralog groups 1 (cnox-1) and 9/10 (cnox-3) and to the paraHox cdx (cnox-4). Intermediate Hox/paraHox genes (PG 3 to 8 and lox) did not have clear cnidarian counterparts. In Hydra, cnox-1, cnox-2, and cnox-3 were not found chromosomally linked within a 150-kb range and displayed specific expression patterns in the adult head. During regeneration, cnox-1 was expressed as an early gene whatever the polarity, whereas cnox-2 was up-regulated later during head but not foot regeneration. Finally, cnox-3 expression was reestablished in the adult head once it was fully formed. These results suggest that the Hydra genes related to anterior Hox/paraHox genes are involved at different stages of apical differentiation. However, the positional information defining the oral/aboral axis in Hydra cannot be correlated strictly with that characterizing the anterior-posterior axis in vertebrates or arthropods. PMID:10781050

1. Hole Transfer Processes in meta- and para-Conjugated Mixed Valence Compounds: Unforeseen Effects of Bridge Substituents and Solvent Dynamics. PubMed. Schaefer, Julian; Holzapfel, Marco; Mladenova, Boryana; Kattnig, Daniel; Krummenacher, Ivo; Braunschweig, Holger; Grampp, Günter; Lambert, Christoph. 2017-04-12.
To address the question of whether donor substituents can be utilized to accelerate the hole transfer (HT) between redox sites attached at para- or meta-positions of a central benzene bridge, we investigated three series of mixed valence compounds based on triarylamine redox centers connected to a benzene bridge via alkyne spacers at para- and meta-positions. The electron density at the bridge was tuned by substituents with different electron-donating or electron-accepting character. By analyzing optical spectra and by DFT computations, we show that the HT properties are independent of bridge substituents for one of the meta-series, while donor substituents can strongly decrease the intrinsic barrier in the case of the para-series. In stark contrast, temperature-dependent ESR measurements demonstrate a dramatic increase of both the apparent barrier and the rate of HT for strong donor substituents in the para-cases. This is caused by an unprecedented substituent-dependent change of the HT mechanism from that described by transition state theory to a regime controlled by solvent dynamics. For solvents with slow longitudinal relaxation (PhNO2, oDCB), this adds an additional contribution to the intrinsic barrier via the dielectric relaxation process. Attaching the donor substituents to the bridge at positions where the molecular orbital coefficients are large accelerates the HT rate for meta-conjugated compounds just as for the para-series. This effect demonstrates that the para-meta paradigm no longer holds if appropriate substituents and substitution patterns are chosen, thereby considerably broadening the applicability of meta-topologies for optoelectronic applications.
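The mechanistic crossover described above can be summarized by contrasting the two rate expressions. In the transition-state-theory picture the preexponential factor is a solvent-independent attempt frequency, whereas in the solvent-controlled adiabatic regime it is set by the inverse longitudinal relaxation time of the solvent. Schematically (a textbook-level sketch, not the paper's own fitted expressions):

$$k_{\mathrm{TST}} = \nu_n\,e^{-\Delta G^{\ddagger}/k_BT} \qquad\text{vs.}\qquad k_{\mathrm{SC}} \propto \frac{1}{\tau_L}\,e^{-\Delta G^{\ddagger}/k_BT}$$

Slowly relaxing solvents (large tau_L, as for PhNO2 or oDCB) therefore depress the solvent-controlled rate even at fixed barrier height, which is why dielectric relaxation appears as an extra, apparent contribution to the barrier.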
2. ParaHaplo: A program package for haplotype-based whole-genome association study using parallel computing. PubMed. Misawa, Kazuharu; Kamatani, Naoyuki. 2009-10-21.
Since more than a million single-nucleotide polymorphisms (SNPs) are analyzed in any given genome-wide association study (GWAS), performing multiple comparisons can be problematic. To cope with multiple-comparison problems in GWAS, haplotype-based algorithms were developed to correct for multiple comparisons at multiple SNP loci in linkage disequilibrium. A permutation test can also control problems inherent in multiple testing; however, both the calculation of exact probability and the execution of permutation tests are time-consuming. Faster methods for calculating exact probabilities and executing permutation tests are required. We developed a set of computer programs for the parallel computation of accurate P-values in haplotype-based GWAS. Our program, ParaHaplo, is intended for workstation clusters using the Intel Message Passing Interface (MPI). We compared the performance of our algorithm to that of the regular permutation test on the JPT and CHB samples of HapMap. ParaHaplo can detect smaller differences between 2 populations than SNP-based GWAS. We also found that parallel-computing techniques made ParaHaplo 100-fold faster than a non-parallel version of the program. ParaHaplo is a useful tool for conducting haplotype-based GWAS. Since the data sizes of such projects continue to increase, the use of fast computation with parallel computing--such as that used in ParaHaplo--will become increasingly important. The executable binaries and program sources of ParaHaplo are available at the following address: http://sourceforge.jp/projects/parallelgwas/?_sl=1.
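Permutation testing, the bottleneck that ParaHaplo parallelizes, is easy to sketch. The toy R example below runs a label-permutation test for a single SNP and spreads the permutations over cores with the base parallel package; it uses simulated data and a plain allele-count statistic, not ParaHaplo's haplotype-based algorithm or its MPI implementation:

```
# Toy permutation test for case/control association at one SNP,
# parallelized across cores (simulated data, illustration only).
library(parallel)
set.seed(1)
n     <- 200
geno  <- rbinom(n, 2, 0.3)    # minor-allele counts for n subjects
pheno <- rbinom(n, 1, 0.5)    # case (1) / control (0) labels
stat  <- function(y) abs(mean(geno[y == 1]) - mean(geno[y == 0]))
obs   <- stat(pheno)
perms <- mclapply(seq_len(10000),
                  function(i) stat(sample(pheno)),
                  mc.cores = 2)  # on Windows, use mc.cores = 1
mean(unlist(perms) >= obs)    # empirical permutation P-value
```

The real package distributes such permutations (and exact-probability calculations over haplotypes in linkage disequilibrium) across a cluster with MPI, which is where the reported 100-fold speed-up comes from.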
3. Precision velocimetry planet hunting with PARAS: current performance and lessons to inform future extreme precision radial velocity instruments
Roy, Arpita; Chakraborty, Abhijit; Mahadevan, Suvrath; Chaturvedi, Priyanka; Prasad, Neelam J. S. S. V.; Shah, Vishal; Pathan, F. M.; Anandarao, B. G.
2016-08-01
The PRL Advanced Radial-velocity Abu-sky Search (PARAS) instrument is a fiber-fed stabilized high-resolution cross-dispersed echelle spectrograph, located on the 1.2 m telescope at Mt. Abu, India. Designed for exoplanet detection, PARAS is capable of single-shot spectral coverage of 3800-9600 Å, and is currently achieving radial velocity (RV) precisions approaching 1 m s-1 over several months using simultaneous ThAr calibration. As such, it is one of the few dedicated stabilized fiber-fed spectrographs on small (1-2 m) telescopes that are able to fill an important niche in RV follow-up and stellar characterization. The success of ground-based RV surveys is motivating the push into extreme precisions, with goals of 10 cm s-1 in the optical and <1 m s-1 in the near-infrared (NIR). Lessons from existing instruments like PARAS are invaluable in informing hardware design, providing pipeline prototypes, and guiding scientific surveys. Here we present our current precision estimates of PARAS based on observations of bright RV standard stars, and describe the evolution of the data reduction and RV analysis pipeline as instrument characterization progresses and we gather longer baselines of data. Secondly, we discuss how our experience with PARAS is a critical component in the development of future cutting-edge instruments like (1) the Habitable Zone Planet Finder (HPF), a near-infrared spectrograph optimized to look for planets around M dwarfs, scheduled to be commissioned on the Hobby-Eberly Telescope in 2017, and (2) the NEID optical spectrograph, designed in response to the NN-EXPLORE call for an extreme precision Doppler spectrometer (EPDS) for the WIYN telescope. In anticipation of missions like TESS and GAIA, the ground-based RV support system is being reinforced. We emphasize that instruments like PARAS will play an intrinsic role in providing both complementary follow-up and battlefront experience for this next generation of precision velocimeters.

4. Ortho-to-para abundance ratio of water ion in comet C/2001 Q4 (NEAT): implication for the ortho-to-para abundance ratio of water
SciTech Connect
Shinnaka, Yoshiharu; Kawakita, Hideyo; Kobayashi, Hitomi; Boice, Daniel C.; Martinez, Susan E.
2012-04-20
The ortho-to-para abundance ratio (OPR) of cometary molecules is considered to be one of the primordial characteristics of cometary ices, and contains information concerning their formation. Water is the most abundant species in cometary ices, and OPRs of water in comets have so far been determined from infrared spectroscopic observations of H2O rovibrational transitions. In this paper, we present a new method to derive the OPR of water in comets from the high-dispersion spectrum of the rovibronic emission of H2O+ in the optical wavelength region. The rovibronic emission lines of H2O+ are sometimes contaminated by other molecular emission lines, but they are not affected as seriously by telluric absorption as near-infrared observations. Since H2O+ ions are mainly produced from H2O by photoionization in the coma, the OPR of H2O+ is considered to be equal to that of water, based on nuclear spin conservation through the reaction. We have developed a fluorescence excitation model of H2O+ and applied it to the spectrum of comet C/2001 Q4 (NEAT). The derived OPR of water is 2.54 (+0.32/-0.25), which corresponds to a nuclear spin temperature (Tspin) of 30 (+10/-4) K. This is consistent with the previous value determined in the near-infrared for the same comet (OPR = 2.6 ± 0.3, Tspin = 31 (+11/-5) K).

5. Practical in-situ determination of ortho-para hydrogen ratios via fiber-optic based Raman spectroscopy
DOE PAGES
Sutherland, Liese-Marie; Knudson, James N.; Mocko, Michal; ...
2015-12-17
An experiment was designed and developed to prototype a fiber-optic-based laser system, which measures the ratio of ortho-hydrogen to para-hydrogen in an operating neutron moderator system at the Los Alamos Neutron Science Center (LANSCE) spallation neutron source. Preliminary measurements resulted in an ortho-to-para ratio of 3.06:1, which is within acceptable agreement with the previously published ratio. As a result, the successful demonstration of Raman spectroscopy for this measurement is expected to lead to a practical method that can be applied to similar in-situ measurements at operating neutron spallation sources.
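The 3.06:1 figure above is close to the 3:1 high-temperature equilibrium value that nuclear spin statistics predict for H2. As a hedged illustration (rigid-rotor energy levels and the textbook rotational temperature of roughly 85.3 K for H2 are assumptions of this sketch, not values from either abstract), the equilibrium ortho:para ratio can be computed in a few lines of R:

```r
# Equilibrium ortho:para ratio of H2 from rigid-rotor level populations.
# theta = 85.3 K is the textbook rotational temperature of H2 (assumed);
# odd J levels are ortho (spin weight 3), even J levels are para (weight 1).
opr_h2 <- function(T, theta = 85.3, Jmax = 30) {
  J <- 0:Jmax
  w <- (2 * J + 1) * exp(-theta * J * (J + 1) / T)
  3 * sum(w[J %% 2 == 1]) / sum(w[J %% 2 == 0])
}
round(sapply(c(20, 77, 300), opr_h2), 3)  # ~0.002, ~0.97, ~3.0
```

At room temperature enough levels are populated that the ratio sits at the 3:1 statistical limit, while near 20 K it collapses toward pure para, which is why cold moderator systems care about monitoring the conversion.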
6. Ruthenium-Catalyzed Ortho C-H Arylation of Aromatic Nitriles with Arylboronates and Observation of Partial Para Arylation.
PubMed
Koseki, Yuta; Kitazawa, Kentaroh; Miyake, Masashi; Kochi, Takuya; Kakiuchi, Fumitoshi
2016-12-29
Ruthenium-catalyzed C-H arylation of aromatic nitriles with arylboronates is described. The use of RuH2(CO){P(4-MeC6H4)3}3 as a catalyst provided higher yields of the ortho arylation products than the conventional RuH2(CO)(PPh3)3 catalyst. The arylation takes place mostly at the ortho positions, but unprecedented para arylation was also partially observed, giving ortho,para-diarylation products. In addition to C-H bond cleavage, the cyano group was also found to function as a directing group for cleavage of C-O bonds in aryl ethers.

7. Evaluation of para-dichlorobenzene emissions from solid moth repellant as a source of indoor air pollution
SciTech Connect
Chang, J.C.S.; Krebs, K.A.
1992-01-01
The paper reports results of dynamic and static chamber tests to evaluate para-dichlorobenzene emission rates from mothcakes. The data were analyzed by a model that assumes that the emission rate is controlled by gas-phase mass transfer. Results indicate that the para-dichlorobenzene emission from mothcakes is a temperature-sensitive sublimation process. Full-scale house tests were also conducted to measure mass transfer coefficients based on the model developed. The values of the mass transfer coefficient obtained are very comparable to those estimated by theoretical heat transfer studies.
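A gas-phase mass-transfer emission model of the kind the abstract describes usually takes the form E = km · A · (Cv − C): emission is driven by the gap between the saturation vapor concentration at the solid surface and the bulk room air. The R sketch below is a minimal illustration with order-of-magnitude placeholder values; none of the numbers are the paper's fitted coefficients.

```r
# Sketch of a gas-phase mass-transfer emission model, E = km * A * (Cv - C).
# All numbers are illustrative guesses, not the paper's fitted values.
km <- 0.3        # gas-phase mass-transfer coefficient, m/h (assumed)
A  <- 0.005      # exposed mothcake surface area, m^2 (assumed)
Cv <- 10000      # p-DCB saturation vapor concentration, mg/m^3 (assumed)
C  <- 50         # p-DCB concentration in room air, mg/m^3 (assumed)
E  <- km * A * (Cv - C)   # emission rate, mg/h
E                         # ~15 mg/h with these placeholder numbers
```

Because Cv rises steeply with temperature for a subliming solid, the same expression also reproduces the temperature sensitivity noted in the chamber tests.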
8. A grid of theoretical profiles for massive stars in transition [Uma grade de perfis teóricos para estrelas massivas em transição]
Nascimento, C. M. P.; Machado, M. A.
2003-08-01

9. Genomic organization of Hox and ParaHox clusters in the echinoderm, Acanthaster planci.
PubMed
Baughman, Kenneth W; McDougall, Carmel; Cummins, Scott F; Hall, Mike; Degnan, Bernard M; Satoh, Nori; Shoguchi, Eiichi
2014-12-01
The organization of echinoderm Hox clusters is of interest due to the role that Hox genes play in deuterostome development and body plan organization, and the unique gene order of the Hox complex in the sea urchin Strongylocentrotus purpuratus, which has been linked to the unique development of the axial region. Here, it is reported that the Hox and ParaHox clusters of Acanthaster planci, a corallivorous starfish found in the Pacific and Indian oceans, generally resemble the chordate and hemichordate clusters. The A. planci Hox cluster shares with sea urchins the loss of one of the medial Hox genes, even-skipped (Evx) at the anterior of the cluster, as well as the organization of the posterior Hox genes.

10. Wild birds as pets in Campina Grande, Paraíba State, Brazil: an ethnozoological approach.
PubMed
Licarião, Morgana R; Bezerra, Dandara M M; Alves, Rômulo R N
2013-03-01
Birds are among the animals most widely used by humans and are highly valued as pets. The present work reports the use of wild birds as pets in the city of Campina Grande, Paraíba State (PB), Brazil. The owners' choices and their perceptions of the species' ecology were assessed as well. The methodology employed included unstructured and semi-structured interviews, guided tours, and direct observations. A total of 26 bird species distributed among ten families and four orders were identified. The most frequently encountered order was Passeriformes (76.9%), with a predominance of the family Emberizidae (34.6%). The specimens kept as pets were principally obtained in public markets or from the breeders themselves. The popularity of birds as pets, compounded by the inefficiency of official controls over the commerce of wild animals, has stimulated the illegal capture and breeding of wild birds in Campina Grande.

11. Ostracoda and foraminifera from Paleocene (Olinda well), Paraíba Basin, Brazilian Northeast.
PubMed
Piovesan, Enelise K; Melo, Robbyson M; Lopes, Fernando M; Fauth, Gerson; Costa, Denize S
2017-08-07
Paleocene ostracods and planktonic foraminifera from the Maria Farinha Formation, Paraíba Basin, are herein presented. Eleven ostracod species were identified in the genera Cytherella Jones, Cytherelloidea Alexander, Eocytheropteron Alexander, Semicytherura Wagner, Paracosta Siddiqui, Buntonia Howe, Soudanella Apostolescu, Leguminocythereis Howe and, probably, Pataviella Liebau. The planktonic foraminifera are represented by the genera Guembelitria Cushman, Parvularugoglobigerina Hofker, Woodringina Loeblich and Tappan, Heterohelix Ehrenberg, Zeauvigerina Finlay, Muricohedbergella Huber and Leckie, and Praemurica Olsson, Hemleben, Berggren and Liu. The ostracods and foraminifera analyzed indicate an inner-shelf paleoenvironment for the studied section. Blooms of Guembelitria spp., which indicate either shallow environments or upwelling zones, were also recorded, reinforcing previous paleoenvironmental interpretations based on other fossil groups for this basin.

12. High-pressure dissociation of crystalline para-diiodobenzene: optical experiments and Car-Parrinello calculations.
PubMed
Brillante, Aldo; Della Valle, Raffaele G; Farina, Luca; Venuti, Elisabetta; Cavazzoni, Carlo; Emerson, Andrew P J; Syassen, Karl
2005-03-09
We have investigated the high-pressure properties of the molecular crystal para-diiodobenzene, by combining optical absorption, reflectance, and Raman experiments with Car-Parrinello simulations. The optical absorption edge exhibits a large red shift, from 4 eV at ambient conditions to about 2 eV near 30 GPa. Reflectance measurements up to 80 GPa indicate a redistribution of oscillator strength toward the near-infrared. The calculations, which correctly describe the two known molecular crystal phases at ambient pressure, predict a nonmolecular metallic phase, stable at high pressure. This high-density phase is characterized by an extended three-dimensional network, in which chemically bound iodine atoms form layers connected by hydrocarbon bridges. Experimentally, Raman spectra of samples recovered after compression show vibrational modes of elemental solid iodine. This result points to a pressure-induced molecular dissociation process which leads to the formation of domains of iodine and disordered carbon.

13. Blood vitamin C levels of motorized tricycle drivers in Parañaque, Philippines.
PubMed
Sia Su, Glenn L; Kayali, Sara
2008-08-01
Vitamin C is an essential micronutrient for maintaining the well-being of individuals. This study examined the blood vitamin C levels of motorized tricycle drivers in Parañaque City, Philippines, and the factors affecting them. Consenting drivers (N=49) were included in the study and were assessed through self-administered questionnaires, 24-h food recalls, anthropometric measurements, and analysis of their blood vitamin C levels. Factors related to the blood vitamin C levels of the motorized tricycle drivers were determined by correlation analysis. The majority (79.6%) of drivers had low blood vitamin C levels. Workplace and vitamin C supplementation (p<0.05) were significantly related to the blood vitamin C levels of the motorized tricycle drivers. Further studies are recommended to understand the problem and the determinants of vitamin C deficiency among the general population.
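The analysis in entry 13 boils down to correlating a continuous outcome with candidate factors across 49 drivers. A hedged R sketch with purely synthetic data (the variable names, effect size, and units are invented for illustration):

```r
# Correlation screen in the spirit of the vitamin C study: synthetic data
# for 49 drivers; variables and effect sizes are invented, not the study's.
set.seed(5)
n <- 49
supplement <- rbinom(n, 1, 0.3)                       # takes supplements?
vitc <- 0.25 + 0.30 * supplement + rnorm(n, 0, 0.15)  # blood vitamin C, mg/dL
cor.test(vitc, supplement)                            # point-biserial correlation
```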
14. Coordination nano-space as a stage of hydrogen ortho-para conversion.
PubMed
Kosone, Takashi; Hori, Akihiro; Nishibori, Eiji; Kubota, Yoshiki; Mishima, Akio; Ohba, Masaaki; Tanaka, Hiroshi; Kato, Kenichi; Kim, Jungeun; Real, José Antonio; Kitagawa, Susumu; Takata, Masaki
2015-07-01
The ability to design and control the properties of nano-sized space in porous coordination polymers (PCPs) would provide us with an ideal stage for fascinating physical and chemical phenomena. We found an interconversion of nuclear-spin isomers of the hydrogen molecule H2 adsorbed in a Hofmann-type PCP, {Fe(pz)[Pd(CN)4]} (pz = pyrazine), via the temperature dependence of Raman spectra. The ortho (o)-para (p) conversion process of H2 is forbidden for an isolated molecule. A charge density study using synchrotron radiation X-ray diffraction reveals the electric field generated in the coordination nano-space. The present results corroborate similar findings observed in different systems and confirm that o-p conversion can occur on non-magnetic solids and that an electric field can induce catalytic hydrogen o-p conversion.

15. Leaf Pressure Volume Data in Caxiuana and Tapajos National Forest, Para, Brazil (2011)
SciTech Connect
Powell, Thomas; Moorcroft, Paul
2017-01-01
Pressure-volume curve measurements on leaves of canopy trees from the Caxiuana and Tapajos National Forests, Para, Brazil. Tapajos samples were harvested from the km 67 forested area, which is adjacent to the decommissioned throughfall-exclusion drought experimental plot. Caxiuana samples were harvested from trees growing in the throughfall-exclusion plots. Data were collected in 2011. The dataset includes: date of measurement, site ID, plot ID, tree ID (species, tree tag #), leaf area, fresh weight, relative weight, leaf water potential, and leaf water loss. P-V curve parameters (turgor loss point, osmotic potential, and bulk modulus of elasticity) can be found in Powell et al. (2017) Differences in xylem cavitation resistance and leaf hydraulic traits explain differences in drought tolerance among mature Amazon rainforest trees. Global Change Biology.

16. Searching for auxetics with DYNA3D and ParaDyn
Hoover, Wm. G.; Hoover, C. G.
2005-03-01
We sought to simulate auxetic behavior by carrying out dynamic analyses of mesoscopic model structures. We began by generating nearly periodic cellular structures. Four-node Shell elements and eight-node Brick elements are the basic building blocks for each cell. The shells and bricks obey standard elastic-plastic continuum mechanics. The dynamical response of the structures was next determined for a three-stage loading process: (1) homogeneous compression; (2) viscous relaxation; (3) uniaxial compression. The simulations were carried out with both serial and parallel computer codes - DYNA3D and ParaDyn - which describe the deformation of the shells and bricks with a robust contact algorithm. We summarize the results found here.

17. Food for Life/Comida para la Vida: Creating a Food Festival to Raise Diabetes Awareness
PubMed Central
Lancaster, Kristie; Walker, Willie; Vance, Thomas; Kaskel, Phyllis; Arniella, Guedy; Horowitz, Carol
2012-01-01
African and Latino Americans have higher rates of diabetes and its complications than White Americans. Identifying people with undiagnosed diabetes and helping them obtain care can help to prevent complications and mortality.
To kick off a screening initiative, our community-academic partnership created the "Food for Life Festival," or "Festival Comida para la Vida." This article describes the community's perspective on the Festival, which was designed to screen residents and to demonstrate that eating healthy can be fun, tasty, and affordable in a community-centered, culturally consonant setting. More than 1,000 residents attended the event; 382 adults were screened for diabetes, and 181 scored as high risk. Fifteen restaurants distributed free samples of healthy versions of their popular dishes. Community residents, restaurateurs, and clinicians commented that the event transformed many of their preconceived ideas about healthy foods and patient care. PMID:20097997

18. pH-dependent spectral properties of para-aminobenzoic acid and its derivatives.
PubMed
Thayer, Mitchell P; McGuire, Colin; Stennett, Elana M S; Lockhart, Mary Kate; Canache, Daniela; Novak, Marnie; Schmidtke, Sarah J
2011-12-15
The local environment dictates the structural and functional properties of many important chemical and biological systems. The impact of pH on the photophysical properties of a series of para-aminobenzoic acids is examined using a combination of experimental spectroscopy and quantum chemical calculations. Following photoexcitation, PABA derivatives may undergo an intramolecular charge transfer (ICT) resulting in the formation of a zwitterionic species. The thermodynamics of the excited-state reaction and the temperature dependence of the radiative emission processes are evaluated through variable-temperature fluorescence spectroscopy carried out in a range of aqueous buffers. Quantum chemical calculations are used to analyze structural changes with modifications at the amine position and different protonation states. The ICT is only observed in the tertiary amine, which calculations show has more sp(2) character than the primary or secondary amines. Thermodynamic analysis indicates the ICT reaction is driven by entropy.

20. A Very Rare Cause of a Relapsing Para-Oesophageal Abscess
PubMed Central
Wespi, Simon Peter; Frei, Remus; Sulz, Michael Christian
2016-01-01
Oesophageal involvement in Crohn's disease (CD) is uncommon and most often accompanied by involvement of more distal parts.
Its presentation is mostly non-specific, and therefore a diagnosis, especially in isolated oesophageal disease, is difficult. We present the case of a 42-year-old male patient who was referred to our gastroenterology department because of a para-oesophageal abscess. Under antibiotic treatment the abscess healed but, despite great diagnostic efforts, its aetiology remained unclear. Three years later the patient was hospitalized again because of an abscess at the same site. Endoscopy showed disseminated ulcerations of the lower oesophagus, raising suspicion of CD. After excluding other possible causes, we made the diagnosis of isolated CD of the oesophagus. We review the available literature on this topic and discuss the clinical presentation, symptoms, endoscopic findings, and histology, as well as the treatment of oesophageal CD. PMID:27403115

1. Project for the systematic measurement of seeing at CASLEO [Proyecto para la medición sistemática de seeing en CASLEO]
Fernández Lajus, E.; Forte, J. C.
The quality of the astronomical seeing is certainly one of the most important parameters characterizing an observatory site. We therefore wish to determine whether the high seeing values observed with the 2.15 m telescope are due to effects internal to and/or around the dome, or simply reflect the intrinsic seeing of the site. The current cooling system for the primary mirror of the 2.15 m telescope appears to have improved the seeing noticeably. It remains necessary, however, to establish how far the seeing can be improved. The first stage of the project consisted of setting up the telescope installed for this purpose and acquiring the first tentative seeing measurements.

2. Evaluation of apparent viscosity of Para rubber latex by diffuse reflection near-infrared spectroscopy.
PubMed
Sirisomboon, Panmanas; Chowbankrang, Rawiphan; Williams, Phil
2012-05-01
Near-infrared spectroscopy in diffuse reflection mode was used to evaluate the apparent viscosity of Para rubber field latex and concentrated latex over the wavelength range of 1100 to 2500 nm, using partial least squares regression (PLSR). The model with ten principal components (PCs) developed from the raw spectra accurately predicted the apparent viscosity, with a correlation coefficient (r), standard error of prediction (SEP), and bias of 0.974, 8.6 cP, and -0.4 cP, respectively. The ratio of the standard deviation to the SEP (RPD) and the ratio of the range to the SEP (RER) for the prediction were 4.4 and 16.7, respectively. Therefore, the model can be used for measurement of the apparent viscosity of field latex and concentrated latex in quality assurance and process control in the factory.
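The PLSR workflow in entry 2 - fit on spectra, then judge the model by SEP, RPD, and RER on held-out samples - maps directly onto R's pls package. Below is a sketch on simulated stand-in "spectra"; all data are synthetic, and only ncomp = 10 echoes the abstract's ten components.

```r
# PLSR on synthetic spectra, then SEP / RPD / RER on a held-out set.
library(pls)
set.seed(2)
n <- 120; p <- 150
spectra <- matrix(rnorm(n * p), n, p)                           # stand-in NIR spectra
visc <- drop(spectra %*% rnorm(p, 0, 0.05)) + rnorm(n, 0, 0.5)  # stand-in viscosity
d <- data.frame(visc = visc); d$spectra <- spectra              # matrix column, pls idiom
train <- d[1:80, ]; test <- d[81:n, ]
fit  <- plsr(visc ~ spectra, ncomp = 10, data = train)
pred <- drop(predict(fit, newdata = test, ncomp = 10))
res  <- test$visc - pred
c(SEP = sd(res), bias = mean(res),
  RPD = sd(test$visc) / sd(res), RER = diff(range(test$visc)) / sd(res))
```

An RPD above about 3 is the usual rule of thumb for a model usable in process control, which is why the abstract's 4.4 supports the factory-use claim.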
3. Path integral centroid molecular dynamics simulation of para-hydrogen sandwiched by graphene sheets
Minamino, Yuki; Kinugawa, Kenichi
2016-11-01
Carbon-hydrogen composite systems of para-hydrogen (p-H2) sandwiched between a pair of graphene sheets have been investigated by means of path integral centroid molecular dynamics simulations at 17 K. It is shown that the sandwiched hydrogen is liquid-like, but p-H2 molecules are preferentially adsorbed onto the graphene sheets because of the attractive graphene-hydrogen interaction. The diffusion coefficient of p-H2 molecules in the direction parallel to the graphene sheets is comparable to that in pure liquid p-H2. There exists a characteristic mode at 140 cm-1 of the p-H2 molecules, attributed to an adsorption-binding motion perpendicular to the graphene sheets.

4. Theoretical and Experimental Studies on the Nonlinear Optical Chromophore para-Bromoacetanilide
Jothy, V. Bena; Vijayakumar, T.; Jayakumar, V. S.; Udayalekshmi, K.; Ramamurthy, K.; Joe, I. Hubert
2008-11-01
Vibrational spectral analysis of the hydrogen-bonded nonlinear optical (NLO) material para-bromoacetanilide (PBA) is carried out using NIR FT-Raman and FT-IR spectroscopy. Ab initio molecular orbital computations have been performed at the HF/6-31G(d) level to derive the equilibrium geometry, vibrational wavenumbers, intensities, and first hyperpolarizability. The lowering of the imino stretching wavenumbers suggests the existence of strong intermolecular N-H⋯O hydrogen bonding, substantiated by natural bond orbital (NBO) analysis. Blue-shifted C-H stretching wavenumbers, simultaneous activation of the carbonyl stretching mode, and strong activation of low-wavenumber H-bond stretching vibrations show the presence of intramolecular charge transfer in the molecule.

5. Quantum dynamical simulations for nuclear spin selective laser control of ortho- and para-fulvene.
PubMed
Belz, S; Grohmann, T; Leibscher, M
2009-07-21
In the present paper we explore the prospects for laser control of the photoinduced nonadiabatic dynamics of para- and ortho-fulvene with the help of quantum dynamical simulations. Previous investigations [Bearpark et al., J. Am. Chem. Soc. 118, 5253 (1996); Alfalah et al., J. Chem. Phys. 130, 124318 (2009)] show that photoisomerization of fulvene is hindered by ultrafast radiationless decay through a conical intersection at the planar configuration. Here, we demonstrate that photoisomerization can nevertheless be initiated by damping unfavorable nuclear vibrations with properly designed laser pulses. Moreover, we show that the resulting intramolecular torsion is nuclear spin selective. The selectivity of the photoexcitation with respect to the nuclear spin isomers can be further enhanced by applying an optimized sequence of two laser pulses.

7. Ortho-, meta-, and para-benzyne.
A comparative CCSD(T) investigation
Kraka, Elfi; Cremer, Dieter
1993-12-01
Geometries and energies of ortho-benzyne (1), meta-benzyne (2), and para-benzyne (3) have been calculated at the CCSD(T), GVB, GVB-LSDC, and MBPT(2) levels of theory employing the 6-31G(d,p) basis. The calculations suggest relative energies of 0, 13.7, and 25.3 kcal/mol, respectively, and ΔH°f(298) values of 110.8, 123.9, and 135.7 kcal/mol for 1, 2, and 3. With the ΔH°f(298) value of 3, the reaction enthalpy ΔRH(298) and the activation enthalpy ΔH‡(298) for the Bergman cyclization of (Z)-hexa-1,5-diyn-3-ene to 3 are calculated to be 9.1 and 28.5 kcal/mol.

8. The spectrochemical behavior of composites based on poly(para-phenylenevinylene), reduced graphene oxide and pyrene
Ilie, Mirela; Baibarac, Mihaela
2017-10-01
A new composite material based on poly(para-phenylenevinylene) (PPV), pyrene (Py) and reduced graphene oxide (RGO) is synthesized using a thermal conversion route. The properties of this material are investigated by Raman scattering, photoluminescence (PL), infrared (IR) and ultraviolet-visible (UV-Vis) spectroscopy. Adding Py to the PPV precursor solution (PPV PS) containing RGO yields important modifications in both the vibrational and electronic properties of these composites. The presence of Py in the PPV matrix induces a blue shift of the PPV PL. According to the Raman and IR studies, PPV is non-covalently functionalized with Py, which in turn interacts with RGO through π-π interactions, causing an important modification of the conjugation length of the polymer chains.

9. Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin I.
2016-01-01
This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, and to predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, and surface absorption, which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.

10. Fluorescence resonance energy transfer for investigation of the interaction of Para Red with serum albumins.
PubMed
Zhu, Lin; Zeng, Xiaodan; Zhang, Fusheng
2016-03-01
Para Red (PR) has been isolated from food additives and shown to be toxic to humans. To facilitate examination of its toxicity, the interaction between PR and serum albumins (SA) was studied using fluorescence quenching and circular dichroism (CD) spectrophotometry. The experiments showed that the fluorescence intensity of serum albumins decreased with increasing concentrations of PR, resulting from the binding of PR to SA. The binding constant, the number of binding sites and the thermodynamic parameters were calculated, and hydrogen bonding and van der Waals interactions were shown to play a key role in the binding process. Competition experiments indicated that PR mainly binds to Trp residues of SA within site I.
As the CD and three-dimensional spectra revealed, the addition of PR induced a conformational change in SA.

11. Analysis of the thermal reaction products of para-polyphenylene by combined gas chromatography-mass spectrometry
NASA Technical Reports Server (NTRS)
Fewell, L. L.
1976-01-01
Analysis of the volatiles and sublimate produced when para-polyphenylene is pyrolyzed to constant weight under vacuum in the temperature range from 380 to 1000 C indicates that the polymer undergoes thermal degradation in two stages. The first stage involves dehydrohalogenation, essentially a curing reaction that produces crosslinking between polyphenylene chains, resulting from the loss of chlorine from the polymer in the form of hydrogen chloride. The second stage of the thermal degradation is dehydrogenation, because hydrogen is the major volatile species. Increasing amounts of polycyclic aromatic hydrocarbons (phenanthrene and 9,10-benzphenanthrene) in the sublimate, concomitant with increasing C/H ratios of the polymeric residue with pyrolysis temperature, are consistent with the buildup of polynuclear structures in the polymer matrix.

14. Unusual Raman spectra of para-nitroaniline by sequential Fermi resonances.
PubMed
Xia, Jiarui; Zhu, Ling; Feng, Yanting; Li, Yongqing; Zhang, Zhenglong; Xia, Lixin; Liu, Liwei; Ma, Fengcai
2014-01-01
In this communication, we report the unusual Raman spectra of para-nitroaniline (PNA) arising from sequential Fermi resonances. The combinational mode at 1292 cm(-1) in the experimental Raman spectrum indirectly gains its initial spectral weight at 1392 cm(-1) through three sequential Fermi resonances.
These Fermi resonances result from the strong interaction between the NH2 donor group and the NO2 acceptor group. Our theoretical calculations provide a reasonable interpretation of the abnormal Raman spectra of PNA. The experimental surface-enhanced Raman scattering (SERS) spectrum of PNA further confirmed our conclusion: in it the Raman peak at 1292 cm(-1) is very weak, while the peak at 1392 cm(-1) becomes the strongest, consistent with the theoretical simulations.

15. Long-range excitons in conjugated polymers with ring torsions: poly(para-phenylene) and polyaniline
Harigaya, Kikuo
1998-08-01
Ring torsion effects on the optical excitation properties of poly(para-phenylene) (PPP) and polyaniline (PAN) are investigated by extending the Shimoi-Abe model (Shimoi Y and Abe S 1996 Synth. Met. 78 219). The model is solved using the intermediate-exciton formalism. Long-range excitons are characterized, and the long-range component of the oscillator strengths is calculated. We find that ring torsions affect the long-range excitons in PAN more readily than those in PPP, due to the larger torsion angle of PAN and the large number of bonds whose hopping integrals are modulated by torsions. Next, ring torsional disorder effects, simulated by a Gaussian distribution function, are analysed. The long-range component of the total oscillator strengths after sample averaging is nearly independent of the disorder strength in the PPP case, while that in the PAN case decreases readily as the disorder becomes stronger.

16. Applicability of the ParaDNA(®) Screening System to Seminal Samples.
PubMed
Tribble, Nicholas D; Miller, Jamie A D; Dawnay, Nick; Duxbury, Nicola J
2015-05-01
Seminal fluid represents a common biological material recovered from sexual assault crime scenes. Such samples can be prescreened using different techniques to determine cell type and relative amount before submission for full STR profiling. The ParaDNA(®) Screening System is a novel forensic test which identifies the presence of DNA through amplification and detection of two common STR loci (D16S539 and TH01) and the Amelogenin marker. The detection of the Y allele in samples could provide a useful tool in the triage and submission of sexual assault samples by enforcement authorities. Male template material was detected on a range of common sexual assault evidence items, including cotton pillow cases, condoms, swab heads and glass surfaces, with a detection limit of a 1 in 1000 dilution of neat semen. These data indicate this technology has the potential to be a useful tool for the detection of male donor DNA in sexual assault casework.

17. Shaped Ceria Nanocrystals Catalyze Efficient and Selective Para-Hydrogen-Enhanced Polarization.
PubMed
Zhao, Evan W; Zheng, Haibin; Zhou, Ronghui; Hagelin-Weaver, Helena E; Bowers, Clifford R
2015-11-23
Intense para-hydrogen-enhanced NMR signals are observed in the hydrogenation of propene and propyne over ceria nanocubes, nano-octahedra, and nanorods. The well-defined ceria shapes, synthesized by a hydrothermal method, expose different crystalline facets with various oxygen vacancy densities, which are known to play a role in hydrogenation and oxidation catalysis. While the catalytic activity of the hydrogenation of propene over ceria is strongly facet-dependent, the pairwise selectivity is low (2.4% at 375 °C), which is consistent with stepwise H atom transfer, and it is the same for all three nanocrystal shapes.
Selective semi-hydrogenation of propyne over ceria nanocubes yields hyperpolarized propene with a similar pairwise selectivity (2.7% at 300 °C), indicating product formation predominantly by non-pairwise addition. Ceria is also shown to be an efficient pairwise replacement catalyst for propene.

18. [Neuroscience in the Junta para Ampliación de Estudios].
PubMed
Díaz, Alfredo Baratas
2007-01-01
The development of the neurosciences in Spain in the first third of the 20th century had a strong histological and pathological component. The work of Santiago Ramón y Cajal and Luis Simarro was continued by some excellent disciples: Nicolás Achúcarro, Gonzalo Rodríguez Lafora, Fernando de Castro, etc. Some of them had to juggle various occupations, including the professional practice of psychiatry, before obtaining a modest - but stable - research position. In spite of some misalignments in the institutional development of the centers and personal biographical ups and downs, the Junta para Ampliación de Estudios was the great institution that fostered the international training of these investigators and equipped them with the means to carry out their work.

19. The LARSE Seismic Project - Working Toward a Safer Future for Los Angeles [El Proyecto Sísmico "LARSE" - Trabajando Hacia un Futuro con Más Seguridad para Los Angeles]
USGS Publications Warehouse
Henyey, Thomas L.; Fuis, Gary S.; Benthien, Mark L.; Burdette, Thomas R.; Christofferson, Shari A.; Clayton, Robert W.; Criley, Edward E.; Davis, Paul M.; Hendley, James W.; Kohler, Monica D.; Lutter, William J.; McRaney, John K.; Murphy, Janice M.; Okaya, David A.; Ryberg, Trond; Simila, Gerald W.; Stauffer, Peter H.
1999-01-01
The Los Angeles region contains a network of active faults, including many deep thrust faults that do not break the surface of the earth. These hidden faults include the previously unknown fault responsible for the devastation of the January 1994 Northridge earthquake, the costliest earthquake in United States history. The Los Angeles Region Seismic Experiment (LARSE) is locating hidden earthquake hazards beneath the Los Angeles region to improve the design of structures that must withstand the earthquakes that are inevitable in the future, and to help scientists determine where the strongest and most powerful shaking will occur.

20. Anion photoelectron spectroscopy of deprotonated ortho-, meta-, and para-methylphenol
Nelson, Daniel J.; Gichuhi, Wilson K.; Miller, Elisa M.; Lehman, Julia H.; Lineberger, W. Carl
2017-02-01
The anion photoelectron spectra of ortho-, meta-, and para-methylphenoxide, as well as methyl-deprotonated meta-methylphenol, were measured. Using the Slow Electron Velocity Map Imaging technique, the Electron Affinities (EAs) of the o-, m-, and p-methylphenoxyl radicals were measured as follows: 2.1991±0.0014, 2.2177±0.0014, and 2.1199±0.0014 eV, respectively. The EA of m-methylenephenol was also obtained, 1.024±0.008 eV. In all four cases, the dominant vibrational progressions observed are due to several ring-distortion vibrational normal modes that were activated upon photodetachment, leading to vibrational progressions spaced by ~500 cm-1. Using the methylphenol O-H bond dissociation energies reported by King et al.
and revised by Karsili et al., a thermodynamic cycle was constructed and the acidities of the methylphenol isomers were determined as follows: ΔacidH°(298 K) = 348.39±0.25, 348.82±0.25, 350.08±0.25, and 349.60±0.25 kcal/mol for cis-ortho-, trans-ortho-, m-, and p-methylphenol, respectively. The excitation energies for the ground doublet state to the lowest excited doublet state electronic transition in o-, m-, and p-methylphenoxyl were also measured as follows: 1.029±0.009, 0.962±0.002, and 1.029±0.009 eV, respectively. In the photoelectron spectra of the neutral excited states, C-O stretching modes were excited in addition to ring-distortion modes. Electron autodetachment was observed in the cases of both m- and p-methylphenoxide, with the para isomer showing a lower photon energy onset for this phenomenon.

1. Prevalence of oral trauma in Para-Pan American Games athletes.
PubMed
Andrade, Rafaela Amarante; Modesto, Adriana; Evans, Patricia Louise Scabell; Almeida, Anne Louise Scabell; da Silva, Juliana de Jesus Rodrigues; Guedes, Aurelino Machado Lima; Guedes, Fábio Ribeiro; Ranalli, Dennis N; Tinoco, Eduardo Muniz Barretto
2013-08-01
The aim of this cross-sectional epidemiological survey was to assess the prevalence of oral trauma in athletes representing 25 countries competing at the most recent Para-Pan American Games (III PARAPAN) held in Rio de Janeiro, Brazil. The study was approved by the appropriate institutional review board. The examiners participated in standardization and calibration training sessions before the field phase began. Invitations were sent to >1200 participating athletes competing in eight sports and to the Medical Committee of the Para-Pan American Sports Organization before and during the III PARAPAN. A convenience sample of 120 athletes was recruited. After signing an informed consent, all athletes answered a questionnaire. Data were collected at the clinical examination and recorded on a specific trauma form. The mean age of the athletes was 32.5 years. Males comprised 79.2% of the sample; females, 20.8%. The prevalence of oral trauma among the athletes was 47.5% (N = 57). However, only 15 athletes reported that these traumatic injuries were sports-related. The sport with the highest prevalence of oral trauma was judo (75%); the lowest was volleyball, with no reported traumatic injuries. The most common traumatic injury was enamel fracture (27.4%). The teeth most affected were the maxillary permanent central incisors (N = 19), followed by the maxillary premolars (N = 8). On the basis of the results of this study of oral trauma among athletes examined at the III PARAPAN, a recommendation for enhanced educational efforts and the use of properly fitted mouthguards to prevent traumatic injuries among high-performance athletes with disabilities seems warranted. © 2012 John Wiley & Sons A/S.

2. A novel FUT1 allele was identified in a Chinese individual with para-Bombay phenotype.
PubMed
Xu, X; Tao, S; Ying, Y; Hong, X; He, Y; Zhu, F; Lv, H; Yan, L
2011-12-01
The para-Bombay phenotype is characterised by H-deficient or H partially deficient red blood cells (RBCs) in individuals who secrete ABH antigens in their saliva. Samples from an individual whose RBCs had an apparent para-Bombay phenotype, and from his family members, were investigated, and a novel FUT1 allele was identified. The RBC phenotype was characterised by standard serologic techniques. Genomic DNA was sequenced with primers that amplified the coding sequences of FUT1 and FUT2, respectively.
Routine ABO genotyping analysis was performed. Haplotypes of FUT1 were identified by TOPO cloning sequencing. Recombinant expression vectors of the FUT1 mutation alleles were constructed and transfected into COS-7 cells, and the α-(1,2)-fucosyltransferase activity of the expressed protein was determined. The B101/O02 genotype of the proband was consistent with the ABH substances in his saliva. The proband carried a new FUT1 allele showing 35C/T, 235G/C and 682A/G heterozygosity by direct DNA sequencing. Two haplotypes, 235C and 35T+682G, were identified by TOPO cloning sequencing, and COS-7 cell lines transfected with five recombinant vectors (wild-type, 35T, 235C, 682G and 35T+682G alleles) were established. The α-(1,2)-fucosyltransferase activities of cell lysates transfected with the 35T, 235C, 682G and 35T+682G recombinant vectors were 79.45%, 16.23%, 80.32% and 24.59%, respectively, of that of the wild-type FUT1-transfected cell lysates. A novel FUT1 allele, 235C, was identified, which greatly diminishes the activity of α-(1,2)-fucosyltransferase. © 2011 The Authors. Transfusion Medicine © 2011 British Blood Transfusion Society.

3. A comparison of para-anastomotic compliance profiles after vascular anastomosis: nonpenetrating clips versus standard sutures.
PubMed
Baguneid, M S; Goldner, S; Fulford, P E; Hamilton, G; Walker, M G; Seifalian, A M
2001-04-01
Anastomotic compliance is an important predictive factor for the long-term patency of small-diameter vascular reconstruction. In this experimental study we compare the compliance of continuous and interrupted sutured vascular anastomoses with those using nonpenetrating clips. Both common carotid arteries in nine goats (average weight, 57 +/- 5.7 kg) were transected, and end-to-end anastomoses were constructed with nonpenetrating clips or polypropylene sutures. The latter were applied with both interrupted and continuous techniques. Intraluminal pressure was measured with a Millar Mikro-tip transducer, and vessel wall motion was determined with duplex ultrasound equipped with an echo-locked wall-tracking system. Diametrical compliance was determined, and environmental scanning electron microscopy was performed on explanted anastomoses. There was a reduction in anastomotic compliance, with associated proximal and distal para-anastomotic hypercompliant zones, for all techniques. However, compliance loss was significantly less in anastomoses with clips and interrupted sutures than with continuous sutures (P <.001). Furthermore, the total compliance mismatch across anastomoses with continuous sutures was significantly greater than with clips or interrupted sutures (P <.05). The mean time for constructing clipped anastomoses was 5.7 +/- 1.4 minutes, significantly less than for either continuous (P <.0001) or interrupted sutures (P <.0001). Furthermore, environmental scanning electron microscopy demonstrated minimal intimal damage with good intimal apposition in the clip group. Anastomoses performed with nonpenetrating clips resulted in improved para-anastomotic compliance profiles and reduced intimal damage compared with polypropylene sutures. These benefits may enhance long-term graft patency by reducing the risk of anastomotic intimal hyperplasia.
5. Effects of lead on K(+)-para-nitrophenyl phosphatase activity and protection by thiol reagents.
PubMed
Rajanna, B; Chetty, C S; McBride, V; Rajanna, S
1990-01-01
Lead (Pb) inhibited K(+)-stimulated para-nitrophenyl phosphatase (K(+)-PNPPase) of the rat brain P2 fraction in a concentration-dependent manner, with an IC50 of 3.5 microM. Altered pH-versus-activity profiles demonstrated comparable inhibition by Pb in buffered acidic, neutral and alkaline pH ranges. Inhibition of enzyme activity was higher at lower temperatures (17-27 degrees C) compared to 37 degrees C. Preincubation of the enzyme with sulfhydryl (-SH) agents such as cysteine (Cyst) and dithiothreitol (DTT), but not glutathione (GSH), protected against Pb inhibition. An uncompetitive type of inhibition with respect to the activation by K+ was indicated by a decrease in Vmax from 16.2 to 8.37 mumoles of para-nitrophenol (PNP)/mg protein/hr and in Km from 18.99 to 12.39 mM. Kinetic studies on substrate (p-nitrophenyl phosphate) activation in the presence of Pb (3.5 microM) indicated a significant decrease in Vmax from 8.94 to 4.69 mumoles of PNP/mg protein/hr with no change in Km. Cyst (3 microM) and DTT (10 microM) reversed the Pb-inhibited Vmax from 4.69 to 8.38 and 7.24 mumoles of PNP/mg protein/hr, respectively. These results suggest that a critical conformational property of K(+)-PNPPase is sensitive to Pb. The data also indicate that Pb inhibits the Na(+)-K+ ATPase system by interfering with dephosphorylation of the enzyme-phosphoryl complex, while Cyst and DTT protect against Pb inhibition.
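The kinetic pattern in entry 5 - Vmax roughly halved by Pb while Km is unchanged - is straightforward to reproduce and recover with a nonlinear least-squares fit. A minimal R sketch with simulated rate data; the parameter values only loosely echo the abstract's numbers.

```r
# Recovering a Michaelis-Menten pattern (Vmax halved by Pb, Km unchanged)
# from simulated rate data with nls(); all values are illustrative.
mm <- function(S, Vmax, Km) Vmax * S / (Km + S)
S  <- c(0.5, 1, 2, 5, 10, 20, 40)          # substrate concentrations, mM
set.seed(3)
v_ctrl <- mm(S, 8.9, 4) + rnorm(length(S), 0, 0.2)   # control rates
v_pb   <- mm(S, 4.7, 4) + rnorm(length(S), 0, 0.2)   # with Pb present
coef(nls(v_ctrl ~ mm(S, Vmax, Km), start = list(Vmax = 8, Km = 3)))
coef(nls(v_pb   ~ mm(S, Vmax, Km), start = list(Vmax = 8, Km = 3)))
# Vmax drops roughly twofold under Pb while Km stays put.
```

A Vmax decrease at constant Km is the classic signature of noncompetitive-type inhibition in substrate-activation plots, consistent with Pb acting on the enzyme rather than competing at the substrate site.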
6. Association between anemia and subclinical infection in children in Paraíba State, Brazil
PubMed Central
Sales, Márcia Cristina; de Queiroz, Everton Oliveira; Paiva, Adriana de Azevedo
2011-01-01
Background: With subclinical infection, serum iron concentrations are reduced, altering the synthesis of hemoglobin, the main indicator of anemia. Objective: To evaluate the association between subclinical infection and anemia in children of Paraíba State. Methods: This is a cross-sectional study involving 1116 children aged 6 to 59 months from nine municipalities of Paraíba State. Demographic and socioeconomic data were collected by means of a specific questionnaire. C-reactive protein and hemoglobin levels were determined by the latex agglutination technique and an automated counter, respectively. C-reactive protein values ≥ 6 mg/L were taken as indicative of subclinical infection, while the presence of anemia was determined by hemoglobin values < 11.0 g/dL. The data were analyzed using the Epi Info program, with significance set at 5%. Results: The data showed that 80.1% of the children belonged to families below the poverty line, with per capita income < ½ of the minimum wage at the time (R$ 350.00, approximately US$ 175.00). The prevalences of subclinical infection and anemia were 11.3% and 36.3%, respectively. Subclinical infection was significantly associated with anemia (p < 0.05), with lower hemoglobin levels in children with C-reactive protein ≥ 6 mg/L: the mean hemoglobin level was 10.93 g/dL (standard deviation, SD = 1.21 g/dL) in children with subclinical infection and 11.26 g/dL (SD = 1.18 g/dL) in those without infection (p < 0.05). Conclusion: Anemia is associated with subclinical infection in this population, indicating that this is an important variable to be considered in studies of the prevalence of anemia in children. PMID:23284254
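The core comparison in entry 6 can be re-enacted in R by simulating the reported group means and standard deviations and running a two-sample test. Group sizes below are derived from the stated 11.3% prevalence of subclinical infection among 1116 children; the simulated draws are, of course, not the study's data.

```r
# Simulating the reported hemoglobin means/SDs by CRP group, then testing.
set.seed(4)
hb_inf   <- rnorm(126, 10.93, 1.21)   # ~11.3% of 1116 children, CRP >= 6 mg/L
hb_noinf <- rnorm(990, 11.26, 1.18)   # remaining children
t.test(hb_inf, hb_noinf)              # difference in mean hemoglobin
mean(c(hb_inf, hb_noinf) < 11.0)      # crude anemia prevalence under normality
# (~0.42 here vs 36.3% reported; real hemoglobin is not exactly Gaussian)
```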
7. Extraperitoneal Robotic-Assisted Para-Aortic Lymphadenectomy in Gynecologic Cancer Staging: Current Evidence.
PubMed
Bogani, Giorgio; Ditto, Antonino; Martinelli, Fabio; Signorelli, Mauro; Chiappa, Valentina; Sabatucci, Ilaria; Scaffa, Cono; Lorusso, Domenica; Raspagliesi, Francesco
2016-01-01
We reviewed the current evidence on the safety, effectiveness, and applicability of extraperitoneal robotic-assisted para-aortic lymphadenectomy (ExtRA-PAL) as the staging procedure for gynecologic malignancies. The PubMed (MEDLINE), Scopus, and Web of Science databases, as well as ClinicalTrials.gov, were searched for original studies reporting outcomes of ExtRA-PAL. The quality of the included studies and their level of recommendation were assessed using the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) and the American College of Obstetricians and Gynecologists guidelines, respectively. Overall, 62 studies were identified; after a process of evidence acquisition, 5 original investigations were available for this review, including 98 patients undergoing ExtRA-PAL. The main surgical indication was staging for cervical cancer (n = 71, 72%). The mean (SD) para-aortic node yield was 15.4 (±4.7) nodes. Blood transfusion and intraoperative complication rates were 2% and 6%, respectively. ExtRA-PAL was completed in 88 patients (90%). Six (6%) and 4 (4%) patients had conversion to other minimally invasive procedures and to open surgery, respectively. The success rate was 99% among patients undergoing ExtRA-PAL without concomitant procedures. Overall, the mean (SD) length of hospital stay was 2.8 (±0.5) days. Twenty-four patients (24%) developed postoperative events. According to the Clavien-Dindo grading system, grade IIIa and IIIb morbidity rates were 12% and 2%, respectively. No grade IV or V morbidity occurred. ExtRA-PAL is associated with a high success rate and a relatively low morbidity rate. However, because of the limited data on this issue, further studies are warranted to assess the long-term effectiveness of this procedure.

8. On the Teaching of Astronomy: Challenges for Implementation [O Ensino de Astronomia: Desafios para Implantação]
Faria, R. Z.; Voelzke, M. R.
2008-09-01
In 2002, the teaching of astronomy was proposed as one of the structuring themes by the Brazilian National Curriculum Parameters and suggested as a facilitator for students to understand physics as a human construction and part of their everyday world, but its concepts have rarely been taught. The present study discusses two aspects of how astronomy is approached. The first is whether it is being addressed at all by secondary school teachers, and the second is how it is being taught. A questionnaire was administered from the second half of 2006 and throughout 2007 to teachers of physics working in state schools in Rio Grande da Serra, Ribeirão Pires and Mauá, in the state of São Paulo. Of the 66.2% of teachers who answered the questionnaire in these municipalities, 57.4% did not cover any astronomy topic, 70.2% did not use a laboratory, 89.4% did not use any kind of computer program, 83.0% had never taken students to museums or planetariums, and 38.3% did not recommend any astronomy book or magazine to their students. Even though astronomy is considered potentially meaningful content, it was not part of school planning. Proposals aimed at strategies for the continuing education of teachers, such as specific courses on the teaching of astronomy, are therefore needed.

9. Electron Spin Polarization Transfer to ortho-H2 by Interaction of para-H2 with Paramagnetic Species: A Key to a Novel para → ortho Conversion Mechanism.
PubMed
Terenzi, Camilla; Bouguet-Bonnet, Sabine; Canet, Daniel
2015-05-07
We report that at ambient temperature, and with 100% enriched para-hydrogen (p-H2) dissolved in organic solvents, paramagnetic spin catalysis of para → ortho hydrogen conversion is accompanied at the onset by a negative ortho-hydrogen (o-H2) proton NMR signal. This novel finding indicates an electron spin polarization transfer, and we show here that this can only occur if the H2 molecule is dissociated upon its transient adsorption by the paramagnetic catalyst. Following desorption, o-H2 is created until thermodynamic equilibrium is reached. A simple theory confirms that, in the presence of a static magnetic field, the hyperfine coupling between unpaired electrons and nuclear spins is responsible for the observed polarization transfer. Owing to the negative electron gyromagnetic ratio, this explains the experimental results and establishes an as yet unexplored mechanism for para → ortho conversion. Finally, we show that the recovery of o-H2 magnetization toward equilibrium can be simply modeled, leading to the para → ortho conversion rate.
10. Chlorination of 2-phenoxypropanoic acid with NCP in aqueous acetic acid: using a novel ortho-para relationship and the para/meta ratio of substituent effects for mechanism elucidation.
PubMed
Segurado, Manuel A P; Reis, João Carlos R; de Oliveira, Jaime D Gomes; Kabilan, Senthamaraikannan; Shanthi, Manohar
2007-07-06
Rate constants were measured for the oxidative chlorodehydrogenation of (R,S)-2-phenoxypropanoic acid and nine ortho-, ten para- and five meta-substituted derivatives, using (R,S)-1-chloro-3-methyl-2,6-diphenylpiperidin-4-one (NCP) as the chlorinating agent. The kinetics were run in 50% (v/v) aqueous acetic acid acidified with perchloric acid, under pseudo-first-order conditions with respect to NCP, at intervals of 5 K between 298 and 318 K, except at the highest temperature for the meta derivatives. The dependence of the rate constants on temperature was analyzed in terms of the isokinetic relationship (IKR). For the 20 reactions studied at five different temperatures, the isokinetic temperature was estimated to be 382 K, which suggests the preferential involvement of water molecules in the rate-determining step. The dependence of the rate constants on meta and para substitution was analyzed using the tetralinear extension of the Hammett equation. The parameter lambda for the para/meta ratio of polar substituent effects was estimated to be 0.926, and its electrostatic modeling suggests the formation of an activated complex bearing an electric charge near the oxygen atom of the phenoxy group. A new approach is introduced for examining the effect of ortho substituents on reaction rates. Using IKR-determined values of activation enthalpies for a set of nine pairs of substrates with a given substituent, a linear correlation is found between the activation enthalpies of ortho and para derivatives. The correlation is interpreted in terms of the selectivity of the reactant toward para- or ortho-monosubstituted substrates, the slope of which is related to the ortho effect. This slope is thought to be approximated by the ratio of polar substituent effects from the ortho and para positions in benzene derivatives. Using the electrostatic theory of through-space interactions and a dipole length of 0.153 nm, this ratio was calculated at various positions of a charged reaction
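For readers unfamiliar with Hammett-type analysis, of which the tetralinear treatment above is an extension, the core idea fits in a few lines of R: regress log10(k/k0) on tabulated sigma constants and read the reaction constant rho off the slope. The sigma values below are the standard Hammett constants; the rate data are invented for illustration and are not this paper's measurements.

```r
# Hammett analysis in miniature: the slope of log10(k/k0) on sigma is rho.
sigma <- c(p_OMe = -0.27, p_Me = -0.17, H = 0.00, p_Cl = 0.23, p_NO2 = 0.78)
logk  <- c(0.55, 0.33, 0.00, -0.45, -1.60)     # hypothetical relative rates
fit <- lm(logk ~ sigma)
coef(fit)[["sigma"]]  # rho < 0: positive charge builds up in the transition state
```

A negative rho of this kind is exactly what an activated complex bearing positive charge near the phenoxy oxygen, as the electrostatic modeling above suggests, would produce.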
14. Madres Para Niños: Engaging Latina Mothers as Consultees to Promote Their Children's Early Elementary School Achievement ERIC Educational Resources Information Center Knotek, Steven E.; Sánchez, Marta 2017-01-01 The Madres para Niños (MpN) program uses consultee-centered consultation as a vehicle to help immigrant Latino parents focus and reframe their preexisting child advocacy skills toward their children's successful transition into elementary school in a new geographic and cultural context. This article describes the Latina mother's experience as… 15. 75 FR 34943 - Defense Federal Acquisition Regulation Supplement; Para-Aramid Fibers and Yarns Manufactured in a... Federal Register 2010, 2011, 2012, 2013, 2014 2010-06-21 ... made from DuPont Kevlar. DuPont supplies its Kevlar staple fiber to four major and six minor yarn... para-aramid yarns: DuPont, which makes Kevlar, and the Teijin Group, which makes Twaron. DuPont... 16. LBA-ECO TG-07 Trace Gas Fluxes, Undisturbed and Logged Sites, Para, Brazil: 2000-2002 Treesearch M.M. Keller; R.K. Varner; J.D. Dias; H.S. Silva; P.M. Crill; Jr. de Oliveira; G.P. Asner 2009-01-01 Trace gas fluxes of carbon dioxide, methane, nitrous oxide, and nitric oxide were measured manually at undisturbed and logged forest sites in the Tapajos National Forest, near Santarem, Para, Brazil. Manual measurements were made approximately weekly at both the undisturbed and logged sites. Flux measurements on clay and sand soils were completed at the undisturbed sites... 17. ParaStream: A parallel streaming Delaunay triangulation algorithm for LiDAR points on multicore architectures Wu, Huayi; Guan, Xuefeng; Gong, Jianya 2011-09-01 This paper presents a robust parallel Delaunay triangulation algorithm called ParaStream for processing billions of points from non-overlapping block LiDAR files. The algorithm targets ubiquitous multicore architectures. ParaStream integrates streaming computation with a traditional divide-and-conquer scheme, in which additional erase steps are implemented to reduce the runtime memory footprint. Furthermore, a kd-tree-based dynamic schedule strategy is also proposed to distribute triangulation and merging work onto the processor cores for improved load balance. ParaStream exploits most of the computing power of multicore platforms through parallel computing, demonstrating both high data throughput and a low memory footprint. Experiments on a 2-way quad-core Intel Xeon platform show that ParaStream can triangulate approximately one billion LiDAR points (16.4 GB) in about 16 min with only 600 MB of physical memory. The total speedup (including I/O time) is about 6.62 with 8 concurrent threads. 18. Factor Analysis of the Spanish Version of the WAIS: The Escala de Inteligencia Wechsler para Adultos (EIWA). ERIC Educational Resources Information Center Gomez, Francisco C., Jr.; And Others 1992-01-01 The standardization of the Escala de Inteligencia Wechsler para Adultos (EIWA) and the original Wechsler Adult Intelligence Scale (WAIS) were subjected to principal components analysis to examine their comparability for 616 EIWA subjects and 800 WAIS subjects. Similarity of the factor structures of both scales is supported. (SLD)
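The kind of structure comparison in item 18 can be illustrated with a toy computation. The R sketch below generates two synthetic samples sharing a two-component structure (it does not use the EIWA/WAIS data), extracts principal components from each, and compares loading patterns with Tucker's congruence coefficient:

```
# Toy comparison of component structures across two synthetic samples.
set.seed(7)
n <- 200; p <- 6
L <- matrix(c(rep(0.7, 3), rep(0, 3),        # two non-overlapping factors
              rep(0, 3),   rep(0.7, 3)), p, 2)
X1 <- matrix(rnorm(n * 2), n, 2) %*% t(L) + matrix(rnorm(n * p, sd = 0.5), n, p)
X2 <- matrix(rnorm(n * 2), n, 2) %*% t(L) + matrix(rnorm(n * p, sd = 0.5), n, p)
A <- prcomp(X1, scale. = TRUE)$rotation[, 1:2]
B <- prcomp(X2, scale. = TRUE)$rotation[, 1:2]
# Tucker's congruence coefficient per component; |phi| near 1 = similar
phi <- colSums(A * B) / sqrt(colSums(A^2) * colSums(B^2))
phi
```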
20. PLASMA CLEARANCE OF VITELLOGENIN IN SHEEPSHEAD MINNOWS AFTER CESSATION OF EXPOSURE TO 17BETA-ESTRADIOL AND PARA-NONYLPHENOL EPA Science Inventory Two experiments were performed to determine the rate of vitellogenin plasma accumulation and clearance in male sheepshead minnows (Cyprinodon variegatus) during and after exposure to either 17beta-estradiol (E2) or para-nonylphenol (p-NP). Adult fish were continuously exposed to aqu... 1. Four novel sequences in Drosophila melanogaster homologous to the auxiliary Para sodium channel subunit TipE. PubMed Derst, Christian; Walther, Christian; Veh, Rüdiger W; Wicher, Dieter; Heinemann, Stefan H 2006-01-20 TipE is an auxiliary subunit of the Drosophila Para sodium channel. Here we describe four sequences, TEH1-4, homologous to TipE in the Drosophila melanogaster genome, harboring all the typical structures of both TipE and the beta-subunit family of large-conductance Ca(2+)-activated potassium channels: short cytosolic N- and C-terminal stretches, two transmembrane domains, and a large extracellular loop with two disulfide bonds. Whereas TEH1 and TEH2 lack the TipE-specific extension in the extracellular loop, both TEH3 and TEH4 possess two extracellular EGF-like domains. A CNS-specific expression was found for TEH1, while TEH2-4 were more widely expressed. The genes for TEH2-4 are localized close to the tipE gene on chromosome 3L. Coexpression of TEH subunits with Para in Xenopus oocytes showed a strong (30-fold, TEH1), medium (5- to 10-fold, TEH2 and TEH3), or no (TEH4) increase in sodium current amplitude, while TipE increased the current 20-fold. In addition, steady-state inactivation and the recovery from fast inactivation were altered by coexpression of Para with TEH1. We conclude that members of the TEH family are auxiliary subunits for Para sodium channels and possibly other ion channels. 2. Theoretical Study of the Mechanism Behind the para-Selective Nitration of Toluene in Zeolite H-Beta SciTech Connect Andersen, Amity; Govind, Niranjan; Subramanian, Lalitha 2011-11-28 Periodic density functional theory calculations were performed to investigate the origin of the favorable para-selective nitration of toluene exhibited by zeolite H-beta with an acetyl nitrate nitration agent. Energy calculations were performed for each of the 32 crystallographically unique Bronsted acid sites of a beta polymorph B zeolite unit cell, several of which yield Bronsted acid sites of comparable stability. One particular aluminum T-site with three favorable Bronsted site oxygens embedded in a straight 12-T channel wall provides multiple favorable proton transfer sites. Transition state searches around this aluminum site were performed to determine the barrier to reaction for both para and ortho nitration of toluene. A three-step process was assumed for the nitration of toluene, with two organic intermediates: the pi- and sigma-complexes. The rate-limiting step is the proton transfer from the sigma-complex to a zeolite Bronsted site. The barrier for this step in ortho nitration is shown to be nearly 2.5 times that in para nitration.
This discrepancy appears to be due to steric constraints, imposed by the curvature of the large 12-T pore channels of beta and by the toluene methyl group, that are present in the ortho approach but not in the para approach. 3. Desarrollo de fotonovelas para concienciar sobre trastornos de la conducta alimentaria en latinos en los Estados Unidos PubMed Central Reyes-Rodríguez, Mae Lynn; García, Marissa; Silva, Yormeri; Sala, Margarita; Quaranta, Michela; Bulik, Cynthia M. 2016-01-01 4. Millimeter-wave spectroscopy of S2Cl2: a candidate molecule for measuring ortho-para transition. PubMed Dehghani, Zeinab Tafti; Ota, Shinji; Mizoguchi, Asao; Kanamori, Hideto 2013-10-03 S2Cl2 is a candidate for the observation of ortho-para transitions. To estimate the ortho-para mixing in a hyperfine-resolved rotational state, pure rotational transitions were measured by millimeter-wave (mm-wave) spectroscopy using two different experimental set-ups. Transitions from term values around 20 K were measured with a supersonic jet, and those around 200 K with a dry-ice-cooled gas cell. Several hundred peaks were assigned for the naturally abundant S2(35)Cl2 and S2(35)Cl(37)Cl isotopic species, and the rotational molecular parameters, including the fourth-order and sixth-order centrifugal distortion constants, were determined. The hyperfine structures were partly resolved in some Q-branch transitions, which were well described with the hyperfine constants determined by FTMW spectroscopy in the centimeter-wave region. With the new rotational constants determined in our study and the previous hyperfine constants, it will be possible to obtain a more reliable ortho-para mixing ratio and to narrow down the candidate transitions in the mm-wave region for the observation of an ortho-para transition. 5. Electrochemical determination of para-nitrophenol at apatite-modified carbon paste electrode: application in river water samples. PubMed El Mhammedi, M A; Achak, M; Bakasse, M; Chtaini, A 2009-04-15 The behavior of a modified carbon paste electrode (CPE) for para-nitrophenol detection by cyclic and square wave voltammetry (SWV) was studied. The electrode was built by incorporating hydroxyapatite (HAP) into the carbon paste. The overall analysis involved a two-step procedure: an accumulation step at open circuit, followed by medium exchange to a pure electrolyte solution for the voltammetric quantification. During the preconcentration step, para-nitrophenol was adsorbed onto the hydroxyapatite surface. The influence of various experimental parameters on the HAP-CPE response was investigated (i.e., pH, carbon paste composition, accumulation time). Under the optimized conditions, the height of the reduction peak was found to be directly proportional to the para-nitrophenol concentration in the range between 2×10^-7 mol L^-1 and 1×10^-4 mol L^-1, from which a detection limit (DL) of 8×10^-9 mol L^-1 was determined for peak 1. The proposed electrode (HAP-CPE) presented good repeatability, evaluated in terms of relative standard deviation (R.S.D. = 2.87%, n = 7), and was applied to para-nitrophenol determination in water samples. The average recovery for these samples was 86.2%.
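A calibration of the kind described in item 5 is straightforward to sketch. The R snippet below fabricates peak currents over the reported linear range and estimates a detection limit as 3*sd(blank)/slope; the sensitivity and noise values are invented, so the resulting DL only illustrates the procedure, not the paper's value:

```
# Illustrative SWV calibration curve and detection-limit estimate.
set.seed(3)
conc <- 10^seq(log10(2e-7), -4, length.out = 8)   # 2e-7 to 1e-4 mol/L
ip   <- 5e4 * conc + rnorm(8, sd = 5e-4)          # assumed sensitivity + noise
cal  <- lm(ip ~ conc)                             # linear calibration fit
blank_sd <- 5e-4                                  # assumed blank noise (A)
3 * blank_sd / coef(cal)["conc"]                  # estimated DL in mol/L
```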
6. A Conceptual Model for Supporting Para-Teacher Learning in an Indian Non-Governmental Organization (NGO) ERIC Educational Resources Information Center Raval, Harini; McKenney, Susan; Pieters, Jules 2010-01-01 Non-governmental organizations (NGOs) are being recognized globally for their influential role in realizing the UN Millennium Development Goal of education for all in developing countries. NGOs mostly employ untrained para-educators for grassroots activities. The professional development of these teachers is critical for NGO effectiveness, yet… 7. [Genetics and the Junta para Ampliación de Estudios e Investigaciones Científicas]. PubMed Peláez, Raquel Alvarez 2007-01-01 The aim of this paper is to show the essential role played by the Junta para Ampliación de Estudios in the origins of Spanish genetics, drawing on the most relevant researchers of the period, among them two women, Jimena Fernández de la Vega and Käte Pariser. 8. The Role Of Women In Popular Education In Bolivia: A Case Study Of The "Oficina Juridica Para La Mujer" ERIC Educational Resources Information Center Kollins, Judith M.; Hansman, Catherine A. 2005-01-01 This study examines how the Education Office of the "Oficina Juridica Para la Mujer" [Women's Legal Office], a community-based popular education organization in Cochabamba, Bolivia, works with women to address personal, legal, and policy issues through local leadership training and popular education methodology. We investigate the… 9. Global variation of the para hydrogen fraction in Jupiter's atmosphere and implications for dynamics on the outer planets NASA Technical Reports Server (NTRS) Conrath, B. J.; Gierasch, P. J. 1984-01-01 A detailed analysis of the Voyager infrared spectrometer measurements of Jupiter's atmosphere is presented, and possible implications of para hydrogen disequilibrium for the energetics and dynamics of that atmosphere are examined. The method of data analysis is described, and results for the large-scale latitude variation of the para hydrogen fraction are presented. The Jovian results show pronounced latitude variation and are compared with other parameters, including wind fields, thermal structure, and various indicators of atmospheric clouds. The problem of the equilibration rate is reexamined, and it is concluded that on Jupiter the equilibration time is longer than the radiative time constant at the level of emission to space, but that this inequality reverses at greater depths. A model for the interaction of fluid motions with the ortho-para conversion process is presented, and a consistent mixing length theory for the reacting ortho-para mixture is developed. Several implications of the Jovian data for atmospheric energetics and stability on the outer planets are presented.
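The thermodynamic baseline behind the disequilibrium discussed in item 9 is the equilibrium para fraction of H2, which follows from the rotational partition function with nuclear-spin weights (even J levels are para, weight 1; odd J are ortho, weight 3). A self-contained R sketch, using the standard H2 rotational constant of about 59.3 cm^-1 (roughly 85.4 K):

```
# Equilibrium para fraction of H2 versus temperature.
para_frac_eq <- function(T, Jmax = 20, B = 85.4) {  # B in kelvin
  J <- 0:Jmax
  g_nuc <- ifelse(J %% 2 == 0, 1, 3)                # spin weights: para 1, ortho 3
  w <- g_nuc * (2 * J + 1) * exp(-B * J * (J + 1) / T)
  sum(w[J %% 2 == 0]) / sum(w)
}
sapply(c(50, 100, 150, 300), para_frac_eq)  # ~0.77, 0.39, 0.29, 0.25
```

The high-temperature limit of 1/4 is the usual "normal hydrogen" ortho:para ratio of 3:1.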
10. Violencia de Pareja en Mujeres Hispanas: Implicaciones para la Investigación y la Práctica PubMed Central Gonzalez-Guarda, Rosa Maria; Becerra, Maria Mercedes 2012-01-01 Research on intimate partner violence (IPV) suggests that Hispanic women are disproportionately affected by the occurrence and consequences of this public health problem. The aim of this article is to present the state of the art on the epidemiology, consequences, and risk factors for IPV among Hispanic women, and to discuss the implications for research and practice. Studies have demonstrated a strong association of socioeconomic status, drug and alcohol abuse, mental health, acculturation, immigration, risky sexual behaviors, and history of abuse with intimate partner violence. However, more studies must be carried out to identify other risk and protective factors in non-clinical Hispanic populations. As knowledge about the etiology of IPV among Hispanic women expands, nurses and other health professionals must develop, implement, and evaluate culturally appropriate strategies for the primary and secondary prevention of intimate partner violence. PMID:26166938 11. Contribuições para o projeto da câmara infravermelha Spartan do telescópio SOAR Laporte, R.; Jablonski, F.; Loh, E. 2003-08-01 As part of a collaboration between the Astrophysics Division of INPE, IAG-USP, the MEGALIT Millennium Institute, and Michigan State University, we worked for a year with Dr. Edwin Loh's group (MSU) on the design and detailing of several subsystems for the Spartan infrared camera of the SOAR telescope. It is an imager for the J, H, and K bands that exploits the full potential, in terms of image quality and field of view, delivered by the first-order adaptive optics system of the SOAR telescope. We designed detailed solutions for the filter/grism/Lyot-mask wheel subsystems, the detector-mosaic packaging subsystem in two distinct versions, and the liquid-nitrogen supply subsystem. We also maintained general oversight of all remaining parts and their volumetric envelopes, producing solutions for the integration of all components. In this work, we describe the main contributions and summarize the current state of the instrument. 12. Conservation of ParaHox genes' function in patterning of the digestive tract of the marine gastropod Gibbula varia PubMed Central 2010-01-01 Background Presence of all three ParaHox genes has been described in deuterostomes and lophotrochozoans, but to date one of these three genes, Xlox, has not been reported from any ecdysozoan taxa, and both Xlox and Gsx are absent in nematodes. There is evidence that the ParaHox genes were ancestrally a single chromosomal cluster. Colinear expression of the ParaHox genes in anterior, middle, and posterior tissues of the several species studied so far suggests that these genes may be responsible for axial patterning of the digestive tract. So far, there are no data on the expression of these genes in molluscs. Results We isolated the complete coding sequences of the three Gibbula varia ParaHox genes and then tested their expression in larval and postlarval development. In Gibbula varia, the ParaHox genes participate in patterning of the digestive tract and are expressed in some cells of the neuroectoderm. The expression of these genes coincides with the gradual formation of the gut in the larva. Gva-Gsx patterns potential neural precursors of the cerebral ganglia as well as of the apical sensory organ. During larval development this gene is involved in the formation of the mouth, and during postlarval development it is expressed in the precursor cells involved in secretion of the radula, the odontoblasts. Gva-Xlox and Gva-Cdx are involved in gut patterning in the middle and posterior parts of the digestive tract, respectively.
Both genes are expressed in some ventral neuroectodermal cells; however, the expression of Gva-Cdx fades in later larval stages, while the expression of Gva-Xlox in these cells persists. Conclusions In Gibbula varia the ParaHox genes are expressed during anterior-posterior patterning of the digestive system. This colinearity is not easy to spot during early larval stages because the differentiated endothelial cells within the yolk permanently migrate to their destinations in the gut. After torsion, Gsx patterns the mouth and foregut, Xlox the midgut gland or 13. Viviendo Con Incendios: Una guia para los duenos de casas en Nuevo Mexico [Living with Fire: A Guide for the Homeowner-New Mexico] Treesearch U.S. Department of the Interior Bureau of Land Management 2008-01-01 The potential for human and property losses due to wildfire in New Mexico has been increasing. In response to this danger, local, state, federal, private, and non-profit entities have joined to create How to Reduce the Threat of Wildfires, a program aimed at homeowners. This is not a program... 14. La EPA propone normas más rigurosas para las personas que aplican los plaguicidas de más alto riesgo EPA Pesticide Factsheets The EPA has issued a proposed revision of the Certification of Pesticide Applicators rule. The rule will help keep our communities safe, safeguard the environment, and reduce risk to those who apply pesticides. 15. PFI-ZEKE (Pulsed Field Ionization-Zero Electron Kinetic Energy) para el estudio de iones Castaño, F.; Fernández, J. A.; Basterretxea, F.; Longarte, A.; Sánchez Rayo, M. N.; Martínez, R. 16. Agrobacterium tumefaciens pTAR parA promoter region involved in autoregulation, incompatibility and plasmid partitioning. PubMed Gallie, D R; Kado, C I 1987-02-05 The locus responsible for directing proper plasmid partitioning of Agrobacterium tumefaciens pTAR is contained within a 1259 base-pair region. Insertions or deletions within this locus can result in the loss of the plasmid's ability to partition properly. One protein product (parA), approximately 25,000 Mr, is expressed from the par locus in Escherichia coli and A. tumefaciens in vitro protein analysis systems. DNA sequence analysis of the locus revealed a single 23,500 Mr open reading frame, confirming the protein data. A 248 base-pair region immediately upstream of the 23,500 Mr open reading frame, containing an array of 12 seven-base-pair palindromic repeats, each separated by exactly ten base-pairs of A + T-rich (75%) sequence, not only provides the promoter but is also involved in parA autoregulation. In addition, this region containing the set of 12 seven-base-pair palindromic repeats is responsible for plasmid-associated incompatibility within Inc Ag-1 and also functions as the cis-acting recognition site at which parA interacts to bring about partitioning. Transcriptional analysis indicated that only the DNA strand responsible for parA is actively transcribed, and that active transcription of the opposite strand of par can inhibit the production of parA, resulting in plasmid destabilization. The presence of the par locus in a plasmid results in stable inheritance within a wide range of members of the Rhizobiaceae. Segregation rates of par-defective derivatives can be influenced by the host.
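A promoter architecture like the one in item 16 can be probed with a few lines of base R. The sketch below scans a hypothetical toy sequence (not the pTAR promoter) for 7-bp palindromic repeats, reading "palindromic" as reverse-complement symmetry around an unpaired central base since an odd-length strict reverse-complement is impossible, and checks the A+T fraction of the spacer between hits:

```
# Toy scan for 7-bp palindromic repeats and A+T-rich spacers.
revcomp <- function(s) chartr("ACGT", "TGCA",
                              paste(rev(strsplit(s, "")[[1]]), collapse = ""))
is_pal7 <- function(s)                      # symmetric around the center base
  identical(substr(s, 1, 3), revcomp(s = substr(s, 5, 7)))
dna <- paste0("GAATTTC", strrep("AT", 5), "GAATTTC")  # two repeats, 10-bp spacer
starts <- 1:(nchar(dna) - 6)
hits <- starts[vapply(starts, function(i) is_pal7(substr(dna, i, i + 6)),
                      logical(1))]
hits                                        # positions 1 and 18 here
spacer <- substr(dna, 8, 17)
mean(strsplit(spacer, "")[[1]] %in% c("A", "T"))      # A+T fraction = 1.0
```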
17. Monoamine transporter and receptor interaction profiles of novel psychoactive substances: para-halogenated amphetamines and pyrovalerone cathinones. PubMed Rickli, Anna; Hoener, Marius C; Liechti, Matthias E 2015-03-01 The pharmacology of novel psychoactive substances is mostly unknown. We evaluated the transporter and receptor interaction profiles of a series of para-(4)-substituted amphetamines and pyrovalerone cathinones. We tested the potency of these compounds to inhibit the norepinephrine (NE), dopamine (DA), and serotonin (5-HT) transporters (NET, DAT, and SERT, respectively) using human embryonic kidney 293 cells that express the respective human transporters. We also tested the substance-induced efflux of NE, DA, and 5-HT from monoamine-loaded cells, binding affinities to monoamine receptors, and 5-HT2B receptor activation. Para-(4)-substituted amphetamines, including 4-methylmethcathinone (mephedrone), 4-ethylmethcathinone, 4-fluoroamphetamine, 4-fluoromethamphetamine, 4-fluoromethcathinone (flephedrone), and 4-bromomethcathinone, were relatively more serotonergic (lower DAT:SERT ratio) compared with their analogs amphetamine, methamphetamine, and methcathinone. The 4-methyl, 4-ethyl, and 4-bromo groups resulted in enhanced serotonergic properties compared with the 4-fluoro group. The para-substituted amphetamines released NE and DA. 4-Fluoroamphetamine, 4-fluoromethamphetamine, 4-methylmethcathinone, and 4-ethylmethcathinone also released 5-HT, similarly to 3,4-methylenedioxymethamphetamine. The pyrovalerone cathinones 3,4-methylenedioxypyrovalerone, pyrovalerone, α-pyrrolidinovalerophenone, 3,4-methylenedioxy-α-pyrrolidinopropiophenone, and 3,4-methylenedioxy-α-pyrrolidinobutiophenone potently inhibited the NET and DAT but not the SERT. Naphyrone was the only pyrovalerone that also inhibited the SERT. The pyrovalerone cathinones did not release monoamines. Most of the para-substituted amphetamines exhibited affinity for the 5-HT2A receptor but no relevant activation of the 5-HT2B receptor. All the cathinones exhibited reduced trace amine-associated receptor 1 binding compared with the non-β-keto-amphetamines. In conclusion, para-substituted amphetamines exhibited 18. Molecular genetic analysis of para-Bombay phenotypes in Chinese: a novel non-functional FUT1 allele is identified. PubMed Yip, S P; Chee, K Y; Chan, P Y; Chow, E Y D; Wong, H F 2002-10-01 The para-Bombay phenotype (also known as H-deficient secretor) is characterized by a lack of ABH antigens on red cells, although ABH substances are found in saliva. Molecular genetic analysis was performed for five Chinese individuals serologically typed as para-Bombay. ABO genotyping and mutational analysis of both the FUT1 (or H) and FUT2 (or Se) loci were performed for these individuals using the polymerase chain reaction, single-strand conformation polymorphism analysis, and direct DNA sequencing. The ABO genotypes of these para-Bombay individuals correlated with the types of ABH substances found in their saliva. Their FUT1 genotypes were h1h2 (three individuals), h2h2 (one individual), and h2h6 (one individual). Alleles h1 (547-552delAG) and h2 (880-882delTT) were known frameshift mutations, while h6 (522C > A) was a missense mutation (Phe174Leu) not previously reported. These three mutations were rare sequence variations, each with an allele frequency of less than 0.005. Phe174 might be functionally important because this residue is conserved from mouse to human.
Their FUT2 genotypes were Se357se357,385 for the h2h6 individual and Se357Se357 for the others. Both FUT2 alleles were known: one functional (Se357) and one weakly functional (se357,385). That they carried at least one copy of a functional FUT2 allele was consistent with their secretor status. As FUT1 and FUT2 are adjacent on 19q13.3, there are three possible haplotypes in these para-Bombay individuals: h1-Se357, h2-Se357, and h6-se357,385. A novel non-functional FUT1 allele (522C > A, or Phe174Leu) was identified in a para-Bombay individual on a se357,385 haplotype background. 19. Intensity-Modulated Radiation Therapy for the Treatment of Squamous Cell Anal Cancer With Para-aortic Nodal Involvement SciTech Connect Hodges, Joseph C.; Das, Prajnan; Eng, Cathy; Reish, Andrew G.; Beddar, A. Sam; Delclos, Marc E.; Krishnan, Sunil; Crane, Christopher H. 2009-11-01 Purpose: To determine the rates of toxicity, locoregional control, distant control, and survival in anal cancer patients with para-aortic nodal involvement treated with intensity-modulated radiotherapy (IMRT) and concurrent chemotherapy at a single institution. Methods and Materials: Between 2001 and 2007, 6 patients with squamous cell anal cancer and para-aortic nodal involvement were treated with IMRT and concurrent infusional 5-fluorouracil and cisplatin. The primary tumor was treated with a median dose of 57.5 Gy (range, 54-60 Gy), involved para-aortic, pelvic, and inguinal lymph nodes were treated with a median dose of 55 Gy (range, 50.5-55 Gy), and noninvolved nodal regions were treated with a median dose of 45 Gy (range, 43.5-45 Gy). Results: After a median follow-up of 25 months, none of the patients had a recurrence at the primary tumor, pelvic/inguinal nodes, or para-aortic nodes, whereas 2 patients developed distant metastases to the liver. Four of the 6 patients are alive. The 3-year actuarial locoregional control, distant control, and overall survival rates were 100%, 56%, and 63%, respectively. Four of the 6 patients developed Grade 3 acute gastrointestinal toxicity during chemoradiation. Conclusions: Intensity-modulated radiotherapy and concurrent chemotherapy could potentially serve as definitive therapy in anal cancer patients with para-aortic nodal involvement. Adjuvant chemotherapy may be indicated in these patients, as suggested by the distant failure rates. These patients need to be followed carefully because of the potential for treatment-related toxicities. 20. ParaCalc®: a novel tool to evaluate the economic importance of worm infections on the dairy farm. PubMed Charlier, Johannes; Van der Voort, Mariska; Hogeveen, Henk; Vercruysse, Jozef 2012-03-23 Subclinical infections with gastrointestinal nematodes and liver fluke are important causes of production losses in grazing cattle. Although there is an extensive literature describing the effect of these infections on animal performance, only a few attempts have been made to convert these production losses into an economic cost. Here, we propose a novel tool (ParaCalc®), available as a web application, to provide herd-specific estimates of the costs of these infections on dairy farms. ParaCalc® is a deterministic spreadsheet model in which results from diagnostic methods used to monitor the helminth infection status of a herd, together with anthelmintic usage, serve as input parameters.
Default values are provided to describe the effects of the infections on production and the cost of these production losses, but the latter can be adapted to improve the herd-specificity of the cost estimate. After development, ParaCalc® was applied to input parameters available for 93 Belgian dairy herds. In addition, the tool was provided to 6 veterinarians, and their user experiences were evaluated. The estimated median [25th-75th percentile] cost per year per cow was € 46 [29-58] for gastrointestinal nematode infection and € 6 [0-19] for liver fluke infection. For both infections, the major components of the total cost were those associated with milk production losses in the adult cows. The veterinarians evaluated ParaCalc® as a useful tool for raising farmers' awareness of the costs of worm infections, providing added value for their services, although the scores given for user-friendliness varied among users. Although the model behind ParaCalc® is a strong simplification of the real herd processes inducing economic losses, the tool may be used in the future to support economic decisions on helminth control. 1. Clinical significance of para-aortic lymph node dissection and prognosis in ovarian cancer. PubMed Li, Xianxian; Xing, Hui; Li, Lin; Huang, Yanli; Zhou, Min; Liu, Qiong; Qin, Xiaomin; He, Min 2014-03-01 Lymph node metastasis has an important effect on the prognosis of patients with ovarian cancer, yet the impact of para-aortic lymph node (PAN) removal on patient prognosis is still unclear. In this study, 80 patients were divided into groups A and B. Group A consisted of 30 patients who underwent PAN + pelvic lymph node (PLN) dissection, whereas group B consisted of 50 patients who underwent only PLN dissection. An analysis of the correlation between PAN clearance and prognosis in epithelial ovarian cancer was conducted. Nineteen cases of lymph node metastasis were found in group A, among whom seven cases were positive for PAN, three for PLN, and nine for both PAN and PLN. In group B, 13 cases were positive for lymph node metastasis. Our study suggests an overall lymph node metastasis rate of 40.0%. Lymph node metastasis was significantly correlated with FIGO stage, tumor differentiation, and histological type in both groups A and B (P < 0.05). In groups A and B, the three-year survival rates were 77.9% and 69.0%, and the five-year survival rates were 46.7% and 39.2%, respectively; however, the difference was not statistically significant (P > 0.05). The three-year survival rates of patients with PLN metastasis in groups A and B were 68.5% and 41.4%, and the five-year survival rates were 49.7% and 26.4%, respectively; PLN-positive patients whose PAN were cleared had a significantly higher survival rate (P = 0.044). In group A, the three-year survival rates of patients with positive and negative lymph nodes were 43.5% and 72.7%, and the five-year survival rates were 27.2% and 58.5%, respectively; this difference was statistically significant (P = 0.048). Univariate Cox model analysis suggested that lymph node status affected the survival rate (P < 0.01) and was a risk factor for death. Consequently, in ovarian carcinoma cytoreductive surgery, resection of the para-aortic lymph node, which has an important function in the clinical treatment and prognosis of patients with
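The group comparison in item 1 is the classic setting for Kaplan-Meier estimates, a log-rank test, and a Cox model. The following R sketch runs that workflow on simulated follow-up data; the group sizes match the study, but the times, censoring, and event rates are invented:

```
# Simulated two-group survival comparison (PAN+PLN vs PLN-only dissection).
library(survival)
set.seed(11)
n <- 80
group <- rep(c("PAN+PLN", "PLN"), times = c(30, 50))
time  <- rexp(n, rate = ifelse(group == "PAN+PLN", 0.15, 0.20))  # years
event <- rbinom(n, 1, 0.7)                  # 1 = death observed, 0 = censored
fit <- survfit(Surv(time, event) ~ group)   # Kaplan-Meier curves per group
summary(fit, times = c(3, 5))               # 3- and 5-year survival estimates
survdiff(Surv(time, event) ~ group)         # log-rank test
coxph(Surv(time, event) ~ group)            # Cox proportional hazards model
```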
2. Alternative hair-dye products for persons allergic to para-phenylenediamine. PubMed Scheman, Andrew; Cha, Christina; Bhinder, Manpreet 2011-01-01 Finding alternative hair dyes for individuals allergic to para-phenylenediamine (PPD) has been difficult. Newer permanent and demipermanent hair dyes that replace PPD with para-toluenediamine sulfate (PTDS) are now available. We examined whether individuals allergic to PPD will tolerate PPD-free hair dyes containing PTDS. A retrospective analysis of patch-test results since October 2006 was performed and yielded 28 patients allergic to PPD who were also tested with a hair-dye series. From January 2004 through October 2006, seven additional patients allergic to PPD were tested with PTDS but not the full dye series. Patch-test results were analyzed. The newer PTDS dyes were recommended for all PPD-positive, PTDS-negative subjects starting in 2008, and these subjects were contacted to determine whether they tolerated the recommended hair-dye products. Of the 28 PPD-allergic patients seen since October 2006, 16 (57.1%) tested negative to all other substances on the dye series. Eleven tested positive to PTDS; of these, several were also allergic to other substances in the hair-dye series. Only one patient was allergic to ortho-nitro-PPD and not to PTDS. Of the 7 additional PPD-allergic patients seen from 2004 through 2006, 4 (57.1%) tested negative to PTDS. In total, 20 of 35 individuals (57.1%) tested positive to PPD but negative to PTDS. Ten of the 13 PPD-positive patients for whom PTDS hair dyes were recommended subsequently used a PTDS hair dye, and all tolerated these products. Fifty-seven percent of the patients allergic to PPD in this study will likely tolerate the newer permanent and demipermanent hair dyes based on PTDS. Most individuals not allergic to PTDS will also test negative to the other substances in the dye series. All 10 patients who tested positive to PPD and negative to PTDS who subsequently used a PPD-free PTDS dye tolerated these products. Many individuals allergic to PPD will benefit from the newer PTDS-based products. 3. Desarrollo curricular, conciencia ambiental y tecnologia para estudiantes de intermedia: Una investigacion en accion Rodriguez Ramos, Teresita An action research study was carried out with the purposes of 1) documenting the role of information and communication technologies in middle school science classes as a supporting element when addressing the environmental theme and its relevant concepts, drawing on the researcher's observations as well as the interviews and reflective journals of students at a middle school in the metropolitan area, and then 2) designing an instructional unit on the environmental theme that integrates technology activities for the middle school science course, following the PROCIC model and the observations raised by the participating students. Finally, the educational implications for the Science Program curriculum of implementing this model unit through PROCIC, integrating technology and the environmental theme, were laid out. The findings were analyzed and categorized according to the research questions. The principal finding concerns the four central relationships through which the use of technologies and their applications is articulated in the science class. These four relationships, which capture the students' position, are: 1) Students' perspective toward technology. 2) Students' participation in instructional matters.
3) Student learning about the environment, and 4) Environmental awareness in relation to daily life. These relationships highlight, as set out in the implications, the need for more action research in the classroom; the importance, as a cross-cutting theme, of environmental awareness through technology in building meaningful knowledge inside and outside school; and the value of inquiry and dialogue in the classroom as activities that force a re-examination of didactic practice in its curricular forms of objectives, resources 4. An NMR study of cobalt-catalyzed hydroformylation using para-hydrogen induced polarisation. PubMed Godard, Cyril; Duckett, Simon B; Polas, Stacey; Tooze, Robert; Whitwood, Adrian C 2009-04-14 The syntheses of Co(eta3-C3H5)(CO)2PR2R' (R, R' = Ph, Me; R, R' = Me, Ph; R = R' = Ph, Cy, CH2Ph) and Co(eta3-C3H5)(CO)(L) (L = dmpe and dppe) are described, and X-ray structures for Co(eta3-C3H5)(CO)(dppe) and the PPh2Me and PCy3 derivatives are reported. The relative ability of Co(eta3-C3H5)(CO)2(PR2R') to exchange phosphine for CO follows the trend PMe2Ph < PPh2Me < PCy3 < P(CH2Ph)3 < PPh3. Reactions of the allyl complexes with para-hydrogen (p-H2) lead to the observation of para-hydrogen induced polarisation (PHIP) in both liberated propene and propane. Reaction of these complexes with both CO and H2 leads to the detection of the linear acyl species Co(COCH2CH2CH3)(CO)3(PR2R') and the branched acyl complexes Co(COCH(CH3)2)(CO)3(PR2R') via the PHIP effect. In the case of PPh2Me, additional signals for Co(COCH2CH2CH3)(CO)2(PPh2Me)(propene) and Co(COCH(CH3)2)(CO)2(PPh2Me)(propene) are also detected. When the reactions of H2 and diphenylacetylene are studied with the same precursor, Co(CO)3(PPh2Me)(CHPhCH2Ph) is seen. Studies of how the appearance and ratio of the PHIP-enhanced signals vary as a function of reaction temperature and H2:CO ratio are reported. These profiles are used to learn about the mechanism of catalysis and reveal how the rates of the key steps leading to linear and branched hydroformylation products vary with the phosphine. These data also reveal that the PMe2Ph- and PPh2Me-based systems yield the highest selectivity for linear hydroformylation products. 5. PREVENCION DE VIH PARA MUJERES HISPANAS DE 50 AÑOS Y MÁS PubMed Central Villegas, N.; Cianelli, R.; Ferrer, L.; Kaelber, L.; Peragallo, N.; Yaya, Alexandra O. 2012-01-01 6. Production of para-aminobenzoic acid from different carbon-sources in engineered Saccharomyces cerevisiae. PubMed Averesch, Nils J H; Winter, Gal; Krömer, Jens O 2016-05-26 Biological production of the aromatic compound para-aminobenzoic acid (pABA) is of great interest to the chemical industry. Besides its application in pharmacy and as a crosslinking agent for resins and dyes, pABA is a potential precursor for the high-volume aromatic feedstocks terephthalic acid and para-phenylenediamine. The yeast Saccharomyces cerevisiae synthesises pABA in the shikimate pathway: starting from the central shikimate pathway intermediate chorismate, pABA is formed in two enzyme-catalysed steps, encoded by the genes ABZ1 and ABZ2. In this study, S. cerevisiae metabolism was genetically engineered for the overproduction of pABA. Using in silico metabolic modelling, an observed impact of carbon source on product yield was investigated and exploited to optimize production.
A strain that incorporated the feedback-resistant ARO4(K229L) allele and deletions of the ARO7 and TRP3 genes, in order to channel flux to chorismate, was used to screen different ABZ1 and ABZ2 genes for pABA production. In glucose-based shake-flask fermentations, the highest titer (600 µM) was reached when over-expressing the ABZ1 and ABZ2 genes from the wine yeast strains AWRI1631 and QA23, respectively. In silico metabolic modelling indicated a metabolic advantage for pABA production on glycerol and combined glycerol-ethanol carbon sources. This was confirmed experimentally: the empirically ideal glycerol-to-ethanol uptake ratios of 1:2-2:1 correlated with the model. A (13)C tracer experiment determined that up to 32% of the produced pABA originated from glycerol. Finally, in fed-batch bioreactor experiments, pABA titers of 1.57 mM (215 mg/L) and carbon yields of 2.64% were achieved. In this study, a combination of genetic engineering and in silico modelling proved to be a complete and advantageous approach to increasing pABA production. In particular, the enzymes that catalyse the last two steps towards product formation appeared to be crucial for directing flux to pABA. A stoichiometric model 7. Cosmoeducação: uma proposta para o ensino de astronomia Medeiros, L. A. L.; Jafelice, L. C. 2003-08-01 8. Performance of three different anodes in electrochemical degradation of 4-para-nitrophenol. PubMed Murugaesan, Pramila; Aravind, Priyadharshini; Muniyandi, Neelavannan Guruswamy; Kandasamy, Subramanian 2015-01-01 In recent years, the removal of pollutants from wastewater by electrochemical oxidation has become an attractive method. The present investigation deals with the degradation of 4-para-nitrophenol (4-PNP) by electrochemical oxidation using three different anodes, namely TiO2-RuO2-IrO2/Ti (titanium substrate insoluble anode, TSIA), IrO2-PbO2/Ti, and graphite. Electrochemical oxidation of 4-PNP was carried out employing sodium chloride as the supporting electrolyte, at pH 7 with a current density of 15 mA/cm(2). The degradation of 4-PNP by electro-oxidation was characterized by ultraviolet-visible spectroscopy, Fourier transform infrared spectroscopy, and high-performance liquid chromatography. The performance efficiency and current efficiency of the three anode materials were evaluated by chemical oxygen demand (COD), and the energy consumption of the three anodes was also compared. Among the electrodes investigated, the IrO2-PbO2/Ti electrode achieved 98% COD removal in 30 min at a comparatively low energy consumption of 1×10^-2 kWh m^-3, demonstrating its higher performance efficiency in 4-PNP degradation. 9. ParaText: scalable solutions for processing and searching very large document collections: final LDRD report. SciTech Connect Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M. 2010-09-01 This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis, ParaText, as a major project deliverable.
Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved the performance of heterogeneous ensemble models in data classification problems, and showed the advantages of information-theoretic methods for user analysis and interpretation in cross-language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal. 10. Para-toluenesulfonamide induces tongue squamous cell carcinoma cell death through disturbing lysosomal stability. PubMed Liu, Zhe; Liang, Chenyuan; Zhang, Zhuoyuan; Pan, Jian; Xia, Hui; Zhong, Nanshan; Li, Longjiang 2015-11-01 Para-toluenesulfonamide (PTS) has been reported to exert anticancer effects against a variety of tumors. In the present study, we investigated the inhibitory effects of PTS on tongue squamous cell carcinoma (Tca-8113) cells and explored the lysosomal and mitochondrial changes after PTS treatment in vitro. High-performance liquid chromatography showed that PTS selectively accumulated in Tca-8113 cells, with a relatively low concentration in normal fibroblasts. Next, the effects of PTS on cell viability, invasion, and cell death were determined. PTS significantly inhibited Tca-8113 cell viability and invasive ability, with increased cancer cell death. Flow cytometric analysis and the lactate dehydrogenase release assay showed that PTS induced cancer cell death by activating apoptosis and necrosis simultaneously. Morphological changes, such as cellular shrinkage and nuclear condensation as well as the formation of apoptotic bodies and secondary lysosomes, were observed, indicating that PTS might induce cell death by disturbing lysosomal stability. A lysosomal integrity assay and western blot showed that PTS increased lysosomal membrane permeabilization, associated with activation of lysosomal cathepsin B. Finally, PTS was shown to inhibit ATP biosynthesis and induce the release of mitochondrial cytochrome c. Therefore, our findings provide a novel insight into the use of PTS in cancer therapy. 11. Synthesis and methemoglobinemia-inducing properties of analogues of para-aminopropiophenone designed as humane rodenticides. PubMed Rennison, David; Conole, Daniel; Tingle, Malcolm D; Yang, Junpeng; Eason, Charles T; Brimble, Margaret A 2013-12-15 A number of structural analogues of the known toxicant para-aminopropiophenone (PAPP) have been prepared and evaluated for their capacity to induce methemoglobinemia, with a view to their possible application as humane pest control agents. It was found that an optimal lipophilicity for the formation of methemoglobin (metHb) in vitro existed for alkyl analogues of PAPP (aminophenones 1-20; compound 6 metHb% = 74.1 ± 2). Besides lipophilicity, this structural sub-class appeared to have certain structural requirements for activity, with both branched (10-16) and cyclic (17-20) alkyl analogues exhibiting inferior in vitro metHb induction. Of the four candidates (compounds 4, 6, 13 and 23) evaluated in vivo, 4 exhibited the greatest toxicity. In parallel, aminophenone bioisosteres, including oximes 30-32, sulfoxide 33, sulfone 34 and sulfonamides 35-36, were found to be inferior metHb inducers to the lead ketone 4.
Closer examination of Hammett substituent constants suggests that a particular combination of the field and resonance parameters may be significant with respect to the redox mechanisms behind PAPP's metHb toxicity. 12. RETRACTED ARTICLE: Quantum effects on translation and rotation of molecular chlorine in solid para-hydrogen Accardi, Antonio; Schmidt, Burkhard 2014-07-01 The structure and quantum effects of a Cl2 molecule embedded in fcc and hcp para-hydrogen (pH2) crystals are investigated in the zero-temperature limit. The interaction is modelled in terms of Cl2-pH2 and pH2-pH2 pair potentials from ab initio CCSD(T) and MP2 calculations. Translational and rotational motions of the molecules are described within three-dimensional anharmonic Einstein and Devonshire models, respectively, where the crystals are either treated as rigid or allowed to relax. The pH2 molecules, as well as the heavier Cl2 molecule, show large translational zero-point energies (ZPEs) and undergo large-amplitude translational motions. This gives rise to substantial reductions in the cohesive energies and expansions of the lattices, in agreement with experimental results for pure hydrogen crystals. The rotational dynamics of the Cl2 impurity is restricted to small-amplitude librations, again with high librational ZPEs, which are described in terms of two-dimensional non-degenerate anharmonic oscillators. The lattice relaxation causes qualitative changes in the rotational energy surfaces, which finally favour librations around the crystallographic directions pointing towards the nearest neighbours, for both fcc and hcp lattices. Implications for the reactant orientation in the experimentally observed laser-induced chemical reaction, Cl + H2 → HCl + H, are discussed. 13. Searching for auxetics with DYNA3D and ParaDyn Hoover, Wm. G.; Hoover, C. G. 2005-03-01 The present special issue of physica status solidi (b), guest-edited by Krzysztof W. Wojciechowski, Andrew Alderson, Arkadiusz Brańka, and Kim L. Alderson, is dedicated to Auxetics and Related Systems, materials which exhibit negative Poisson's ratio behaviour (a worked definition is sketched in R below). Most papers were presented at a workshop held in Poznań-Będlewo, 27-30 June 2004. In our Editor's Choice [1], novel simulations with a parallel finite element program, ParaDyn, have been conducted to study the formation of auxetic materials. Structures composed of either brick elements (hexahedra) or shell elements are constructed in a regular array of panels. These structures are compressed and relaxed to form an initial state for an auxetic (foam-like) material. The foam structure shown is composed of 208,896 shell elements arranged in four-by-four panels. Applying a uniaxial compression to this structure characterizes the material behaviour of the lateral surfaces as normal (expanding) or auxetic (compressing). The first author, William G. Hoover, has worked in areas such as statistical and applied mechanics and nonlinear and molecular dynamics, and is now pursuing, as he states on his own webpage, an active retired research career as Professor Emeritus of UC Davis. 14. [Para-phenylenediamine allergic contact dermatitis due to henna tattoos in a child and adolescent population]. PubMed Ortiz Salvador, José María; Esteve Martínez, Altea; Subiabre Ferrer, Daniela; Victoria Martínez, Ana Mercedes; de la Cuadra Oyanguren, Jesús; Zaragoza Ninet, Violeta 2017-03-01
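As flagged under item 13, the auxetic/normal distinction reduces to the sign of Poisson's ratio, nu = -eps_lateral/eps_axial. A toy R helper (the strain values below are invented, not ParaDyn output):

```
# Poisson's ratio from engineering strains; nu < 0 indicates auxetic response.
poisson_ratio <- function(eps_axial, eps_lateral) -eps_lateral / eps_axial
poisson_ratio(eps_axial = -0.02, eps_lateral = 0.006)   #  0.3: conventional
poisson_ratio(eps_axial = -0.02, eps_lateral = -0.004)  # -0.2: auxetic
```

Under compression (negative axial strain), a conventional material bulges outward laterally, while an auxetic one contracts laterally as well, which is exactly the lateral-surface behaviour the ParaDyn simulations classify.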
15. Conformation of ionizable poly Para phenylene ethynylene in dilute solutions SciTech Connect Wijesinghe, Sidath; Maskey, Sabina; Perahia, Dvora; Grest, Gary S. 2015-11-03 The conformation of dinonyl poly(para-phenylene ethynylene)s (PPEs) with carboxylate side chains, equilibrated in solvents of different quality, is studied using molecular dynamics simulations. PPEs are of interest because of their tunable electro-optical properties, chemical diversity, and functionality, which are essential in a wide range of applications. The polymer conformation determines the conjugation length and the assembly mode, and it affects the electro-optical properties that are critical to their current and potential uses. The present study investigates the effect of the carboxylate fraction on the PPE side chains on the conformation of the chains in the dilute limit, in solvents of different quality. The dinonyl PPE chains are modeled atomistically, while the solvents are modeled both implicitly and explicitly. Dinonyl PPEs maintained a stretched-out conformation up to a carboxylate fraction f of 0.7 in all solvents studied. The nonyl side chains are extended and oriented away from the PPE backbone in toluene and in an implicit good solvent, whereas in water and in an implicit poor solvent the nonyl side chains collapse towards the PPE backbone. Rotation around the aromatic ring is fast, and no long-range correlations are seen within the backbone. 16. [Comparative assessment of antioxidant activity of para-aminobenzoic acid and emoxipin in retina]. PubMed Akberova, S I; Musaev Galbinur, P I; Magomedov, N M; Babaev, Kh F; Gakhramanov, Kh M; Stroeva, O G 1998-01-01 The effect of para-aminobenzoic acid (PABA) on lipid peroxidation (LPO) in rat and guinea pig retina exposed to hypoxic hypoxia was studied. PABA was injected intraperitoneally and parabulbarly before and after hypoxic exposure, and the antioxidant activities of PABA and emoxipin were compared. An intraperitoneal injection of PABA in a dose of 10 mg/kg 24 h before hypoxia virtually completely prevented the accumulation of lipid peroxides and preserved catalase activity in the retina. Parabulbar injection of a 0.01% PABA solution 1 h before hypoxia prevented LPO intensification, stabilized catalase activity in hypoxia, and protected the retina starting from the moment immediately after hypoxic exposure. The efficacy of 0.01% PABA is comparable with that of 1% emoxipin, and a 0.01% solution of emoxipin is less effective than PABA at the same concentration. PABA exerts an antioxidant effect after hypoxia, decreasing the abnormally high level of lipid peroxides and reducing catalase activity in the retina after parabulbar injection of the drug. All the studied concentrations of the drug (from 0.007 to 0.08%) are active, but the optimal dose for the retina is 0.04%; by its efficacy this concentration is equivalent to 1% emoxipin. 17. Local Ecological Knowledge about Endangered Primates in a Rural Community in Paraíba, Brazil. PubMed Torres Junior, Emanuel Ubaldino; Valença-Montenegro, Mônica Mafra; Castro, Carla Soraia Soares de 2016-01-01 The study of local ecological knowledge (LEK) fosters a better understanding of the relationship between humans and the environment. We assessed respondents' ecological knowledge of primates in a rural community located near Atlantic Forest remnants in the state of Paraíba, Brazil. Populations of Alouatta belzebul (red-handed howler monkeys), Sapajus flavius (blonde capuchins), and Callithrix jacchus (common marmosets) inhabit the region.
We conducted 200 semi-structured interviews and applied thematic content analysis, with weighting, to the responses to quantify the LEK. Respondents showed a low LEK despite the community's proximity to forest remnants. However, the LEK was significantly higher among men, as well as among those who had a greater degree of contact with the primates. Age did not influence LEK. The studied community apparently neither exploits the forest resources intensively nor depends economically on primates, which may explain these individuals' low levels of knowledge about these animals. Such data may support future studies, as well as environmental education and action plans, especially for A. belzebul and S. flavius, both of which are endangered species and targets of the National Action Plan for the Conservation of the Primates of the Northeast. © 2016 S. Karger AG, Basel. 18. An experimental and theoretical investigation into the electronically excited states of para-benzoquinone Jones, D. B.; Limão-Vieira, P.; Mendes, M.; Jones, N. C.; Hoffmann, S. V.; da Costa, R. F.; Varella, M. T. do N.; Bettega, M. H. F.; Blanco, F.; García, G.; Ingólfsson, O.; Lima, M. A. P.; Brunger, M. J. 2017-05-01 We report on a combination of experimental and theoretical investigations into the structure of electronically excited para-benzoquinone (pBQ). Synchrotron photoabsorption measurements are reported over the 4.0-10.8 eV range. The higher resolution obtained reveals previously unresolved pBQ spectral features. Time-dependent density functional theory calculations are used to interpret the spectrum and resolve discrepancies relating to the interpretation of the Rydberg progressions. Electron-impact energy loss experiments are also reported. These are combined with elastic electron scattering cross section calculations, performed within the framework of the independent atom model-screening corrected additivity rule plus interference (IAM-SCAR + I) method, to derive differential cross sections for electronic excitation of key spectral bands. A generalized oscillator strength analysis is also performed, with the obtained results demonstrating that a cohesive and reliable quantum chemical structure and cross section framework has been established. Within this context, we also discuss some issues associated with the development of a minimal orbital basis for the single-configuration-interaction strategy to be used in our high-level low-energy electron scattering calculations, which will be carried out as a subsequent step in this joint experimental and theoretical investigation. 19. LabVIEW-based control software for para-hydrogen induced polarization instrumentation SciTech Connect Agraz, Jose; Grunfeld, Alexander; Li, Debiao; Cunningham, Karl; Willey, Cindy; Pozos, Robert; Wagner, Shawn 2014-04-15 The elucidation of cell metabolic mechanisms is the modern underpinning of the diagnosis, treatment, and, in some cases, the prevention of disease. Para-hydrogen induced polarization (PHIP) enhances magnetic resonance imaging (MRI) signals over 10,000-fold, allowing for the MRI of cell metabolic mechanisms. This signal enhancement is the result of hyperpolarizing endogenous substances used as contrast agents during imaging. PHIP instrumentation hyperpolarizes carbon-13 (13C)-based substances using a process requiring control of a number of factors: chemical reaction timing, gas flow, monitoring of a static magnetic field (B0), radio frequency (RF) irradiation timing, reaction temperature, and gas pressures.
Current PHIP instruments control the hyperpolarization process manually, which precludes precise control of the factors listed above and leads to non-reproducible results. We discuss the design and implementation of a LabVIEW-based computer program that automatically and precisely controls the delivery and manipulation of gases and samples, monitoring gas pressures, environmental temperature, and RF sample irradiation. We show that automated control over the hyperpolarization process results in the hyperpolarization of hydroxyethylpropionate. The implementation of this software enables the fast prototyping of PHIP instrumentation for the evaluation of a myriad of 13C-based endogenous contrast agents used in molecular imaging. 20. Para-veterinary professionals and the development of quality, self-sustaining community-based services. PubMed Catley, A; Leyland, T; Mariner, J C; Akabwai, D M O; Admassu, B; Asfaw, W; Bekele, G; Hassan, H Sh 2004-04-01 Livestock are a major asset for rural households throughout the developing world and are increasingly regarded as a means of reducing poverty. However, many rural areas are characterised by limited or no accessibility to veterinary services. Economic theory indicates that primary-level services can be provided by para-veterinary professionals working as private operators and as an outreach component of veterinary clinics and pharmacies in small urban centres. Experience from the development of community-based animal health worker (CAHW) systems indicates that these workers can have a substantial impact on livestock morbidity and mortality through the treatment or prevention of a limited range of animal health problems. Factors for success include community involvement in the design and implementation of these systems, and involvement of the private sector to supply and supervise CAHWs. Examples of privatised and veterinary-supervised CAHW networks are cited to show the considerable potential of this simple model to improve primary animal health services in marginalised areas. An analysis of constraints indicates that inappropriate policies and legislation are a major concern. By referring to the section on the evaluation of Veterinary Services in the OIE (World Organisation for Animal Health) Terrestrial Animal Health Code, the paper proposes guidelines to assist governments in improving the regulation, quality, and co-ordination of privatised, veterinary-supervised CAHW systems. 1. PubMed Gumus, Pinar; Luquette, Mark; Haymon, Marie Louise; Valerie, Evans; Morales, Jaime; Vargas, Alfonso 2011-01-01 Adrenocortical tumors are rare in childhood and adolescence. Virilization, alone or in combination with signs of overproduction of other adrenal hormones, is the most common clinical presentation. Here we report an unusual case of an African-American female adolescent presenting with idiopathic acquired generalized anhidrosis, dysregulation of body temperature, absence of adult body odor, and dry skin in the face of a virilizing para-adrenocortical adenoma. Virilization signs regressed soon after removal of the tumor, but normalization of 3alpha-androstenediol glucuronide (3alpha-AG) took longer than that of the other measurable androgens and was accompanied by anhidrosis. The association of remitting anhidrosis with normalized levels of 3alpha-AG suggests a possible mechanism for the anhidrosis. High 3alpha-AG levels might implicate increased peripheral conversion of weak pro-androgens of different biochemical structure.
We recommend obtaining 3alpha-AG alongside other androgens in virilized patients with atypical dermatological symptoms in the face of hyperandrogenism. 2. RNA editing of the Drosophila para Na(+) channel transcript. Evolutionary conservation and developmental regulation. PubMed Central Hanrahan, C J; Palladino, M J; Ganetzky, B; Reenan, R A 2000-01-01 Post-transcriptional editing of pre-mRNAs through the action of dsRNA adenosine deaminases results in the modification of particular adenosine (A) residues to inosine (I), which can alter the coding potential of the modified transcripts. We describe here three sites in the para transcript, which encodes the major voltage-activated Na(+) channel polypeptide in Drosophila, where RNA editing occurs. The occurrence of RNA editing at the three sites was found to be developmentally regulated. Editing at two of these sites was also conserved across species, between D. melanogaster and D. virilis. In each case, a highly conserved region was found in the intron downstream of the editing site, and this region was shown to be complementary to the region of the exonic editing site. Thus, editing at these sites would appear to involve a mechanism whereby the edited exon forms a base-paired secondary structure with the distant conserved noncoding sequences located in adjacent downstream introns, similar to the mechanism shown for A-to-I RNA editing of mammalian glutamate receptor subunits (GluRs). For the third site, neither RNA editing nor the predicted RNA secondary structures were evolutionarily conserved. Transcripts from transgenic Drosophila expressing a minimal editing site construct for this site were shown to faithfully undergo RNA editing. These results demonstrate that Na(+) channel diversity in Drosophila is increased by RNA editing via a mechanism analogous to that described for transcripts encoding mammalian GluRs (a toy complementarity check in this spirit is sketched below, after entry 3). PMID:10880477 3. Origin of the low-energy emission band in epitaxially grown para-sexiphenyl nanocrystallites Kadashchuk, A.; Schols, S.; Heremans, P.; Skryshevski, Yu.; Piryatinski, Yu.; Beinik, I.; Teichert, C.; Hernandez-Sosa, G.; Sitter, H.; Andreev, A.; Frank, P.; Winkler, A. 2009-02-01 A comparative study of steady-state and time-resolved photoluminescence of para-sexiphenyl (PSP) films grown by organic molecular beam epitaxy (OMBE) and hot wall epitaxy (HWE) under comparable conditions is presented. Using different template substrates [mica(001) and KCl(001) surfaces] as well as different OMBE growth conditions has enabled us to vary greatly the morphology of the PSP crystallites while keeping their chemical structure virtually untouched. We prove that the broad redshifted emission band has a structure-related origin rather than being due to monomolecular oxidative defects. We conclude that the growth conditions and type of template substrate impact substantially on the film morphology (measured by atomic force microscopy) and emission properties of the PSP films. The relative intensity of the defect emission band observed in the delayed spectra was found to correlate with the structural quality of PSP crystallites. In particular, the defect emission has been found to be drastically suppressed (i) when a KCl template substrate was used instead of mica in HWE-grown films, and (ii) in the OMBE-grown films dominated by growth mounds composed of upright-standing molecules, as opposed to the films consisting of crystallites formed by molecules lying parallel to the substrate.
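To make the duplex mechanism described in entry 2 concrete, the following is a minimal Python sketch, with entirely hypothetical sequences, of the kind of complementarity check used to flag an exonic editing-site region that can base-pair with a conserved downstream intronic element. It illustrates the idea only; it is not a tool from the paper, and the sequences are placeholders, not the actual para transcript.

```
# Illustrative sketch (not from the paper): checking whether a conserved
# intronic element is complementary to an exonic editing-site region, the
# duplex-forming arrangement invoked for A-to-I editing. Sequences are
# hypothetical placeholders, not the actual para transcript.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA string."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def fraction_paired(exon_site: str, intron_element: str) -> float:
    """Fraction of positions at which the exon region could base-pair
    with the intronic element in an antiparallel duplex."""
    partner = reverse_complement(intron_element)
    n = min(len(exon_site), len(partner))
    matches = sum(exon_site[i] == partner[i] for i in range(n))
    return matches / n

# Hypothetical 20-nt exonic region around an edited adenosine and a
# downstream intronic element; a high score suggests duplex formation.
exon = "GCAUGGACUAAGGCUCAGUU"
intron = "AACUGAGCCUUAGUCCAUGC"
print(f"paired fraction: {fraction_paired(exon, intron):.2f}")
```

4.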
Identification of para-Substituted Benzoic Acid Derivatives as Potent Inhibitors of the Protein Phosphatase Slingshot. PubMed Li, Kang-shuai; Xiao, Peng; Zhang, Dao-lai; Hou, Xu-Ben; Ge, Lin; Yang, Du-xiao; Liu, Hong-da; He, Dong-fang; Chen, Xu; Han, Ke-rui; Song, Xiao-yuan; Yu, Xiao; Fang, Hao; Sun, Jin-peng 2015-12-01 Slingshot proteins form a small group of dual-specificity phosphatases that modulate cytoskeleton dynamics through dephosphorylation of cofilin and Lim kinases (LIMK). Small chemical compounds with Slingshot-inhibiting activities have therapeutic potential against cancers or infectious diseases. However, only a few Slingshot inhibitors have been investigated and reported, and their cellular activities have not been examined. In this study, we identified two rhodanine-scaffold-based para-substituted benzoic acid derivatives as competitive Slingshot inhibitors. The top compound, (Z)-4-((4-((4-oxo-2-thioxo-3-(o-tolyl)thiazolidin-5-ylidene)methyl)phenoxy)methyl)benzoic acid (D3), had an inhibition constant (Ki) of around 4 μM and displayed selectivity over a panel of other phosphatases. Moreover, compound D3 inhibited cell migration and cofilin dephosphorylation after nerve growth factor (NGF) or angiotensin II stimulation. Therefore, our newly identified Slingshot inhibitors provide a starting point for developing Slingshot-targeted therapies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. 5. The Protonation Site of para-Dimethylaminobenzoic Acid Using Atmospheric Pressure Ionization Methods Chai, Yunfeng; Weng, Guofeng; Shen, Shanshan; Sun, Cuirong; Pan, Yuanjiang 2015-04-01 The protonation site of para-dimethylaminobenzoic acid (p-DMABA) was investigated using atmospheric pressure ionization methods (ESI and APCI) coupled with collision-induced dissociation (CID), nuclear magnetic resonance (NMR), and computational chemistry. Theoretical calculations and NMR experiments indicate that the dimethylamino group is the preferred site of protonation, both in the gas phase and in aqueous solution. Protonation of p-DMABA occurs at the nitrogen atom by ESI, independently of the solvents and other operating conditions, under typical thermodynamic control. However, APCI produces a mixture of the nitrogen- and carbonyl-oxygen-protonated p-DMABA when aprotic organic solvents (acetonitrile, acetone, and tetrahydrofuran) are used, exhibiting evident kinetic characteristics of protonation. But using protic organic solvents (methanol, ethanol, and isopropanol) in APCI still leads to the formation of the thermodynamically stable N-protonated p-DMABA. These structural assignments were based on the different CID behavior of the N- and O-protonated p-DMABA. The losses of methyl radical and water are the diagnostic fragmentations of the N- and O-protonated p-DMABA, respectively. In addition, the N-protonated p-DMABA is more stable than the O-protonated p-DMABA in CID, as revealed by energy-resolved experiments and theoretical calculations. 6. Low density specific heat measurements of para-hydrogen adsorbed on exfoliated graphite Chaves, F. A. B.; Cortez, M. E. B. P.; Rapp, R. E.; Lerner, E. 1985-02-01 Heat-capacity measurements of para-hydrogen adsorbed on bare graphite foam, for coverages below about the (√3×√3)R30° configuration, between 0.0126 and 0.0622 Å⁻², were made as a function of temperature in the range of 5 to 25 K using a quasiadiabatic method.
From the data a phase diagram is suggested having three distinct regions: a pure (√3×√3)R30° phase, a coexistence region between the (√3×√3)R30° phase and the fluid, and a pure fluid region. A tricritical point seems to exist at T_tc ≈ 12.4 K and n_tc ≈ 0.039 Å⁻². The critical exponent α = 0.37 ± 0.03 was calculated, which, within our experimental uncertainty, is consistent with the theoretical 3-state Potts value of 1/3. Considering the adatoms as localized Einstein oscillators, an average Einstein temperature of 54.4 ± 4.9 K was obtained for the 0.0636 Å⁻² density (a model-fitting sketch appears below, after entry 8). Our results are compared with data obtained by other authors. 7. Origin of the low-energy emission band in epitaxially grown para-sexiphenyl nanocrystallites SciTech Connect Kadashchuk, A.; Schols, S.; Heremans, P.; Skryshevski, Yu.; Piryatinski, Yu.; Beinik, I.; Teichert, C.; Hernandez-Sosa, G.; Sitter, H.; Andreev, A.; Frank, P.; Winkler, A. 2009-02-28 A comparative study of steady-state and time-resolved photoluminescence of para-sexiphenyl (PSP) films grown by organic molecular beam epitaxy (OMBE) and hot wall epitaxy (HWE) under comparable conditions is presented. Using different template substrates [mica(001) and KCl(001) surfaces] as well as different OMBE growth conditions has enabled us to vary greatly the morphology of the PSP crystallites while keeping their chemical structure virtually untouched. We prove that the broad redshifted emission band has a structure-related origin rather than being due to monomolecular oxidative defects. We conclude that the growth conditions and type of template substrate impact substantially on the film morphology (measured by atomic force microscopy) and emission properties of the PSP films. The relative intensity of the defect emission band observed in the delayed spectra was found to correlate with the structural quality of PSP crystallites. In particular, the defect emission has been found to be drastically suppressed (i) when a KCl template substrate was used instead of mica in HWE-grown films, and (ii) in the OMBE-grown films dominated by growth mounds composed of upright-standing molecules, as opposed to the films consisting of crystallites formed by molecules lying parallel to the substrate. 8. Hyperfine excitation of C2H and C2D by para-H2 Dumouchel, Fabien; Lique, François; Spielfiedel, Annie; Feautrier, Nicole 2017-10-01 The [C2H]/[C2D] abundance ratio is a useful tool to explore the physical and chemical conditions of cold molecular clouds. Hence, an accurate determination of both the C2H and C2D abundances is of fundamental interest. Due to the low density of the interstellar medium, the population of the energy levels of the molecules is not at local thermodynamical equilibrium. Thus, the accurate modelling of the emission spectra requires the calculation of collisional rate coefficients with the most abundant interstellar species. Hence, we provide rate coefficients for the hyperfine excitation of C2H and C2D by para-H2(j=0), the most abundant collisional partner in cold molecular clouds. State-to-state rate coefficients between the lowest levels were computed for temperatures ranging from 5 to 80 K. For both isotopologues, the Δj = ΔF propensity rule is observed. The comparison between C2H and C2D rate coefficients shows that differences of up to a factor of two exist, mainly for Δj = ΔN = 1 transitions. The new rate coefficients will significantly help in the interpretation of recently observed spectra.
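As a companion to entry 6, the following minimal sketch shows how an average Einstein temperature can be extracted from heat-capacity data by least-squares fitting. The two-in-plane-mode 2D Einstein form and all data values below are assumptions for illustration, not the paper's actual analysis.

```
# Minimal sketch (assumptions, not the paper's analysis): extracting an
# Einstein temperature from heat-capacity data by least-squares fitting.
# For a 2D layer of N localized adatoms we assume two in-plane modes per
# atom, so C = 2*N*k_B * x**2 * exp(x) / (exp(x) - 1)**2 with x = theta_E/T.
# The "measured" values below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def einstein_heat_capacity(T, theta_E):
    """Dimensionless C / (2 N k_B) of Einstein oscillators at temperature T."""
    x = theta_E / T
    return x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

# Synthetic data generated with theta_E = 54.4 K plus a little noise.
T = np.linspace(5.0, 25.0, 21)
rng = np.random.default_rng(0)
C = einstein_heat_capacity(T, 54.4) + rng.normal(0.0, 0.01, T.size)

(theta_fit,), cov = curve_fit(einstein_heat_capacity, T, C, p0=[50.0])
print(f"fitted Einstein temperature: {theta_fit:.1f} +/- {np.sqrt(cov[0, 0]):.1f} K")
```

9.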
LabVIEW-based control software for para-hydrogen induced polarization instrumentation. PubMed Agraz, Jose; Grunfeld, Alexander; Li, Debiao; Cunningham, Karl; Willey, Cindy; Pozos, Robert; Wagner, Shawn 2014-04-01 The elucidation of cell metabolic mechanisms is the modern underpinning of the diagnosis, treatment, and in some cases the prevention of disease. Para-Hydrogen induced polarization (PHIP) enhances magnetic resonance imaging (MRI) signals over 10,000 fold, allowing for the MRI of cell metabolic mechanisms. This signal enhancement is the result of hyperpolarizing endogenous substances used as contrast agents during imaging. PHIP instrumentation hyperpolarizes Carbon-13 ((13)C) based substances using a process requiring control of a number of factors: chemical reaction timing, gas flow, monitoring of a static magnetic field (Bo), radio frequency (RF) irradiation timing, reaction temperature, and gas pressures. Current PHIP instruments control the hyperpolarization process manually, lacking precise control of the factors listed above and therefore producing non-reproducible results. We discuss the design and implementation of a LabVIEW-based computer program that automatically and precisely controls the delivery and manipulation of gases and samples, monitoring gas pressures, environmental temperature, and RF sample irradiation. We show that the automated control over the hyperpolarization process results in the hyperpolarization of hydroxyethylpropionate. The implementation of this software enables fast prototyping of PHIP instrumentation for the evaluation of a myriad of (13)C based endogenous contrast agents used in molecular imaging. 10. Geochemical and isotopic constraints on the tectonic setting of Serra dos Carajás belt, eastern Pará, Brazil NASA Technical Reports Server (NTRS) Olszewski, W. J., Jr.; Gibbs, A. K.; Wirth, K. R. 1986-01-01 The lower part of the Serra dos Carajás belt is the metavolcanic and metasedimentary Grão Pará Group (GPG). The GPG is thought to unconformably overlie the older (but undated) Xingu Complex, composed of medium- and high-grade gneisses and amphibolite and greenstone belts. The geochemical data indicate that the GPG has many features in common with ancient and modern volcanic suites erupted through continental crust. The mafic rocks clearly differ from those of most Archean greenstone belts, and from modern MORB, IAB, and hot-spot basalts. The geological, geochemical, and isotopic data are all consistent with deposition on continental crust, presumably in a marine basin formed by crustal extension. The isotopic data also suggest the existence of depleted mantle as a source for the parent magmas of the GPG. The overall results suggest a tectonic environment, igneous sources, and petrogenesis similar to many modern continental extensional basins, in contrast to most Archean greenstone belts. The Hamersley basin in Australia and the circum-Superior belts in Canada may be suitable Archean and Proterozoic analogues, respectively. 11. Health promotion programs related to the Athens 2004 Olympic and Para Olympic games PubMed Central Soteriades, Elpidoforos S; Hadjichristodoulou, Christos; Kremastinou, Jeni; Chelvatzoglou, Fotini C; Minogiannis, Panagiotis S; Falagas, Matthew E 2006-01-01 Background The Olympic Games constitute a first-class opportunity to promote athleticism and health messages. Little is known, however, on the impact of Olympic Games on the development of health-promotion programs for the general population.
Our objective was to identify and describe the population-based health-promotion programs implemented in relation to the Athens 2004 Olympic and Para Olympic Games. Methods A cross-sectional survey of all stakeholders of the Games, including the Athens 2004 Organizing Committee, all ministries of the Greek government, the National School of Public Health, all municipalities hosting Olympic events and all official private sponsors of the Games, was conducted after the conclusion of the Games. Results A total of 44 agencies were surveyed, 40 responded (91%), and ten (10) health-promotion programs were identified. Two programs were implemented by the Athens 2004 Organizing Committee, 2 from the Greek ministries, 2 from the National School of Public Health, 1 from municipalities, and 3 from official private sponsors of the Games. The total cost of the programs was estimated at 943,000 Euros, a relatively small fraction (0.08%) of the overall cost of the Games. Conclusion Greece has made a small but significant step forward on health promotion in the context of the Olympic Games. The International Olympic Committee and the future hosting countries, including China, are encouraged to elaborate on this idea and offer the world a promising future for public health. PMID:16504120 12. Enhanced production of para-hydroxybenzoic acid by genetically engineered Saccharomyces cerevisiae. PubMed Averesch, Nils J H; Prima, Alex; Krömer, Jens O 2017-08-01 Saccharomyces cerevisiae is a popular organism for metabolic engineering; however, studies aiming at over-production of bio-replacement precursors for the chemical industry often fail to move beyond the proof-of-concept stage. When intending to show real industrial attractiveness, the challenge is twofold: formation of the target compound must be increased, while minimizing the formation of side and by-products to maximize titer, rate and yield. To tackle these, the metabolism of the organism, as well as the parameters of the process, need to be optimized. Addressing both, we show that S. cerevisiae is well-suited for over-production of aromatic compounds, which are valuable in the chemical industry and are particularly useful in space technology. Specifically, a strain engineered to accumulate chorismate was optimized for formation of para-hydroxybenzoic acid. Then a fed-batch bioreactor process was developed, which delivered a final titer of 2.9 g/L, a maximum rate of 18.625 mg pHBA (g CDW)⁻¹ h⁻¹ and carbon yields of up to 3.1 mg pHBA/g glucose. 13. [Psychometric properties of the Escala de Autoeficacia para el Afrontamiento del Estrés (EAEAE)]. PubMed Godoy Izquierdo, Débora; Godoy García, Juan F; López-Chicheri García, Isabel; Martínez Delgado, Antonio; Gutiérrez Jiménez, Susana; Vázquez Vázquez, Luisa 2008-02-01 This paper presents the theoretical construct of coping-with-stress self-efficacy and an instrument for its assessment, the Escala de Autoeficacia para el Afrontamiento del Estrés (EAEAE; in English, Coping with Stress Self-Efficacy Scale), as well as the results obtained concerning its psychometric properties from an adult population. 812 individuals, aged 18 to 64 years old (M = 26.46, SD = 9.93; 62.6% females and 37.4% males), recruited from various contexts, participated in this study. Participants completed the EAEAE along with other measures of constructs theoretically related to this specific self-efficacy.
The EAEAE shows appropriate reliability in its complete form as well as in its two subscales of Efficacy Expectations and Outcome Expectations, adequate factorial construct validity (which reveals the bi-dimensionality of the instrument), and convergent validity with the remaining measures. The characteristics of brevity and ease of application of the scale, in addition to its adequate psychometric properties, indicate that the EAEAE is an appropriate tool to assess and investigate coping-with-stress self-efficacy in research as well as clinical settings. 14. Time-resolved rotational spectroscopy of para-difluorobenzene·Ar Weichert, A.; Riehn, C.; Matylitsky, V. V.; Jarzeba, W.; Brutschy, B. 2002-07-01 We report on time-resolved rotational spectroscopy experiments of the cluster para-difluorobenzene·Ar (pDFB·Ar) by picosecond laser pulses in a supersonic expansion. Rotational coherences of pDFB·Ar are generated by resonant electronic excitation and probed by time-resolved fluorescence depletion spectroscopy and time-resolved photoionization ((1+1') PPI) spectroscopy. The former allows the determination of both ground- and excited-state rotational constants, whereas the latter technique enables the separate study of the excited state with the benefit of mass-selective detection. Since pDFB·Ar represents a near-symmetric oblate rotor, persistent J-type transients with t_J ≈ n/[2(A+B)] could be measured. From their analysis, (A″+B″) = 2234.9 ± 2 MHz and (A′+B′) = 2237.9 ± 2 MHz were obtained. A structural investigation, based on data for the pDFB monomer, is presented, resulting in a center-of-mass distance between the two moieties of pDFB·Ar of R_z = 3.543 ± 0.017 Å, with a change of ΔR_z = -0.057 ± 0.009 Å upon electronic excitation. These results are compared to data from former frequency-resolved experiments and ab initio computations. 15. LabVIEW-based control software for para-hydrogen induced polarization instrumentation Agraz, Jose; Grunfeld, Alexander; Li, Debiao; Cunningham, Karl; Willey, Cindy; Pozos, Robert; Wagner, Shawn 2014-04-01 The elucidation of cell metabolic mechanisms is the modern underpinning of the diagnosis, treatment, and in some cases the prevention of disease. Para-Hydrogen induced polarization (PHIP) enhances magnetic resonance imaging (MRI) signals over 10 000 fold, allowing for the MRI of cell metabolic mechanisms. This signal enhancement is the result of hyperpolarizing endogenous substances used as contrast agents during imaging. PHIP instrumentation hyperpolarizes Carbon-13 (13C) based substances using a process requiring control of a number of factors: chemical reaction timing, gas flow, monitoring of a static magnetic field (Bo), radio frequency (RF) irradiation timing, reaction temperature, and gas pressures. Current PHIP instruments control the hyperpolarization process manually, lacking precise control of the factors listed above and therefore producing non-reproducible results. We discuss the design and implementation of a LabVIEW-based computer program that automatically and precisely controls the delivery and manipulation of gases and samples, monitoring gas pressures, environmental temperature, and RF sample irradiation. We show that the automated control over the hyperpolarization process results in the hyperpolarization of hydroxyethylpropionate. The implementation of this software enables fast prototyping of PHIP instrumentation for the evaluation of a myriad of 13C-based endogenous contrast agents used in molecular imaging (a schematic timed-sequence sketch follows below).
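The LabVIEW program in entry 15 is described only at the functional level; the sketch below is a rough, hypothetical Python analogue of one of its ideas, namely running the hyperpolarization steps as a precisely ordered, timed sequence instead of manual operation. Step names, durations, and actions are invented placeholders, not the instrument's actual interface.

```
# Illustrative sketch only: a hypothetical analogue of running the
# hyperpolarization procedure as an automated, timed sequence of steps,
# with wall-clock logging for reproducibility. All step names, durations,
# and actions are invented placeholders.
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    duration_s: float
    action: Callable[[], None]  # e.g. open a valve, fire an RF pulse

def run_sequence(steps: list[Step]) -> None:
    """Execute each step in order, logging when it starts."""
    t0 = time.monotonic()
    for step in steps:
        print(f"{time.monotonic() - t0:7.3f} s  start: {step.name}")
        step.action()
        time.sleep(step.duration_s)

run_sequence([
    Step("pressurize reaction chamber", 2.0, lambda: None),
    Step("admit para-hydrogen",         1.5, lambda: None),
    Step("RF irradiation",              0.5, lambda: None),
    Step("eject sample",                1.0, lambda: None),
])
```

16.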
Long-Range Ruthenium-Amine Electronic Communication through the para-Oligophenylene Wire Shen, Jun-Jian; Zhong, Yu-Wu 2015-09-01 Studies of long-range electronic communication are hampered by solubility and potential-splitting issues. A "hybridized redox-asymmetry" method using a combination of organic and inorganic redox species is proposed and exemplified to overcome these two issues. Complexes 1(PF6)-6(PF6) (from short to long in length) with the organic redox-active amine and inorganic cyclometalated ruthenium termini bridged by the para-oligophenylene wire have been prepared. Complex 6 has the longest Ru-amine geometrical distance of 27.85 Å. Complexes 3(PF6) and 4(PF6) show lamellar crystal packing on the basis of a head-to-tail anti-parallelly aligned dimeric structure. Two redox waves are observed for all complexes in the potential region between +0.2 and +0.9 V vs Ag/AgCl. The electrochemical potential splitting is 410, 220, 143, 112, 107, and 105 mV for 1(PF6) through 6(PF6), respectively. Ruthenium (+2) to aminium (N•+) charge transfer transitions have been identified for the odd-electron compounds 1²⁺-6²⁺ by spectroelectrochemical measurements. The electronic communication between amine and ruthenium decreases exponentially with a decay slope of -0.137 Å⁻¹. DFT calculations have been performed to complement these experimental results. 17. Long-Range Ruthenium-Amine Electronic Communication through the para-Oligophenylene Wire. PubMed Shen, Jun-Jian; Zhong, Yu-Wu 2015-09-07 Studies of long-range electronic communication are hampered by solubility and potential-splitting issues. A "hybridized redox-asymmetry" method using a combination of organic and inorganic redox species is proposed and exemplified to overcome these two issues. Complexes 1(PF6)-6(PF6) (from short to long in length) with the organic redox-active amine and inorganic cyclometalated ruthenium termini bridged by the para-oligophenylene wire have been prepared. Complex 6 has the longest Ru-amine geometrical distance of 27.85 Å. Complexes 3(PF6) and 4(PF6) show lamellar crystal packing on the basis of a head-to-tail anti-parallelly aligned dimeric structure. Two redox waves are observed for all complexes in the potential region between +0.2 and +0.9 V vs Ag/AgCl. The electrochemical potential splitting is 410, 220, 143, 112, 107, and 105 mV for 1(PF6) through 6(PF6), respectively. Ruthenium (+2) to aminium (N(•+)) charge transfer transitions have been identified for the odd-electron compounds 1(2+)-6(2+) by spectroelectrochemical measurements. The electronic communication between amine and ruthenium decreases exponentially with a decay slope of -0.137 Å(-1). DFT calculations have been performed to complement these experimental results. 18. Theoretical study of para-nitro-aniline adsorption on the Au(111) surface Li, Cui; Monti, Susanna; Li, Xin; Rinkevicius, Zilvinas; Ågren, Hans; Carravetta, Vincenzo 2016-07-01 The electronic structure, bonding properties and dynamics of para-nitro-aniline (PNA) adsorbed on the Au(111) surface at sub-monolayer coverage have been investigated by density-functional theory (DFT) static calculations and quantum molecular dynamics simulations. Four main adsorption geometries have been identified by DFT energy optimization with the gradient-corrected PBE functional, accounting for the role of the van der Waals (vdW) interaction.
Quantum dynamics calculations starting from the four different structures have been performed at room temperature to estimate the relative stability of the adsorbates and the presence of barriers for their interconversion. Quantum simulations suggest that the most stable adsorption geometry at room temperature is that of PNA with a slightly distorted molecular plane almost parallel to the Au(111) surface. In a second, less populated configuration, the PNA molecule interacts with the substrate by its NO2 group while the molecular plane is orthogonal to the surface. The N 1s electron photoemission spectrum has been simulated for the identified adsorbate geometries, and a measurable variation of the absolute and relative chemical shift for the two nitrogen atoms, in comparison with the known values for PNA in the gas phase, is predicted. 19. Dynamics and lithium binding energies of polyelectrolytes based on functionalized poly(para-phenylene terephthalamide). PubMed Grozema, F C; Best, A S; van Eijck, L; Stride, J; Kearley, G J; de Leeuw, S W; Picken, S J 2005-04-28 Polyelectrolyte materials are an interesting class of electrolytes for use in fuel cell and battery applications. Poly(para-phenylene terephthalamide) (PPTA, Kevlar) is a liquid-crystalline polymer that, when sulfonated, becomes a polyelectrolyte exhibiting moderate ion conductivity at elevated temperatures. In this work, quasi-elastic neutron scattering (QENS) experiments were performed to gain insight into the effect of the presence of lithium counterions on the chain dynamics in the material. It was found that the addition of lithium ions decreases the dynamics of the chains. Additionally, the binding of lithium ions to the sulfonic acid groups was investigated by density functional theory (DFT) calculations. It was found that the local surroundings of the sulfonic acid group have very little effect on the lithium-ion binding energy. Binding energies for a variety of different systems were all calculated to be around 150 kcal/mol. The DFT calculations also show the existence of a structure in which a single lithium ion interacts with two sulfonic acid moieties on different chains. The formation of such "electrostatic cross-links" is believed to be the source of the increased tendency to aggregate and the reduced dynamics in the presence of lithium ions. 20. Metabolic Engineering of Pseudomonas putida KT2440 for the Production of para-Hydroxy Benzoic Acid. PubMed Yu, Shiqin; Plan, Manuel R; Winter, Gal; Krömer, Jens O 2016-01-01 para-Hydroxy benzoic acid (PHBA) is the key component for preparing parabens, common preservatives in food, drugs, and personal care products, as well as high-performance bioplastics such as liquid crystal polymers. Pseudomonas putida KT2440 was engineered to produce PHBA from glucose via the shikimate pathway intermediate chorismate. To obtain the PHBA production strain, chorismate lyase UbiC from Escherichia coli and a feedback-resistant 3-deoxy-d-arabino-heptulosonate-7-phosphate synthase encoded by gene aroG(D146N) were overexpressed individually and simultaneously. In addition, genes related to product degradation (pobA) or competing for the precursor chorismate (pheA and trpE) were deleted from the genome. To further improve PHBA production, the glucose metabolism repressor hexR was knocked out in order to increase erythrose 4-phosphate and NADPH supply. The best strain achieved a maximum titer of 1.73 g L(-1) and a carbon yield of 18.1% (C-mol C-mol(-1)) in a non-optimized fed-batch fermentation.
This is to date the highest PHBA concentration produced by P. putida using a chorismate lyase. 1. Health promotion programs related to the Athens 2004 Olympic and Para Olympic games. PubMed Soteriades, Elpidoforos S; Hadjichristodoulou, Christos; Kremastinou, Jeni; Chelvatzoglou, Fotini C; Minogiannis, Panagiotis S; Falagas, Matthew E 2006-02-24 The Olympic Games constitute a first-class opportunity to promote athleticism and health messages. Little is known, however, on the impact of Olympic Games on the development of health-promotion programs for the general population. Our objective was to identify and describe the population-based health-promotion programs implemented in relation to the Athens 2004 Olympic and Para Olympic Games. A cross-sectional survey of all stakeholders of the Games, including the Athens 2004 Organizing Committee, all ministries of the Greek government, the National School of Public Health, all municipalities hosting Olympic events and all official private sponsors of the Games, was conducted after the conclusion of the Games. A total of 44 agencies were surveyed, 40 responded (91%), and ten (10) health-promotion programs were identified. Two programs were implemented by the Athens 2004 Organizing Committee, 2 from the Greek ministries, 2 from the National School of Public Health, 1 from municipalities, and 3 from official private sponsors of the Games. The total cost of the programs was estimated at 943,000 Euros, a relatively small fraction (0.08%) of the overall cost of the Games. Greece has made a small but significant step forward on health promotion in the context of the Olympic Games. The International Olympic Committee and the future hosting countries, including China, are encouraged to elaborate on this idea and offer the world a promising future for public health. 2. Para-Phenylenediamine Induces Apoptotic Death of Melanoma Cells and Reduces Melanoma Tumour Growth in Mice PubMed Central Bhowmick, Debajit; Bhar, Kaushik; Mallick, Sanjaya K.; Das, Subhadip; Chatterjee, Nabanita; Sarkar, Tuhin Subhra; Chakrabarti, Rajarshi; Das Saha, Krishna; Siddhanta, Anirban 2016-01-01 Melanoma is one of the most aggressive forms of cancer, usually resistant to standard chemotherapeutics. Despite a huge number of clinical trials, a chemotherapeutic agent that can effectively destroy melanoma has yet to be found. Para-phenylenediamine (p-PD) in hair dyes is reported to serve purely as an external dyeing agent. Very little is known about whether p-PD has any effect on melanin-producing cells. We have demonstrated p-PD-mediated apoptotic death of both human and mouse melanoma cells in vitro. Mouse melanoma tumour growth was also arrested by intraperitoneal administration of p-PD, with almost no side effects. This apoptosis is shown to occur primarily via loss of mitochondrial membrane potential (MMP), generation of reactive oxygen species (ROS), and caspase 8 activation. p-PD-mediated apoptosis was also confirmed by the increase in sub-G0/G1 cell number. Thus, our experimental observation suggests that p-PD can be a potential, less expensive candidate for development as a chemotherapeutic agent for melanoma. PMID:27293892 3. Functionalized ferrocenes: The role of the para substituent on the phenoxy pendant group.
PubMed Vera, José L; Rullán, Jorge; Santos, Natasha; Jiménez, Jesús; Rivera, Joshua; Santana, Alberto; Briggs, Jon; Rheingold, Arnold L; Matta, Jaime; Meléndez, Enrique 2014-01-01 Six ferrocenecarboxylates with phenyl, 4-(1H-pyrrol-1-yl)phenyl, 4-fluorophenyl, 4-chlorophenyl, 4-bromophenyl, and 4-iodophenyl as pendant groups were synthesized and fully characterized by spectroscopic, electrochemical and X-ray diffraction methods. The anti-proliferative activity of these complexes was investigated in hormone-dependent MCF-7 breast cancer and MCF-10A normal breast cell lines, to determine the role of the para substituent on the phenoxy pendant group. The 4-fluorophenyl ferrocenecarboxylate is inactive in both cell lines, while 4-(1H-pyrrol-1-yl)phenyl ferrocenecarboxylate is highly cytotoxic in both cell lines. 4-chlorophenyl and 4-bromophenyl ferrocenecarboxylates have moderate to good anti-proliferative activity in MCF-7 and low anti-proliferative activity on the normal breast cell line MCF-10A, whereas the 4-iodophenyl analog is highly toxic to the normal breast cell line. The phenyl ferrocenecarboxylate has proliferative effects on MCF-7 and is inactive in MCF-10A. Docking studies between the complexes and the alpha-estrogen receptor (ERα) were performed to search for key interactions which may explain the anti-proliferative activity of 4-bromophenyl ferrocenecarboxylate. Docking studies suggest the anti-proliferative activity of these ferrocenecarboxylates is attributable to the cytotoxic effects of the ferrocene group and not to anti-estrogenic effects. 4. Metabolic Engineering of Pseudomonas putida KT2440 for the Production of para-Hydroxy Benzoic Acid PubMed Central Yu, Shiqin; Plan, Manuel R.; Winter, Gal; Krömer, Jens O. 2016-01-01 para-Hydroxy benzoic acid (PHBA) is the key component for preparing parabens, common preservatives in food, drugs, and personal care products, as well as high-performance bioplastics such as liquid crystal polymers. Pseudomonas putida KT2440 was engineered to produce PHBA from glucose via the shikimate pathway intermediate chorismate. To obtain the PHBA production strain, chorismate lyase UbiC from Escherichia coli and a feedback-resistant 3-deoxy-d-arabino-heptulosonate-7-phosphate synthase encoded by gene aroGD146N were overexpressed individually and simultaneously. In addition, genes related to product degradation (pobA) or competing for the precursor chorismate (pheA and trpE) were deleted from the genome. To further improve PHBA production, the glucose metabolism repressor hexR was knocked out in order to increase erythrose 4-phosphate and NADPH supply. The best strain achieved a maximum titer of 1.73 g L−1 and a carbon yield of 18.1% (C-mol C-mol−1) in a non-optimized fed-batch fermentation. This is to date the highest PHBA concentration produced by P. putida using a chorismate lyase. PMID:27965953 5. Acute allergic contact dermatitis due to para-phenylenediamine after temporary henna painting. PubMed Nawaf, Al-Mutairi; Joshi, Arun; Nour-Eldin, Osama 2003-11-01 The use of temporary natural henna painting for body adornment and hair dyeing is very common in several countries of the Indian subcontinent, Middle East, and North Africa, and the fad is spreading in other parts of the world. Several cases of sensitization and acute allergic reactions induced by para-phenylenediamine (PPD)-contaminated temporary traditional/natural henna have been reported, along with occasional serious long-term and rare fatal consequences.
We report here a 17-year-old girl with blisters over her hands of five days' duration that appeared within 72 hours of applying a temporary henna paint to her hands during a social occasion. Similar lesions were noted on her face. She had previously applied black henna only once, a year earlier, without developing any lesions. A clinical diagnosis of acute allergic contact dermatitis (ACD) was made. After a short course of oral corticosteroids, topical mometasone furoate 1.0% cream, and oral antihistamines, the lesions healed completely over the next four weeks, leaving post-inflammatory hypopigmentation. Patch testing done with the standard European battery, PPD 1% in petrolatum, and commercially available natural henna powder revealed a 3+ reaction to PPD at 48 hours. No reaction was seen at the natural henna site. Awareness of the condition among physicians and the public, and regulation regarding warnings of the risks of using such products, are urgently warranted. 6. Attitudes of the public to medical care: Part 5-Para-medical services. PubMed Dixon, C W; Dodge, J S; Emery, G M; Salmond, G C; Spears, G F 1975-07-09 A sample of the population of Auckland and Dunedin was asked a series of six questions concerning their attitude to para-medical services as provided by a Plunket nurse, a public health or school nurse, a district nurse, a medico-social worker from a hospital, and the ambulance service. Analysis of the replies shows some differences in utilisation of specific services between the two cities. Respondents' opinions on the methods of financing these services show a general vote for preservation of the status quo but with some increased Government support. There are indications that the public is unaware of current methods of financing such services. Public acceptance of the idea of employment of trained nurses and social workers in general practice was high, and when the question was made more specific by referring to the respondent's own family doctor, acceptance was much higher. Reasons for non-acceptance do not indicate any major difficulties in the employment of such staff in general practice, at least as far as the patients are concerned. 7. Determination of para-Phenylenediamine (PPD) in Henna in the United Arab Emirates PubMed Central Al-Suwaidi, Ayesha; Ahmed, Hafiz 2010-01-01 Henna is very popular in the United Arab Emirates (UAE); it is part of the culture and traditions. Allergy to natural henna is not usual; however, the addition of para-phenylenediamine (PPD) to natural henna increases the risk of allergic contact dermatitis. The objectives of the study were to identify the presence and concentration of PPD in henna available in the UAE. Fifteen henna salons were selected randomly from three cities in the UAE. Twenty-five henna samples were acquired from these selected salons. The presence of PPD in the henna samples was determined qualitatively and quantitatively using High Performance Liquid Chromatography (HPLC). The study showed that PPD was present in all of the black henna samples, at concentrations ranging between 0.4% and 29.5%, higher than the level recommended for hair dyes in most of the black henna samples. The presence of PPD in black henna increases the risk of allergic contact dermatitis among users of black henna, and a number of cases have already been reported in the UAE (see the calibration sketch below). PMID:20617053
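Entry 7 reports HPLC-based quantification of PPD; the usual route from a chromatogram peak to a concentration is an external-standard calibration line. The sketch below illustrates that step with entirely invented numbers; the study's actual calibration data are not given in the abstract.

```
# Hypothetical sketch (not the study's actual procedure): quantifying PPD
# from HPLC peak areas with an external-standard calibration line.
# All numbers below are invented for illustration.
import numpy as np

# Calibration standards: known PPD concentrations (% w/w) vs peak area (a.u.).
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
area = np.array([12.1, 24.3, 48.0, 121.5, 240.8])

slope, intercept = np.polyfit(conc, area, 1)  # least-squares line

def area_to_concentration(peak_area: float) -> float:
    """Invert the calibration line to estimate the concentration."""
    return (peak_area - intercept) / slope

sample_area = 96.4  # peak area measured for an unknown henna extract
print(f"estimated PPD: {area_to_concentration(sample_area):.2f} % w/w")
```

8.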
Functionalized ferrocenes: The role of the para substituent on the phenoxy pendant group PubMed Central Vera, José L.; Rullán, Jorge; Santos, Natasha; Jiménez, Jesús; Rivera, Joshua; Santana, Alberto; Briggs, Jon; Rheingold, Arnold L.; Matta, Jaime; Meléndez, Enrique 2016-01-01 Six ferrocenecarboxylates with phenyl, 4-(1H-pyrrol-1-yl)phenyl, 4-fluorophenyl, 4-chlorophenyl, 4-bromophenyl, and 4-iodophenyl as pendant groups were synthesized and fully characterized by spectroscopic, electrochemical and X-ray diffraction methods. The anti-proliferative activity of these complexes was investigated in hormone-dependent MCF-7 breast cancer and MCF-10A normal breast cell lines, to determine the role of the para substituent on the phenoxy pendant group. The 4-fluorophenyl ferrocenecarboxylate is inactive in both cell lines, while 4-(1H-pyrrol-1-yl)phenyl ferrocenecarboxylate is highly cytotoxic in both cell lines. 4-chlorophenyl and 4-bromophenyl ferrocenecarboxylates have moderate to good anti-proliferative activity in MCF-7 and low anti-proliferative activity on the normal breast cell line MCF-10A, whereas the 4-iodophenyl analog is highly toxic to the normal breast cell line. The phenyl ferrocenecarboxylate has proliferative effects on MCF-7 and is inactive in MCF-10A. Docking studies between the complexes and the alpha-estrogen receptor (ERα) were performed to search for key interactions which may explain the anti-proliferative activity of 4-bromophenyl ferrocenecarboxylate. Docking studies suggest the anti-proliferative activity of these ferrocenecarboxylates is attributable to the cytotoxic effects of the ferrocene group and not to anti-estrogenic effects. PMID:27453588 9. Does Para-chloroaniline Really Form after Mixing Sodium Hypochlorite and Chlorhexidine? PubMed Orhan, Ekim Onur; Irmak, Özgür; Hür, Deniz; Yaman, Batu Can; Karabucak, Bekir 2016-03-01 Mixing sodium hypochlorite (NaOCl) with chlorhexidine (CHX) forms a brown-colored precipitate. Previous studies are not in agreement on whether this precipitate contains para-chloroaniline (PCA). Tests used for analysis may demonstrate different outcomes. The purpose of this study was to determine whether PCA is formed through the reaction of NaOCl and CHX, by using high performance liquid chromatography, proton nuclear magnetic resonance spectroscopy, gas chromatography, thin layer chromatography, infrared spectroscopy, and gas chromatography/mass spectrometry. To obtain a brown precipitate, 4.99% NaOCl was mixed with 2.0% CHX. This brown precipitate was analyzed and compared with signals obtained from commercially available 4.99% NaOCl and 2.0% CHX solutions, and from 98% PCA in powder form. Chromatographic and spectroscopic analyses showed that the brown precipitate does not contain free PCA. This study provides a clear-cut answer to the controversy over PCA formation from the reaction of CHX and NaOCl. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved. 10. Long-Range Ruthenium-Amine Electronic Communication through the para-Oligophenylene Wire PubMed Central Shen, Jun-Jian; Zhong, Yu-Wu 2015-01-01 Studies of long-range electronic communication are hampered by solubility and potential-splitting issues. A "hybridized redox-asymmetry" method using a combination of organic and inorganic redox species is proposed and exemplified to overcome these two issues.
Complexes 1(PF6)–6(PF6) (from short to long in length) with the organic redox-active amine and inorganic cyclometalated ruthenium termini bridged by the para-oligophenylene wire have been prepared. Complex 6 has the longest Ru-amine geometrical distance of 27.85 Å. Complexes 3(PF6) and 4(PF6) show lamellar crystal packing on the basis of a head-to-tail anti-parallelly aligned dimeric structure. Two redox waves are observed for all complexes in the potential region between +0.2 and +0.9 V vs Ag/AgCl. The electrochemical potential splitting is 410, 220, 143, 112, 107, and 105 mV for 1(PF6) through 6(PF6), respectively. Ruthenium (+2) to aminium (N•+) charge transfer transitions have been identified for the odd-electron compounds 1²⁺–6²⁺ by spectroelectrochemical measurements. The electronic communication between amine and ruthenium decreases exponentially with a decay slope of −0.137 Å⁻¹. DFT calculations have been performed to complement these experimental results. PMID:26344929 11. Epidemiology of bee stings in Campina Grande, Paraíba state, Northeastern Brazil PubMed Central 2014-01-01 Background The present study aims to investigate the clinical-epidemiological characteristics of bee sting cases recorded between 2007 and 2012 in the city of Campina Grande, Paraíba state, Brazil. Data were collected from the database of the Injury Notification Information System of the Brazilian Ministry of Health. Results A total of 459 bee sting cases were retrospectively analyzed. The average annual incidence was 19 cases per 100,000 inhabitants. Cases were distributed in all months of the year, with higher prevalence in September and February. Most victims were men aged between 20 and 29 years. The highest incidence of cases was recorded in urban areas. Victims were stung mainly on the head and torso and received medical assistance predominantly 1 to 3 hours after being stung. The most frequent clinical manifestations were pain, edema and itching. Most cases were classified as mild, and three deaths were reported. Conclusions The high incidence of envenomations provoked by bees in Campina Grande suggests that it may be an important risk area for accidents. Since several medical records lacked information, the clinical-epidemiological profile of bee sting cases in the studied region could not be accurately determined. The current study provides relevant data for the development of strategies to promote control and prevention of bee stings in this area. Further training for health professionals seems to be necessary to improve their skills in recording clinical-epidemiological information as well as in treating bee sting victims. PMID:24694193 12. Quantification of para-phenylenediamine and heavy metals in henna dye. PubMed Kang, Ik-Joon; Lee, Mu-Hyoung 2006-07-01 Henna (Lawsonia inermis, family Lythraceae) is a shrub cultivated in India, Sri Lanka and North Africa and contains the active dye lawsone (2-hydroxy-1,4-naphthoquinone). Henna dye is obtained from the dried leaves, which are powdered and mixed with oil or water and are used to prepare hair and body dyes. Temporary henna tattoos are readily available worldwide, last on the skin for several weeks and offer a self-limited, convenient alternative to a permanent tattoo. The addition of para-phenylenediamine (PPD), which is widely recognised as a sensitizer, increases the risk of allergic contact dermatitis from henna tattoo mixtures, and a number of cases have been reported.
We examined 15 henna samples available in Korea for the presence of PPD and heavy metals such as nickel, cobalt, chromium, lead and mercury, using high-performance liquid chromatography (HPLC), atomic absorption spectroscopy (AAS), a mercury analyser, and inductively coupled plasma emission spectroscopy. PPD, nickel and cobalt were detected in 3, 11 and 4 samples, respectively. 13. Simultaneous manipulation and observation of multiple ro-vibrational eigenstates in solid para-hydrogen Katsuki, Hiroyuki; Ohmori, Kenji 2016-09-01 We have experimentally performed the coherent control of delocalized ro-vibrational wave packets (RVWs) of solid para-hydrogen (p-H2) by wave packet interferometry (WPI) combined with coherent anti-Stokes Raman scattering (CARS). RVWs of solid p-H2 are delocalized in the crystal, and the wave function with wave vector k ≈ 0 is selectively excited via the stimulated Raman process. We have excited the RVW twice by a pair of femtosecond laser pulses with delay controlled by a stabilized Michelson interferometer. Using a broad-band laser pulse, multiple ro-vibrational states can be excited simultaneously. We have observed the time-dependent Ramsey fringe spectra as a function of the inter-pulse delay by a spectrally resolved CARS technique using a narrow-band probe pulse, resolving the different intermediate states. Due to the different fringe oscillation periods among those intermediate states, we can manipulate their amplitude ratio by tuning the inter-pulse delay on the sub-femtosecond time scale. The state-selective manipulation and detection of the CARS signal combined with the WPI is a general and efficient protocol for the control of the interference of multiple quantum states in various quantum systems. 14. Kinetic analyses and pyrolytic behavior of Para grass (Urochloa mutica) for its bioenergy potential. PubMed 2017-01-01 The biomass of Urochloa mutica was subjected to thermal degradation analyses to understand its pyrolytic behavior for bioenergy production. Thermal degradation experiments were performed at three different heating rates, 10, 30 and 50 °C min(-1), using a simultaneous thermogravimetric-differential scanning calorimetric analyzer, under an inert environment. The kinetic analyses were performed using the isoconversional models of Kissinger-Akahira-Sunose (KAS) and Flynn-Wall-Ozawa (FWO). The high heating value was calculated as 15.04 MJ mol(-1). The activation energy (E) values were shown to range from 103 to 233 kJ mol(-1). Pre-exponential factors (A) indicated that the reaction follows first-order kinetics. Gibbs free energy (ΔG) ranged from 169 to 173 kJ mol(-1) and 168 to 172 kJ mol(-1), as calculated by the KAS and FWO methods, respectively. We have shown that Para grass biomass has considerable bioenergy potential, comparable to established bioenergy crops such as switchgrass and miscanthus. 15. Polycrystalline para-terphenyl scintillator adopted in a β⁻ detecting probe for radio-guided surgery Solfaroli Camillocci, E.; Bellini, F.; Bocci, V.; Collamati, F.; De Lucia, E.; Faccini, R.; Marafini, M.; Mattei, I.; Morganti, S.; Paramatti, R.; Patera, V.; Pinci, D.; Recchia, L.; Russomando, A.; Sarti, A.; Sciubba, A.; Senzacqua, M.; Voena, C. 2015-06-01 A radio-guided surgery technique exploiting β⁻ emitters is under development. It aims at a higher target-to-background activity ratio, implying both a smaller radiopharmaceutical activity and the possibility of extending the technique to cases with a large uptake of surrounding healthy organs.
Such a technique requires a dedicated intraoperative probe detecting β⁻ radiation. A first prototype has been developed, relying on the low density and high light yield of the diphenylbutadiene-doped para-terphenyl organic scintillator. The scintillation light produced in a cylindrical crystal, 5 mm in diameter and 3 mm in height, is guided to a photo-multiplier tube by optical fibres. The custom readout electronics is designed to optimize its usage in terms of feedback to the surgeon, portability and remote monitoring of the signal. Tests show that, with a radiotracer activity comparable to those administered for diagnostic purposes, the developed probe can detect a 0.1 ml cancerous residual of meningioma in a few seconds. 16. Foco Nasmyth para el telescopio 2,15mts. de CASLEO Casagrande, A. R. In principle, this project seeks to make the fullest possible use of the available instrumentation, looking for ways to optimize the service that CASLEO provides to the astronomical community and make it more efficient. It consists of putting to use devices already present on the telescope, such as the optical path intended for the Coudé focus. Considering that we have a third Coudé mirror with all of its mechanisms automated (currently unused), an appropriate focal-plane distance, and the space and physical location needed to install a peripheral instrument, it is possible to commission a Nasmyth focus on the 2.15 m telescope. Having this new focus will bring important benefits. First, it will make almost simultaneous observation with two instruments possible. Another aspect to consider is that it will reduce the frequent changing of peripheral instrumentation, which degrades its optimal alignment. Finally, its low implementation cost is also worth noting. 17. Uso de Sustancias en Mujeres con Desventaja Social: Riesgo para el Contagio de VIH/SIDA PubMed Central Cianelli, R.; Ferrer, L; Bernales, M.; Miner, S.; Irarrázabal, L.; Molina, Y. 2009-01-01 Background The epidemiological characterization of HIV in Chile points to feminization, pauperization and heterosexualization of the epidemic, implying a greater risk for socially disadvantaged women. When substance use is added to this, the vulnerability of this group to HIV/AIDS increases. Objective To describe substance use among socially disadvantaged women and to identify risk factors for HIV infection associated with this use. Material and Methods 52 women were interviewed as part of the project "Testing an HIV/AIDS prevention intervention in Chilean women," GRANT # RO1 TW 006977. Sociodemographic and substance-use variables are described with descriptive statistics, and the relationships between variables are analyzed with correlation tests. Results The results indicate a sociodemographic profile that places these women in a situation of vulnerability to HIV/AIDS infection, with a high rate of substance use that accentuates the risk. Conclusions The findings point to the need to consider interventions focused on HIV prevention in women that address the risks associated with substance use. PMID:21197380 18. ParaKMeans: Implementation of a parallelized K-means algorithm suitable for general laboratory use.
PubMed Kraj, Piotr; Sharma, Ashok; Garge, Nikhil; Podolsky, Robert; McIndoe, Richard A 2008-04-16 During the last decade, the use of microarrays to assess the transcriptome of many biological systems has generated an enormous amount of data. A common technique used to organize and analyze microarray data is to perform cluster analysis. While many clustering algorithms have been developed, they all suffer a significant decrease in computational performance as the size of the dataset being analyzed becomes very large. For example, clustering 10000 genes from an experiment containing 200 microarrays can be quite time-consuming and challenging on a desktop PC. One solution to the scalability problem of clustering algorithms is to distribute or parallelize the algorithm across multiple computers. The software described in this paper is a high-performance multithreaded application that implements a parallelized version of the K-means Clustering algorithm. Most parallel processing applications are not accessible to the general public and require specialized software libraries (e.g. MPI) and specialized hardware configurations. The parallel nature of the application comes from the use of a web service to perform the distance calculations and cluster assignments. Here we show that our parallel implementation provides significant performance gains over a wide range of datasets using as few as seven nodes. The software was written in C# and was designed in a modular fashion to provide both deployment flexibility and flexibility in the user interface. ParaKMeans was designed to provide the general scientific community with an easy and manageable client-server application that can be installed on a wide variety of Windows operating systems (a toy parallel-assignment sketch follows below).
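ParaKMeans itself is a C# client-server application; purely as an illustration of the core idea in entry 18, distributing the expensive distance/assignment step of K-means across workers, here is a minimal Python sketch using a local process pool. The data shape, k, and worker count are placeholders, not values from the paper.

```
# Rough Python sketch of the core idea in ParaKMeans (the original is a C#
# web-service application): parallelize the assignment step of Lloyd's
# K-means across worker processes. Data, k, and worker count are placeholders.
import numpy as np
from multiprocessing import Pool

def assign_chunk(args):
    """Nearest-centroid label for each row of one data chunk."""
    chunk, centroids = args
    d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(X, k, n_workers=4, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    chunks = np.array_split(X, n_workers)
    with Pool(n_workers) as pool:
        for _ in range(n_iter):
            # Assignment step, farmed out to the worker pool.
            labels = np.concatenate(
                pool.map(assign_chunk, [(c, centroids) for c in chunks]))
            # Update step: mean of the points assigned to each cluster.
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(10000, 200))  # genes x arrays
    centroids, labels = parallel_kmeans(X, k=10)
    print(np.bincount(labels))
```

In ParaKMeans the analogous distance and assignment work is farmed out to a web service rather than local processes, with the coordinator presumably retaining the update and convergence logic. 19. [La Junta para Ampliación de Estudios and the development of Spanish psychology]. PubMed Carpintero, Helio; Herrero, Fania 2007-01-01 During the last decades of the XIXth century, there was an awakening of awareness of the need for a Spanish cultural renovation, one of whose aims was to create and develop a Spanish science resembling the scientific models already established in more advanced countries. There was a desire for Europeanization. Since this was a global social objective, it was necessary to start with the training of educators. In this climate the Junta para Ampliación de Estudios e Investigaciones Científicas appeared. The role that the Junta played in Spanish research and in innovation in the psychopedagogical field through the first third of the XXth century was extremely important. The Junta's scholarship policy was one of its most substantial achievements, for it made it possible for the country to reach the European scientific and psychological level in a few decades (1907-1936). The relations between Spanish teachers and the Institut J.J. Rousseau are to be highlighted, as the "Geneva School" was to deeply influence the further development of psychology in Spain. 20. Transport properties of liquid para-hydrogen: The path integral centroid molecular dynamics approach Yonetani, Yoshiteru; Kinugawa, Kenichi 2003-11-01 Several fundamental transport properties of the quantum liquid para-hydrogen (p-H2) at 17 K have been numerically evaluated by means of the quantum dynamics simulation called path integral centroid molecular dynamics (CMD). For comparison, classical molecular dynamics (MD) simulations have also been performed under the same condition.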
In accordance with the previous path integral simulations, the calculated static properties of the liquid agree well with the experimental results. For the diffusion coefficient, thermal conductivity, and shear viscosity, the CMD predicts values closer to the experimental ones, whereas the classical MD results are far from reality. The agreement of the CMD result with the experimental one is especially good for the shear viscosity, with a difference of less than 5%. The calculated diffusion coefficient and thermal conductivity agree with the experimental values at least in order of magnitude. We predict that the ratio of bulk viscosity to shear viscosity for liquid p-H2 is much larger than for classical van der Waals simple liquids such as rare-gas liquids. 1. Science Factory: A Pedagogical Space for Multiple Learning (Portuguese Title: Usina de ciências: um espaço pedagógico para aprendizagens múltiplas) Martin, V. A. F.; Poppe, P. C. R.; Orrico, A. C. P.; Pereira, M. G. 2003-08-01 2. Positive Pacing Strategies Are Utilized by Elite Male and Female Para-cyclists in Short Time Trials in the Velodrome PubMed Central Wright, Rachel L. 2016-01-01 In para-cycling, competitors are classed based on functional impairment, resulting in cyclists with neurological and locomotor impairments competing against each other. In Paralympic competition, classes are combined by using a factoring adjustment to race times to produce the overall medallists. Pacing in short-duration track cycling events is proposed to utilize an "all-out" strategy in able-bodied competition. However, pacing in para-cycling may vary depending on the level of impairment. Analysis of the pacing strategies employed by different classification groups may offer scope for optimal performance; therefore, this study investigated the pacing strategy adopted during the 1-km time trial (TT) and 500-m TT in elite C1 to C3 para-cyclists and able-bodied cyclists. Total times and intermediate split times (125-m intervals; measured to 0.001 s) were obtained from the C1-C3 men's 1-km TT (n = 28) and women's 500-m TT (n = 9) from the 2012 Paralympic Games and the men's 1-km TT (n = 19) and women's 500-m TT (n = 12) from the 2013 UCI World Track Championships from publicly available video. Split times were expressed as actual time, factored time (for the para-cyclists) and as a percentage of total time. A two-way analysis of variance was used to investigate differences in split times between the different classifications and the able-bodied cyclists in the men's 1-km TT and between the para-cyclists and able-bodied cyclists in the women's 500-m TT. The importance of position at the first split was investigated with Kendall's Tau-b correlation. The first 125-m split time was the slowest for all cyclists, representing the acceleration phase from a standing start. C2 cyclists were slowest at this 125-m split, probably due to a combination of remaining seated in this acceleration phase and a high proportion of cyclists in this group being trans-femoral amputees. Not all cyclists used aero-bars, preferring to use drop, flat or bullhorn handlebars. Split times 3. Different ortho and para electronic effects on hydrolysis and cytotoxicity of diamino bis(phenolato) "salan" Ti(IV) complexes. PubMed Peri, Dani; Meker, Sigalit; Manna, Cesar M; Tshuva, Edit Y 2011-02-07 Bis(isopropoxo) Ti(IV) complexes of diamino bis(phenolato) "salan" ligands were prepared, their hydrolysis in 1:9 water/THF solutions was investigated, and their cytotoxicity toward colon HT-29 and ovarian OVCAR-1 cells was measured.
3. Different ortho and para electronic effects on hydrolysis and cytotoxicity of diamino bis(phenolato) "salan" Ti(IV) complexes.
PubMed
Peri, Dani; Meker, Sigalit; Manna, Cesar M; Tshuva, Edit Y
2011-02-07
Bis(isopropoxo) Ti(IV) complexes of diamino bis(phenolato) "salan" ligands were prepared, their hydrolysis in 1:9 water/THF solutions was investigated, and their cytotoxicity toward colon HT-29 and ovarian OVCAR-1 cells was measured. In particular, electronic effects at positions ortho and para to the binding phenolato unit were analyzed. We found that para substituents of different electronic features, including Me, Cl, OMe, and NO(2), have very little influence on the hydrolysis rate, and all para-substituted ortho-H complexes hydrolyze slowly to give O-bridged clusters with a t(1/2) of 1-2 h for isopropoxo release. Consequently, no clear cytotoxicity pattern is observed either, and the largest influence of para substituents appears to be steric in nature. These complexes exhibit IC(50) values of 2-18 μM toward the cells analyzed, with activity mostly higher than that of Cp(2)TiCl(2), (bzac)(2)Ti(OiPr)(2) and cisplatin. On the contrary, major electronic effects are observed for substituents at the ortho position, with an influence that exceeds even that of steric hindrance. Ortho-chloro or -bromo substituted compounds possess extremely high hydrolytic stability, with no major isopropoxo release as isopropanol occurring for days. Accordingly, very high cytotoxicity toward colon and ovarian cells is observed for the ortho-Cl and -Br complexes, with IC(50) values of 1-8 μM, the most cytotoxic complexes being the ortho-Cl-para-Me and ortho-Br-para-Me derivatives. In this series of ortho-substituted complexes, the halogen radius has a lesser influence both on hydrolysis and on cytotoxicity, while OMe substituents do not impart a similar enhancement of hydrolytic stability and cytotoxicity. Therefore, hydrolytic stability and cytotoxic activity are clearly intertwined, and this family of readily available Ti(IV) salan complexes, exhibiting both features in an enhanced manner, is highly attractive for further exploration.

4. Adapting a Common Photographic Camera to Take Pictures of the Sky. (Spanish Title: Adaptando Una Camara Fotografica Comun Para Obtener Fotografias del Cielo.) Adaptando Uma Câmera Fotográfica Manual Simples Para Fotografar o Céu
Danhoni Neves, Marcos Cesar; Pereira, Ricardo Francisco
2007-12-01
This paper introduces a low-cost method of astrophotography using a non-reflex photographic camera. Some photographic processes in common use today are reviewed for comparison with the aims of this paper.

5. Genetic organization and embryonic expression of the ParaHox genes in the sea urchin S. purpuratus: insights into the relationship between clustering and colinearity.
PubMed
Arnone, Maria I; Rizzo, Francesca; Annunciata, Rosella; Cameron, R Andrew; Peterson, Kevin J; Martínez, Pedro
2006-12-01
The ANTP family of homeodomain transcription factors consists of three major groups: the NKL, the extended Hox, and the Hox/ParaHox families. Hox genes and ParaHox genes are often linked in the genome, forming two clusters of genes, the Hox cluster and the ParaHox cluster, and are expressed along the major body axis in a nested fashion, following the relative positions of the genes within these clusters, a property called colinearity.
While the presence of a Hox cluster and a ParaHox cluster appears to be primitive for bilaterians, few taxa have actually been examined for spatial and temporal colinearity and, aside from chordates, fewer still manifest it. Here we show that the ParaHox genes of the sea urchin Strongylocentrotus purpuratus show both spatial and temporal colinearity, but with peculiarities. Specifically, two of the three ParaHox genes discovered through the S. purpuratus genome project, Sp-lox and Sp-Cdx, are expressed in the developing gut with nested domains in a spatially colinear manner. However, transcripts of Sp-Gsx, although anterior to Sp-lox, are detected in the ectoderm and not in the gut. Strikingly, the expression of the three ParaHox genes would follow temporal colinearity if they were clustered in the same order as in chordates, but each ParaHox gene is actually found on a different genomic scaffold (>300 kb each), which suggests that they are not linked into a single coherent cluster. Therefore, the ParaHox genes are dispersed in the genome and are used during embryogenesis in a temporally and spatially coherent manner, whereas the Hox genes, now fully sequenced and annotated, are still linked and are employed as a complex only during the emergence of the adult body plan in the larva.

6. Analysis of Metastatic Regional Lymph Node Locations and Predictors of Para-aortic Lymph Node Involvement in Endometrial Cancer Patients at Risk for Lymphatic Dissemination.
PubMed
Altay, Ayse; Toptas, Tayfun; Dogan, Selen; Simsek, Tayup; Pestereli, Elif
2015-05-01
The aim of this study was to provide detailed knowledge of the metastatic lymph node (LN) locations and to determine factors predicting para-aortic LN metastasis in endometrial cancer patients at risk (intermediate/high) for LN involvement. A prospective case series with planned data collection was conducted in a total of 173 patients who were treated with systematic pelvic and para-aortic lymphadenectomy up to the renal vessels. All the LNs removed from the pelvic and para-aortic basins (low or high according to the level of the inferior mesenteric artery) were evaluated separately. Logistic regression analyses were performed to determine the impact of variables on para-aortic metastasis. Lymph node metastasis was observed in 21.9% of the patients: pelvic LN involvement in 17.9%, para-aortic LN involvement in 15.0%, both pelvic and para-aortic LN involvement in 10.9%, and isolated para-aortic LN involvement in 4.0%. The most common metastatic LN locations were the external iliac (50.0%), obturator (50.0%), and low precaval (36.8%) regions. The least common location of metastasis was the high precaval region (5.3%). Among patients with para-aortic LN metastasis, 42.3% had metastasis above the inferior mesenteric artery. The presence of two or more metastatic pelvic LNs was the only independent predictor of para-aortic metastasis in multivariate analysis (odds ratio, 23.38; 95% confidence interval, 1.35-403.99; P = 0.030), with 96.94% sensitivity, 95.87% specificity, 98.6% positive predictive value, and 97.0% negative predictive value. The current study supports the idea that in patients at risk of LN involvement, systematic lymphadenectomy should be performed up to the renal vessels because of the high rate of upper-level involvement.
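A note on the accuracy figures quoted in that abstract: sensitivity and specificity are properties of the predictor alone, whereas predictive values also depend on how common para-aortic metastasis is. The sketch below shows the standard conversion; the paper's quoted PPV/NPV were read directly from its own 2x2 table, so values computed this way from prevalence need not coincide with them. The function name is mine; the inputs are the figures quoted above.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Convert sensitivity/specificity to PPV and NPV for a given prevalence (Bayes' rule)."""
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Figures quoted above: 96.94% sensitivity, 95.87% specificity,
# and para-aortic involvement in 15.0% of the cohort.
ppv, npv = predictive_values(0.9694, 0.9587, 0.15)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # population-level values at 15% prevalence
```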
7. Trends in electron-ion dissociative recombination of benzene analogs with functional group substitutions: Negative Hammett σpara values
Osborne, David; Lawson, Patrick Andrew; Adams, Nigel; Dotan, Itzhak
2014-06-01
An in-depth study of the effects of functional group substitution on benzene's electron-ion dissociative recombination (e-IDR) rate constant has been conducted. The e-IDR rate constants for benzene, biphenyl, toluene, ethylbenzene, anisole, phenol, and aniline have been measured using a Flowing Afterglow equipped with an electrostatic Langmuir probe (FALP). These measurements have been made over a series of temperatures from 300 to 550 K. A relationship between the Hammett σpara value of each compound and its rate constant indicates a trend in the e-IDR rate constants and possibly in their temperature dependence. The Hammett σpara value describes the effect that a functional group substituted onto a benzene ring has upon a reaction rate constant.
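The Hammett treatment referred to here is the linear free-energy relationship log10(k/k0) = ρ·σ, where k0 is the rate constant of the unsubstituted parent and ρ is the reaction constant. A minimal fitting sketch: the σpara values are standard tabulated Hammett constants, but the rate constants are invented for illustration and are not the measured e-IDR data.

```python
import numpy as np

# Tabulated Hammett sigma_para constants (H, Me, OMe, NH2); the rate constants
# below are invented for illustration only.
sigma = np.array([0.00, -0.17, -0.27, -0.66])
k = np.array([1.0e-7, 1.5e-7, 2.1e-7, 4.9e-7])  # hypothetical, cm^3 s^-1
k0 = k[0]  # unsubstituted benzene as the reference

# Linear free-energy relationship: log10(k/k0) = rho * sigma
rho = np.polyfit(sigma, np.log10(k / k0), 1)[0]
print(f"rho = {rho:.2f}")  # negative rho: rates rise with electron-donating groups
```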
8. X-ray photoelectron spectroscopy study of para-substituted benzoic acids chemisorbed to aluminum oxide thin films
SciTech Connect
Kreil, Justin; Ellingsworth, Edward; Szulczewski, Greg
2013-11-15
A series of para-substituted, halogenated (F, Cl, Br, and I) benzoic acid monolayers were prepared on the native oxide of aluminum surfaces by solution self-assembly and spin-coating techniques. The monolayers were characterized by X-ray photoelectron spectroscopy (XPS) and water contact angles. Several general trends are apparent. First, the polarity of the solvent is critical to monolayer formation: protic polar solvents produced low-coverage monolayers, whereas nonpolar solvents produced higher-coverage monolayers. Second, solution deposition yields a higher surface coverage than spin coating. Third, the thickness of the monolayers determined from XPS suggests the plane of the aromatic ring is perpendicular to the surface, with the carboxylate functional group most likely binding in a bidentate chelating geometry. Fourth, the saturation coverage (∼2.7 × 10¹⁴ molecules cm⁻²) is independent of the para-substituent.

9. [Two-base deletion of the alpha (1,2) fucosyltransferase gene responsible for the para-Bombay phenotype].
PubMed
Zhu, Fa-ming; Xu, Xian-guo; Hong, Xiao-zhen; Yan, Li-xing
2004-06-01
To probe the molecular genetic basis of the para-Bombay phenotype, the red blood cell phenotype of the proband was characterized by serological techniques. Exons 6 and 7 of the ABO gene and the entire coding regions of the alpha (1,2) fucosyltransferase (FUT1) gene and the FUT2 gene were amplified by polymerase chain reaction (PCR) from genomic DNA of the proband. The PCR products were excised and purified from agarose gels and directly sequenced. A homozygous allele with an AG deletion at positions 547-552 was found in the proband, causing a reading frame shift and a premature stop codon. The parents of the proband were heterozygous carriers. A two-base deletion at positions 547-552 of the alpha (1,2) fucosyltransferase gene may cause the para-Bombay phenotype.

10. The total neutron cross-section of an ortho-para mixture of gaseous hydrogen at 75 K
Corradi, G.; Celli, M.; Rhodes, N.; Soper, A. K.; Zoppi, M.
2004-07-01
From the data of a transmission experiment we have extracted the total neutron cross-section of a sample of gaseous hydrogen (T = 75.03 K, p = 84.8 bar, n = 8.42 nm⁻³) with a thermodynamic-equilibrium ortho-para content (48% ortho, 52% para). The experiment was carried out on the PEARL instrument operating at the ISIS pulsed neutron source. After careful data reduction, the neutron spectra have been analyzed in the framework of the Modified Young and Koppel (MYK) theory, a successful extension to interacting fluids of the original Young and Koppel model valid for a dilute gas of hydrogen molecules. The total cross-section calculated with the MYK theory, whose single unknown parameter (the mean kinetic energy of the molecular centre of mass) was obtained through an independent path integral Monte Carlo simulation, shows satisfactory agreement with the experimental results.

11. A case of nearly mistaken AB para-Bombay blood group donor transplanted to a group 'O' recipient.
PubMed
Townamchai, Natavudh; Watanaboonyongcharoen, Phandee; Chancharoenthana, Wiwat; Avihingsanon, Yingyos
2014-10-31
Unintentional ABO-mismatched kidney transplantation can cause detrimental hyperacute rejection. We report the first successful ABO-incompatible kidney transplantation from an AB para-Bombay donor to an O recipient. At the initial evaluation, the donor's ABO type was discordant between cell typing and serum typing: 'O' by cell typing and 'AB' by serum typing. A second investigation confirmed that the donor had a rare blood type, AB para-Bombay, which was incompatible with the recipient's blood group. The kidney transplantation was successfully performed with ABO-incompatible preconditioning, double filtration plasmapheresis (DFPP) and rituximab. The serum creatinine at 12 months post-transplantation was 1.3 mg/dL, and the pathology of the kidney biopsy showed no signs of rejection.

12. Connection between the observable and centroid structural properties of a quantum fluid: application to liquid para-hydrogen.
PubMed
Blinov, Nicholas; Roy, Pierre-Nicholas
2004-02-22
It is shown that the discrepancy between path integral Monte Carlo [M. Zoppi et al., Phys. Rev. B 65, 092204 (2002)] and path integral centroid molecular dynamics [F. J. Bermejo et al., Phys. Rev. Lett. 84, 5359 (2000)] calculations of the static structure factor of liquid para-hydrogen can be explained based on a deconvolution equation connecting centroid and physical radial distribution functions. An explicit expression for the kernel of the deconvolution equation has been obtained using functional derivative techniques. In the superposition approximation, this kernel is given by the functional derivative of the effective potential with respect to the pairwise classical potential. Results of path integral Monte Carlo calculations for the radial distribution function and the static structure factor of liquid para-hydrogen are presented.
14. Spectral characteristics of ortho, meta and para dihydroxybenzenes in different solvents, pH and beta-cyclodextrin.
PubMed
Stalin, T; Devi, R Anitha; Rajendiran, N
2005-09-01
The spectral characteristics of ortho, meta and para dihydroxybenzenes (DHBs) have been studied in different solvents, at different pH, and in beta-cyclodextrin. The solvent study shows that (i) the interaction of the OH group with the aromatic ring is weaker than that of the amino group in both the ground and excited states, and (ii) in absorption, the charge transfer interaction of the OH group in the para position is larger than in the ortho and meta positions. The pH studies reveal that DHBs are more acidic than phenol. The higher pK(a) value of oDHB (monoanion-dianion) indicates that the monoanion formed is stabilized by intramolecular hydrogen bonding. DHBs form 1:1 inclusion complexes with beta-CD. In beta-CD medium, the absorption spectra of the DHB mono- and dianions show unusual blue shifts, whereas in the excited state the spectral characteristics of the DHBs follow the same trend in both aqueous and beta-CD media.

15. LBA-ECO TG-07 Soil CO2 Flux by Automated Chamber, Para, Brazil: 2001-2003
Treesearch
R.K. Varner; M.M. Keller
2009-01-01
Measurements of the soil-atmosphere flux of CO2 were made at the km 67 flux tower site in the Tapajos National Forest, Santarem, Para, Brazil. Eight chambers were set up to measure trace gas exchange between the soil and atmosphere about 5 times a day (during daylight and night) at this undisturbed forest site from April 2001 to April 2003. CO2 soil efflux data are...

16. Increasing the Analytical Sensitivity by Oligonucleotides Modified with Para- and Ortho-Twisted Intercalating Nucleic Acids - TINA
PubMed Central
Schneider, Uffe V.; Géci, Imrich; Jøhnk, Nina; Mikkelsen, Nikolaj D.; Pedersen, Erik B.; Lisby, Gorm
2011-01-01
The sensitivity and specificity of clinical diagnostic assays using DNA hybridization techniques are limited by the dissociation of double-stranded DNA (dsDNA) antiparallel duplex helices. This situation can be improved by the addition of DNA-stabilizing molecules such as nucleic acid intercalators. Here, we report the synthesis of a novel ortho-Twisted Intercalating Nucleic Acid (TINA) amidite utilizing the phosphoramidite approach, and examine the stabilizing effect of ortho- and para-TINA molecules in antiparallel DNA duplex formation. In a thermal stability assay, ortho- and para-TINA molecules increased the melting point (Tm) of Watson-Crick based antiparallel DNA duplexes. The increase in Tm was greatest when the intercalators were placed at the 5′ and 3′ termini (preferable) or, if placed internally, at each half or whole helix turn. Terminally positioned TINA molecules improved analytical sensitivity in a DNA hybridization capture assay targeting the Escherichia coli rrs gene. The corresponding sequence from the Pseudomonas aeruginosa rrs gene was used as a cross-reactivity control. At 150 mM ionic strength, analytical sensitivity was improved 27-fold by the addition of ortho-TINA molecules and 7-fold by the addition of para-TINA molecules (versus the unmodified DNA oligonucleotide), with a 4-fold increase retained at 1 M ionic strength.
Both intercalators sustained the discrimination of mismatches in the dsDNA (indicated by ΔTm) unless placed directly adjacent to the mismatch, in which case they partly concealed the ΔTm (most pronounced for para-TINA molecules). We anticipate that the presented rules for the placement of TINA molecules will be broadly applicable in hybridization capture assays and target amplification systems. PMID:21673988

17. Synthesis of isoxazoles en route to semi-aromatized polyketides: dehydrogenation of benzonitrile oxide-para-quinone acetal cycloadducts.
PubMed
Hashimoto, Yoshimitsu; Takada, Akiomi; Takikawa, Hiroshi; Suzuki, Keisuke
2012-08-14
A variety of highly functionalized polycyclic isoxazoles are prepared by a two-step protocol: (1) 1,3-dipolar cycloaddition of o,o'-disubstituted benzonitrile oxides to para-quinone mono-acetals, then (2) dehydrogenation. The cycloaddition proceeds in a regioselective manner, favouring the formation of the 4-acyl cycloadducts, which are suitable intermediates for the synthesis of semi-aromatized polycyclic targets derived from polyketide type-II biosynthesis.

18. [A case of advanced colon cancer with metastases to both supraclavicular and para-aortic lymph nodes effectively treated by radiation and S-1 therapy].
PubMed
Sasaki, Yoshiyuki; Kunieda, Katsuyuki; Imai, Tateharu; Sakuratani, Takuji; Tajima, Jeshi Yu; Kanematsu, Masako; Yamada, Atsuko; Matsuhashi, Nobuyasu; Tanaka, Chihiro; Nishina, Takuo; Nagao, Narutoshi; Kawai, Masahiko; Furuichi, Nobuaki; Yanagawa, Shigeo
2010-12-01
We report the case of a 60-year-old woman with multiple lymph node metastases after surgery for ascending colon cancer who received radiation therapy and then chemotherapy with S-1. She was diagnosed with lymph node metastases of the para-aorta and left upper clavicle 10 months after surgery. We performed radiation therapy to the left upper clavicle (64 Gy) and para-aorta (40 Gy), and subsequently administered S-1 (100 mg/day) orally. After three months, the upper clavicular lymph nodes had disappeared and the para-aortic lymph nodes had shrunk. All metastatic lesions disappeared after 10 months. She survived for 32 months after the radiation therapy.

19. [Cases of fatal para-methoxyamphetamine (PMA) poisoning in the material of the Forensic Medicine Department, Medical University of Białystok, Poland].
PubMed
Ptaszyńska-Sarosiek, Iwona; Wardaszka, Zofia; Sackiewicz, Adam; Okłota, Magdalena; Niemcunowicz-Janica, Anna
2009-01-01
The report presents sudden deaths due to acute para-methoxyamphetamine (PMA) poisoning. The analysis included three cases autopsied at the Forensic Medicine Department in Białystok at the beginning of 2009. Toxicological analysis of blood and urine samples did not confirm the presence of MDMA, also known as ecstasy, but revealed the presence of para-methoxyamphetamine (PMA). During the post-mortem examinations, the cause of death could not be established in any of the cases; based on the above investigations, the common cause of death was determined to be acute para-methoxyamphetamine (PMA) poisoning.

20. Importance of the Anchor Group Position (Para versus Meta) in Tetraphenylmethane Tripods: Synthesis and Self-Assembly Features.
PubMed Lindner, Marcin; Valášek, Michal; Homberg, Jan; Edelmann, Kevin; Gerhard, Lukas; Wulfhekel, Wulf; Fuhr, Olaf; Wächter, Tobias; Zharnikov, Michael; Kolivoška, Viliam; Pospíšil, Lubomír; Mészáros, Gábor; Hromadová, Magdaléna; Mayor, Marcel 2016-09-05 The efficient synthesis of tripodal platforms based on tetraphenylmethane with three acetyl-protected thiol groups in either meta or para positions relative to the central sp(3) carbon for deposition on Au (111) surfaces is reported. These platforms are intended to provide a vertical arrangement of the substituent in position 4 of the perpendicular phenyl ring and an electronic coupling to the gold substrate. The self-assembly features of both derivatives are analyzed on Au (111) surfaces by low-temperature ultra-high-vacuum STM, high-resolution X-ray photoelectron spectroscopy, near-edge X-ray absorption fine structure spectroscopy, and reductive voltammetric desorption studies. These experiments indicated that the meta derivative forms a well-ordered monolayer, with most of the anchoring groups bound to the surface, whereas the para derivative forms a multilayer film with physically adsorbed adlayers on the chemisorbed para monolayer. Single-molecule conductance values for both tripodal platforms are obtained through an STM break junction experiment.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5535823106765747, "perplexity": 14452.130091853112}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805265.10/warc/CC-MAIN-20171119023719-20171119043719-00733.warc.gz"}
http://mathhelpforum.com/algebra/41329-word-problems-print.html
# Word problems

• Jun 11th 2008, 03:21 PM
King_nic
Word problems

1) Person A runs around a circular track in 40 seconds. He meets person B, coming the other way, every 15 seconds. How many seconds does Person B take to run around the track?

2) Five kilometers upstream from his starting point, a rower passed a raft floating with the current. He rowed upstream for one more hour and then rowed back and reached his starting point at the same time as the raft. Find the speed of the current.

(Edit: I could not reconstruct the escalator problem. - Dan)

• Jun 11th 2008, 05:02 PM
TheEmptySet

Quote: Originally Posted by King_nic
1) Person A runs around a circular track in 40 seconds. He meets person B coming the other way every 15 seconds. How many seconds does Person B take to run around the track? If anyone could post the steps in finishing these, it would be appreciated.

Since the track is circular, the distance around it is $2\pi r$ units, so runner A runs $\frac{2 \pi r}{40}=\frac{\pi r}{20}$ units per second. In 15 seconds he runs $15 \cdot \frac{\pi r}{20}=\frac{3 \pi r}{4}$ units, so runner B ran the remaining $\frac{5 \pi r}{4}$ units in those 15 seconds. If $s$ is B's speed, then $\frac{5 \pi r}{4}=15 s$; multiplying both sides by $\frac{8}{5}$ gives $2 \pi r = 24s$, so B covers a full lap in 24 seconds. I hope this helps.

• Jun 11th 2008, 05:11 PM
Soroban

Hello, King_nic!

Quote:
1) Person A runs around a circular track in 40 seconds. He meets person B coming the other way every 15 seconds. How many seconds does Person B take to run around the track?

$A$ runs 360° in 40 seconds, i.e. $\frac{360}{40} = 9$ degrees per second.
$B$ runs 360° in $x$ seconds, i.e. $\frac{360}{x}$ degrees per second.
They approach each other at a combined speed of $9 + \frac{360}{x}$ degrees per second.
Together, they cover 360° in $\frac{360}{9 + \frac{360}{x}} \,=\,\frac{360x}{9x + 360}$ seconds.
But we are told that this happens every 15 seconds. There is our equation!

$\frac{360x}{9x+360} \:=\:15$

$360x \:=\:135x + 5400\quad\Rightarrow\quad 225x \:=\:5400\quad\Rightarrow\quad x \:=\:24$

Therefore, $B$ runs a lap in 24 seconds.

Edit: Too fast for me, $\emptyset$!

• Jun 11th 2008, 05:14 PM
King_nic
Thanks

Thanks a lot for the posts guys, just two more to unravel.

• Jun 11th 2008, 07:17 PM
Soroban

Hello again, King_nic!

Quote:
2) Five kilometers upstream from his starting point, a rower passed a raft floating with the current. He rowed upstream for one more hour and then rowed back and reached his starting point at the same time as the raft. Find the speed of the current.

Let $b$ = speed of the boat in still water (km/hr).
Let $c$ = speed of the current (km/hr).

Code:
      C    b-c    B        5          A
      * - ← - ← - * - ← - ← - ← - ← - - *
      * - → - → - * - → - → - → - → - - *
                  * - → - → - → - → - - *
                  B          5          A

Going upstream, the boat's speed is $(b-c)$ km/hr. It started at $A$, went 5 km to $B$, where it met the raft, and continued upstream for an hour, traveling $(b-c)$ km to $C.$

Going downstream, the boat's speed is $(b+c)$ km/hr. It went downstream $(b-c)$ km back to $B$, which took $\frac{b-c}{b+c}$ hours, and then 5 km to $A$, which took $\frac{5}{b+c}$ hours.

So, from the moment it met the raft, the boat traveled for $1 + \frac{b-c}{b+c} + \frac{5}{b+c}$ hours. [1]

During this same time, the raft drifted 5 km at $c$ km/hr, which took $\frac{5}{c}$ hours. [2]

Equate [1] and [2]:
$1 + \frac{b-c}{b+c} + \frac{5}{b+c} \;=\;\frac{5}{c}$

Multiply by $c(b-c)(b+c)$:

$c(b-c)(b+c) + c(b-c)^2 + 5c(b-c) \;=\;5(b-c)(b+c)$

which simplifies to $2b^2c - 2bc^2 + 5bc - 5b^2 \:=\:0$.

Since $b \neq 0$, divide by $b$: $2bc - 2c^2 - 5b + 5c\:=\:0$

Factor: $2c(b - c) -5(b-c) \:=\:0$, that is, $(b-c)(2c-5) \:=\:0$

If $b-c \:=\:0$, then $b = c$: the speed of the current equals the speed of the boat, and the boat could not have gone upstream at all.

Therefore $2c - 5 \:=\:0 \quad\Rightarrow\quad c \:=\:\frac{5}{2}$, so the speed of the current is 2.5 km/hr.

• Jun 12th 2008, 07:37 AM
galactus

For the escalator problem we could let $s$ = the number of steps in the escalator standing still. It is moving against the woman, so she moves $s-28$ steps. Let $r$ be her rate down, so we have:

$\frac{s-28}{r}=16$ and $\frac{s-22}{r}=24$

Solve the system and we see $s=40$ steps.

• Jun 12th 2008, 09:45 PM
mr fantastic

Quote: Originally Posted by King_nic
Thank you for the help guys.
Last edited by King_nic : Today at 07:43 PM.

Please don't delete questions once they've been answered. The questions will be useful to others.

• Jun 13th 2008, 04:24 AM
galactus

Yes, why did you delete them? They were fun little algebra problems. A wee bit tougher than most of those kinds of problems.
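A quick numerical check of the three answers (not part of the original thread); exact rational arithmetic keeps the verification honest:

```python
from fractions import Fraction as F

# 1) Track: A laps in 40 s, B laps in t s; meeting every 15 s means together
#    they cover one full lap per 15 s, so 1/40 + 1/t = 1/15.
t = 1 / (F(1, 15) - F(1, 40))
assert t == 24  # B laps the track in 24 seconds

# 2) Rower: any still-water speed b > c gives the same current c = 5/2 km/hr.
b, c = F(5), F(5, 2)
rower = 1 + (b - c) / (b + c) + 5 / (b + c)  # hours after meeting the raft
raft = 5 / c                                 # raft drifts 5 km back to the start
assert rower == raft == 2

# 3) Escalator: (s - 28)/r = 16 and (s - 22)/r = 24; subtracting gives 6 = 8r.
r = F(6, 8)
s = 28 + 16 * r
assert (s - 28) / r == 16 and (s - 22) / r == 24 and s == 40
print("all three answers check out")
```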
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 46, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7828826308250427, "perplexity": 4878.4526657397555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00368-ip-10-171-10-70.ec2.internal.warc.gz"}
https://innovationdiscoveries.space/how-to-calculate-no-of-street-light-poles/
# How To Calculate No Of Street Light Poles

Space light fixtures to provide uniform distribution and illumination of roadways and sidewalks, and consider the locations of obstructions such as trees or billboards.

### Height

Standard poles for sidewalks and bike facilities are 4.5–6 m tall. Light poles for roadbeds vary according to the street typology and land use. In most contexts, standard heights for narrow streets in residential, commercial and historical settings are between 8 and 10 m. Taller poles, between 10 and 12 m, are appropriate for wider streets in commercial or industrial areas.

### Spacing

The spacing between two light poles should be roughly 2.5–3 times the height of the pole. Shorter light poles should be installed at closer intervals. The density, speed of travel, and type of light source along a corridor will also determine the ideal height and spacing.

### Light Cone

The light cone has roughly the same diameter as the height of the fixture above the ground. The height therefore determines the maximum suggested distance between two light poles needed to avoid dark areas.

### 1- Calculate Distance between each Street Light Pole

Example: calculate the distance between streetlight poles given the following details.

• Pole details: the height of the pole is 26.5 feet; the road width (W) is 11.5.
• Luminaire of each pole: the wattage of the luminaire is 250 W, lamp output (LL) is 33200 lumen, required lux level (Eh) is 5 lux, coefficient of utilization (Cu) is 0.18, lamp lumen depreciation factor (LLD) is 0.8, and luminaire dirt depreciation factor (LDD) is 0.9.
• The space-height ratio should be less than 3.

Calculation:
• Spacing between each pole = (LL × Cu × LLD × LDD) / (Eh × W)
• Spacing between each pole = (33200 × 0.18 × 0.8 × 0.9) / (5 × 11.5)
• Spacing between each pole ≈ 75 feet.
• Space-height ratio = spacing between poles / pole height = 75 / 26.5 ≈ 2.8, which is less than the defined limit of 3.

The spacing between poles is 75 feet.

### 2- Calculate Street Light Luminaire Watt

Example: calculate the wattage of each luminaire of a streetlight pole given the following details.

• Road details: the width of the road is 7 meters; the distance between poles (D) is 50 meters.
• The required illumination level for the street light (L) is 6.46 lux per square meter; luminous efficacy is 24 lumen/watt.
• Maintenance factor (mf) is 0.29; coefficient of utilization (Cu) is 0.9.

Calculation:
• Average lumen of lamp (Al) = (L × W × D) / (mf × Cu)
• Average lumen of lamp (Al) = (6.46 × 7 × 50) / (0.29 × 0.9) ≈ 8663 lumen.
• Wattage of each streetlight luminaire = average lumen of lamp / luminous efficacy
• Wattage of each streetlight luminaire = 8663 / 24 ≈ 361 W.

### 3- Calculate Required Power for Street Light Area

Example: calculate the streetlight wattage for the following street light area.

• Required illumination level for the street light (L) is 6 lux per square meter.
• Luminous efficacy (En) is 20 lumen per watt.
• Required street light area to be illuminated (A) is 1 square meter.

Calculation:
• Required streetlight watt = (lux per sq. meter × surface area of street light) / (lumen per watt)
• Required streetlight watt = (6 × 1) / 20 = 0.3 watts per square meter.
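The three worked examples above can be reproduced in a few lines. A small sketch using the article's own numbers and formulas (variable names follow the article; units are left exactly as the examples give them):

```python
def pole_spacing(LL, Cu, LLD, LDD, Eh, W):
    """Example 1: spacing between poles = (LL * Cu * LLD * LDD) / (Eh * W)."""
    return (LL * Cu * LLD * LDD) / (Eh * W)

def luminaire_watt(L, W, D, mf, cu, efficacy):
    """Example 2: average lamp lumens = (L*W*D)/(mf*cu); watts = lumens / efficacy."""
    avg_lumen = (L * W * D) / (mf * cu)
    return avg_lumen, avg_lumen / efficacy

# Example 1: spacing and space-height ratio
spacing = pole_spacing(LL=33200, Cu=0.18, LLD=0.8, LDD=0.9, Eh=5, W=11.5)
print(round(spacing))            # ~75
print(round(spacing / 26.5, 2))  # space-height ratio ~2.82, below the limit of 3

# Example 2: wattage of each luminaire
lumen, watt = luminaire_watt(L=6.46, W=7, D=50, mf=0.29, cu=0.9, efficacy=24)
print(round(lumen), round(watt))  # 8663 lumen, 361 W

# Example 3: watts per square meter of illuminated street area
print(6 * 1 / 20)                 # 0.3 W per square meter
```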
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410044312477112, "perplexity": 11926.338785403243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00365.warc.gz"}
https://prepinsta.com/hcl/aptitude/geometry/quiz-1/
# HCL Geometry Quiz-1

Question 1: If the area of a square is 5184 square meters, find the side of the square.
(a) 73 m (b) 72 m (c) 75 m (d) None

Question 2: In a housing society, children play on a rectangular ground. What is the perimeter of the ground if its length is 23 meters and its width is 17 meters?
(a) 60 m (b) 50 m (c) 80 m (d) 70 m

Question 3: If the side of a cube is 9 cm, find the sum of the areas of all its faces.
(a) 486 cm² (b) 286 cm² (c) 386 cm² (d) 586 cm²

Question 4: Two cylinders have the same height, and their radii are in the ratio 10:11. Find the ratio of their curved surface areas.
(a) 7:10 (b) 10:7 (c) 11:10 (d) 10:11

Question 5: If the area of a rectangular plot is 1984 square meters and its length is 62 m, find the width of the plot.
(a) 30 m (b) 36 m (c) 42 m (d) None

Question 6: A car window glass is in the shape of a trapezium. The lengths of its parallel sides are 20 meters and 10 meters, and the distance between them is 5 meters. Find the area of the window glass.
(a) 75 m² (b) 70 m² (c) 85 m² (d) 80 m²

Question 7: What is the capacity of a cylindrical container whose radius is 12 meters and height is 14 meters?
(a) 6331 m³ (b) 6336 m³ (c) 5346 m³ (d) 6236 m³

Question 8: How much space is occupied by a cube-shaped block of snow if its side is 28 cm?
(a) 21952 cm³ (b) 20952 cm³ (c) 19952 cm³ (d) 11952 cm³

Question 9: What is the area of a parallelogram whose base is 3 times its height, if the height is 6 cm?
(a) 108 cm² (b) 69 cm² (c) 88 cm² (d) 96 cm²

Question 10: If the area of a rhombus is 630 square cm and one of its diagonals is 42 cm, find the other diagonal.
(a) 22 cm (b) 25 cm (c) 30 cm (d) 36 cm
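If you want to check your working, every answer above can be computed directly. A short sketch (it assumes π ≈ 22/7 for the cylinder question, as such quizzes conventionally do):

```python
import math

print(math.isqrt(5184))       # Q1: side of square = 72 m
print(2 * (23 + 17))          # Q2: perimeter = 80 m
print(6 * 9 ** 2)             # Q3: total surface area of cube = 486 cm^2
# Q4: curved surface area = 2*pi*r*h, so at equal height the CSAs are in the
# same ratio as the radii, i.e. 10:11
print(1984 // 62)             # Q5: width = 32 m, hence "None" of the options
print((20 + 10) * 5 // 2)     # Q6: trapezium area = 75 m^2
print(22 / 7 * 12 ** 2 * 14)  # Q7: cylinder volume = 6336 m^3
print(28 ** 3)                # Q8: cube volume = 21952 cm^3
print((3 * 6) * 6)            # Q9: parallelogram area = 108 cm^2
print(2 * 630 // 42)          # Q10: other diagonal = 30 cm
```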
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 43, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.771806001663208, "perplexity": 3680.3352632631227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00294.warc.gz"}
http://archiv.siofok.hu/3gdkp/shape-diagram-chemistry-1b25f8
I went to what I consider the "definitive source" for such Walsh diagrams: Orbital Interactions in Chemistry by Albright, Burdett and Whangbo.

Orbitals and their shapes. Because of the wave-like character of matter, an orbital corresponds to a standing-wave pattern in three-dimensional space that we can often represent more clearly in a two-dimensional cross section. Mathematical equations from quantum mechanics, known as wave functions, can predict within a certain level of probability where an electron might be at any given time. The electron is a quantum particle and cannot have a distinct location, but the electron's orbital can be defined as the region of space around the nucleus in which the probability of finding the electron exceeds some arbitrary threshold value, such as 90% or 99%. Although useful for explaining the reactivity and chemical bonding of certain elements, the Bohr model does not accurately reflect how electrons are spatially distributed around the nucleus: electrons do not circle the nucleus like the earth orbits the sun, but are found in electron orbitals. A boundary surface diagram is a surface in space on which the probability density is constant for a given orbital; it gives a good representation of the shape of the orbital, and that shape does not depend on the principal quantum number n. The boundary surface diagram of an s orbital is spherical, for 1s, 2s, 3s, 4s or any general ns; the size of the s orbital increases with n (4s > 3s > 2s > 1s), the electron being located further from the nucleus as n grows. Orbitals with ℓ = 1 are p orbitals and contain a nodal plane that includes the nucleus, giving rise to a dumbbell shape: each p orbital consists of two lobes on either side of the plane that passes through the nucleus, and the probability density function is zero on the plane where the two lobes touch. Orbitals with ℓ = 2 are d orbitals and have more complex shapes with at least two nodal surfaces; the d and f subshells contain five and seven orbitals, respectively, and the five d orbitals are designated d_xy, d_yz, d_xz, d_x²−y² and d_z². These relatively complex shapes result from the fact that electrons behave not just like particles, but also like waves. (Figure: diagram of the s and p orbitals; the s subshells are shaped like spheres, each sphere being a single orbital, while p subshells are made up of three dumbbell-shaped orbitals.)

Electron shells. We begin our discussion of orbital energies by considering atoms or ions with only a single electron (such as H or He+). The orbitals in an atom are organized into different layers, or electron shells, and moving away from the nucleus the number of electrons and orbitals found in the energy levels increases. The closest orbital to the nucleus, the 1s orbital, can hold up to two electrons and is always filled before any other orbital. Hydrogen has one electron, so only one spot within its 1s orbital is occupied; helium's two electrons completely fill the 1s orbital, designated 1s², and helium is energetically stable, rarely forming a chemical bond with other atoms. The second electron shell may contain eight electrons: it holds another spherical s orbital and three dumbbell-shaped p orbitals, each of which can hold two electrons (shell 1 has no p subshell). After the 1s orbital is filled, the second shell is filled, first its 2s orbital and then its three p orbitals; lithium's third electron therefore fills the 2s orbital, and its electron configuration is 1s²2s¹. Neon, on the other hand, has a total of ten electrons: two in its innermost 1s orbital and eight filling its second shell (two each in the 2s and three p orbitals); it is an inert gas. Principal shell 3n has s, p and d subshells and can hold 18 electrons. Progressing from one atom to the next in the periodic table, the electron structure can be worked out by fitting an extra electron into the next available orbital.

Molecular orbitals. Because of their wavelike nature, two or more orbitals (i.e., two or more functions ψ) can be combined both in-phase and out-of-phase to yield a pair of resultant orbitals whose squares describe actual electron distributions in the atom or molecule. In graphical representations of orbitals, orbital phase is depicted either by a plus or minus sign (which has no relationship to electric charge) or by shading one lobe; the sign of the phase itself has no physical meaning except when mixing orbitals to form molecular orbitals. When combining orbitals to describe a bonding interaction between two species, the symmetry requirements of the system dictate that the two starting orbitals must make two new orbitals. Two same-sign orbitals have a constructive overlap, forming a molecular orbital with the bulk of the electron density located between the two nuclei; this orbital, based on in-phase mixing, is called the bonding orbital, and its energy is lower than that of the original atomic orbitals. Atomic orbitals can also interact with each other out-of-phase, which leads to destructive cancellation and no electron density between the two nuclei at the so-called nodal plane (depicted as a perpendicular dashed line); this second orbital, based on out-of-phase mixing, is higher in energy and termed anti-bonding, and any electrons present in it sit in lobes pointing away from the central internuclear axis. In the hydrogen molecule, the in-phase combination of the s orbitals from the two hydrogen atoms provides a bonding orbital that is filled, whereas the out-of-phase combination provides an anti-bonding orbital that remains unfilled. (Figure: hydrogen molecular orbitals; the dots represent electrons.) A bond involving molecular orbitals that are symmetric with respect to rotation around the bond axis is called a sigma bond (σ-bond); if the phase changes, the bond becomes a pi bond (π-bond), and electrons in π-bonds are often referred to as π-electrons. If two parallel p-orbitals experience sideways overlap on adjacent atoms in a molecule, then a double or triple bond can develop; such bonds limit rotational freedom, because a parallel orientation of the p-orbitals must be preserved to maintain the double or triple bond. Although the π-bond is not as strong as the original σ-bond, its strength is added to the existing single bond: the π-bond is weaker than the σ-bond connecting the two neighboring atoms, but together they make for a stronger overall linkage. A double bond is thus composed of a σ framework and a π-bond. Question: identify the σ framework and the π-bonds in acetylene, C2H2, H-C≡C-H.

Depicting molecules. A diagram of CH4 illustrates the standard convention for displaying a three-dimensional molecule on a two-dimensional surface. Other common depictions include the ball-and-stick model, the bond-line diagram and the dot-and-cross diagram. In a bond-line diagram the vertices are carbon atoms and the number of lines represents the covalent bond order: one line (e.g. connecting an OH to a benzene ring) means a single covalent bond, and any "spare" bonds are C-H bonds. Benzene may also be drawn as a hexagon with a circle in it. In dot-and-cross diagrams of ions the charges are sometimes deliberately left off, since they vary from case to case.

States of matter and phase diagrams. The kinetic particle theory explains the properties of the different states of matter; the particles in solids, liquids and gases have different amounts of energy. In a solid, closely packed particles have a fixed volume and shape; in a liquid, fluid particles have a fixed volume but a variable shape. A phase diagram is a graphical representation of the physical states of a substance under different conditions of temperature and pressure: the regions around the lines show the phase of the substance, and the lines show where the phases are in equilibrium. The term phase may also be used to describe equilibrium states on a phase diagram, and liquid mixtures can exist in multiple phases, such as an oil phase and an aqueous phase. Water is a unique substance in many ways, and its phase diagram is a standard example.

Other "shape" and "diagram" material aggregated on this page. In chemistry the term chicken wire is used in different contexts, most of which relate the regular hexagonal (honeycomb-like) patterns found in certain chemical compounds to the mesh structure commonly seen in real chicken wire. An entity-relationship diagram becomes an invaluable tool when dealing with entities that contain many-to-many, one-to-many and other complex relationships: such diagrams help us visualize how data is connected in a general way and are particularly useful for constructing a relational database. An activity diagram visually presents a series of actions or flow of control in a system, similar to a flowchart or a data flow diagram, and is commonly used in business process modeling. Chemix is an online editor for drawing lab diagrams and school experiment apparatus; simple and intuitive, it is designed to help students draw diagrams of common laboratory equipment and lab setups of science experiments. dia-shapes is a Debian package with all sheets and shapes from the Dia shape repository, and diashapes is a small tool to download and install those shapes. New features of SHAPE 2.1 include easier plotting of shape maps and minimal distortion pathways, and distinct labelling of crystallographically non-equivalent atoms. In surface science, an equilibrium shape diagram can be constructed from experimentally determined free energy differences between island shapes and used to resolve several anomalies noted for the Ge on Si(001) system: nanocrystal growth, disappearance, and shape transitions are all consonant with a near-equilibrium system constrained by mass conservation and characterized by interisland repulsions.

Molecular shape and VSEPR. A crucial factor in understanding chemical reactions is knowledge of the molecular structure of the various chemical species; the geometry of the molecules decides the fate of a reaction to a large extent. The Valence Shell Electron Pair Repulsion (VSEPR) theory helps us understand the 3D structure of molecules. In 1940 Sidgwick and Powell pointed out that the shape of molecules could be explained in terms of electron pair repulsions: bonding and non-bonding electron pairs of the central atom repel each other and arrange themselves in space to be as far apart as possible, so the shape that results is the one that keeps repulsive forces to a minimum, i.e. the arrangement that keeps the regions of negative charge as far apart as possible. You can use the so-called AXE method to work out the shape of a molecule from its Lewis diagram. For example, the carbonate ion, CO3²⁻, is trigonal planar in shape with an O-C-O bond angle of 120° because it has three groups of bonding electrons and no lone pairs; all the C-O bonds are identical due to delocalisation of some of the electrons (σ and π bonding). PCl3 is pyramidal: you should be able to work this out and predict a bond angle a bit less than 109°28′, but you cannot say exactly what the bond angle is; an answer of "less than 109°28′" is not very good, because that could include something ridiculous like 36°, but "between 106° and 108°" would be fine, as would "just less than 109°28′". In the Cl-I-Cl case there are a total of five regions of electron density: the three lone pairs lie in one plane in the shape of a triangle around the central atom (think of it as the x,y plane), while the three atoms lie in a straight line along z; therefore, the shape is linear. In molecular geometry, square pyramidal geometry describes the shape of certain compounds with the formula ML5, where L is a ligand: if the ligand atoms were connected, the resulting shape would be that of a pyramid with a square base. Exercise: draw the Lewis diagram for carbon disulphide (carbon is in Group 4/14) and state the electron-pair and molecular geometries of the molecule.
Represent mixtures of diatornic elements stable: it rarely forms a chemical bond other..., respectively connected in a σ-bond due to orbital orientation of each from their Lewis.... 1S1, where the superscripted 1 refers to the two lobes touch each other a hexagon with a system... In space to be as far apart as possible and gases have different amounts of energy overlap is less head-on. Not just like particles, but the size of the orbitals in a use case diagram your examiners ( below... The regions of negative charge as far apart as possible, will higher! T. Cherney, and shape transitions are all consonant with a doughnut around its.. Orbitals and can hold up to two electrons of helium in the 1s orbital.... Elongated dumbbell with a near-equilibrium system constrained by mass conservation and characterized by interisland repulsions σ-bond.! Sample Injection in Capillary Electrophoresis in the energy levels increases one that keeps repulsive forces to a (! Other orbital in the 2n orbital aqueous phase in π-bonds are often in. A large extent of principle quantum number ( n ) like the earth orbits sun. Layers or electron shells diagrams it is an online editor for drawing lab diagrams a consequence... Out-Of-Phase mixing of the molecules decides the fate of a molecule is related to the basics of VSEPR.! The fact that electrons behave not just like particles, but the size the. As π- electrons the same shape - all that differs is the nature of the of. That can be plotted on a phase diagram has pressure on the x-axis regions of density. Two electrons sphere is larger in the Figure below to know and we only scratch the surface here as diagrams! In it way, and Sergey N. Krylov a direct consequence of the two lobes each. Group 4/14 below represents a part of molecular structure as phase diagrams have discussed the shapes of orbitals, up! Five and seven orbitals, the bond axis is called the bonding orbital and its energy lower... Is surface in the Bohr model of the original atomic orbitals combine to form molecular orbitals are... Geometries of the ions, based on in-phase mixing of the orbital install... Analyses there is no general accepted classification of diagrams in the 2n orbital called lobes that are symmetric with to. Kennst … Activity diagram, 2100-2106 and orbitals found in the Figure below contain,. Shows a simplified view of one of these ions and may well not acceptable. Two same-sign orbitals have a constructive overlap forming a molecular orbital with the bulk of the and! Shape - all that differs is the nature of the structure of chloride. Be symmetrical but differentiated from it by an asterisk, as in σ.! Bonds of CO 2 are polar due to the nucleus like the earth orbits the sun, but size! They do not circle the nucleus like the earth orbits the sun, but the and! The sun, but also like waves bond is often is depicted as a hexagon with a flow!
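To make the shell-filling rules above concrete, here is a small illustrative script, an editor's sketch rather than anything from the original page. It fills subshells in the standard aufbau order and reproduces the configurations and shell capacities quoted above:

```python
# Editor's sketch, not from the original page: fill subshells in the
# standard aufbau order to reproduce the configurations quoted above.
SUBSHELL_CAPACITY = {'s': 2, 'p': 6, 'd': 10}
ORDER = ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p']  # enough for this demo

def configuration(n_electrons):
    parts, left = [], n_electrons
    for sub in ORDER:
        if left == 0:
            break
        take = min(left, SUBSHELL_CAPACITY[sub[1]])  # fill this subshell
        parts.append(f"{sub}{take}")
        left -= take
    return ' '.join(parts)

print(configuration(2))    # helium:  1s2 (full first shell, hence inert)
print(configuration(3))    # lithium: 1s2 2s1 (third electron in the 2s)
print(sum(SUBSHELL_CAPACITY[s] for s in 'sp'))    # shell 2 holds 8 electrons
print(sum(SUBSHELL_CAPACITY[s] for s in 'spd'))   # shell 3 holds 18 electrons
```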
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5644186735153198, "perplexity": 1105.7191512915358}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488262046.80/warc/CC-MAIN-20210621025359-20210621055359-00070.warc.gz"}
http://tex.stackexchange.com/questions/10083/how-to-align-table-or-figure-caption-where-the-table-starts
# How to align table or figure caption where the table starts?

Hi, how can I make the caption start where the table or the figure starts? `\captionsetup{justification=justified, singlelinecheck=false}` does not work for me because it places the caption at the beginning of the paragraph, not at the beginning of the table. Thanks!!

-

    \documentclass[a4paper]{article}
    \usepackage[demo]{graphicx}
    \usepackage{floatrow,blindtext}
    \begin{document}
    \blindtext
    \begin{figure}[!htb]
    \ffigbox[\FBwidth]
      {\caption{\blindtext}\label{foo}}
      {\includegraphics{bar}}
    \end{figure}
    \blindtext
    \begin{table}[!htb]
    \ttabbox{\caption{A not so long caption.}\label{bar}}
      {\begin{tabular}{ccc}
       \hline
       foo & bar & baz \\\hline
       \end{tabular}}
    \end{table}
    \end{document}

-

Any ideas about tables? – Aharoun Baalan Feb 2 '11 at 9:03

@Aharoun: maybe you are able to replace figure with table and ffigbox with ttabbox ... ;-) – Herbert Feb 2 '11 at 9:38

@Herbert: The definition of \ttabbox is \floatbox[\captop]{table}[\FBwidth], so you don't need to specify [\FBwidth] explicitly. (Contrary to that, \ffigbox equates to \floatbox{figure}.) – lockstep Feb 2 '11 at 10:02

@lockstep: true, I shouldn't use copy and paste ... ;-) thanks – Herbert Feb 2 '11 at 10:13

Thanks again. When I compile your code alone it works fine, but when I use \usepackage{floatrow} in my document (my work), it gives me this error: ! Package floatrow Error: Do not use float package with floatrow. (floatrow) The latter will be skipped. – Aharoun Baalan Feb 7 '11 at 11:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.953877329826355, "perplexity": 5187.472585035188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/49949/what-is-the-basis-for-the-universal-enveloping-algebra-of-su2
What is the basis for the Universal Enveloping Algebra of su(2)?

Given the standard basis for the Lie algebra $\mathfrak{su}(2)$ of SU(2), $\{i\sigma_1,i\sigma_2,i\sigma_3\}$ where

$$\sigma_1=\begin{pmatrix} 0&1\\ 1&0\end{pmatrix},\quad\sigma_2=\begin{pmatrix} 0&-i\\ i&0\end{pmatrix},\quad\sigma_3=\begin{pmatrix} 1&0\\ 0&-1\end{pmatrix},$$

I want to find a basis for the universal enveloping algebra, $\mathcal{U}(\mathfrak{su}(2))$. By the Poincaré-Birkhoff-Witt theorem I believe we have $\{i\sigma_1,i\sigma_2,i\sigma_3,-i\sigma_1\sigma_2,-i\sigma_1\sigma_3,-i\sigma_2\sigma_3,-i\sigma_1\sigma_2\sigma_3\}$, in other words all lexicographically ordered monomials. However, since products of the Pauli matrices are Pauli matrices (i.e. $\sigma_1\sigma_2=i\sigma_3$), it would seem that the two algebras have the same basis, just with the Lie bracket $[\,,]$ replaced with matrix multiplication. Can someone tell me if this is correct?

-

In a canonical monomial the sequences are allowed to be non-decreasing, not just increasing. There are infinitely many such monomials, so your basis should be infinite. The universal enveloping algebra is not the algebra generated by the matrices $\sigma_i$: although $\sigma_1 \sigma_2 = i \sigma_3$ as matrices, this does not imply that $\rho(\sigma_1) \rho(\sigma_2) = i \rho(\sigma_3)$ in any representation $\rho$ of $\mathfrak{su}(2)$. – Qiaochu Yuan Jul 6 '11 at 20:43

(1) The Poincaré-Birkhoff-Witt basis is the infinite set $$(i \sigma_1)^a (i \sigma_2)^b (i \sigma_3)^c \ \mbox{for} \ a,\ b,\ c \geq 0.$$ You have only listed the cases where $a$, $b$ and $c$ are $0$ or $1$.

(2) The relation $\sigma_1 \sigma_2 = i \sigma_3$ does not hold in $U(\mathfrak{su}_2)$. That relation holds in the standard two-dimensional representation of $\mathfrak{su}_2$, but it doesn't hold in (for example) the $3$-dimensional representation. The relations in $U(\mathfrak{su}_2)$ are those which hold in all representations of $\mathfrak{su}_2$. (Are you clear on what a representation of a Lie algebra means?)

Ok yes I see my confusion. But I wasn't saying that $\sigma_1\sigma_2=i\sigma_3$ should hold for all dimensions (or all representations) - and yes I am pretty clear on what a representation of a Lie algebra is. However, I would still like to have some concrete ways of writing (and using) the representations of $\mathcal{U}(\mathfrak{su}(2))$. Can one say anything further than "here are the basis elements, and any relations which hold for all representations of the Lie algebra also hold for these basis elements"? – levitopher Jul 9 '11 at 17:17
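A quick numerical illustration of point (2), added here as an editor's sketch and not part of the original exchange: the product relation $\sigma_1\sigma_2 = i\sigma_3$ holds for the 2x2 Pauli matrices but fails for the corresponding spin-1 (3-dimensional) matrices, while the bracket relation $[\sigma_1, \sigma_2] = 2i\sigma_3$, which is part of the Lie algebra structure itself, holds in both representations.

```python
# Editor's illustration: the matrix product relation is representation-
# dependent; the commutator relation is not.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

r2 = 1 / np.sqrt(2)  # spin-1 angular momentum matrices, scaled by 2 to mimic the Paulis
S1 = 2 * r2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
S2 = 2 * r2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
S3 = 2 * np.diag([1, 0, -1]).astype(complex)

comm = lambda a, b: a @ b - b @ a
print(np.allclose(s1 @ s2, 1j * s3))        # True:  product holds in 2 dims
print(np.allclose(S1 @ S2, 1j * S3))        # False: product fails in 3 dims
print(np.allclose(comm(s1, s2), 2j * s3))   # True:  bracket holds in 2 dims
print(np.allclose(comm(S1, S2), 2j * S3))   # True:  bracket holds in 3 dims
```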
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9658253192901611, "perplexity": 184.90227137773937}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558065752.33/warc/CC-MAIN-20141017150105-00211-ip-10-16-133-185.ec2.internal.warc.gz"}
https://hal-insu.archives-ouvertes.fr/insu-03635034
# Dust modeling of the combined ALMA and SPHERE datasets of HD 163296. Is HD 163296 really a Meeus group II disk?

Abstract: Context. Multiwavelength observations are indispensable in studying disk geometry and dust evolution processes in protoplanetary disks. Aims: We aim to construct a three-dimensional model of HD 163296 that is capable of reproducing simultaneously new observations of the disk surface in scattered light with the SPHERE instrument and thermal emission continuum observations of the disk midplane with ALMA. We want to determine why the spectral energy distribution of HD 163296 is intermediary between the otherwise well-separated group I and group II Herbig stars. Methods: The disk was modeled using the Monte Carlo radiative transfer code MCMax3D. The radial dust surface density profile was modeled after the ALMA observations, while the polarized scattered light observations were used to constrain the inclination of the inner disk component and turbulence and grain growth in the outer disk. Results: While three rings are observed in the disk midplane in millimeter thermal emission at 80, 124, and 200 AU, only the innermost of these is observed in polarized scattered light, indicating a lack of small dust grains on the surface of the outer disk. We provide two models that are capable of explaining this difference. The first model uses increased settling in the outer disk as a mechanism to bring the small dust grains on the surface of the disk closer to the midplane and into the shadow cast by the first ring. The second model uses depletion of the smallest dust grains in the outer disk as a mechanism for decreasing the optical depth at optical and near-infrared wavelengths. In the region outside the fragmentation-dominated regime, such depletion is expected from state-of-the-art dust evolution models. We studied the effect of creating an artificial inner cavity in our models, and conclude that HD 163296 might be a precursor to typical group I sources.

### Citation

G. A. Muro-Arena, C. Dominik, L. B. F. M. Waters, M. Min, L. Klarmann, et al. Dust modeling of the combined ALMA and SPHERE datasets of HD 163296. Is HD 163296 really a Meeus group II disk? Astronomy & Astrophysics, 2018, 614, ⟨10.1051/0004-6361/201732299⟩. ⟨insu-03635034⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8032952547073364, "perplexity": 3077.6836352870796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662627464.60/warc/CC-MAIN-20220526224902-20220527014902-00233.warc.gz"}
http://mathoverflow.net/questions/43019/geometrical-structure-of-critical-points-of-harmonic-functions?answertab=active
# Geometrical structure of critical points of harmonic functions

For a harmonic function $\Phi$ on a simply connected subset $\Gamma$ of $\mathbb{R}^3$, define a guide curve $\gamma: I \to \Gamma$ of $\Phi$ as a simple regular $C^1$ curve such that

• all points in $\gamma(I)$ are critical points of $\Phi$, and
• for all points $p$ in $\gamma(I)$ there exists a neighborhood $V$ of $p$ so that all critical points of $\Phi$ within $V$ are also in $\gamma(I)$.

My question is whether there are any such guide curves which do not have an analytic parametrization. For a concrete example, consider $\Phi(x,y,z)=x\,y\,z$, for which any part of a coordinate axis not including the origin is a guide curve.

-

Background: The question is relevant to networks of rf ion traps, where the trapping potential is the ponderomotive potential associated with an oscillating electric field. The local amplitude of electric potential oscillations is described by a harmonic function, and for practical reasons it is preferable to trap ions on critical points of this function, so that transport would have to take place along guide curves as introduced above. The aim of my question is to establish what intersection topologies are possible for guide curves. – Janus Wesenberg Oct 21 '10 at 7:28

You need to be a lot more careful with your quantifiers. Let $\Phi$ be a harmonic function without a critical point, say $\Phi(x,y,z) = x$. Then trivially any curve $\gamma$ has the property that for every point $p\in \gamma$ and any neighborhood $V$ of $p$, all critical points of $\Phi$ in $V$ (which comprise the empty set) are in $\gamma$. Now, I am not sure what you mean by "analytical parametrization", but given that ALL curves $\gamma$ are allowed in the above example, if there exist curves that do not have an analytical parametrization, then you can draw the obvious conclusion. – Willie Wong Oct 21 '10 at 9:39

In analogy with the imaginary part of $(x + i y)^n$ being zero along $n$ lines through the origin in the plane, my guess is that one can arrange guide curves as lines through the origin in $\mathbb{R}^3$ and the vertices of a regular polyhedron, not just the octahedron as you point out. Furthermore I think harmonic polynomials suffice to do this. What seems less clear is larger numbers of points on the unit sphere. Note that for fixed constants $A,B,C$ the Laplacian commutes with $$A \frac{\partial}{ \partial x} + B \frac{\partial}{ \partial y} + C \frac{\partial}{ \partial z}$$ – Will Jagy Oct 21 '10 at 18:40

@Willie Wong: Thank you very much for pointing this one out. I have corrected my definition of guide curves so it now hopefully describes what I am looking for. – Janus Wesenberg Oct 22 '10 at 7:00

Since the Laplacian is elliptic with real-analytic coefficients, a harmonic function $f$ is real-analytic in its domain of definition. Hence the set $C$ of critical points of $f$ is a real-analytic subset of $\mathbb{R}^3$, and as such it admits a locally finite partition into real-analytic locally closed smooth submanifolds. Thus if $\dim C \leq 1$, it is locally a finite union of analytic open arcs and singular points (but the curves might not extend smoothly across those points). – BS. Oct 22 '10 at 8:20

Since the Laplacian is elliptic with real-analytic coefficients, a harmonic function $f$ is real-analytic in its domain of definition. Hence the set $C$ of critical points of $f$ is a real-analytic subset of $\mathbb{R}^3$, and as such it admits a locally finite partition into real-analytic locally closed smooth submanifolds.
Thus if $\dim C \leq 1$, it is locally a finite union of analytic open arcs and singular points (but the curves might not extend smoothly across those points). A reference on real analytic functions (re-edited in 2002) might be S. Krantz, H. Parks, A primer of real analytic functions. Birkhäuser Verlag, 1992. But maybe the "curve selection lemma" in Milnor's "Singular Points of Complex Hypersurfaces" would be enough. Edit: it concerns real algebraic subsets.

As an example of a curve of critical points not extending through a singular point, take the harmonic polynomial $f(x,y,z)=y^3-3x^2y+y^3z-yz^3$, which has critical locus $y=0$, $z^3=-3x^2$. But of course you have the singular parametrization $x=3t^3$, $z=-3t^2$. I don't know if they exist in general.

Addendum: in fact the critical locus of a harmonic polynomial can have an arbitrary (real) plane algebraic curve as a union of irreducible components. Let $P(x,y)$ be a real two-variable polynomial, and define $$f(x,y,z)=\sum_k \frac{z^{2k+1}}{(2k+1)!}(-\Delta_{x,y})^k P(x,y) \; .$$ It is easy to check that $f$ is harmonic, and $df$ vanishes on $z=0$, $P(x,y)=0$.

-

Thank you very much, BS! Perhaps if I had not been so convinced that all curves of critical points would extend through any singular points, I would have had better luck finding a counterexample myself :) Thus I learn again that (my) intuition and analysis should not be mixed. This is really amazing -- I have been pushing this problem to mathematician friends for a couple of years now with no progress, and then a few days after posting it to mathoverflow it is solved. The future is now (but then, it's 2010 -- so it'd better be) :) – Janus Wesenberg Oct 25 '10 at 2:48
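The closing construction is easy to verify symbolically. The following check is an editorial addition (the choice $P(x,y) = x^4$ is just a sample); the series terminates for polynomial $P$ because iterated Laplacians of a polynomial eventually vanish:

```python
# Editorial check of the construction above; P(x, y) = x**4 is a sample choice.
import sympy as sp

x, y, z = sp.symbols('x y z')
P = x**4
lap2 = lambda g: sp.diff(g, x, 2) + sp.diff(g, y, 2)   # Delta_{x,y}

# Build f = sum_k z^(2k+1)/(2k+1)! * (-Delta_{x,y})^k P; the loop stops
# once repeated Laplacians of the polynomial P reach zero.
f, term, k = sp.Integer(0), P, 0
while term != 0:
    f += z**(2*k + 1) / sp.factorial(2*k + 1) * term
    term = -lap2(term)
    k += 1

lap3 = lap2(f) + sp.diff(f, z, 2)
print(sp.simplify(lap3))                        # 0: f is harmonic
grads = [sp.diff(f, v).subs(z, 0) for v in (x, y, z)]
print(grads)                                    # [0, 0, x**4]: df = 0 on z = 0, P = 0
```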
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8943303227424622, "perplexity": 256.43872467368004}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900024.23/warc/CC-MAIN-20141030025820-00078-ip-10-16-133-185.ec2.internal.warc.gz"}
https://cstheory.stackexchange.com/questions/11273/why-valuations-when-defining-fol/11274
# Why valuations when defining FOL?

Why does one need valuations in order to define the semantics of first-order logic? Why not just define it for sentences and also define formula substitutions (in the expected way)? That should be enough:

$$M \models \forall x. \phi \iff \text{for all }d\in \mathrm{dom}(M),\ M \models \phi[x\mapsto d]$$

instead of

$$M,v \models \forall x. \phi \iff \text{for all }d\in \mathrm{dom}(M),\ M, v[x\mapsto d] \models \phi$$

It is perfectly possible to define satisfaction using just sentences as you suggest, and in fact, it used to be the standard approach for quite some time. The drawback of this method is that it requires mixing semantic objects into syntax: in order to make an inductive definition of satisfaction of sentences in a model $M$, it is not sufficient to define it for sentences of the original language of $M$. You need to first expand the language with individual constants for all elements of the domain of $M$, and then you can define satisfaction for sentences in the expanded language. This is, I believe, the main reason why this approach went into disuse; if you use valuations, you can maintain a clear conceptual distinction between syntactic formulas of the original language and semantic entities that are used to model them.

• I think it depends somewhat on whether the author is approaching things from a proof theory side or a model theory side. In the case of proof theory, the original language is of interest for studying provability of sentences, but in the case of model theory the expanded language is more useful for studying definability. So for example Marker's model theory book defines satisfaction via the extended language, but Enderton's intro logic book uses valuations. – Carl Mummert May 3 '12 at 21:50

The meaning of a closed formula is a truth value $\bot$ or $\top$. The meaning of a formula containing a free variable $x$ ranging over a set $A$ is a function from $A$ to truth values. Functions $A \to \lbrace \bot, \top \rbrace$ form a complete Boolean algebra, so we can interpret first-order logic in it. Similarly, a closed term $t$ denotes an element of some domain $D$, while a term with a free variable denotes a function $D \to D$, because the element depends on the value of the variable. It is therefore natural to interpret a formula $\phi(x_1, \ldots, x_n)$ with free variables $x_1, \ldots, x_n$ in the complete Boolean algebra $D^n \to \lbrace \bot, \top \rbrace$, where $D$ is the domain over which the variables range. Whether you phrase the interpretation in this complete Boolean algebra in terms of valuations or otherwise is a technical matter.

Mathematicians seem to be generally confused about free variables. They think they are implicitly universally quantified or some such. The cause of this is a meta-theorem stating that $\phi(x)$ is provable if and only if its universal closure $\forall x . \phi(x)$ is provable. But there is more to formulas than their provability. For example, $\phi(x)$ is not generally equivalent to $\forall x . \phi(x)$, so we certainly cannot pretend that these two formulas are interchangeable.
To summarize:

• formulas with free variables are unavoidable, at least in the usual first-order logic,
• the meaning of a formula with a free variable is a truth function,
• therefore in semantics we are forced to consider complete Boolean algebras $D^n \to \lbrace\bot, \top\rbrace$, which is where valuations come from,
• the universal closure of a formula is not equivalent to the original formula,
• it is a mistake to equate the meaning of a formula with the meaning of its universal closure, just as it is a mistake to equate a function with its codomain.

• Cool. Clear and simple answer! I wonder what the logicians have to say about this? – Uday Reddy May 6 '12 at 12:29
• I am one of "the logicians", it's written on my certificate of PhD. – Andrej Bauer May 6 '12 at 16:39

Simply because it's more natural to say "$x > 2$ is true when $x$ is $\pi$" (that is, on a valuation which sends $x$ to $\pi$) than "$x > 2$ is true when we substitute $\pi$ (the number itself, not the Greek letter) for $x$". Technically the approaches are equivalent.

I want to strengthen Alexey's answer, and claim that the reason is that the first definition suffers from technical difficulties, and not just that the second (standard) way is more natural. Alexey's point is that the first approach, i.e.:

$M \models \forall x . \phi \iff$ for all $d \in M$: $M \models \phi[x\mapsto d]$

mixes syntax and semantics. For example, let's take Alexey's example:

${(0,\infty)} \models x > 2$

Then in order to show that, one of the things we have to show is:

$(0,\infty) \models \pi > 2$

The entity $\pi > 2$ is not a formula, unless our language includes the symbol $\pi$, that is interpreted in the model $M$ as the mathematical constant $\pi \approx 3.141\ldots$. A more extreme case would be to show that $M\models\sqrt[15]{15{,}000{,}000} > 2$, and again, the right-hand side is a valid formula only if our language contains a binary radical symbol $\sqrt{}$, that is interpreted as the radical, and number constants $15$ and $15{,}000{,}000$.

To ram the point home, consider what happens when the model we present has a more complicated structure. For example, instead of taking real numbers, take Dedekind cuts (a particular implementation of the real numbers). Then the elements of your model are not just "numbers". They are pairs of sets of rational numbers $(A,B)$ that form a Dedekind cut. Now, look at the object $(\{q \in \mathbb Q \mid q < 0 \vee q^2 < 5\}, \{q \in \mathbb Q \mid 0 \leq q \wedge q^2 > 5\}) > 2$ (which is what we get when we "substitute" the Dedekind cut describing $\sqrt{5}$ in the formula $x > 2$). What is this object? It's not a formula --- it has sets, and pairs, and who knows what in it. It's potentially infinite. So in order for this approach to work well, you need to extend your notion of "formula" to include such mixed entities of semantic and syntactic objects. Then you need to define operations such as substitutions on them. But now substitutions would no longer be syntactic functions: $[ x \mapsto t]: \mathit{Terms} \to \mathit{Terms}$. They would be operations on very very large collections of these generalised, semantically mixed terms. It's possible you will be able to overcome these technicalities, but I guess you will have to work very hard.

The standard approach keeps the distinction between syntax and semantics. What we change is the valuation, a semantic entity, and keep formulae syntactic.
• The key point to the first approach is that given a model $M$ in a language $L$ you first expand to a language $L(M)$ in which there is a new constant symbol for every element in $M$. Then you can just substitute these constant symbols into formulas in the usual way. There are no actual technical difficulties. – Carl Mummert May 3 '12 at 21:45
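To make the valuation clause concrete, here is a small interpreter, an editorial illustration rather than anything from the thread; the tuple encoding of formulas and the predicate name `le` are invented for the example:

```python
# Toy implementation of: M, v |= forall x. phi  iff
# for all d in dom(M), M, v[x -> d] |= phi.
# Formulas are nested tuples; M is (domain, interpretation of predicates).

def extend(v, x, d):
    """The valuation v[x -> d]: like v, but sending variable x to d."""
    w = dict(v)
    w[x] = d
    return w

def sat(M, v, phi):
    dom, interp = M
    tag = phi[0]
    if tag == 'pred':                     # ('pred', name, var1, var2, ...)
        _, name, *vars_ = phi
        return interp[name](*(v[x] for x in vars_))
    if tag == 'not':
        return not sat(M, v, phi[1])
    if tag == 'and':
        return sat(M, v, phi[1]) and sat(M, v, phi[2])
    if tag == 'forall':                   # ('forall', x, body)
        _, x, body = phi
        return all(sat(M, extend(v, x, d), body) for d in dom)
    raise ValueError(tag)

M = (range(5), {'le': lambda a, b: a <= b})
print(sat(M, {}, ('forall', 'x', ('pred', 'le', 'x', 'x'))))   # True
print(sat(M, {'y': 3}, ('pred', 'le', 'y', 'y')))              # free variable, via v
```

Note that the formulas stay purely syntactic throughout; only the valuation `v` carries semantic values, exactly the separation the answers above argue for.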
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9016978144645691, "perplexity": 278.9632580530444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514046.20/warc/CC-MAIN-20210117235743-20210118025743-00456.warc.gz"}
http://www.nounsite.com/tma3phy102/
# TMA3/PHY102 – Electricity, Magnetism and Modern Physics

## TMA Quiz Questions

TMA: TMA3/PHY102
PHY102 – Electricity, Magnetism and Modern Physics
Mr. Ibanga Efiong ([email protected])

1. In the circuit shown, each of the cells has an emf of 2 V and internal resistance of 0.1 Ω. Find the current through the 0.5 Ω resistor.
A. 0.13 A  B. 3.45 A  C. 1.27 A  D. 2.44 A

2. Calculate the electric field everywhere outside of a very long rod, radius R, charged to a uniform linear charge density λ.
A. $\vec{E} = \frac{\lambda}{\pi r \epsilon_0}\hat{r}$  B. $\vec{E} = \frac{\lambda}{\pi \epsilon_0}\ln\frac{1}{r}\,\hat{r}$  C. $\vec{E} = \frac{\lambda}{2\pi r \epsilon_0}\hat{r}$  D. $\vec{E} = \frac{\lambda}{2\pi \epsilon_0}\ln\frac{1}{r}\,\hat{r}$

3. Kirchhoff's junction rule is a statement of conservation of ———
A. mass  B. energy  C. charge  D. momentum

4. A galvanometer of resistance 120 Ω gives a full scale deflection with a current of 0.0005 A. How would you convert it to an ammeter that reads a maximum current of 5 A?
A. connect 2000 Ω in parallel to it  B. connect 200.12 Ω in series to it  C. connect 20.10 Ω in series to it  D. connect 0.012 Ω in parallel to it

5. A galvanometer with coil resistance 12.0 Ω shows full scale deflection for a current of 2.5 mA. How would you convert it into a voltmeter of range 0 to 10.0 V?
A. 3988 Ω in series  B. 0.43 Ω in parallel  C. Ω in parallel  D. 1.62 Ω in series

6. A series circuit consisting of an uncharged 42 μF capacitor and a 10 MΩ resistor is connected to a 100 V power source. What are the current in the circuit and the charge on the capacitor after one time constant?
A. 3.7 μA and 126 μC  B. 4.6 μA and 221 μC  C. 7.2 μA and 100 μC  D. 1.3 μA and 52 μC

7. A capacitor of 2.0 μF is connected to a battery of 2.0 V through a resistance of 10 kΩ. What is the initial current in the circuit and the current after 0.02 s?
A. 0.5 μA and 0.074 mA  B. 7.4 A and 5.0 mA  C. 0.2 μA and 0.074 mA  D. 6.2 μA and 7.04 mA

8. A conductor 2 cm long carrying a current of 8 A lies at right angles to a magnetic field of flux density 1.0 T. Calculate the force exerted on the conductor.
A. 0.20 N  B. 0.16 N  C. 0.25 N  D. 0.45 N

9. One end of a simple rectangular wire-loop current balance is inserted into a solenoid. A force of 3.0×10⁻³ N is found to act on this end when a current of 2.0 A is flowing in it. If the length of the conductor forming the end of the wire-loop is 0.
A. 0.043 T  B. 0.26 T  C. 0.43 T  D. 0.015 T

10. A straight wire 1.0 m long carries a current of 100 A at right angles to a uniform magnetic field of 1.0 T. Find the mechanical force on the wire and the power required to move it at 15 m/s in a plane at right angles to the field.
A. 100 N and 1.5 kW  B. 200 N and 2.5 kW  C. 300 N and 1.4 kW  D. 200 N and 2.7 kW

11. A rectangular coil of dimensions 20 cm by 15 cm lies with its plane parallel to a magnetic field of 0.5 Wb/m². The coil, carrying a current of 10 A, experiences a torque of 4.5 N·m in the field. How many loops has the coil?
A. 100  B. 60  C. 30  D. 20

12. An electric field of 50 kV/m is perpendicular to a magnetic field of 0.25 T. What is the velocity of a charge whose initial direction of motion is perpendicular to both fields and which passes through the fields undeflected?
A. 3×10³ m/s  B. 2×10⁵ m/s  C. 4×10⁷ m/s  D. 5×10⁴ m/s

13. A proton is accelerated through a potential difference of 100 V and then enters a region in which it is moving perpendicular to a magnetic field of flux density 0.20 T. Find the radius of the circular path in which it will travel.
A. 0.9 cm  B. 0.7 cm  C. 0.3 cm  D. 0.5 cm

14. An electron enters a uniform magnetic field of 0.20 T at an angle of 30° to the field.
Determine the pitch of the helical path, assuming its speed is 3×10⁷ m/s.
A. 90.6 m  B. 37.8 m  C. 56.1 m  D. 46.5 m

15. A positive ion passes through electric and magnetic fields which are mutually perpendicular. The electric field strength is 20.0 kV/m while the magnetic flux density is 0.40 T. At what speed will the ion pass through undeflected?
A. 6.0×10⁴ m/s  B. 5.0×10⁴ m/s  C. 7.0×10⁴ m/s  D. 8.0×10⁴ m/s

16. For how long must a steady current of 2 A flow through a copper voltameter to deposit 10⁻³ kg of copper? Z for copper is 0.000329 g/C.
A. 42.1 min  B. 22.6 min  C. 30.2 min  D. 25.3 min

17. An ammeter is suspected of giving inaccurate readings. In order to confirm the readings, the ammeter is connected in series with a silver voltameter and a steady current is passed for one hour. The ammeter reads 0.56 A and 2.0124 g of silver is deposited. What is the actual current?
A. 0.06 A  B. 0.11 A  C. 1.1 A  D. 6.0 A

18. The magnetic flux through each loop of a 35-loop coil is given by (3.6t − 0.71t³)×10⁻² T·m², where the time t is in seconds. Determine the induced emf at t = 5.0 s.
A. 6.17 V  B. 14.43 V  C. 17.49 V  D. 9.17 V

19. A single-turn coil of cross-sectional area 7.2 cm² is in a magnetic field of flux density 0.45 T. The field, which is perpendicular to the coil, is steadily reduced to 0.0 T in 5 s. Calculate the induced emf.
A. 0.72 μV  B. 0.52 μV  C. 0.47 μV  D. 0.65 μV

20. What is the self-inductance of an air-core solenoid, 1 m long and 0.05 m in diameter, if it has 1400 turns?
A. 5.23 mH  B. 4.84 mH  C. 3.63 mH  D. 2.42 mH

I strongly advise you to crosscheck the TMA questions and answers on this blog; nounsite.com and its staff will not be held responsible for any incorrect answers or TMA questions.
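As a quick sanity check on a few of the questions above (an editorial addition, assuming the standard formulas F = BIL, P = Fv, v = E/B and L = μ₀n²Aℓ for an air-core solenoid):

```python
# Editor's check of a few answers using the standard formulas.
import math

print(1.0 * 8 * 0.02)                    # Q8:  F = BIL = 0.16 N (option B)

F = 1.0 * 100 * 1.0                      # Q10: F = BIL = 100 N
print(F, F * 15 / 1e3, "kW")             #      P = Fv = 1.5 kW (option A)

print(50e3 / 0.25)                       # Q12: v = E/B = 2e5 m/s (option B)

mu0 = 4 * math.pi * 1e-7
n = 1400 / 1.0                           # Q20: turns per metre
A = math.pi * 0.025**2                   #      cross-section for 0.05 m diameter
print(mu0 * n**2 * A * 1.0 * 1e3, "mH")  #      L = mu0 n^2 A l ≈ 4.84 mH (option B)
```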
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6025188565254211, "perplexity": 7673.813432983616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813818.15/warc/CC-MAIN-20180221222354-20180222002354-00319.warc.gz"}
http://doctorpenguin.com/2021-07-22_2_axv_summary/
ArXiv Preprint

Assessment of cardiovascular disease (CVD) with cine magnetic resonance imaging (MRI) has been used to non-invasively evaluate detailed cardiac structure and function. Accurate segmentation of cardiac structures from cine MRI is a crucial step for early diagnosis and prognosis of CVD, and has been greatly improved with convolutional neural networks (CNN). There are, however, a number of limitations identified in CNN models, such as limited interpretability and high complexity, thus limiting their use in clinical practice. In this work, to address the limitations, we propose a lightweight and interpretable machine learning model, successive subspace learning with the subspace approximation with adjusted bias (Saab) transform, for accurate and efficient segmentation from cine MRI. Specifically, our segmentation framework is comprised of the following steps: (1) sequential expansion of near-to-far neighborhood at different resolutions; (2) channel-wise subspace approximation using the Saab transform for unsupervised dimension reduction; (3) class-wise entropy guided feature selection for supervised dimension reduction; (4) concatenation of features and pixel-wise classification with gradient boost; and (5) conditional random field for post-processing. Experimental results on the ACDC 2017 segmentation database showed that our framework performed better than state-of-the-art U-Net models with 200$\times$ fewer parameters in delineating the left ventricle, right ventricle, and myocardium, thus showing its potential to be used in clinical practice.

Xiaofeng Liu, Fangxu Xing, Hanna K. Gaggin, Weichung Wang, C.-C. Jay Kuo, Georges El Fakhri, Jonghye Woo

2021-07-22
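The described pipeline lends itself to a compact sketch. The following is an editor's illustration only, not the authors' code: PCA stands in for the Saab transform (which the abstract describes as a subspace approximation with an adjusted bias), mutual information stands in for the class-wise entropy guided selection, scikit-learn's gradient boosting stands in for "gradient boost", and the multi-resolution cascade and CRF post-processing are omitted; the data is synthetic.

```python
# Illustrative sketch of the pipeline's flow on a toy image; heavy
# simplifications throughout (see the note above).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
img = rng.random((64, 64))                 # toy stand-in for a cine MRI slice
labels = (img > 0.5).astype(int)           # toy per-pixel labels

def patches(a, k=5):
    """Step (1) stand-in: one k*k neighborhood per interior pixel."""
    p = np.lib.stride_tricks.sliding_window_view(a, (k, k))
    return p.reshape(-1, k * k)

X = patches(img)                           # (n_pixels, 25) raw neighborhoods
y = labels[2:-2, 2:-2].reshape(-1)         # labels for interior pixels

X = PCA(n_components=10).fit_transform(X)  # step (2) stand-in: unsupervised reduction
X = SelectKBest(mutual_info_classif, k=5).fit_transform(X, y)  # step (3) stand-in
clf = GradientBoostingClassifier(n_estimators=20).fit(X, y)    # step (4)
print("train accuracy:", clf.score(X, y))
```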
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5213022828102112, "perplexity": 5686.493028997883}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304859.70/warc/CC-MAIN-20220125160159-20220125190159-00591.warc.gz"}
https://www.physicsforums.com/threads/calculate-electric-field-at-origin-with-3-charges.314664/
# Calculate electric field at origin, with 3 charges

1. May 17, 2009

1. The problem statement, all variables and given/known data

Three charges, +2.5 μC, −4.8 μC and −6.3 μC, are located at (−0.20 m, 0.15 m), (0.50 m, −0.35 m) and (−0.42 m, −0.32 m) respectively. What is the electric field at the origin?

q1 = +2.5 μC
q2 = −4.8 μC
q3 = −6.3 μC

2. Relevant equations

$a^2 + b^2 = c^2$
$v_x = \text{magnitude} \times \cos(\theta)$
$v_y = \text{magnitude} \times \sin(\theta)$
$E = \frac{kq}{r^2}$
law of cosines
k = 9×10⁹

3. The attempt at a solution

First I found the hypotenuse for the three charges:
q1 = 0.25 m, q2 = 0.6103 m, q3 = 0.5280 m

Then I used the formula for the magnitude of an electric field, where k is the constant, q was my three charges, and the radii were my three hypotenuses. My results were:
q1 = 360000, q2 = 115983, q3 = 203383

I used the law of cosines to get $\theta$. My three angles were:
1 = 36.8, 2 = 35, 3 = 37.3

To find Ex, I multiplied each magnitude by the cosine of its respective angle. My results:
1 = 288263, 2 = 95007, 3 = 161785

I added these up and got 545055, but the book says 2.2×10⁵! I didn't bother doing y, since I'm completely lost!

2. May 17, 2009

### LowlyPion

Doesn't the direction of q3 carry a negative sign ... i.e. pointing toward the right from the origin? Hence |E1| + |E2| − |E3| along x?

288 + 95 − 161 = 222

3. May 17, 2009

Yeah, q3 has a negative sign. So is the way I did the problem correct?

4. May 17, 2009

I got another problem: I'm trying to solve for y, but when I add everything up I get +443958, not −4.1×10⁵ like the book says. P.S. How do I know which ones to add, and which ones to subtract?

5. May 17, 2009

### LowlyPion

Remember the E-field is a vector field. So you not only need to account for the sign of the charge, but you also must take into account where the point that you are taking the E-field at is relative to the charge. A positive charge has a radially outward field. A negative charge is radially inward. So depending on which side you are on, and whether it is a + or −, is what determines the sign of the |E|, not simply which quadrant it may lie in.
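For reference, here is a short numeric check of this thread's numbers, added by the editor and not part of the original posts. Using the vector form $\vec{E} = kq\,\vec{r}/r^3$, with $\vec{r}$ pointing from the charge to the origin, the signs come out automatically:

```python
# Editor's check: superpose the three fields at the origin.
import numpy as np

k = 9e9
charges = [( 2.5e-6, np.array([-0.20,  0.15])),
           (-4.8e-6, np.array([ 0.50, -0.35])),
           (-6.3e-6, np.array([-0.42, -0.32]))]

E = np.zeros(2)
for q, pos in charges:
    r_vec = -pos                   # vector from the charge to the origin
    r = np.linalg.norm(r_vec)
    E += k * q * r_vec / r**3      # negative q automatically flips the direction

print(E)   # ~ [ 2.2e5, -4.1e5 ] N/C, matching the book's answers
```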
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8060171604156494, "perplexity": 1780.0200680087798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822116.0/warc/CC-MAIN-20171017144041-20171017164041-00894.warc.gz"}
https://www.studypug.com/ca/phys/work-and-energy
# Work and Energy

#### Lessons

In this lesson, we will learn:

• Work done on an object is the change in an object's mechanical energy
• Work done by a force is the product of the force and the displacement of the object

Notes:

• Work is the transfer of energy from one place to another. Because work is a form of energy, it is a scalar and measured in joules (J).
• Work is done when a force moves an object over a displacement. It is equal to force times displacement, or $W = F_\parallel d$. It can be either positive or negative. Doing positive work on an object increases the object's mechanical energy. Positive work can increase an object's kinetic energy (by accelerating it), potential energy (by moving it to a greater height), or both. This can be expressed with the equation $W = F_\parallel d = \Delta E_{mech}$. Negative work on an object reduces its mechanical energy.
• The parallel sign ("$\parallel$") in the formula $W = F_\parallel d$ indicates that only a force that is parallel to the displacement of an object can do work. Remember, in order for a force to do positive work on an object (add energy), it has to add kinetic and/or potential energy to the object. Consider the following examples:
  • A force pointed in the same direction as an object's displacement does positive work.
  • A force that does not cause a displacement does no work.
  • A force pointed perpendicular to an object's displacement does no work.
  • If a force is applied to an object and the resulting displacement is at a non-90° angle from the force (i.e. moving an object by pushing/pulling at an angle), only the component of the force that is pointed in the same direction as the displacement does work. The other component is perpendicular to the displacement and does no work.
  • A force that points in the opposite direction of an object's displacement does negative work.
  • A common force that does negative work is friction, since it is always pointed opposite the direction of motion.

Work

$W = F_\parallel d = \Delta E_{mech} = (E_{kf} + E_{pf}) - (E_{ki} + E_{pi})$

• $W:$ work, in joules (J)
• $d:$ displacement, in meters (m)
• $F_\parallel:$ component of force parallel to $d$, in newtons (N)
• $\Delta E_{mech}:$ change in mechanical energy, in joules (J)
• $(E_{kf} + E_{pf}):$ total final mechanical energy, in joules (J)
• $(E_{ki} + E_{pi}):$ total initial mechanical energy, in joules (J)

Kinetic Energy

$E_k = \frac{1}{2}mv^2$

• $E_k:$ kinetic energy, in joules (J)
• $m:$ mass, in kilograms (kg)
• $v:$ velocity, in meters per second (m/s)

Gravitational Potential Energy

$E_p = mgh$

• $E_p:$ gravitational potential energy, in joules (J)
• $g:$ acceleration due to gravity, in meters per second squared (m/s²)
• $h:$ height, in meters (m)

1. $\mathbf{W = \Delta E_{mech} = F_\parallel d}$: Calculating work

a) A 1170 kg car travels at 11.0 m/s.
  1. How much work needs to be done on the car to accelerate it to 24.0 m/s?
  2. What is the net force acting on the car that accelerates it, if the acceleration is uniform and happens over 95.0 m?

b) A 5.50 kg box slides across a floor at 12.0 m/s. Friction slows the box to 2.00 m/s after it has travelled 13.0 m. Find the work done on the box and the force of friction acting on the box.

c) A 2.50 kg box is initially at rest at the top of a 30.0° slope and reaches a speed of 11.5 m/s when it slides 12.0 m down the slope. Find the force of friction acting on the box.
2. $\mathbf{W = \Delta E_{mech} = F_\parallel d}$: Calculating work with force applied at an angle

A force of 885 N pulls on a box at an angle of 28.0° above the horizontal. 115 N of friction acts on the box as it slides 19.0 m. How much work does the 885 N force do on the box?
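A quick numeric check of questions 1 a) and 2 (an added sketch, not part of the lesson), using the formulas above:

```python
# Editor's worked check: W = (1/2) m (vf^2 - vi^2) for 1 a),
# and W = F cos(theta) * d for question 2.
import math

m, vi, vf, dist = 1170.0, 11.0, 24.0, 95.0
W = 0.5 * m * (vf**2 - vi**2)            # change in kinetic energy only
print(W, "J;", W / dist, "N")            # ~2.66e5 J, ~2.8e3 N net force

F, theta, d = 885.0, math.radians(28.0), 19.0
print(F * math.cos(theta) * d, "J")      # ~1.48e4 J: only F*cos(theta) is parallel to d
```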
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 23, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9095964431762695, "perplexity": 430.3861128380168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145529.37/warc/CC-MAIN-20200221111140-20200221141140-00434.warc.gz"}
https://en.wikipedia.org/wiki/Ordinal_date
# Ordinal date

An ordinal date is a calendar date typically consisting of a year and a day of the year or ordinal day number (or simply ordinal day or day number), an ordinal number ranging between 1 and 366 (starting on January 1), though the year may sometimes be omitted. The two numbers can be formatted as YYYY-DDD to comply with the ISO 8601 ordinal date format; for example, 2023-02-06 is written 2023-037.

## Nomenclature

Ordinal date is the preferred name for what was formerly called the "Julian date" or JD, or JDATE, which is still seen in old programming languages and spreadsheet software. The older names are deprecated because they are easily confused with the earlier dating system called Julian day number or JDN, which was in prior use and which remains ubiquitous in astronomical and some historical calculations.

## Calculation

Computation of the ordinal day within a year is part of calculating the ordinal day throughout the years from a reference date, such as the Julian date. It is also part of calculating the day of the week, though for this purpose modulo 7 simplifications can be made. In the following text, several algorithms for calculating the ordinal day O are presented. The inputs taken are integers y, m and d, for the year, month, and day numbers of the Gregorian or Julian calendar date.

### Trivial methods

The most trivial method of calculating the ordinal day involves counting up all days that have elapsed per the definition:

1. Let O be 0.
2. From i = 1 .. m − 1, add the length of month i to O, taking care of leap year according to the calendar used.

Similarly trivial is the use of a lookup table, such as the one referenced.[1]

### Zeller-like

The table of month lengths can be replaced following the method of encoding the month-length variation in Zeller's congruence. As in Zeller, the m is changed to m + 12 if m ≤ 2. It can be shown (see below) that for a month-number m, the total days of the preceding months is equal to ⌊(153 × (m − 3) + 2) / 5⌋. As a result, the March 1-based ordinal day number is O_Mar = ⌊(153 × (m − 3) + 2) / 5⌋ + d.

The formula reflects the fact that any five consecutive months in the range March–January have a total length of 153 days, due to a fixed pattern 31–30–31–30–31 repeating itself twice. This is similar to the encoding of the month offset (which would be the same sequence modulo 7) in Zeller's congruence. As 153/5 is 30.6, the sequence oscillates in the desired pattern with the desired period 5.

To go from the March 1 based ordinal day to a January 1 based ordinal day:

- For m ≤ 12 (March through December), O = O_Mar + 59 + isLeap(y), where isLeap is a function returning 0 or 1 depending on whether the input is a leap year.
- For January and February, two methods can be used:
  1. The trivial method is to skip the calculation of O_Mar and go straight for O = d for January and O = d + 31 for February.
  2. The less redundant method is to use O = O_Mar − 306, where 306 is the number of dates in March through December. This makes use of the fact that the formula correctly gives a month-length of 31 for January.

"Doomsday" properties: For $m = 2n$ and $d = m$ we get

$$O = \left\lfloor 63.2n - 91.4 \right\rfloor$$

giving consecutive differences of 63 (9 weeks) for n = 2, 3, 4, 5, and 6, i.e., between 4/4, 6/6, 8/8, 10/10, and 12/12.
For $m = 2n + 1$ and $d = m + 4$ we get

$$O = \left\lfloor 63.2n - 56 + 0.2 \right\rfloor$$

and with m and d interchanged

$$O = \left\lfloor 63.2n - 56 + 119 - 0.4 \right\rfloor$$

giving a difference of 119 (17 weeks) for n = 2 (difference between 5/9 and 9/5), and also for n = 3 (difference between 7/11 and 11/7).

## Table

| To the day of | 13 Jan | 14 Feb | 3 Mar | 4 Apr | 5 May | 6 Jun | 7 Jul | 8 Aug | 9 Sep | 10 Oct | 11 Nov | 12 Dec | i |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Add (common years) | 0 | 31 | 59 | 90 | 120 | 151 | 181 | 212 | 243 | 273 | 304 | 334 | 3 |
| Add (leap years) | 0 | 31 | 60 | 91 | 121 | 152 | 182 | 213 | 244 | 274 | 305 | 335 | 2 |

Algorithm: the "Add" value equals $30(m-1)+\left\lfloor 0.6(m+1)\right\rfloor - i$, where m is the month number shown in the header (January and February counting as months 13 and 14), taken modulo the length of the year for those two months.

For example, the ordinal date of April 15 is 90 + 15 = 105 in a common year, and 91 + 15 = 106 in a leap year.

## Month–day

The number of the month and date is given by

$$m = \left\lfloor od/30 \right\rfloor + 1$$
$$d = \operatorname{mod}(od, 30) + i - \left\lfloor 0.6(m+1) \right\rfloor$$

The term $\operatorname{mod}(od,30)$ can also be replaced by $od - 30(m-1)$, with $od$ the ordinal date.

- Day 100 of a common year: $m = \lfloor 100/30 \rfloor + 1 = 4$, $d = \operatorname{mod}(100,30) + 3 - \lfloor 0.6(4+1) \rfloor = 10 + 3 - 3 = 10$, i.e. April 10.
- Day 200 of a common year: $m = \lfloor 200/30 \rfloor + 1 = 7$, $d = \operatorname{mod}(200,30) + 3 - \lfloor 0.6(7+1) \rfloor = 20 + 3 - 4 = 19$, i.e. July 19.
- Day 300 of a leap year: $m = \lfloor 300/30 \rfloor + 1 = 11$, $d = \operatorname{mod}(300,30) + 2 - \lfloor 0.6(11+1) \rfloor = 0 + 2 - 7 = -5$, i.e. November −5 = October 26 (31 − 5).
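The Zeller-like method and the month–day inversion translate directly into code. The following sketch is an illustration added by the editor, not taken from the article:

```python
# Zeller-like ordinal day and the Month-day inversion from above.
def is_leap(y):
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)  # Gregorian rule

def ordinal_day(y, m, d):
    if m <= 2:
        m += 12                               # January, February -> 13, 14
    o_mar = (153 * (m - 3) + 2) // 5 + d      # March-1-based ordinal day
    if m <= 12:                               # March .. December
        return o_mar + 59 + is_leap(y)
    return o_mar - 306                        # January, February

def month_day(od, leap=False):
    """Inverse formulas; d may need normalising, as in the article's
    day-300 example (d = -5 means October 26)."""
    i = 2 if leap else 3
    m = od // 30 + 1
    d = od % 30 + i - (3 * (m + 1)) // 5      # (3*(m+1))//5 == floor(0.6(m+1))
    return m, d

print(ordinal_day(2023, 2, 6))    # 37, matching 2023-037 above
print(ordinal_day(2023, 4, 15))   # 105 in a common year; 106 for 2024
print(month_day(100), month_day(200), month_day(300, leap=True))
# (4, 10) April 10; (7, 19) July 19; (11, -5) -> October 26
```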
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 19, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.669420599937439, "perplexity": 1095.8269468490294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500356.92/warc/CC-MAIN-20230206145603-20230206175603-00100.warc.gz"}
https://forums.unrealengine.com/t/gas-playmontageandwait-not-calling-oninterrupt/227572
GAS, PlayMontageAndWait not calling "OnInterrupt"

Good day. I'm using the Gameplay Ability System and, as far as I can tell, the node "PlayMontageAndWait" is by design supposed to end through either the "OnCompleted" or the "OnInterrupted" callback, with no third option available. But a weird thing happens from time to time: there is a third option, which breaks my gameplay logic, since I cannot distinguish this third case from either of the first two.

From my analysis I conclude that this third case is "PlayMontageAndWait1 being interrupted by another PlayMontageAndWait2, called after Montage1 starts its blend-out but before the blend-out has ended". In this case PlayMontageAndWait1 calls only "OnBlendOut", but neither "OnCompleted" nor "OnInterrupted".

As far as I can see, the root of the problem is that in Engine/Source/Runtime/Engine/Private/Animation/AnimMontage.cpp, the function FAnimMontageInstance::Stop() sends FQueuedMontageBlendingOutEvent only once per montage, so if blendout(interrupt==false) was fired first, blendout(interrupt==true) will never be sent.

Currently I'm planning to fix this behavior by letting montages send the blendout(interrupt==true) event even if Stop() was already called, specifically by adding the following code in the "else" branch of the Stop() function:

``````
// If the montage is already blending out non-interrupted and Stop() is
// now called again with bInterrupted == true, queue a second blending-out
// event so listeners still receive the interruption notification.
if (Montage && bInterrupted)
{
    if (UAnimInstance* Inst = AnimInstance.Get())
    {
        Inst->QueueMontageBlendingOutEvent(FQueuedMontageBlendingOutEvent(Montage, bInterrupted, OnMontageBlendingOutStarted));
    }
}
``````

So, my questions are:

• Is my understanding of the design behind PlayMontageAndWait() correct, and is this really a bug? Or am I misunderstanding something, and can/should this be worked around at a higher level?
• While the proposed fix did solve the problem I encountered, I wonder whether there are unforeseen consequences of such an approach that I have missed. Or is there a better way to fix it?

note: Actually I'm using a custom "PlayMontageAndWaitForEvent" node, which I took from some example, but as far as I can see, the part responsible for this problem is exactly the same in both nodes.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8049758672714233, "perplexity": 5749.506358538423}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988796.88/warc/CC-MAIN-20210507150814-20210507180814-00325.warc.gz"}
https://www.physicsforums.com/threads/functional-analysis-question.305092/
# Functional analysis question.

1. Apr 5, 2009

### math8

See the attachment.

2. Apr 5, 2009

### Dick

I really don't think there is much to show. How do you define an infinite sum like the sum of x_k*e_k? I would say it's the sequence whose i-th term is the sum of the i-th terms of all of the x_k*e_k. So for a given i there's only one sequence with a nonzero term. You definitely don't want to start trying to prove the partial sums converge in the l_infinity norm. They don't, unless x converges to zero (in the real infinite sequence sense).

3. Apr 6, 2009

### maze

You can use LaTeX code on the forum by using the [ tex ]LaTeX Code Goes Here[ /tex ] tags (without the spaces).
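For completeness, Dick's claim in post 2 can be checked in one line. Writing $s_n = \sum_{k \le n} x_k e_k$ for the partial sums, with $e_k$ the standard unit sequences (a sketch of the standard argument):

\[
\|x - s_n\|_{\infty} \;=\; \Bigl\|\sum_{k>n} x_k e_k\Bigr\|_{\infty} \;=\; \sup_{k>n} |x_k| \;\longrightarrow\; 0 \quad\Longleftrightarrow\quad x_k \to 0,
\]

so the partial sums converge to $x$ in the $\ell^{\infty}$ norm exactly when the entries of $x$ tend to zero.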
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9922431707382202, "perplexity": 1801.3045201453972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822747.22/warc/CC-MAIN-20171018051631-20171018071631-00261.warc.gz"}
https://worldwidescience.org/topicpages/c/carlo+method+implemented.html
#### Sample records for carlo method implemented 1. A computationally efficient moment-preserving Monte Carlo electron transport method with implementation in Geant4 Energy Technology Data Exchange (ETDEWEB) Dixon, D.A., E-mail: [email protected] [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: [email protected] [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: [email protected] [Sandia National Laboratories, Albuquerque, NM 87123 (United States) 2015-09-15 This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data. 2. An Alternative Implementation of the Differential Operator (Taylor Series) Perturbation Method for Monte Carlo Criticality Problems International Nuclear Information System (INIS) The standard implementation of the differential operator (Taylor series) perturbation method for Monte Carlo criticality problems has previously been shown to have a wide range of applicability. In this method, the unperturbed fission distribution is used as a fixed source to estimate the change in the keff eigenvalue of a system due to a perturbation. A new method, based on the deterministic perturbation theory assumption that the flux distribution (rather than the fission source distribution) is unchanged after a perturbation, is proposed in this paper. Dubbed the F-A method, the new method is implemented within the framework of the standard differential operator method by making tallies only in perturbed fissionable regions and combining the standard differential operator estimate of their perturbations according to the deterministic first-order perturbation formula. The F-A method, developed to extend the range of applicability of the differential operator method rather than as a replacement, was more accurate than the standard implementation for positive and negative density perturbations in a thin shell at the exterior of a computational Godiva model. The F-A method was also more accurate than the standard implementation at estimating reactivity worth profiles of samples with a very small positive reactivity worth (compared to actual measurements) in the Zeus critical assembly, but it was less accurate for a sample with a small negative reactivity worth 3. 
Implementation of the probability table method in a continuous-energy Monte Carlo code system Energy Technology Data Exchange (ETDEWEB) Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States) 1998-10-01 RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5. 4. Implementation of a Monte Carlo method to model photon conversion for solar cells International Nuclear Information System (INIS) A physical model describing different photon conversion mechanisms is presented in the context of photovoltaic applications. To solve the resulting system of equations, a Monte Carlo ray-tracing model is implemented, which takes into account the coupling of the photon transport phenomena to the non-linear rate equations describing luminescence. It also separates the generation of rays from the two very different sources of photons involved (the sun and the luminescence centers). The Monte Carlo simulator presented in this paper is proposed as a tool to help in the evaluation of candidate materials for up- and down-conversion. Some application examples are presented, exploring the range of values that the most relevant parameters describing the converter should have in order to give significant gain in photocurrent 5. Implementation of hybrid variance reduction methods in a multi group Monte Carlo code for deep shielding problems Energy Technology Data Exchange (ETDEWEB) Somasundaram, E.; Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97332-5902 (United States) 2013-07-01 In this paper, the work that has been done to implement variance reduction techniques in a three dimensional, multi group Monte Carlo code - Tortilla, that works within the frame work of the commercial deterministic code - Attila, is presented. 
This project is aimed to develop an integrated Hybrid code that seamlessly takes advantage of the deterministic and Monte Carlo methods for deep shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross section library and source definitions. Tortilla can also read importance functions (like adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques that are implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods make use of the results from an adjoint deterministic calculation to bias the particle transport using techniques like source biasing, survival biasing, transport biasing and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors) 6. Implementation of hybrid variance reduction methods in a multi group Monte Carlo code for deep shielding problems International Nuclear Information System (INIS) In this paper, the work that has been done to implement variance reduction techniques in a three dimensional, multi group Monte Carlo code - Tortilla, that works within the frame work of the commercial deterministic code - Attila, is presented. This project is aimed to develop an integrated Hybrid code that seamlessly takes advantage of the deterministic and Monte Carlo methods for deep shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross section library and source definitions. Tortilla can also read importance functions (like adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques that are implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods make use of the results from an adjoint deterministic calculation to bias the particle transport using techniques like source biasing, survival biasing, transport biasing and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors) 7. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method Science.gov (United States) Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P. 2008-06-01 An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. 
This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or output instantaneous particle data obtained by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study. 8. Exploring Monte Carlo methods CERN Document Server Dunn, William L 2012-01-01 Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble 9. MontePython: Implementing Quantum Monte Carlo using Python OpenAIRE J.K. Nilsen 2006-01-01 We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system for which to apply QMC, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and we describe how to implement these methods in pure C++ and C++/Python. Furthermore we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible. 10. Clinical implementation of a GPU-based simplified Monte Carlo method for a treatment planning system of proton beam therapy International Nuclear Information System (INIS) We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the compute-unified device architecture platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to the computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30–16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged 9–67 s.
The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning. (note) 11. Shell model Monte Carlo methods International Nuclear Information System (INIS) We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of γ-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs 12. Qualitative Simulation of Photon Transport in Free Space Based on Monte Carlo Method and Its Parallel Implementation Directory of Open Access Journals (Sweden) Jimin Liang 2010-01-01 Full Text Available During the past decade, the Monte Carlo method has found wide application in optical imaging to simulate the photon transport process inside tissues. However, this method has not been effectively extended to the simulation of free-space photon transport at present. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of lens systems is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. Also, the focusing effect of the camera lens is considered to establish the relationship of corresponding points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results. 13. Monte Carlo Methods in Physics International Nuclear Information System (INIS) The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of the various generation methods. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show statistical distributions obeying the expected statistical laws. Further, some applications of the Monte Carlo methods in physics are given. The physical problems are chosen such that the models have solutions available, either exact or approximate, against which the Monte Carlo calculations can be compared. Comparisons show that, for the models considered, good agreement has been obtained. A minimal sketch of the Metropolis step mentioned here follows.
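A toy Python illustration of Metropolis sampling from an unnormalized weight function (an example of mine, not code from the cited paper):

```python
import random, math

def metropolis(weight, x0=0.0, step=1.0, n=100_000):
    """Metropolis sampling: propose a symmetric move, accept with
    probability min(1, weight(y)/weight(x)), otherwise keep x."""
    x, out = x0, []
    for _ in range(n):
        y = x + random.uniform(-step, step)
        if random.random() < min(1.0, weight(y) / weight(x)):
            x = y
        out.append(x)
    return out

# Standard-normal weight exp(-x^2/2): the sample mean of x^2 should be near 1.
chain = metropolis(lambda x: math.exp(-0.5 * x * x))
print(sum(v * v for v in chain) / len(chain))
```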
14. Criticality calculations on pebble-bed HTR-PROTEUS configuration as a validation for the pseudo-scattering tracking method implemented in the MORET 5 Monte Carlo code International Nuclear Information System (INIS) The MORET code is a three-dimensional Monte Carlo criticality code. It is designed to calculate the effective multiplication factor (keff) of any geometrical configuration as well as the reaction rates in the various volumes and the neutron leakage out of the system. A recent development for the MORET code consists of the implementation of an alternate neutron tracking method, known as the pseudo-scattering tracking method. This method has been successfully implemented in the MORET code and its performance has been tested by means of an extensive parametric study on very simple geometrical configurations. In this context, the goal of the present work is to validate the pseudo-scattering method against realistic configurations. In this perspective, pebble-bed cores are particularly well-adapted cases to model, as they exhibit a large number of volumes stochastically arranged on two different levels (the pebbles in the core and the TRISO particles inside each pebble). This paper will introduce the techniques and methods used to model pebble-bed cores in a realistic way. The results of the criticality calculations, as well as the pseudo-scattering tracking method performance in terms of computation time, will also be presented. (authors) 15. Criticality calculations on realistic modelling of pebble-bed HTR-PROTEUS as a validation for the Woodcock tracking method implemented in the MORET 5 Monte Carlo code International Nuclear Information System (INIS) The MORET code is a three-dimensional Monte Carlo criticality code. It is designed to calculate the effective multiplication factor (keff) of any geometrical configuration as well as the reaction rates in the various volumes and the neutron leakage out of the system. A recent development for the MORET code consists of the implementation of an alternate neutron tracking method known as the pseudo-scattering tracking method. This method has been successfully implemented in the MORET code and its performance has been tested by means of an extensive parametric study on very simple geometrical configurations. In this context, the goal of the present work is to validate the pseudo-scattering method against realistic configurations. In this perspective, pebble-bed cores are particularly well-adapted cases to model, as they exhibit a large number of volumes stochastically arranged on two different levels (the pebbles in the core and the TRISO particles inside each pebble). This paper will introduce the techniques and methods used to model pebble-bed cores in a realistic way. The results of the criticality calculations, as well as the pseudo-scattering tracking method performance in terms of computation time, will be presented. (authors) A compact sketch of the pseudo-scattering (Woodcock) tracking idea follows.
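The idea: sample flight distances from a constant majorant cross section and accept a collision at x with probability σ(x)/σ_max, treating the remainder as virtual collisions that do not deflect the particle. A hedged Python sketch with an invented toy cross section (names are illustrative, not from the MORET code):

```python
import random, math

def woodcock_distance(sigma_of, sigma_max, x0=0.0):
    """Distance to the next real collision along a ray, by Woodcock
    (pseudo-scattering/delta) tracking against the majorant sigma_max."""
    x = x0
    while True:
        # Exponential flight in the fictitious homogeneous majorant medium;
        # 1 - random() avoids log(0).
        x += -math.log(1.0 - random.random()) / sigma_max
        if random.random() < sigma_of(x) / sigma_max:
            return x          # real collision; otherwise a virtual one

# Toy heterogeneous medium: cross section varies along x, majorant = 2.0.
sigma = lambda x: 0.5 + 1.5 * abs(math.sin(x))
print(woodcock_distance(sigma, sigma_max=2.0))
```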
16. Efficient implementation of the Monte Carlo method for lattice gauge theory calculations on the Floating Point Systems FPS-164 International Nuclear Information System (INIS) The computer program calculates the average action per plaquette for SU(6)/Z6 lattice gauge theory. By considering quantum field theory on a space-time lattice, the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure. Making use of the FPS Mathematics Library (MATHLIB), we are able to generate an efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600. (orig.) 17. Extending canonical Monte Carlo methods Science.gov (United States) Velazquez, L.; Curilef, S. 2010-02-01 In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions; the extended algorithms successfully reduce the exponential divergence of the decorrelation time τ with increasing system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model. 18. Monte Carlo methods for applied scientists CERN Document Server Dimov, Ivan T 2007-01-01 The Monte Carlo method is inherently parallel, and the extensive and rapid development in parallel computers, computational clusters and grids has resulted in renewed and increasing interest in this method. At the same time there has been an expansion in the application areas and the method is now widely used in many important areas of science including nuclear and semiconductor physics, statistical mechanics and heat and mass transfer. This book attempts to bridge the gap between theory and practice, concentrating on modern algorithmic implementation on parallel architecture machines. Although 19. IMPLEMENTATION METHOD Directory of Open Access Journals (Sweden) Cătălin LUPU 2009-06-01 Full Text Available This article presents applications of the "divide et impera" method using object-oriented programming in C#. The main advantage of using "divide et impera" is that it allows the software to reduce the complexity of the problem, since the decomposed sub-problems and the data, shared into smaller groups, are simpler to handle (e.g., the QuickSort sub-algorithm). Object-oriented programming means programs with new types that integrate both data and the methods associated with the creation, processing, and destruction of such data. Advantages are gained through abstraction in programming (the program is no longer a succession of processing steps, but a set of objects that come to life, have different properties, are capable of specific actions, and interact within the program). New techniques of instantiation, derivation, and polymorphism of object types are also discussed. 20. TH-A-19A-04: Latent Uncertainties and Performance of a GPU-Implemented Pre-Calculated Track Monte Carlo Method International Nuclear Information System (INIS) Purpose: Assessing the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPU).
While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implemented on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes in the same conditions. A latent uncertainty analysis was performed by comparing GPU-PMC dose values to a "ground truth" benchmark while varying the track bank size and primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg Peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE) and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed a 937× and 508× gain over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve an optimal efficiency can be tuned based on the desired uncertainty. Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy. 1. Simulations with the Hybrid Monte Carlo algorithm: implementation and data analysis CERN Document Server Schaefer, Stefan 2011-01-01 This tutorial gives a practical introduction to the Hybrid Monte Carlo algorithm and the analysis of Monte Carlo data. The method is exemplified at the ϕ⁴ theory, for which all steps from the derivation of the relevant formulae to the actual implementation in a computer program are discussed in detail. It concludes with the analysis of Monte Carlo data, in particular their auto-correlations. A minimal HMC sketch follows.
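As a much smaller illustration than the ϕ⁴ tutorial, here is a single-variable Hybrid Monte Carlo update with a Gaussian action U(q) = q²/2 in Python; this is a sketch of mine, not the tutorial's code:

```python
import random, math

def hmc_step(q, U, grad_U, eps=0.1, n_leap=20):
    """One HMC update: momentum refresh, leapfrog integration of the
    fictitious dynamics, then a Metropolis accept/reject on the energy."""
    p = random.gauss(0.0, 1.0)
    q_new, p_new = q, p - 0.5 * eps * grad_U(q)     # initial half kick
    for i in range(n_leap):
        q_new += eps * p_new                        # drift
        if i < n_leap - 1:
            p_new -= eps * grad_U(q_new)            # full kick
    p_new -= 0.5 * eps * grad_U(q_new)              # final half kick
    dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
    return q_new if dH <= 0 or random.random() < math.exp(-dH) else q

# Gaussian action U(q) = q^2/2: the chain should reproduce <q^2> = 1.
U, grad_U = lambda q: 0.5 * q * q, lambda q: q
q, chain = 0.0, []
for _ in range(20_000):
    q = hmc_step(q, U, grad_U)
    chain.append(q)
print(sum(v * v for v in chain) / len(chain))       # approximately 1
```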
2. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations International Nuclear Information System (INIS) If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors. 3. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations Energy Technology Data Exchange (ETDEWEB) Urbatsch, T.J. 1995-11-01 If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors. 4. Monte Carlo methods for particle transport CERN Document Server Haghighat, Alireza 2015-01-01 The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations.
Concise yet comprehensive, this well-organized text:

* Introduces the particle importance equation and its use for variance reduction
* Describes general and particle-transport-specific variance reduction techniques
* Presents particle transport eigenvalue issues and methodologies to address these issues
* Explores advanced formulations based on the author's research activities
* Discusses parallel processing concepts and factors affecting parallel performance

Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ... 5. Use of Monte Carlo Methods in brachytherapy International Nuclear Information System (INIS) The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy the main handicap of experimental dosimetry is the high dose gradient near the sources, where small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, shielding calculations, or obtaining dose distributions around applicators. (Author) 6. Experience with the Monte Carlo Method International Nuclear Information System (INIS) Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles laboratory experiments in many respects. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed 7. A Multivariate Time Series Method for Monte Carlo Reactor Analysis International Nuclear Information System (INIS) A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed Coarse Mesh Projection Method (CMPM) and can be implemented using the coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory.
The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three-dimensional modeling of the initial core of a pressurized water reactor. 8. Extending canonical Monte Carlo methods: II Science.gov (United States) Velazquez, L.; Curilef, S. 2010-04-01 We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. Now, we improve this methodology by including the finite size effects that reduce the precision of a direct determination of the microcanonical caloric curve β(E) = ∂S(E)/∂E, as well as by carrying out a better implementation of the MC schemes. We show that, despite the modifications considered, the extended canonical MC methods lead to an impressive overcoming of the so-called supercritical slowing down observed close to the region of the temperature driven first-order phase transition. In this case, the size dependence of the decorrelation time τ is reduced from an exponential growth to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model where the exponent α = 0.14–0.18. 9. Clinical implementation of full Monte Carlo dose calculation in proton beam therapy Energy Technology Data Exchange (ETDEWEB) Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States)] 2008-09-07 The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, both due to dose degradation and overall differences in range prediction due to bony anatomy in the beam path. Further, the Monte Carlo reports dose-to-tissue as compared to dose-to-water by the planning system.
Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical 10. Guideline for radiation transport simulation with the Monte Carlo method International Nuclear Information System (INIS) Today, photon and neutron transport calculations with the Monte Carlo method have progressed thanks to advanced Monte Carlo codes and high-speed computers. "Monte Carlo simulation" is a more suitable expression than "calculation". As Monte Carlo codes become more friendly and the performance of computers progresses, most shielding problems will be solved using Monte Carlo codes and high-speed computers. As those codes prepare the standard input data for some problems, the essential techniques of the Monte Carlo method and the variance reduction techniques of the Monte Carlo calculation may no longer hold the interest of general Monte Carlo users. In this paper, essential techniques of the Monte Carlo method and variance reduction techniques, such as the importance sampling method, selection of estimator, and biasing technique, are described to afford a better understanding of the Monte Carlo method and Monte Carlo codes. (author) 11. On the Convergence of Adaptive Sequential Monte Carlo Methods OpenAIRE Beskos, Alexandros; Jasra, Ajay; Kantas, Nikolas; Thiery, Alexandre 2013-01-01 In several implementations of Sequential Monte Carlo (SMC) methods it is natural, and important in terms of algorithmic efficiency, to exploit the information of the history of the samples to optimally tune their subsequent propagations. In this article we provide a carefully formulated asymptotic theory for a class of such adaptive SMC methods. The theoretical framework developed here will cover, under assumptions, several commonly used SMC algorithms. There are only limited results a... 12. Monte Carlo methods beyond detailed balance NARCIS (Netherlands) Schram, Raoul D.; Barkema, Gerard T. 2015-01-01 Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying 13. Extending canonical Monte Carlo methods: II International Nuclear Information System (INIS) We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. In this case, the size dependence of the decorrelation time τ is reduced from an exponential growth to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model where the exponent α = 0.14–0.18. 14. Introduction to the Monte Carlo methods International Nuclear Information System (INIS) Codes illustrating the use of Monte Carlo methods in high energy physics such as the inverse transformation method, the ejection method, the particle propagation through the nucleus, the particle interaction with the nucleus, etc. are presented. A set of useful algorithms of random number generators is given (the binomial distribution, the Poisson distribution, β-distribution, γ-distribution and normal distribution). 5 figs., 1 tab 15. The Monte Carlo method: the method of statistical trials CERN Document Server Shreider, YuA 1966-01-01 The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications.
Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio 16. A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX International Nuclear Information System (INIS) In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. Aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized and a description of what a user should provide in order to use it. (orig.) 17. A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX Energy Technology Data Exchange (ETDEWEB) Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Nason, Paolo [INFN, Milano-Bicocca (Italy); Oleari, Carlo [INFN, Milano-Bicocca (Italy); Milano-Bicocca Univ. (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology 2010-02-15 In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. Aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized and a description of what a user should provide in order to use it. (orig.) 18. The Moment Guided Monte Carlo Method OpenAIRE Degond, Pierre; Dimarco, Giacomo; Pareschi, Lorenzo 2009-01-01 In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which permits to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non equilibrium term. The basic idea, on which the method relies, consists in guiding the p... 19. New Dynamic Monte Carlo Renormalization Group Method OpenAIRE Lacasse, Martin-D.; Vinals, Jorge; Grant, Martin 1992-01-01 The dynamical critical exponent of the two-dimensional spin-flip Ising model is evaluated by a Monte Carlo renormalization group method involving a transformation in time. The results agree very well with a finite-size scaling analysis performed on the same data. The value of $z = 2.13 \\pm 0.01$ is obtained, which is consistent with most recent estimates. 20. Monte Carlo methods for preference learning DEFF Research Database (Denmark) Viappiani, P. 2012-01-01 Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query the users about their preferences and give recommendations based on the system’s belief about the utility function. Critical to these applications is th...... is the acquisition of prior distribution about the utility parameters and the possibility of real time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.... 1. 
Fast sequential Monte Carlo methods for counting and optimization CERN Document Server 2013-01-01 A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the 2. by means of FLUKA Monte Carlo method Directory of Open Access Journals (Sweden) Ermis Elif Ebru 2015-01-01 Full Text Available Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals were carried out by means of FLUKA Monte Carlo (MC method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen in the calculations. Calculated coefficients were also compared with the National Institute of Standards and Technology (NIST values. Obtained results through this method were highly in accordance with those of the NIST values. It was concluded from the study that FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of the detector materials. 3. The Moment Guided Monte Carlo Method CERN Document Server Degond, Pierre; Pareschi, Lorenzo 2009-01-01 In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which permits to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non equilibrium term. The basic idea, on which the method relies, consists in guiding the particle positions and velocities through moment equations so that the concurrent solution of the moment and kinetic models furnishes the same macroscopic quantities. 4. Reactor perturbation calculations by Monte Carlo methods International Nuclear Information System (INIS) Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF.9 digital computer. (author) 5. 
Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems Science.gov (United States) Slattery, Stuart R. This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA, with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique, a group of modern preconditioning strategies is researched. MCSA, when compared to conventional Krylov methods, demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It 6. Monte Carlo method in radiation transport problems International Nuclear Information System (INIS) In neutral radiation transport problems (neutrons, photons), two values are important: the flux in the phase space and the density of particles. Solving the problem with the Monte Carlo method leads, among other things, to building a statistical process (called the play) and to assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented. The necessity of play biasing is proved. A biased simulation is made. Finally, current developments (rewriting of programs, for instance) are presented, motivated by several reasons: two of them are the advent of vector computation and photon and neutron transport in void media 7. Introduction to Monte-Carlo method International Nuclear Information System (INIS) We recall first some well-known facts about random variables and sampling. Then we define the Monte-Carlo method in the case where one wants to compute a given integral. Afterwards, we shift to discrete Markov chains, for which we define random walks and apply them to finite difference approximations of diffusion equations. Finally we consider Markov chains with continuous state (but discrete time), transition probabilities and random walks, which are the main piece of this work.
8. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code (Science.gov; Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.; 2016)
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers, and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.

9. A new method for commissioning Monte Carlo treatment planning systems (Science.gov; Aljarrah, Khaled Mohammed; 2005)
The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation in the radiation treatment of cancer. However, the modeling of an individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x-rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial-and-error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions, for a grid of assumed beam energies and radii, with measurement data in a water phantom. Different cost functions were studied to choose the appropriate function for the data comparison. The beam parameters were determined in light of this method. Under the assumption that linacs of the same type are exactly the same in their geometries and differ only by their initial phase space parameters, the results of this method were considered as source data for commissioning other machines of the same type.
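Item 9's trial-and-error commissioning can be read as a grid search with a cost function. The following R sketch mimics that workflow with a purely hypothetical analytic dose model standing in for the Monte Carlo dose engine; `dose_model`, the noise level and the parameter grids are all invented for illustration:

```
set.seed(7)
depth <- seq(0, 20, by = 0.5)                  # depth in cm
# Hypothetical stand-in for a Monte Carlo depth-dose calculation
dose_model <- function(E, r) exp(-depth / E) * (1 + r * depth / 20)
measured <- dose_model(6.2, 0.15) + rnorm(length(depth), sd = 0.005)

energies <- seq(5, 8, by = 0.1)                # candidate beam energies
radii    <- seq(0.05, 0.30, by = 0.01)         # candidate radial parameters
# Sum-of-squares cost over the whole grid of (energy, radius) pairs
cost <- outer(energies, radii,
              Vectorize(function(E, r) sum((dose_model(E, r) - measured)^2)))
best <- arrayInd(which.min(cost), dim(cost))
c(energy = energies[best[1]], radius = radii[best[2]])
```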
10. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm (KAUST Repository; Hoel, Hakon; 2014)
We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDE). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single-level Euler-Maruyama Monte Carlo method from O(TOL^-3) to O(TOL^-2 log(TOL^-1)^2) for a mean square error of O(TOL^2). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity, but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single-level method is O(TOL^-4). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL^-3) for the adaptive single-level algorithm to essentially O(TOL^-2 log(TOL^-1)^2) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.

11. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing (CERN Document Server; Nuyens, Dirk; 2016)
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems arising, in particular, in finance, statistics and computer graphics.
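To illustrate the multilevel idea behind item 10, the sketch below couples coarse and fine Euler-Maruyama paths of a geometric Brownian motion (same Brownian increments on both grids) and combines the levels by the MLMC telescoping sum. The SDE, the number of levels and the per-level sample sizes are illustrative assumptions, not those analyzed in the paper:

```
set.seed(3)
# Level-l correction sample for E[X_T], dX = mu*X dt + sig*X dW,
# coupling 2^l fine steps with 2^(l-1) coarse steps
mlmc_level <- function(l, n, mu = 0.05, sig = 0.2, T = 1) {
  nf <- 2^l; hf <- T / nf
  Xf <- rep(1, n)
  if (l == 0) {                            # coarsest level: plain estimate
    dW <- rnorm(n, sd = sqrt(T))
    return(Xf + mu * Xf * T + sig * Xf * dW)
  }
  Xc <- rep(1, n); hc <- 2 * hf
  for (k in 1:(nf / 2)) {
    dW1 <- rnorm(n, sd = sqrt(hf)); dW2 <- rnorm(n, sd = sqrt(hf))
    Xf <- Xf + mu * Xf * hf + sig * Xf * dW1
    Xf <- Xf + mu * Xf * hf + sig * Xf * dW2
    Xc <- Xc + mu * Xc * hc + sig * Xc * (dW1 + dW2)  # same noise, coarse step
  }
  Xf - Xc                                  # correction P_l - P_{l-1}
}

# Telescoping sum: E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}],
# with geometrically fewer samples on the expensive fine levels
est <- mean(mlmc_level(0, 1e4)) +
  sum(sapply(1:5, function(l) mean(mlmc_level(l, ceiling(1e4 / 2^l)))))
est   # should be close to exp(0.05), about 1.0513
```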
12. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with the MCNP code (Science.gov; He, Tongming Tony)
In IMRT inverse planning, inaccurate dose calculations and limitations in optimization algorithms introduce both systematic and convergence errors into treatment plans. The goal of this work is to practically implement a Monte Carlo based inverse planning model for clinical IMRT. The intention is to minimize both types of error in inverse planning and obtain treatment plans with better clinical accuracy than non-Monte Carlo based systems. The strategy is to calculate the dose matrices of small beamlets by a Monte Carlo based method, followed by optimization of beamlet intensities on the calculated dose data, using an optimization algorithm that is capable of escaping local minima and prevents possible premature convergence. The MCNP 4B Monte Carlo code is improved to perform fast particle transport and dose tallying in lattice cells by adopting a selective transport and tallying algorithm. Efficient dose matrix calculation for small beamlets is made possible by adopting a scheme that allows concurrent calculation of multiple beamlets of a single port. A finite-sized point source (FSPS) beam model is introduced for easy and accurate beam modeling. A DVH-based objective function and a parallel platform based algorithm are developed for the optimization of intensities. The calculation accuracy of the improved MCNP code and the FSPS beam model is validated by dose measurements in phantoms. Agreements better than 1.5% or 0.2 cm have been achieved. Applications of the implemented model to clinical cases of brain, head/neck, lung, spine, pancreas and prostate have demonstrated the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT. Dose distributions of selected treatment plans from a commercial non-Monte Carlo based system are evaluated in comparison with Monte Carlo based calculations. Systematic errors of up to 12% in tumor doses and up to 17% in critical structure doses have been observed. The clinical importance of Monte Carlo based …

13. Method of tallying adjoint fluence and calculating kinetics parameters in Monte Carlo codes (International Nuclear Information System, INIS)
A method of using the iterated fission probability to estimate the adjoint fluence during particle simulation, and of using it as the weighting function to calculate the kinetics parameters βeff and Λ in Monte Carlo codes, is introduced in this paper. Implementations of this method in the continuous-energy Monte Carlo code MCNP and the multi-group Monte Carlo code MCMG are both elaborated. Verification results show that, with negligible additional computing cost, the adjoint fluence tallied by MCMG matches well with the result computed by ANISN, and the kinetics parameters calculated by MCNP agree very well with benchmarks. This method is proved to be reliable, and the function of calculating kinetics parameters in Monte Carlo codes is carried out effectively; it could be the basis for the use of Monte Carlo codes in the analysis of nuclear reactors' transient behavior. (authors)

14. Accelerated Monte Carlo Methods for Coulomb Collisions (Science.gov; Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce; 2014)
We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution to collisional plasma problems from O(ε^-3), for the standard state-of-the-art Langevin and binary collision algorithms, to a theoretically optimal O(ε^-2) scaling, when used in conjunction with an underlying Milstein discretization of the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
15. Monte Carlo method with complex-valued weights for frequency domain analyses of neutron noise (International Nuclear Information System, INIS)
Highlights: • The transport equation of the neutron noise is solved with the Monte Carlo method. • A new Monte Carlo algorithm in which complex-valued weights are treated is developed. • The Monte Carlo algorithm is verified by comparison with analytical solutions. • The results of the Monte Carlo method are compared with the diffusion theory.
Abstract: A Monte Carlo algorithm to solve the transport equation of the neutron noise in the frequency domain has been developed to extend the conventional diffusion theory of the neutron noise to transport theory. In this paper, the neutron noise is defined as the stationary fluctuation of the neutron flux around its mean value, and is induced by perturbations of the macroscopic cross sections. Since the transport equation of the neutron noise is a complex equation, a Monte Carlo technique for treating complex-valued weights, recently proposed for neutron leakage-corrected calculations, has been introduced to solve the complex equation. To cancel the positive and negative values of complex-valued weights, an algorithm similar to the power iteration method has been implemented. The newly developed Monte Carlo algorithm is benchmarked against analytical solutions in an infinite homogeneous medium. The neutron noise spatial distributions have been obtained both with the newly developed Monte Carlo method and with the conventional diffusion method for an infinitely long homogeneous cylinder. The results of the Monte Carlo method agree well with those of the diffusion method. However, near the noise source induced by a high-frequency perturbation, significant differences are found between the diffusion method and the Monte Carlo method. The newly developed Monte Carlo algorithm is expected to contribute to improving the calculation accuracy of the neutron noise.

16. Improved criticality convergence via a modified Monte Carlo iteration method (Energy Technology Data Exchange (ETDEWEB); Booth, Thomas E.; Gubernatis, James E. (Los Alamos National Laboratory); 2009)
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k_3|/k_1 instead of |k_2|/k_1. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this can sometimes be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc, manner.
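Item 16 modifies the power iteration that underlies Monte Carlo criticality calculations. A minimal deterministic R illustration of the unmodified iteration shows the quantities involved: the iterate converges to the dominant eigenvector at a rate set by |k_2|/k_1, the very ratio the modified method replaces by |k_3|/k_1 (the matrix is an arbitrary stand-in for the fission operator):

```
A <- matrix(c(4, 1, 0,
              1, 3, 1,
              0, 1, 2), 3, 3, byrow = TRUE)  # symmetric, arbitrary example
x <- rep(1, 3)
for (i in 1:50) {
  x <- A %*% x
  x <- x / sqrt(sum(x^2))        # normalize each iterate
}
k1 <- as.numeric(t(x) %*% A %*% x)   # Rayleigh quotient estimate of k_1
ev <- eigen(A)$values
c(k1_power = k1, rate = abs(ev[2]) / ev[1])  # convergence rate |k_2|/k_1
```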
17. Use of Monte Carlo methods in brachytherapy / Uso del metodo de Monte Carlo en braquiterapia (Energy Technology Data Exchange (ETDEWEB); Granero Cabanero, D.; 2015)
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, so that small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, the calculation of shielding barriers, and the determination of dose distributions around applicators. (Author)

18. Advanced computational methods for nodal diffusion, Monte Carlo, and S_N problems (Science.gov; Martin, W. R.; 1993)
This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems; an alternative line relaxation scheme is being implemented into a nodal diffusion code, and simplified P2 has been implemented into this code. (2) The Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations; this work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional S_N problems; new transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional S_N problems on irregular spatial grids; a symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) The response history method for speeding up the Monte Carlo calculation of electron transport problems; this method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.

19. Rare event simulation using Monte Carlo methods (CERN Document Server; Rubino, Gerardo; 2009)
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. …
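Importance sampling, the first of the two tools named in item 19, fits in a few lines of R: to estimate the rare probability P(X > 4) for X ~ N(0,1), draw from a proposal shifted into the rare region and reweight by the likelihood ratio. The threshold and the N(4, 1) proposal are illustrative choices:

```
set.seed(11)
n <- 1e5
a <- 4                               # rare-event threshold, P(X > 4) ~ 3.2e-5
# Naive Monte Carlo: almost no samples fall in the rare region
x <- rnorm(n)
p_naive <- mean(x > a)
# Importance sampling: propose from N(a, 1) and reweight
y <- rnorm(n, mean = a)
w <- dnorm(y) / dnorm(y, mean = a)   # likelihood ratio
p_is <- mean((y > a) * w)
c(exact = pnorm(a, lower.tail = FALSE), naive = p_naive, is = p_is)
```

With the naive estimator, essentially all samples are wasted; the shifted proposal places about half of them beyond the threshold, which is why importance sampling is the workhorse of rare-event simulation.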
20. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code (International Nuclear Information System, INIS)
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

1. Combinatorial nuclear level density by a Monte Carlo method (OpenAIRE; Cerf, N.; 1993)
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure, which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning t…

2. Neutron transport calculations using Quasi-Monte Carlo methods (Energy Technology Data Exchange (ETDEWEB); Moskowitz, B. S.; 1997)
This paper examines the use of quasirandom sequences of points in place of pseudorandom points in Monte Carlo neutron transport calculations. For two simple demonstration problems, the root mean square error, computed over a set of repeated runs, is found to be significantly less when quasirandom sequences are used ("Quasi-Monte Carlo Method") than when a standard Monte Carlo calculation is performed using only pseudorandom points.

3. Monte Carlo method for solving a parabolic problem (Directory of Open Access Journals; Tian Yi; 2016)
In this paper, we present a numerical method based on random sampling for a parabolic problem. This method combines the use of the Crank-Nicolson method and the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method and obtain a large sparse system of linear algebraic equations, then use the Monte Carlo method to solve the linear algebraic equations. To illustrate the usefulness of this technique, we apply it to some test problems.
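The comparison in item 2 is easy to reproduce on a toy integral. The sketch below hand-codes a base-2 radical inverse (the one-dimensional building block of Halton sequences) so that no extra package is needed, then compares quasirandom and pseudorandom errors; the integrand is an arbitrary smooth test function:

```
# Radical inverse in base b: the 1-D building block of Halton sequences
radical_inverse <- function(i, b = 2) {
  r <- 0; f <- 1 / b
  while (i > 0) {
    r <- r + f * (i %% b)
    i <- i %/% b
    f <- f / b
  }
  r
}

n <- 2^12
u_quasi  <- sapply(1:n, radical_inverse)   # low-discrepancy points in (0,1)
set.seed(2)
u_pseudo <- runif(n)

g <- function(x) sin(pi * x)               # exact integral on (0,1) is 2/pi
c(err_quasi  = abs(mean(g(u_quasi))  - 2 / pi),
  err_pseudo = abs(mean(g(u_pseudo)) - 2 / pi))
```

On smooth low-dimensional problems like this one, the quasirandom error is typically orders of magnitude smaller at the same n, consistent with the RMS-error findings quoted above.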
4. On the feasibility of a homogenised multi-group Monte Carlo method in reactor analysis (International Nuclear Information System, INIS)
The use of homogenised multi-group cross sections to speed up Monte Carlo calculation has been studied to some extent, but the method is not widely implemented in modern calculation codes. This paper presents a calculation scheme in which homogenised material parameters are generated using the PSG continuous-energy Monte Carlo reactor physics code and used by MORA, a new full-core Monte Carlo code entirely based on homogenisation. The theory of homogenisation and its implementation in the Monte Carlo method are briefly introduced. The PSG-MORA calculation scheme is put into practice in two fundamentally different test cases: a small sodium-cooled fast reactor (JOYO) and a large PWR core. It is shown that the homogenisation results in a dramatic increase in efficiency. The results are in reasonably good agreement with reference PSG and MCNP5 calculations, although fission source convergence becomes a problem in the PWR test case. (authors)

5. Quantum Monte Carlo methods: algorithms for lattice models (CERN Document Server; Gubernatis, James; Werner, Philipp; 2016)
Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in …

6. Monte Carlo methods in ab initio quantum chemistry: quantum Monte Carlo for molecules (CERN Document Server; Lester, William A.; Reynolds, P. J.; 1994)
This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential. Some distinguishing features of this book are: clear exposition of the basic theory at a level that facilitates independent study; discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n…

7. Inference in Kingman's Coalescent with Particle Markov Chain Monte Carlo Method (OpenAIRE; Chen, Yifei; Xie, Xiaohui; 2013)
We propose a new algorithm for posterior sampling of Kingman's coalescent, based upon the Particle Markov Chain Monte Carlo methodology. Specifically, the algorithm is an instantiation of the Particle Gibbs Sampling method, which alternately samples coalescent times conditioned on coalescent tree structures, and tree structures conditioned on coalescent times via the conditional Sequential Monte Carlo procedure. We implement our algorithm as a C++ package, and demonstrate its utility via a …
8. On the Markov Chain Monte Carlo (MCMC) method (Rajeeva L. Karandikar; 2006)
Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.

9. A Particle Population Control Method for Dynamic Monte Carlo (Science.gov; Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony; 2014)
A general particle population control method has been derived from splitting and Russian roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK), and examples of its use are shown for both super-critical and sub-critical systems.

10. Problems in radiation shielding calculations with Monte Carlo methods (International Nuclear Information System, INIS)
The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving a large shielding system that includes radiation streaming. The Monte Carlo coupling technique was developed to settle such shielding problems accurately. However, the variance of the Monte Carlo results obtained with the coupling technique, for detectors located outside the radiation streaming, was still not small enough. In order to obtain more accurate results for the detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable "prism scattering technique" is proposed in this study. (author)

11. Monte Carlo methods and applications in nuclear physics (International Nuclear Information System, INIS)
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

12. Implementing Newton's Method (OpenAIRE; Neuerburg, Kent M.; 2007)
Newton's Method, the recursive algorithm for computing the roots of an equation, is one of the most efficient and best-known numerical techniques. The basics of the method are taught in any first-year calculus course. However, in most cases the two most important questions are often left unanswered. These questions are "Where do I start?" and "When do I stop?" We give criteria for determining when a given value is a good starting value and how many iterations it will take to …
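Item 12's two questions, where to start and when to stop, show up even in the smallest implementation. A minimal R version with an explicit stopping tolerance and an iteration cap (the test function, starting value and tolerance are arbitrary illustrations):

```
newton <- function(f, df, x0, tol = 1e-10, max_iter = 50) {
  x <- x0
  for (i in 1:max_iter) {
    step <- f(x) / df(x)
    x <- x - step
    if (abs(step) < tol) {          # stopping rule: the last update is tiny
      return(list(root = x, iterations = i))
    }
  }
  warning("no convergence: bad starting value or a flat derivative?")
  list(root = x, iterations = max_iter)
}

# Example: square root of 2 as the positive root of x^2 - 2
newton(function(x) x^2 - 2, function(x) 2 * x, x0 = 1)
```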
13. A new method for the calculation of diffusion coefficients with Monte Carlo (International Nuclear Information System, INIS)
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods. (author)

14. A New Method for the Calculation of Diffusion Coefficients with Monte Carlo (Science.gov; Dorval, Eric; 2014)
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.

15. Implementation of Rosenbrock methods (Energy Technology Data Exchange (ETDEWEB); Shampine, L. F.; 1980)
Rosenbrock formulas have shown promise in research codes for the solution of initial-value problems for stiff systems of ordinary differential equations (ODEs). To help assess their practical value, the author wrote an item of mathematical software based on such a formula. This required a variety of algorithmic and software developments; those of general interest are reported in this paper. Among them is a way to select automatically, at every step, an explicit Runge-Kutta formula or a Rosenbrock formula according to the stiffness of the problem. Solving linear systems is important to methods for stiff ODEs, and is rather special for Rosenbrock methods. A cheap, effective estimate of the condition of the linear systems is derived. Some numerical results are presented to illustrate the developments.

16. Stochastic simulation and Monte-Carlo methods / Simulation stochastique et methodes de Monte-Carlo (Energy Technology Data Exchange (ETDEWEB); Graham, C.; Talay, D.; 2011)
This book presents some numerical probabilistic simulation methods together with their convergence rates. It combines mathematical precision and numerical development, each proposed method belonging to a precise theoretical context developed in a rigorous and self-sufficient manner. After some reminders about the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. They then develop a chapter on non-asymptotic estimates of Monte-Carlo method errors. This chapter recalls the central limit theorem and makes its convergence rate precise. It introduces the Log-Sobolev and concentration inequalities, a subject that has developed greatly during the last years. This chapter ends with some variance reduction techniques. In order to demonstrate in a rigorous way the simulation results for stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essential basics of Itô calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed, and the convergence rate results of the algorithms are rigorously demonstrated. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)
17. Application of biasing techniques to the contributon Monte Carlo method (International Nuclear Information System, INIS)
Recently, a new Monte Carlo method called the Contributon Monte Carlo Method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfying results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.

18. Simulation and the Monte Carlo Method, Student Solutions Manual (CERN Document Server; Rubinstein, Reuven Y.; 2012)
This accessible new edition explores the major topics in Monte Carlo simulation. Simulation and the Monte Carlo Method, Second Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over twenty-five years ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, suc…

19. A residual Monte Carlo method for discrete thermal radiative diffusion (International Nuclear Information System, INIS)
Residual Monte Carlo methods reduce statistical error at a rate of exp(-bN), where b is a positive constant and N is the number of particle histories. Contrast this convergence rate with 1/√N, which is the rate of statistical error reduction for conventional Monte Carlo methods. Thus, residual Monte Carlo methods hold great promise for increased efficiency relative to conventional Monte Carlo methods. Previous research has shown that the application of residual Monte Carlo methods to the solution of continuum equations, such as the radiation transport equation, is problematic for all but the simplest of cases. However, the residual method readily applies to discrete systems as long as those systems are monotone, i.e., they produce positive solutions given positive sources. We develop a residual Monte Carlo method for solving a discrete 1D non-linear thermal radiative equilibrium diffusion equation, and we compare its performance with that of the discrete conventional Monte Carlo method upon which it is based. We find that the residual method provides efficiency gains of many orders of magnitude. Part of the residual gain is due to the fact that we begin each timestep with an initial guess equal to the solution from the previous timestep. Moreover, fully consistent non-linear solutions can be obtained in a reasonable amount of time because of the effective lack of statistical noise. We conclude that the residual approach has great potential and that further research into such methods should be pursued for more general discrete and continuum systems.
20. Development of Continuous-Energy Eigenvalue Sensitivity Coefficient Calculation Methods in the Shift Monte Carlo Code (Energy Technology Data Exchange (ETDEWEB); Perfetti, Christopher M.; Martin, William R.; Rearden, Bradley T.; Williams, Mark L.; 2012)
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used on several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.

1. A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation (International Nuclear Information System, INIS)
Full-core calculations are very useful and important in reactor physics analysis, especially in computing full-core power distributions, optimizing refueling strategies and analyzing the depletion of fuels. To reduce the computing time and accelerate convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed-source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of calculations but also simulate more complex geometries than repeated structures. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC method and the RMMC+MC method can efficiently reduce the computing time and the variance in the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)
2. Comparison between Monte Carlo method and deterministic method (International Nuclear Information System, INIS)
A fast critical assembly consists of a lattice of plates of sodium, plutonium or uranium, resulting in a high inhomogeneity. The inhomogeneity in the lattice should be evaluated carefully to determine the bias factor accurately. Deterministic procedures are generally used for the lattice calculation. To reduce the required calculation time, various one-dimensional lattice models have previously been developed to replace multi-dimensional models. In the present study, calculations are made for a two-dimensional model, and the results are compared with those obtained with one-dimensional models in terms of the average microscopic cross section of the lattice and the diffusion coefficient. Inhomogeneity in a lattice affects the effective cross section and the distribution of neutrons in the lattice. The background cross section determined by the method proposed by Tone is used here to calculate the effective cross section, and the neutron distribution is determined by the collision probability method. Several other methods have been proposed to calculate the effective cross section. The present study also applies the continuous-energy Monte Carlo method to the calculation. A code based on this method is employed to evaluate several one-dimensional models. (Nogami, K.)

3. Computing Functionals of Multidimensional Diffusions via Monte Carlo Methods (OpenAIRE; Jan Baldeaux; Eckhard Platen; 2012)
We discuss suitable classes of diffusion processes for which functionals relevant to finance can be computed via Monte Carlo methods. In particular, we construct exact simulation schemes for processes from this class. However, should the finance problem under consideration require e.g. continuous monitoring of the processes, the simulation algorithm can easily be embedded in a multilevel Monte Carlo scheme. We choose to introduce the finance problems under the benchmark approach, and find th…

4. Computing Greeks with Multilevel Monte Carlo Methods using Importance Sampling (OpenAIRE; Euget, Thomas; 2012)
This paper presents a new efficient way to reduce the variance of estimators of popular payoffs and Greeks encountered in financial mathematics. The idea is to apply importance sampling with the multilevel Monte Carlo method recently introduced by M.B. Giles. So far, importance sampling has proved successful in combination with the standard Monte Carlo method. We show the efficiency of our approach on the estimation of financial derivative prices and then on the estimation of Greeks (i.e. sensitivitie…

5. A New Method for Parallel Monte Carlo Tree Search (OpenAIRE; Mirsoleimani, S. Ali; Plaat, Aske; Herik, Jaap van den; Vermaseren, Jos; 2016)
In recent years there has been much interest in the Monte Carlo tree search algorithm, a new, adaptive, randomized optimization algorithm. In fields as diverse as Artificial Intelligence, Operations Research, and High Energy Physics, research has established that Monte Carlo tree search can find good solutions without domain-dependent heuristics. However, practice shows that reaching high performance on large parallel machines is not as successful as expected. This paper proposes a new method…
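At the heart of the Monte Carlo tree search mentioned in item 5 is a selection rule that trades off exploration and exploitation, most commonly UCB1. The sketch below shows UCB1 on a plain multi-armed bandit, which is the single-node special case of the tree-search selection step; the arm means are invented for illustration:

```
set.seed(9)
mu <- c(0.2, 0.5, 0.45)            # hypothetical true arm means
n_arms <- length(mu)
pulls <- rep(0, n_arms); sums <- rep(0, n_arms)

for (t in 1:2000) {
  if (t <= n_arms) {
    a <- t                         # play each arm once to initialize
  } else {
    ucb <- sums / pulls + sqrt(2 * log(t) / pulls)  # mean + exploration bonus
    a <- which.max(ucb)
  }
  reward <- rbinom(1, 1, mu[a])    # Bernoulli reward from the chosen arm
  pulls[a] <- pulls[a] + 1
  sums[a]  <- sums[a] + reward
}
pulls                               # most pulls should go to the best arm (2)
```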
6. New simpler method of matching NLO corrections with parton shower Monte Carlo (OpenAIRE; Jadach, S.; Placzek, W.; Sapeta, S. (CERN PH-TH, CH-1211, Geneva 23, Switzerland); Siodmok, A.; Skrzypek, M.; 2016)
Next steps in the development of the KrkNLO method of implementing NLO QCD corrections to hard processes in parton shower Monte Carlo programs are presented. This new method is a simpler alternative to other well-known approaches, such as MC@NLO and POWHEG. The KrkNLO method owes its simplicity to the use of parton distribution functions (PDFs) in a new, so-called Monte Carlo (MC) factorization scheme, which was recently fully defined for the first time. Preliminary numerical results for the Higg…

7. New simpler method of matching NLO corrections with parton shower Monte Carlo (CERN Document Server; Jadach, S.; Sapeta, S.; Siodmok, A.; Skrzypek, M.; 2016)
Next steps in the development of the KrkNLO method of implementing NLO QCD corrections to hard processes in parton shower Monte Carlo programs are presented. This new method is a simpler alternative to other well-known approaches, such as MC@NLO and POWHEG. The KrkNLO method owes its simplicity to the use of parton distribution functions (PDFs) in a new, so-called Monte Carlo (MC), factorization scheme, which was recently fully defined for the first time. Preliminary numerical results for the Higgs-boson production process are also presented.

8. Monte Carlo methods and models in finance and insurance (CERN Document Server; Korn, Ralf; 2010)
Offering a unique balance between applications and calculations, this book incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The book enables readers to find the right algorithm for a desired application and illustrates complicated methods and algorithms with simple applicat…

9. Guideline of Monte Carlo calculation: neutron/gamma ray transport simulation by Monte Carlo method (CERN Document Server; 2002)
This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experimental analyses of a fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and the parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

10. Markov chain Monte Carlo methods: an introductory example (Science.gov; Klauenberg, Katy; Elster, Clemens; 2016)
When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method: powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may currently be hindered by the difficulty of assessing the convergence of MCMC output, and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
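As a companion to the step-by-step description promised in item 10, here is a random-walk Metropolis-Hastings sampler in a few lines of R. The Gamma(3, 2) target, the proposal width and the burn-in length are illustrative choices:

```
set.seed(13)
# Log-target known up to a constant: Gamma(3, 2) density on x > 0
log_target <- function(x) if (x <= 0) -Inf else (3 - 1) * log(x) - 2 * x

n <- 5e4
x <- numeric(n); x[1] <- 1
for (t in 2:n) {
  prop <- x[t - 1] + rnorm(1, sd = 0.8)          # symmetric random-walk proposal
  log_alpha <- log_target(prop) - log_target(x[t - 1])
  x[t] <- if (log(runif(1)) < log_alpha) prop else x[t - 1]
}
mean(x[-(1:1000)])   # after burn-in, should be near the Gamma mean 3/2
```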
11. Implementation Method of Stable Model (Directory of Open Access Journals; Shasha Wu; 2008)
Software Stability Modeling (SSM) is a promising software development methodology based on object-oriented programming for achieving model-level stability and reusability. Among the three critical categories of objects proposed by SSM, the business objects play a critical role in connecting the stable problem essentials (enduring business themes) and the unstable object implementations (industry objects). The business objects are especially difficult to implement, and often raise confusion in the implementation, because of their unique characteristics: externally stable and internally unstable. Implementation- and code-level stability is not the major concern. How to implement the objects of a stable model through object-oriented programming without losing its stability is a big challenge in real software development. In this paper, we propose new methods to realize the business objects in the implementation of a stable model. We also rephrase the definition of the business objects from the implementation perspective, in the hope that the new description can help software developers adopt and implement stable models more easily. Finally, we describe the implementation of a stable model for a balloon rental resource management scope to illustrate the advantages of the proposed method.

12. Monte Carlo methods for the self-avoiding walk (Energy Technology Data Exchange (ETDEWEB); Janse van Rensburg, E. J.; 2009)
The numerical simulation of self-avoiding walks remains a significant component in the study of random objects in lattices. In this review, I give a comprehensive overview of the current state of Monte Carlo simulations of models of self-avoiding walks. The self-avoiding walk model is revisited, and the motivations for Monte Carlo simulations of this model are discussed. Efficient sampling of self-avoiding walks remains an elusive objective, but significant progress has been made over the last three decades. The model still poses challenging numerical questions, however, and I review specific Monte Carlo methods for improved sampling, including general Monte Carlo techniques such as Metropolis sampling, umbrella sampling and multiple Markov chain sampling. In addition, specific static and dynamic algorithms for walks are presented, and I give an overview of recent innovations in this field, including algorithms such as flatPERM, flatGARM and flatGAS. (topical review)
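Item 12 reviews why specialized samplers such as flatPERM exist; the naive alternative, plain rejection, discards a walk the moment it intersects itself and suffers exponential attrition in the walk length. A small R illustration on the square lattice:

```
# Naive rejection sampling of self-avoiding walks on the square lattice
sample_saw <- function(n_steps, max_tries = 1e5) {
  moves <- rbind(c(1, 0), c(-1, 0), c(0, 1), c(0, -1))
  for (try in 1:max_tries) {
    path <- matrix(0, nrow = n_steps + 1, ncol = 2)
    ok <- TRUE
    for (k in 1:n_steps) {
      path[k + 1, ] <- path[k, ] + moves[sample(4, 1), ]
      # reject the whole walk as soon as it revisits a site
      if (any(path[1:k, 1] == path[k + 1, 1] &
              path[1:k, 2] == path[k + 1, 2])) {
        ok <- FALSE; break
      }
    }
    if (ok) return(path)
  }
  stop("no self-avoiding walk found; attrition is severe for long walks")
}

w <- sample_saw(20)
plot(w, type = "o", pch = 16, asp = 1, xlab = "x", ylab = "y")
```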
13. Monte Carlo methods for the self-avoiding walk (International Nuclear Information System, INIS)
The numerical simulation of self-avoiding walks remains a significant component in the study of random objects in lattices. In this review, I give a comprehensive overview of the current state of Monte Carlo simulations of models of self-avoiding walks. The self-avoiding walk model is revisited, and the motivations for Monte Carlo simulations of this model are discussed. Efficient sampling of self-avoiding walks remains an elusive objective, but significant progress has been made over the last three decades. The model still poses challenging numerical questions, however, and I review specific Monte Carlo methods for improved sampling, including general Monte Carlo techniques such as Metropolis sampling, umbrella sampling and multiple Markov chain sampling. In addition, specific static and dynamic algorithms for walks are presented, and I give an overview of recent innovations in this field, including algorithms such as flatPERM, flatGARM and flatGAS. (topical review)

14. Monte Carlo Methods for Tempo Tracking and Rhythm Quantization (CERN Document Server; Cemgil, A. T.; doi:10.1613/jair.1121; 2011)
We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations, as in a musical score. The continuous hidden variables denote the tempo. We formulate two well-known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization), as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulation results suggest better results with sequential methods. The methods can be applied in both online and batch scenarios such as tempo tracking and transcr…

15. Monte Carlo method application to shielding calculations (International Nuclear Information System, INIS)
CANDU spent fuel discharged from the reactor core contains Pu, so two concerns must be addressed: tracking the fuel reactivity in order to prevent critical mass formation, and personnel protection during spent fuel manipulation. The basic tasks accomplished by shielding calculations in a nuclear safety analysis consist in dose rate calculations, in order to prevent any risks both for personnel protection and for the impact on the environment during spent fuel manipulation, transport and storage. To perform the photon dose rate calculations, the Monte Carlo MORSE-SGC code incorporated in the SAS4 sequence of the SCALE system was used. The objective of the paper was to obtain the photon dose rates at the spent fuel transport cask wall, both in radial and axial directions. As the radiation source, one spent CANDU fuel bundle was used. All the geometrical and material data related to the transport cask were considered according to the shipping cask type B model, whose prototype has been realized and tested in the Institute for Nuclear Research Pitesti. (authors)
16. Quantum Monte Carlo diagonalization method as a variational calculation (Energy Technology Data Exchange (ETDEWEB); Mizusaki, Takahiro; Otsuka, Takaharu (Tokyo Univ., Dept. of Physics); Honma, Michio; 1997)
A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary-field Monte Carlo technique and the diagonalization method. This method overcomes the limitations of conventional shell model diagonalization and can greatly widen the feasibility of shell model calculations with realistic interactions for the spectroscopic study of nuclear structure. (author)

17. Auxiliary-field quantum Monte Carlo methods in nuclei (CERN Document Server; Alhassid, Y.; 2016)
Auxiliary-field quantum Monte Carlo methods enable the calculation of thermal and ground state properties of correlated quantum many-body systems in model spaces that are many orders of magnitude larger than those that can be treated by conventional diagonalization methods. We review recent developments and applications of these methods in nuclei using the framework of the configuration-interaction shell model.

18. Observations on variational and projector Monte Carlo methods (International Nuclear Information System, INIS)
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods, and the choices made in designing them, are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

19. LISA data analysis using Markov chain Monte Carlo methods (International Nuclear Information System, INIS)
The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different from the one encountered in ground-based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions.
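Simulated annealing, used in item 19 to speed the convergence of the chains, is a small variation on Metropolis sampling in which a temperature parameter is gradually lowered. A compact R sketch minimizing a multimodal test function; the objective and the cooling schedule are arbitrary illustrations:

```
set.seed(17)
f <- function(x) (x^2 - 4)^2 / 8 + sin(5 * x)   # multimodal test function

x <- 5; best <- x
for (t in 1:5000) {
  temp <- 2 * 0.999^t                  # geometric cooling schedule
  prop <- x + rnorm(1, sd = 0.5)
  # accept downhill moves always, uphill moves with Boltzmann probability
  if (log(runif(1)) < (f(x) - f(prop)) / temp) x <- prop
  if (f(x) < f(best)) best <- x
}
c(best_x = best, best_f = f(best))
```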
20. Monte Carlo methods for the reliability analysis of Markov systems (International Nuclear Information System, INIS)
This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous-time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known, then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors that will reduce the variance of the estimator.

1. Introduction to Monte Carlo methods: sampling techniques and random numbers (International Nuclear Information System, INIS)
The Monte Carlo method describes a very broad area of science in which many processes, physical systems and phenomena that are statistical in nature, and are difficult to solve analytically, are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model that is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions. As the number of individual events (called histories) is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. Assuming that the behavior of a physical system can be described by probability density functions, the Monte Carlo simulation can proceed by sampling from these probability density functions, which necessitates a fast and effective way to generate random numbers uniformly distributed on the interval (0,1). Particles are generated within the source region and are transported by sampling from probability density functions through the scattering media until they are absorbed or escape the volume of interest. The outcomes of these random samplings, or trials, must be accumulated or tallied in an appropriate manner to produce the desired result, but the essential characteristic of Monte Carlo is the use of random sampling techniques to arrive at a solution of the physical problem. The major components of Monte Carlo methods for random sampling for a given event are described in the paper.
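The uniform-to-target sampling step described in item 1, drawing from a probability density using only uniform random numbers on (0,1), is most simply done by inverse-transform sampling. A short R illustration for the exponential distribution (the rate parameter is an arbitrary choice):

```
set.seed(19)
n <- 1e5
lambda <- 2                       # arbitrary rate parameter
u <- runif(n)                     # uniform random numbers on (0, 1)
x <- -log(1 - u) / lambda         # inverse CDF of the exponential

# the sampled histogram should match the target density
hist(x, breaks = 60, freq = FALSE, main = "Inverse-transform sampling")
curve(dexp(x, rate = lambda), add = TRUE, col = "red", lwd = 2)
```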
Frequency domain optical tomography using a Monte Carlo perturbation method Science.gov (United States) Yamamoto, Toshihiro; Sakamoto, Hiroki 2016-04-01 A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for inverse problems that are ill-posed owing to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods. 3. OpenAIRE Falcioni, Marco; Michael W. Deem 2000-01-01 Strategies for searching the space of variables in combinatorial chemistry experiments are presented, and a random energy model of combinatorial chemistry experiments is introduced. The search strategies, derived by analogy with the computer modeling technique of Monte Carlo, effectively search the variable space even in combinatorial chemistry experiments of modest size. Efficient implementations of the library design and redesign strategies are feasible with current experimental capabilities. 4. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging Energy Technology Data Exchange (ETDEWEB) Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States) 2014-06-15 Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources.
Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the 5. Monte Carlo Form-Finding Method for Tensegrity Structures Science.gov (United States) Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping 2010-05-01 In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis, and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method. 6. Latent uncertainties of the precalculated track Monte Carlo method International Nuclear Information System (INIS) Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank has been missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions.
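A generic way to attach a statistical uncertainty to any Monte Carlo tally, in the spirit of the uncertainty analysis this record calls for (a plain batch-means sketch, not the paper's latent-uncertainty metric; the score distribution is an arbitrary stand-in):

```
# Batch means: split the histories into batches and use the spread
# of the batch means to estimate the uncertainty of the grand mean.
set.seed(1)
scores <- rexp(1e5, rate = 2)           # stand-in per-history scores
batches <- matrix(scores, ncol = 100)   # 100 batches of 1000 histories
m <- colMeans(batches)
c(mean = mean(m),
  rel.uncertainty = sd(m) / sqrt(length(m)) / mean(m))
```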
A latent uncertainty metric was defined and an analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the 7. Diffusion/transport hybrid discrete method for Monte Carlo solution of the neutron transport equation International Nuclear Information System (INIS) The Monte Carlo method is widely used for solving the neutron transport equation. Basically, the Monte Carlo method treats angle, space, and energy as continuous variables. It gives very accurate solutions when enough particle histories are used, but the computation time is long. To reduce computation time, a discrete Monte Carlo method, called the Discrete Transport Monte Carlo (DTMC) method, was proposed. It uses discrete space but continuous angles in mono-energy, one-dimensional problems, and uses the lumped linear-discontinuous (LLD) equation to construct the probabilities of leakage, scattering, and absorption. LLD may cause negative angular fluxes in highly scattering problems, so a two-scatter variance reduction method is applied to DTMC, which then shows very accurate solutions in various problems. In transport Monte Carlo calculations, a particle history does not end at a scattering event, so the computation time is also long in highly scattering problems. To further reduce computation time, the Discrete Diffusion Monte Carlo (DDMC) method is implemented. DDMC uses the diffusion equation to construct the probabilities and has no scattering events, so it requires far less computation time than DTMC and agrees well with cell-centered diffusion results. It is known that diffusion results may be poor near boundaries, so in the hybrid method of DTMC and DDMC, boundary regions are calculated by DTMC and the other regions by DDMC. In this thesis, the DTMC, DDMC and hybrid methods and their results for several problems are presented. The results show that DDMC and DTMC agree well with deterministic diffusion and transport results, respectively. The hybrid method shows transport-like results in problems where diffusion results are poor. The computation time of the hybrid method lies between those of DDMC and DTMC, as expected 8. Extending the alias Monte Carlo sampling method to general distributions International Nuclear Information System (INIS) The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equal-probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest.
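Walker's alias construction, the starting point of the extension described in this record, can be written compactly; the sketch below is the standard discrete version (a generic illustration, not the authors' continuous extension):

```
# Walker's alias method: O(n) table build, O(1) sampling per draw.
build.alias <- function(p) {
  n <- length(p); q <- p * n; alias <- integer(n)
  small <- which(q < 1); large <- which(q >= 1)
  while (length(small) > 0 && length(large) > 0) {
    s <- small[1]; l <- large[1]
    alias[s] <- l                      # excess probability of bin s flows to l
    q[l] <- q[l] - (1 - q[s])
    small <- small[-1]
    if (q[l] < 1) { large <- large[-1]; small <- c(small, l) }
  }
  left <- c(small, large)              # leftovers: probability 1, self-alias
  q[left] <- 1; alias[left] <- left
  list(prob = q, alias = alias)
}
sample.alias <- function(tab, m) {
  i <- sample.int(length(tab$prob), m, replace = TRUE)  # uniform column
  ifelse(runif(m) < tab$prob[i], i, tab$alias[i])       # keep it or take alias
}
tab <- build.alias(c(0.1, 0.2, 0.3, 0.4))
table(sample.alias(tab, 1e5)) / 1e5    # close to c(0.1, 0.2, 0.3, 0.4)
```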
We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs 9. Analysis of the uranium price predicted 24 months ahead, implementing neural networks and the Monte Carlo method as predictive tools; Analisis del precio del uranio pronosticado a 24 meses, implementando redes neuronales y el metodo de Monte Carlo como herramientas predictivas Energy Technology Data Exchange (ETDEWEB) Esquivel E, J.; Ramirez S, J. R.; Palacios H, J. C., E-mail: [email protected] [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico) 2011-11-15 The present work shows uranium prices predicted using a neural network. Predicting the financial indexes of an energy resource makes it possible to establish budgetary measures, as well as the costs of the resource, over the medium term. Uranium is one of the main energy-generating fuels and, as such, its price has an impact on financial analyses; for this reason, predictive methods are used to obtain an outline of the financial behaviour it will exhibit over a given period. In this study, two methodologies are used for the prediction of the uranium price: the Monte Carlo method and neural networks. These methods allow the monthly cost indexes to be predicted for a two-year period, starting from the second bimester of 2011. The prediction uses uranium costs recorded since 2005. (Author) 10. Computing Functionals of Multidimensional Diffusions via Monte Carlo Methods CERN Document Server Baldeaux, Jan 2012-01-01 We discuss suitable classes of diffusion processes, for which functionals relevant to finance can be computed via Monte Carlo methods. In particular, we construct exact simulation schemes for processes from this class. However, should the finance problem under consideration require e.g. continuous monitoring of the processes, the simulation algorithm can easily be embedded in a multilevel Monte Carlo scheme. We choose to introduce the finance problems under the benchmark approach, and find that this approach allows us to exploit conveniently the analytical tractability of these diffusion processes. 11. Development of three-dimensional program based on Monte Carlo and discrete ordinates bidirectional coupling method International Nuclear Information System (INIS) The Monte Carlo (MC) and discrete ordinates (SN) methods are those most commonly used in the design of radiation shielding. The Monte Carlo method is able to treat the geometry exactly, but is time-consuming in dealing with deep-penetration problems. The discrete ordinates method has great computational efficiency, but it is costly in computer memory and suffers from the ray effect. Either the discrete ordinates method or the Monte Carlo method alone has limitations for shielding calculations of large, complex nuclear facilities. In order to solve this problem, the Monte Carlo and discrete ordinates bidirectional coupling method is developed. The bidirectional coupling method is implemented in an interface program that transfers the particle probability distribution of MC and the angular flux of discrete ordinates. The coupling method combines the advantages of MC and SN. Test problems in Cartesian and cylindrical coordinates have been calculated by the coupling method. The calculation results are compared with MCNP and TORT, and satisfactory agreement is obtained. The correctness of the program is proved.
(authors) 12. MOSFET GATE CURRENT MODELLING USING MONTE-CARLO METHOD OpenAIRE Voves, J.; Vesely, J. 1988-01-01 A new technique for determining the probability of hot-electron travel through the gate oxide is presented. The technique is based on the Monte Carlo method and is used in MOSFET gate-current modelling. The calculated values of the gate current are compared with experimental results from direct measurements on MOSFET test chips. 13. Application of equivalence methods on Monte Carlo method based homogenization multi-group constants International Nuclear Information System (INIS) The multi-group constants generated via the continuous-energy Monte Carlo method do not satisfy the equivalence between the reference calculation and the diffusion calculation applied in reactor core analysis. To satisfy the equivalence theory, the general equivalence theory (GET) and the super homogenization method (SPH) were applied to the Monte Carlo based group constants, and a simplified reactor core and the C5G7 benchmark were examined with the Monte Carlo constants. The results show that the precision of the group constants is improved, and that GET and SPH are good candidates for the equivalence treatment of Monte Carlo homogenization. (authors) 14. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning Science.gov (United States) Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J. 2008-02-01 Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations. 15.
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning International Nuclear Information System (INIS) Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife (registered) SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head and neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations 16. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning Energy Technology Data Exchange (ETDEWEB) Ma, C-M; Li, J S; Deng, J; Fan, J [Radiation Oncology Department, Fox Chase Cancer Center, Philadelphia, PA (United States)], E-mail: [email protected] 2008-02-01 Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife (registered) SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. 
Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head and neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations. 17. A separable shadow Hamiltonian hybrid Monte Carlo method Science.gov (United States) Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A. 2009-11-01 Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth-order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc). 18. Monte Carlo methods for pricing financial options N Bolia; S Juneja 2005-04-01 Pricing financial options is amongst the most important and challenging problems in the modern financial industry. Except in the simplest cases, the prices of options do not have a simple closed-form solution and efficient computational methods are needed to determine them. Monte Carlo methods have increasingly become a popular computational tool to price complex financial options, especially when the underlying space of assets has a large dimensionality, as the performance of other numerical methods typically suffers from the ‘curse of dimensionality’. However, even Monte Carlo techniques can be quite slow as the problem size increases, motivating research in variance reduction techniques to increase the efficiency of the simulations. In this paper, we review some of the popular variance reduction techniques and their application to pricing options. We particularly focus on the recent Monte Carlo techniques proposed to tackle the difficult problem of pricing American options. These include regression-based methods, random tree methods and stochastic mesh methods.
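A baseline Monte Carlo pricer with antithetic variates, one of the simplest of the variance-reduction devices this record reviews (a generic Black-Scholes sketch; all parameter values are arbitrary assumptions):

```
# European call under Black-Scholes dynamics, priced by Monte Carlo
# with antithetic variates (pairing each Z with -Z).
set.seed(1)
S0 <- 100; K <- 100; r <- 0.05; sigma <- 0.2; T <- 1; n <- 1e5
Z <- rnorm(n)
ST.plus  <- S0 * exp((r - sigma^2 / 2) * T + sigma * sqrt(T) * Z)
ST.minus <- S0 * exp((r - sigma^2 / 2) * T - sigma * sqrt(T) * Z)
disc.payoff <- exp(-r * T) * (pmax(ST.plus - K, 0) + pmax(ST.minus - K, 0)) / 2
c(price = mean(disc.payoff), std.err = sd(disc.payoff) / sqrt(n))
# The Black-Scholes closed form gives about 10.45 for these inputs.
```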
Further, we show how importance sampling, a popular variance reduction technique, may be combined with these methods to enhance their effectiveness. We also briefly review the evolving options market in India. 19. Stabilizing Canonical-Ensemble Calculations in the Auxiliary-Field Monte Carlo Method CERN Document Server Gilbreth, C N 2014-01-01 Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method. 20. Bayesian Monte Carlo method for nuclear data evaluation International Nuclear Information System (INIS) A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight. (orig.) 1. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification Science.gov (United States) Wu, Keyi; Li, Jinglai 2016-09-01 In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over standard Monte Carlo methods. 2. Non-analogue Monte Carlo method, application to neutron simulation International Nuclear Information System (INIS) With most of the traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and if one studies precisely the interactions between particles and matter.
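The non-analogue idea this record develops for neutron transport can be shown in miniature: sample from a biased distribution and carry a statistical weight equal to the likelihood ratio, so the estimate stays unbiased. A generic rare-event toy, not the Tripoli biasing scheme:

```
# Importance sampling: estimate the rare probability P(Z > 6), Z ~ N(0,1),
# by drawing from the shifted density N(6,1) and weighting each sample.
set.seed(1)
n <- 1e5
z <- rnorm(n, mean = 6)                 # biased sampling distribution
w <- dnorm(z) / dnorm(z, mean = 6)      # statistical weight (likelihood ratio)
c(estimate = mean((z > 6) * w),
  exact = pnorm(6, lower.tail = FALSE)) # both near 9.9e-10
```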
Nowadays, only the Monte Carlo method offers such possibilities. However, with significant attenuation, the natural simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions: these methods require the user to determine certain parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; then we show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique in which the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision-probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic shocks and for multigroup problems with anisotropic shocks. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without the splitting and Russian roulette techniques, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if splitting and Russian roulette are added 3. Development of continuous-energy eigenvalue sensitivity coefficient calculation methods in the Shift Monte Carlo code Energy Technology Data Exchange (ETDEWEB) Perfetti, C.; Martin, W. [Univ. of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109-2104 (United States); Rearden, B.; Williams, M. [Oak Ridge National Laboratory, Reactor and Nuclear Systems Div., Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States) 2012-07-01 Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors) 4. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign – Part II: Model application to the CENICA, Pedregal and Santa Ana sites OpenAIRE San Martini, F. M.; E. J. Dunlea; R. Volkamer; Onasch, T. B.; J. T. Jayne; Canagaratna, M. R.; Worsnop, D. R.; C. E. Kolb; J. H. Shorter; S. C. Herndon; M. S. Zahniser; D. Salcedo; Dzepina, K.; Jimenez, J. L.; Ortega, J. M. 2006-01-01 A Markov Chain Monte Carlo model for integrating the observations of inorganic species with a thermodynamic equilibrium model was presented in Part I of this series. Using observations taken at three ground sites, i.e.
a residential, industrial and rural site, during the MCMA-2003 campaign in Mexico City, the model is used to analyze the inorganic particle and ammonia data and to predict gas phase concentrations of nitric and hydrochloric acid. In general, the model is able to accurately pred... 5. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign – Part II: Model application to the CENICA, Pedregal and Santa Ana sites OpenAIRE San Martini, F. M.; Dunlea, E. J.; R. Volkamer; Onasch, T. B.; Jayne, J. T.; Canagaratna, M. R.; Worsnop, D. R.; Kolb, C. E.; Shorter, J. H.; Herndon, S. C.; Zahniser, M. S.; D. Salcedo; Dzepina, K.; Jimenez, J. L.; Ortega, J. M. 2006-01-01 A Markov Chain Monte Carlo model for integrating the observations of inorganic species with a thermodynamic equilibrium model was presented in Part I of this series. Using observations taken at three ground sites, i.e. a residential, industrial and rural site, during the MCMA-2003 campaign in Mexico City, the model is used to analyze the inorganic particle and ammonia data and to predict gas phase concentrations of nitric and hydrochloric acid. In general, the mode... 6. Efficient Monte Carlo methods for continuum radiative transfer CERN Document Server Juvela, M 2005-01-01 We discuss the efficiency of Monte Carlo methods in solving continuum radiative transfer problems. The sampling of the radiation field and convergence of dust temperature calculations in the case of optically thick clouds are both studied. For spherically symmetric clouds we find that the computational cost of Monte Carlo simulations can be reduced, in some cases by orders of magnitude, with simple importance weighting schemes. This is particularly true for models consisting of cells of different sizes for which the run times would otherwise be determined by the size of the smallest cell. We present a new idea of extending importance weighting to scattered photons. This is found to be useful in calculations of scattered flux and could be important for three-dimensional models when observed intensity is needed only for one general direction of observations. Convergence of dust temperature calculations is studied for models with optical depths 10-10000. We examine acceleration methods where radiative interactio... 7. Multi-way Monte Carlo Method for Linear Systems OpenAIRE Wu, Tao; Gleich, David F. 2016-01-01 We study the Monte Carlo method for solving a linear system of the form $x = H x + b$. A sufficient condition for the method to work is $\|H\| < 1$, which greatly limits the usability of this method. We improve this condition by proposing a new multi-way Markov random walk, which is a generalization of the standard Markov random walk. Under our new framework we prove that the necessary and sufficient condition for our method to work is the spectral radius $\rho(H^{+}) < 1$, which is a weake... 8. Monte Carlo methods and applications for the nuclear shell model OpenAIRE Dean, D. J.; White, J. A. 1998-01-01 The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sdpf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems. 9.
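Returning to the linear-system record above (item 7): the classical single-walk Neumann-Ulam estimator that it generalizes can be sketched directly. The matrix, continuation probability, and sample counts below are arbitrary assumptions, and this is the standard walk, not the multi-way generalization:

```
# Random-walk estimator for x = H x + b (assumes the Neumann series converges).
set.seed(1)
H <- matrix(c(0.1, 0.2, 0.05,
              0.0, 0.1, 0.30,
              0.2, 0.1, 0.10), 3, 3, byrow = TRUE)
b <- c(1, 2, 3)
n <- nrow(H); rho <- 0.5                 # continuation probability
walk <- function(i) {                    # one history started in state i
  w <- 1; est <- b[i]
  while (runif(1) < rho) {
    j <- sample.int(n, 1)                # uniform transition
    w <- w * H[i, j] * n / rho           # weight keeps the estimate unbiased
    i <- j
    est <- est + w * b[i]
  }
  est
}
x.mc <- sapply(1:n, function(i) mean(replicate(2e4, walk(i))))
cbind(x.mc, exact = solve(diag(n) - H, b))
```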
Efficient Monte Carlo methods for light transport in scattering media OpenAIRE Jarosz, Wojciech 2008-01-01 In this dissertation we focus on developing accurate and efficient Monte Carlo methods for synthesizing images containing general participating media. Participating media such as clouds, smoke, and fog are ubiquitous in the world and are responsible for many important visual phenomena which are of interest to computer graphics as well as related fields. When present, the medium participates in lighting interactions by scattering or absorbing photons as they travel through the scene. Though th... 10. Calculating atomic and molecular properties using variational Monte Carlo methods International Nuclear Information System (INIS) The authors compute a number of properties for the 1^1S, 2^1S, and 2^3S states of helium as well as the ground states of H2 and H3+ using variational Monte Carlo. These are in good agreement with previous calculations (where available). Electric-response constants for the ground states of helium, H2 and H3+ are computed as derivatives of the total energy. The method used to calculate these quantities is discussed in detail 11. Monte Carlo Methods and Applications for the Nuclear Shell Model International Nuclear Information System (INIS) The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems 12. Calculations of pair production by Monte Carlo methods International Nuclear Information System (INIS) We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs 13. Calculations of pair production by Monte Carlo methods Energy Technology Data Exchange (ETDEWEB) Bottcher, C.; Strayer, M.R. 1991-01-01 We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs. 14. Comparison of deterministic and Monte Carlo methods in shielding design International Nuclear Information System (INIS) In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or used with unknown geometrical conditions, and which can lead to significant errors in shielding results.
The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capabilities of the Monte Carlo and deterministic methods in day-to-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions. (authors) 15. A new lattice Monte Carlo method for simulating dielectric inhomogeneity Science.gov (United States) Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles that have a dielectric constant different from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support. 16. A new hybrid method--combined heat flux method with Monte-Carlo method to analyze thermal radiation Institute of Scientific and Technical Information of China (English) 2006-01-01 A new hybrid method, the Monte-Carlo-Heat-Flux (MCHF) method, was presented to analyze the radiative heat transfer of a participating medium in a three-dimensional rectangular enclosure, combining the Monte Carlo method with the heat flux method. Its accuracy and reliability were demonstrated by comparing the computational results with exact results from the classical "Zone Method". 17. An object-oriented implementation of a parallel Monte Carlo code for radiation transport Science.gov (United States) Santos, Pedro Duarte; Lani, Andrea 2016-05-01 This paper describes the main features of a state-of-the-art Monte Carlo solver for radiation transport which has been implemented within COOLFluiD, a world-class open source object-oriented platform for scientific simulations. The Monte Carlo code makes use of efficient ray tracing algorithms (for 2D, axisymmetric and 3D arbitrary unstructured meshes) which are described in detail. The solver accuracy is first verified in test cases for which analytical solutions are available, then validated for a space re-entry flight experiment (i.e. FIRE II) for which comparisons against both experiments and reference numerical solutions are provided. Through the flexible design of the physical models, ray tracing and parallelization strategy (fully reusing the mesh decomposition inherited from the fluid simulator), the implementation was made efficient and reusable. 18.
Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 5. New Zero-Variance Methods for Monte Carlo Criticality and Source-Detector Problems International Nuclear Information System (INIS) A zero-variance (ZV) Monte Carlo transport method is a theoretical construct that, if it could be implemented on a practical computer, would produce the exact result after any number of histories. Unfortunately, ZV methods are impractical; to implement them, one must have complete knowledge of a certain adjoint flux, and acquiring this knowledge is an infinitely greater task than solving the original criticality or source-detector problem. (In fact, the adjoint flux itself yields the desired result, with no need of a Monte Carlo simulation.) Nevertheless, ZV methods are of practical interest because it is possible to approximate them in ways that yield efficient variance-reduction schemes. Such implementations must be done carefully (for example, one must not change the mean of the final answer). The goal of variance reduction is to estimate the true mean with greater efficiency. In this paper, we describe new ZV methods for Monte Carlo criticality and source-detector problems. These methods have the same requirements (and disadvantages) as described earlier. However, their implementation is very different. Thus, the concept of approximating them to obtain practical variance-reduction schemes opens new possibilities. In previous ZV methods, (a) a single characteristic parameter (the k-eigenvalue or a detector response) of a forward transport problem is sought; (b) the exact solution of an adjoint problem must be known for all points in phase-space; and (c) a non-analog process, defined in terms of the adjoint solution, transports forward Monte Carlo particles from the source to the detector (in criticality problems, from the fission region, where a generation n fission neutron is born, back to the fission region, where generation n+1 fission neutrons are born). In the non-analog transport process, Monte Carlo particles (a) are born in the source region with weight equal to the desired characteristic parameter, (b) move through the system by an altered transport 19. Finite population-size effects in projection Monte Carlo methods International Nuclear Information System (INIS) Projection (Green's function and diffusion) Monte Carlo techniques sample a wave function by a stochastic iterative procedure. It is shown that these methods converge to a stationary distribution which is unexpectedly biased, i.e., differs from the exact ground state wave function, and that this bias occurs because of the introduction of a replication procedure. It is demonstrated that these biased Monte Carlo algorithms lead to a modified effective mass which is equal to the desired mass only in the limit of an infinite population of walkers. In general, the bias scales as 1/N for a population of walkers of size N. Various strategies to reduce this bias are considered. (authors). 29 refs., 3 figs 20. A Hamiltonian Monte Carlo method for Bayesian inference of supermassive black hole binaries International Nuclear Information System (INIS) We investigate the use of a Hamiltonian Monte Carlo method to map out the posterior density function for supermassive black hole binaries. While previous Markov chain Monte Carlo (MCMC) methods, such as Metropolis–Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random-walk algorithms.
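The contrast with a gradient-guided sampler is easiest to see in code. Below is a generic Hamiltonian Monte Carlo step (leapfrog integration of Hamilton's equations plus a Metropolis accept/reject) for a standard bivariate normal target; the step size and trajectory length are arbitrary choices, and this is not the paper's gradient-fitting scheme:

```
# Minimal HMC for a standard bivariate normal target.
set.seed(1)
U     <- function(q) sum(q^2) / 2        # potential energy = -log target
gradU <- function(q) q
hmc.step <- function(q, eps = 0.15, L = 20) {
  p <- rnorm(length(q)); q.new <- q; p.new <- p
  p.new <- p.new - eps * gradU(q.new) / 2          # half step in momentum
  for (l in 1:L) {
    q.new <- q.new + eps * p.new                   # full step in position
    if (l < L) p.new <- p.new - eps * gradU(q.new)
  }
  p.new <- p.new - eps * gradU(q.new) / 2          # final half step
  dH <- (U(q.new) + sum(p.new^2) / 2) - (U(q) + sum(p^2) / 2)
  if (log(runif(1)) < -dH) q.new else q            # Metropolis accept/reject
}
chain <- matrix(NA, 5000, 2); chain[1, ] <- c(3, -3)
for (i in 2:5000) chain[i, ] <- hmc.step(chain[i - 1, ])
colMeans(chain[-(1:500), ])   # near c(0, 0)
cov(chain[-(1:500), ])        # near the 2x2 identity
```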
The Hamiltonian Monte Carlo treats the inverse likelihood surface as a ‘gravitational potential’ and, by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. This method is not as widely used as other MCMC algorithms due to the necessity of calculating gradients of the log-likelihood, which for most applications results in a bottleneck that makes the algorithm computationally prohibitive. We circumvent this problem by using accepted initial phase-space trajectory points to analytically fit for each of the individual gradients. Eliminating the waveform generation needed for the numerical derivatives reduces the total number of required templates for a 10^6-iteration chain from ∼10^9 to ∼10^6. The result is an implementation of the Hamiltonian Monte Carlo that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than a Hessian MCMC. (paper) 1. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging International Nuclear Information System (INIS) Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual 2. Monte Carlo methods in electron transport problems. Pt. 1 International Nuclear Information System (INIS) The condensed-history Monte Carlo method for charged-particle transport is reviewed and discussed starting from a general form of the Boltzmann equation (Part I). The physics of the electronic interactions, together with some pedagogic examples, will be introduced in Part II.
The lecture is directed at potential users of the method, for whom it can be a useful introduction to the subject matter, and aims to establish the basis of the work on the computer code RECORD, which is at present under development 3. Optimal Spatial Subdivision method for improving geometry navigation performance in Monte Carlo particle transport simulation International Nuclear Information System (INIS) Highlights: • The subdivision combines the advantages of both uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key factors dominating Monte Carlo particle transport simulation performance for large-scale whole-reactor models. In such cases, spatial subdivision is an easily established, high-potential method to improve the run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally or partially occupied, or not occupied at all, by CSG objects. The most important point is that, at each stage of subdivision, a quality factor based on a cost-estimation function is derived to evaluate the quality of the subdivision schemes. Only the scheme with the optimal quality factor will be chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factor will be efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by the FDS Team. Test cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even as cases reached whole reactor core model sizes 4. Dynamical Monte Carlo methods for plasma-surface reactions Science.gov (United States) Guerra, Vasco; Marinov, Daniil 2016-08-01 Different dynamical Monte Carlo algorithms to investigate molecule formation on surfaces are developed, evaluated and compared with the deterministic approach based on reaction-rate equations. These include a null event algorithm, the n-fold way/BKL algorithm and a ‘hybrid’ variant of the latter. NO2 formation by NO oxidation on Pyrex and O recombination on silica with the formation of O2 are taken as case studies. The influence of the grid size on the CPU calculation time and the accuracy of the results is analysed. The role of Langmuir–Hinshelwood recombination involving two physisorbed atoms and the effect of back diffusion and its inclusion in a deterministic formulation are investigated and discussed. It is shown that dynamical Monte Carlo schemes are flexible, simple to implement, describe elementary processes that are not straightforward to include in deterministic simulations with ease, can run very efficiently if appropriately chosen, and give highly reliable results. Moreover, the present approach provides a relatively simple procedure to describe fully coupled surface and gas phase chemistries. 5.
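The rejection-free (n-fold way/BKL) scheme compared in the dynamical Monte Carlo record above amounts to summing the rates of all possible events, advancing time by an exponential deviate, and selecting one event in proportion to its rate. A toy adsorption/desorption sketch (lattice size and rate constants are arbitrary assumptions):

```
# BKL-style kinetic Monte Carlo for a toy adsorption/desorption lattice.
set.seed(1)
N <- 1000; ka <- 1.0; kd <- 0.5       # assumed rates (1/s)
n.occ <- 0; t <- 0
for (step in 1:5000) {
  r.ads <- ka * (N - n.occ)           # total adsorption rate
  r.des <- kd * n.occ                 # total desorption rate
  R <- r.ads + r.des
  t <- t + rexp(1, rate = R)          # rejection-free time advance
  if (runif(1) < r.ads / R) n.occ <- n.occ + 1 else n.occ <- n.occ - 1
}
c(coverage = n.occ / N, equilibrium = ka / (ka + kd))   # both near 2/3
```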
Monte Carlo implementation of a guiding-center Fokker-Planck kinetic equation International Nuclear Information System (INIS) A Monte Carlo method for the collisional guiding-center Fokker-Planck kinetic equation is derived in the five-dimensional guiding-center phase space, where the effects of magnetic drifts due to the background magnetic field nonuniformity are included. It is shown that, in the limit of a homogeneous magnetic field, our guiding-center Monte Carlo collision operator reduces to the guiding-center Monte Carlo Coulomb operator previously derived by Xu and Rosenbluth [Phys. Fluids B 3, 627 (1991)]. Applications of the present work will focus on the collisional transport of energetic ions in complex nonuniform magnetized plasmas in the large mean-free-path (collisionless) limit, where magnetic drifts must be retained 6. Condensed history Monte Carlo methods for photon transport problems International Nuclear Information System (INIS) We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods - called Condensed History (CH) methods - have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models - one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes - can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions 7. MCNP4, a parallel Monte Carlo implementation on a workstation network International Nuclear Information System (INIS) The Monte Carlo code MCNP4 has been implemented on a workstation network to allow parallel computing of Monte Carlo transport processes. This has been achieved by making use of the communication tool PVM (Parallel Virtual Machine) and introducing some changes in the MCNP4 code. The PVM daemons and user libraries have been installed on different workstations to allow working on the same platform. Essential features of PVM and the structure of the parallelized MCNP4 version are discussed in this paper. Experiences are described, and problems are explained and solved with the extended version of MCNP. The efficiency of the parallelized MCNP4 is assessed for two realistic sample problems from the field of fusion neutronics. Compared with the fastest workstation in the network, a speed-up factor near five has been obtained by using a network of ten workstations, different in architecture and performance. (orig.) 8. Iridium 192 dosimetric study by Monte-Carlo method International Nuclear Information System (INIS) The Monte Carlo method was applied to the dosimetry of iridium-192 in water and in air; an iridium-platinum alloy seed, enveloped in a platinum can, is used as the source.
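The analog photon random walk underlying transport simulations such as these can be reduced to a few lines: exponential free flights, a per-collision choice between absorption and scattering, and a new direction on each scatter. A generic 1-D slab sketch (thickness and albedo are arbitrary assumptions, not values from any cited code):

```
# Analog photon transport in a 1-D slab: tally the fractions reflected,
# transmitted and absorbed.  The mean free path is 1 in these units.
set.seed(1)
slab <- 2.0; albedo <- 0.8; n <- 2e4
fate <- character(n)
for (k in 1:n) {
  x <- 0; mu <- 1                        # photon enters moving inward
  repeat {
    x <- x + mu * rexp(1)                # exponential free flight
    if (x < 0)    { fate[k] <- "reflected";   break }
    if (x > slab) { fate[k] <- "transmitted"; break }
    if (runif(1) > albedo) { fate[k] <- "absorbed"; break }
    mu <- runif(1, -1, 1)                # isotropic rescatter
  }
}
table(fate) / n
```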
The radioactive decay of this nuclide and the transport of emitted particles from the seed-source in the can and in the irradiated medium are simulated successively. The photon energy spectra outside the source, as well as dose distributions, are given. The Phi(d) function is calculated and our results are compared with various experimental values 9. Research on Monte Carlo simulation method of industry CT system International Nuclear Information System (INIS) A series of radiation-physics problems arises in the design and production of an industrial CT system (ICTS), including limit quality-index analysis and the effects of scattering, detector efficiency and crosstalk on the system. Usually the Monte Carlo (MC) method is applied to resolve these problems. Most of them involve events of very low probability, so direct simulation is difficult and existing MC methods and programs cannot meet the needs. To resolve these difficulties, particle flux point auto-important sampling (PFPAIS) is introduced on the basis of auto-important sampling. Then, on the basis of PFPAIS, a particular ICTS simulation method, MCCT, is realized. Compared with existing MC methods, MCCT is shown to simulate the ICTS more accurately and efficiently. Furthermore, the effects of all kinds of disturbances of the ICTS are simulated and analyzed by MCCT. To some extent, MCCT can guide research on the radiation-physics problems in ICTS. (author) 10. The macro response Monte Carlo method for electron transport CERN Document Server Svatos, M M 1999-01-01 This thesis demonstrates the feasibility of basing dose calculations for electrons in radiotherapy on first-principles single scatter physics, in a calculation time that is comparable to or better than current electron Monte Carlo methods. The macro response Monte Carlo (MRMC) method achieves run times that have potential to be much faster than conventional electron transport methods such as condensed history. The problem is broken down into two separate transport calculations. The first stage is a local, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position, and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25-8 MeV) and sizes (0.025 to 0.1 cm in radius). The second transport stage is a global calculation, in which steps that conform to the size of the kugels in the... 11. 'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods International Nuclear Information System (INIS) The techniques for data processing, combined with the development of fast and more powerful computers, make the Monte Carlo method one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, some Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. Two computational exposure models, RXD/EGS4 and CDO/EGS4, were used. In the first model, the simulation results are compared with experimental data obtained under similar conditions.
11. 'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods International Nuclear Information System (INIS) The techniques for data processing, combined with the development of fast and more powerful computers, make Monte Carlo methods one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. This paper used two computational models of exposure, RXD/EGS4 and CDO/EGS4. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the spectrum processor of IPEM Report No. 78. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 industrial X-ray unit. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The results show that the CDO can be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostic'. We conclude that EGS4 is a suitable Monte Carlo code to simulate thermoluminescent dosimeters and the experimental procedures employed in the routine of a quality control laboratory in diagnostic radiology. (author)

12. Application of Monte Carlo methods in tomotherapy and radiation biophysics Science.gov (United States) Hsiao, Ya-Yun Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% with measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial electron spectra are then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published...
13. A study of potential energy curves from the model space quantum Monte Carlo method Energy Technology Data Exchange (ETDEWEB) Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: [email protected] [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)] 2015-12-07 We report on the first application of the model space quantum Monte Carlo (MSQMC) method to potential energy curves (PECs) for the excited states of C2, N2, and O2, to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs over a wide range, obviating problems concerning quasi-degeneracy.

14. Time-step limits for a Monte Carlo Compton-scattering method Energy Technology Data Exchange (ETDEWEB) Densmore, Jeffery D [Los Alamos National Laboratory]; Warsa, James S [Los Alamos National Laboratory]; Lowrie, Robert B [Los Alamos National Laboratory] 2008-01-01 Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. This approach is similar to our recent investigation of time discretizations for the...
15. Multilevel Monte Carlo methods for computing failure probability of porous media flow systems Science.gov (United States) Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A. 2016-08-01 We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario where we consider the sweep efficiency of the injected phase as the quantity of interest and seek the probability that this quantity of interest is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve to the highest accuracy by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and multilevel Monte Carlo method. We also identify issues in the process of practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, both in combination with standard and multilevel Monte Carlo.

16. Application of Macro Response Monte Carlo method for electron spectrum simulation International Nuclear Information System (INIS) During the past years several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the electron transport computation time for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as a phase space input for other simulation programs. This technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the primary electron final state, as well as the creation of secondary electrons and photons. We have compared the MRMC electron spectra simulated in a homogeneous phantom against the Geant4 spectra. The results showed an agreement better than 6% in the spectra peak energies and that the MRMC code is up to 12 times faster than Geant4 simulations.
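Entry 15 rests on the multilevel Monte Carlo idea of mixing many cheap, coarse model runs with a few expensive, fine ones. The two-level sketch below shows the estimator structure on a stand-in "solver" (a crude quadrature of a random exponential profile); the grid sizes, sample counts, and lognormal input are all illustrative assumptions.

```python
# Two-level multilevel Monte Carlo sketch: estimate E[Q] where Q is
# expensive on a fine grid but cheap on a coarse one.
import numpy as np

rng = np.random.default_rng(1)

def model(perm, n_cells):
    """Hypothetical solver: quantity of interest from a 'permeability'
    sample, discretized on n_cells cells (finer = more accurate/costly)."""
    x = (np.arange(n_cells) + 0.5) / n_cells
    return np.exp(-perm * x).mean()

def mlmc_two_level(n0, n1):
    # Level 0: many cheap coarse samples.
    q0 = [model(rng.lognormal(), 8) for _ in range(n0)]
    # Level 1: few samples of the fine-minus-coarse *correction*, using the
    # same random input for both members of each pair (the key MLMC trick).
    corr = []
    for _ in range(n1):
        k = rng.lognormal()
        corr.append(model(k, 128) - model(k, 8))
    return np.mean(q0) + np.mean(corr)

print(f"two-level MLMC estimate of E[Q]: {mlmc_two_level(20_000, 500):.4f}")
```

The point of the level-1 loop is that the fine and coarse runs in each pair share the same random input, so the correction term has small variance and needs far fewer samples than the coarse baseline.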
17. Monte Carlo implementation, validation, and characterization of a 120 leaf MLC International Nuclear Information System (INIS) Purpose: Recently, the new high definition multileaf collimator (HD120 MLC) was commercialized by Varian Medical Systems, providing high resolution in the center section of the treatment field. The aim of this work is to investigate the characteristics of the HD120 MLC using Monte Carlo (MC) methods. Methods: Based on the information of the manufacturer, the HD120 MLC was implemented into the already existing Swiss MC Plan (SMCP). The implementation has been configured by adjusting the physical density and the air gap between adjacent leaves in order to match transmission profile measurements for 6 and 15 MV beams of a Novalis TX. These measurements have been performed in water using gafchromic films and an ionization chamber at an SSD of 95 cm and a depth of 5 cm. The implementation was validated by comparing diamond-measured and calculated penumbra values (80%-20%) for different field sizes and water depths. Additionally, measured and calculated dose distributions for a head and neck IMRT case using the DELTA4 phantom have been compared. The validated HD120 MLC implementation has been used for its physical characterization. For this purpose, phase space (PS) files have been generated below the fully closed multileaf collimator (MLC) of a 40 x 22 cm2 field size for 6 and 15 MV. The PS files have been analyzed in terms of energy spectra, mean energy, fluence, and energy fluence in the direction perpendicular to the MLC leaves and have been compared with the corresponding data using the well established Varian 80 leaf (MLC80) and Millennium M120 (M120 MLC) MLCs. Additionally, the impact of the tongue and groove design of the MLCs on dose has been characterized. Results: Calculated transmission values for the HD120 MLC are 1.25% and 1.34% in the central part of the field for the 6 and 15 MV beams, respectively. The corresponding ionization chamber measurements result in a transmission of 1.20% and 1.35%. Good agreement has been found for the comparison between...

18. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications. Science.gov (United States) Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J 2004-09-01 We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo-random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster consisting of 800 MHz Intel Pentium III processors shows an almost linear speedup up to 32 processors for simulating 1 x 10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8 x 10^8 histories. For a smaller number of histories (1 x 10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1 x 10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy. PMID:15487756
19. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications International Nuclear Information System (INIS) We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo-random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster consisting of 800 MHz Intel Pentium III processors shows an almost linear speedup up to 32 processors for simulating 1x10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8x10^8 histories. For a smaller number of histories (1x10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1x10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.

20. A new DNB design method using the system moment method combined with Monte Carlo simulation International Nuclear Information System (INIS) A new statistical method of core thermal design for pressurized water reactors is presented. It not only quantifies the DNBR parameter uncertainty by the system moment method, but also combines the DNBR parameter with correlation uncertainty using a Monte Carlo technique. The randomizing function for the Monte Carlo simulation was expressed in the form of a reciprocal multiplication of DNBR parameter and correlation uncertainty factors. The results of comparisons with the conventional methods show that the DNBR limit calculated by this method is in good agreement with that of the SCU method, with less computational effort, and it is considered applicable to current DNB design.
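The Monte Carlo step of entry 20 amounts to propagating multiplicative uncertainty factors through a ratio and reading off a percentile as the design limit. A hedged sketch, with invented factor counts, standard deviations, and percentile choice:

```python
# Monte Carlo propagation of multiplicative uncertainty factors for a
# DNBR-like ratio; all numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Parameter uncertainties enter the ratio in the numerator, the correlation
# uncertainty in the denominator ("reciprocal multiplication").
params = rng.normal(1.0, [0.03, 0.05, 0.02], size=(n, 3)).prod(axis=1)
correlation = rng.normal(1.0, 0.06, size=n)
dnbr = params / correlation

limit = np.percentile(dnbr, 5)   # 5th percentile as a one-sided design limit
print(f"MC DNBR design limit (5th percentile): {limit:.3f}")
```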
1. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization Directory of Open Access Journals (Sweden) S. J. Noh 2011-10-01 Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varying high flows, due to the preservation of sample diversity from the kernel, even if particle impoverishment takes place.

2. Radiation-hydrodynamical simulations of massive star formation using Monte Carlo radiative transfer: I. Algorithms and numerical methods CERN Document Server Harries, Tim J 2015-01-01 We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically-thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelisation method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion onto, and the growth of, the protostars. We detail the resu...
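The filtering machinery of entry 1 above (and of the companion entry 15 below) consists of weighting particles by the observation likelihood, resampling, and adding a move step to fight sample impoverishment. The miniature below does exactly that for a hypothetical AR(1) state observed in noise; the model and all noise levels are invented, and the Gaussian jitter is only a crude stand-in for a proper MCMC move:

```python
# Minimal SIR particle filter with a regularization ("move") step.
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_steps = 1000, 50

# Synthetic truth and observations for a hidden AR(1) process.
truth = np.zeros(n_steps)
for t in range(1, n_steps):
    truth[t] = 0.9 * truth[t - 1] + rng.normal(0, 0.5)
obs = truth + rng.normal(0, 0.3, n_steps)

particles, estimates = rng.normal(0, 1, n_particles), []
for t in range(n_steps):
    # Propagate with process noise, then weight by observation likelihood.
    particles = 0.9 * particles + rng.normal(0, 0.5, n_particles)
    w = np.exp(-0.5 * ((obs[t] - particles) / 0.3) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # Systematic resampling, then a small jitter move to keep the sample
    # diverse (a crude stand-in for the MCMC move step of entry 1).
    u = (rng.random() + np.arange(n_particles)) / n_particles
    idx = np.searchsorted(np.cumsum(w), u)
    particles = particles[np.minimum(idx, n_particles - 1)]
    particles += rng.normal(0, 0.05, n_particles)

rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
print(f"filter RMSE: {rmse:.3f}")
```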
3. The macro response Monte Carlo method for electron transport Energy Technology Data Exchange (ETDEWEB) Svatos, M M 1998-09-01 The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could...

4. A CNS calculation line based on a Monte Carlo method International Nuclear Information System (INIS) Full text: The design of the moderator cell of a Cold Neutron Source (CNS) involves many different considerations regarding geometry, location, and materials. Decisions taken in this sense affect not only the neutron flux in the source neighborhood, which can be evaluated by a standard empirical method, but also the neutron flux values at experimental positions far away from the neutron source. At long distances from the neutron source, very time consuming 3D deterministic methods or Monte Carlo transport methods are necessary in order to get accurate figures. Standard and typical quantities such as average neutron flux, neutron current, angular flux, and luminosity are very difficult to evaluate at positions located several meters away from the neutron source. The Monte Carlo method is a unique and powerful tool for transporting neutrons. Its use in a bootstrap scheme appears to be an appropriate solution for this type of system. The proper use of MCNP as the main tool leads to a fast and reliable method to perform calculations in a relatively short time with low statistical errors. The design goal is to evaluate the performance of the neutron sources, their beam tubes and neutron guides at specific experimental locations in the reactor hall as well as in the neutron or experimental hall. In this work, the calculation methodology used to design Cold, Thermal and Hot Neutron Sources and their associated Neutron Beam Transport Systems, based on the use of the MCNP code, is presented. This work also presents some changes made to the cross section libraries in order to cope with cryogenic moderators such as liquid hydrogen and liquid deuterium. (author)
5. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign - Part II: Model application to the CENICA, Pedregal and Santa Ana sites Directory of Open Access Journals (Sweden) F. M. San Martini 2006-01-01 Full Text Available A Markov Chain Monte Carlo model for integrating the observations of inorganic species with a thermodynamic equilibrium model was presented in Part I of this series. Using observations taken at three ground sites, i.e. a residential, an industrial and a rural site, during the MCMA-2003 campaign in Mexico City, the model is used to analyze the inorganic particle and ammonia data and to predict gas phase concentrations of nitric and hydrochloric acid. In general, the model is able to accurately predict the observed inorganic particle concentrations at all three sites. The agreement between the predicted and observed gas phase ammonia concentration is excellent. The NOz concentration calculated from the NOy, NO and NO2 observations is of limited use in constraining the gas phase nitric acid concentration given the large uncertainties in this measure of nitric acid and additional reactive nitrogen species. Focusing on the acidic period of 9-11 April identified by Salcedo et al. (2006), the model accurately predicts the particle phase observations during this period, with the exception of the nitrate predictions after 10:00 a.m. (Central Daylight Time, CDT) on 9 April, where the model underpredicts the observations by, on average, 20%. This period had a low planetary boundary layer, very high particle concentrations, and higher than expected nitrogen dioxide concentrations. For periods when the particle chloride observations are consistently above the detection limit, the model is able both to accurately predict the particle chloride mass concentrations and to provide well-constrained HCl(g) concentrations. The availability of gas-phase ammonia observations helps constrain the predicted HCl(g) concentrations. When the particles are aqueous, the most likely concentrations of HCl(g) are in the sub-ppbv range. The most likely predicted concentration of HCl(g) was found to reach concentrations of order 10 ppbv if the particles are dry. Finally, the...

6. Hybrid Deterministic-Monte Carlo Methods for Neutral Particle Transport International Nuclear Information System (INIS) In the history of transport analysis methodology for nuclear systems, there have been two fundamentally different methods, i.e., deterministic and Monte Carlo (MC) methods. Even though these two methods have coexisted for the past 60 years and are complementary to each other, they have never been combined in the same computer codes. Recently, however, researchers have started to consider combining these two methods in a single code to make use of the strengths of the two algorithms while avoiding their weaknesses. Although advanced modern deterministic techniques such as the method of characteristics (MOC) can solve a multigroup transport equation very accurately, there are still uncertainties in the MOC solutions due to the inaccuracy of the multigroup cross section data caused by approximations in the process of multigroup cross section generation, i.e., equivalence theory, interference effects, etc. Conversely, the MC method can handle the resonance shielding effect accurately when sufficiently many neutron histories are used, but it takes a long calculation time. There has also been research on combining a multigroup transport solver and a continuous-energy transport solver in one code system, depending on the energy range. This paper proposes a hybrid deterministic-MC method in which a multigroup MOC method is used for the high and low energy ranges and a continuous-energy MC method is used for the intermediate resonance energy range, for efficient and accurate transport analysis.

7. The derivation of Particle Monte Carlo methods for plasma modeling from transport equations OpenAIRE Longo, Savino 2008-01-01 We analyze here, in some detail, the derivation of the Particle and Monte Carlo methods of plasma simulation, such as Particle in Cell (PIC), Monte Carlo (MC) and Particle in Cell / Monte Carlo (PIC/MC), from formal manipulations of transport equations.
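Entries 6 and 7 both build on the elementary analog Monte Carlo history loop for neutral particles: sample an exponential free path, then decide between scattering and absorption at each collision. A self-contained 1D slab version with arbitrary illustrative cross sections:

```python
# Bare-bones analog Monte Carlo history loop for a 1D slab with isotropic
# scattering; cross sections and slab width are invented numbers.
import numpy as np

rng = np.random.default_rng(4)
sigma_t, scatter_prob, width = 1.0, 0.7, 5.0  # total xs, c, slab thickness

def history(rng):
    """Track one particle; return 'transmit', 'reflect' or 'absorb'."""
    x, mu = 0.0, 1.0                    # position, direction cosine
    while True:
        x += mu * rng.exponential(1.0 / sigma_t)   # free flight
        if x >= width:
            return "transmit"
        if x <= 0.0:
            return "reflect"
        if rng.random() > scatter_prob:
            return "absorb"
        mu = 2.0 * rng.random() - 1.0   # isotropic re-emission

tallies = {"transmit": 0, "reflect": 0, "absorb": 0}
for _ in range(100_000):
    tallies[history(rng)] += 1
print(tallies)
```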
8. Methods for variance reduction in Monte Carlo simulations Science.gov (United States) Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J. 2016-03-01 Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reduction in the variance of the dose distribution in a computational volume. The dose distribution is computed via tracing of a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.

9. Radiative heat transfer by the Monte Carlo method CERN Document Server Hartnett, James P; Cho, Young I; Greene, George A; Taniguchi, Hiroshi; Yang, Wen-Jei; Kudo, Kazuhiko 1995-01-01 This book presents the basic principles and applications of radiative heat transfer used in energy, space, and geo-environmental engineering, and can serve as a reference book for engineers and scientists in research and development. A PC disk containing software for numerical analyses by the Monte Carlo method is included to provide hands-on practice in analyzing actual radiative heat transfer problems. Advances in Heat Transfer is designed to fill the information gap between regularly scheduled journals and university-level textbooks by providing in-depth review articles over a broader scope than journals or texts usually allow. Key features: offers solution methods for the integro-differential formulation to help avoid difficulties; includes a computer disk for numerical analyses by PC; discusses energy absorption by gas and scattering effects by particles; treats non-gray radiative gases; provides example problems for direct applications in energy, space, and geo-environmental engineering.

10. Modelling a gamma irradiation process using the Monte Carlo method Energy Technology Data Exchange (ETDEWEB) Soares, Gabriela A.; Pereira, Marcio T., E-mail: [email protected], E-mail: [email protected] [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)] 2011-07-01 In gamma irradiation services, the evaluation of absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. Through such models, the dose delivered to a specific product, irradiated at a specific position for a certain period of time, can be predicted, provided the model is validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to perform simulations of product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results showed the applicability of this method, with a linear relation between simulated and experimental results. (author)
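Of the variance-reduction techniques listed in entry 8, interaction forcing is the easiest to isolate: in an optically thin medium, force every photon to collide inside the slab and carry the interaction probability as a statistical weight. The tally below (depth of the first collision, optical depth 0.05) is an invented example; both estimators target the same expectation, but the forced one scores on every history:

```python
# Interaction forcing for a weakly interacting slab, versus analog sampling.
import numpy as np

rng = np.random.default_rng(5)
tau, n = 0.05, 50_000                  # slab optical depth, photon count
w = 1.0 - np.exp(-tau)                 # probability of interacting in slab

# Analog: score the depth of the first collision, zero if none in the slab.
x = rng.exponential(1.0, n)
analog = np.where(x < tau, x, 0.0)

# Forced: every history collides; the depth is sampled from the exponential
# truncated to [0, tau] (inverse CDF), and the score carries weight w.
forced = w * (-np.log(1.0 - rng.random(n) * w))

for name, s in (("analog", analog), ("forced", forced)):
    print(f"{name}: mean {s.mean():.3e}, "
          f"std err {s.std(ddof=1) / np.sqrt(n):.1e}")
```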
11. The discrete angle technique combined with the subgroup Monte Carlo method International Nuclear Information System (INIS) We are investigating the use of the discrete angle technique for taking into account anisotropic scattering in a subgroup (or multiband) Monte Carlo algorithm implemented in the DRAGON lattice code. In order to use the same input library data already available for deterministic methods, only Legendre moments of the isotopic transfer cross sections are available, typically computed by the GROUPR module of NJOY. However, the direct use of these data in a Monte Carlo algorithm is impractical, due to the occurrence of negative parts in these distributions. To deal with this limitation, Legendre expansions are consistently converted by a moment method into sums of Dirac-delta distributions. These probability tables can then be directly used to sample the scattering cosine. In the proposed approach, the same moment approach is used to compute probability tables for the scattering angle and for the resonant cross sections. The applicability of the moment approach must however be thoroughly investigated, due to the presence of incoherent Legendre moments. When Dirac angles cannot be computed, the discrete angle technique is substituted by legacy semi-analytic methods. We provide numerical examples to illustrate the methodology by comparison with SN and legacy Monte Carlo codes on several benchmarks from the ICSBEP. (author)

12. Monte Carlo Methods for Rough Free Energy Landscapes: Population Annealing and Parallel Tempering OpenAIRE Machta, Jon; Ellis, Richard S. 2011-01-01 Parallel tempering and population annealing are both effective methods for simulating equilibrium systems with rough free energy landscapes. Parallel tempering, also known as replica exchange Monte Carlo, is a Markov chain Monte Carlo method, while population annealing is a sequential Monte Carlo method. Both methods overcome the exponential slowing associated with high free energy barriers. The convergence properties and efficiency of the two methods are compared. For large systems, populatio...
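A miniature version of the parallel tempering scheme discussed in entry 12: two Metropolis chains at different temperatures explore a double-well energy, and occasional replica swaps let the cold chain cross the barrier. Temperatures, well shape, and proposal width are arbitrary choices made for illustration:

```python
# Two-replica parallel tempering on a double-well energy landscape.
import numpy as np

rng = np.random.default_rng(6)

def energy(x):
    return (x**2 - 1.0) ** 2 / 0.05    # deep wells at x = -1 and x = +1

betas = np.array([1.0, 0.2])           # cold and hot inverse temperatures
x = np.array([-1.0, -1.0])             # both replicas start in the left well
cold_samples = []

for step in range(100_000):
    # One Metropolis update per replica.
    prop = x + rng.normal(0, 0.3, 2)
    accept = rng.random(2) < np.exp(-betas * (energy(prop) - energy(x)))
    x = np.where(accept, prop, x)
    # Attempt a replica swap every 10 steps.
    if step % 10 == 0:
        d = (betas[0] - betas[1]) * (energy(x[0]) - energy(x[1]))
        if rng.random() < np.exp(min(0.0, d)):
            x = x[::-1].copy()
    cold_samples.append(x[0])

frac_right = np.mean(np.array(cold_samples) > 0)
print(f"cold chain fraction in right-hand well: {frac_right:.2f}")  # ~0.5
```

Without the swap move, the cold chain would almost never cross the barrier at this temperature; the hot replica does the barrier crossing and hands configurations down.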
13. Reactor physics analysis method based on Monte Carlo homogenization International Nuclear Information System (INIS) Background: Many new concepts for nuclear energy systems with complicated geometric structures and diverse energy spectra have been put forward to meet the future demands of the nuclear energy market. The traditional deterministic neutronics analysis method has been challenged in two respects: one is the ability to treat generic geometry; the other is the multi-spectrum applicability of the multigroup cross section libraries. The Monte Carlo (MC) method is well suited to arbitrary geometry and spectrum, but faces the problems of long computation times and slow convergence. Purpose: This work aims to find a novel scheme that takes the advantages of both the deterministic core analysis method and the MC method. Methods: A new two-step core analysis scheme is proposed to combine the geometry modeling capability and continuous-energy cross section libraries of the MC method with the higher computational efficiency of the deterministic method. First, MC simulations are performed for each assembly, and the assembly-homogenized multigroup cross sections are tallied at the same time. Then, the core diffusion calculations can be done with these multigroup cross sections. Results: The new scheme can achieve high efficiency while maintaining acceptable precision. Conclusion: The new scheme can be used as an effective tool for the design and analysis of innovative nuclear energy systems, which has been verified by numerical tests. (authors)

14. Comprehensive evaluation and clinical implementation of commercially available Monte Carlo dose calculation algorithm. Science.gov (United States) Zhang, Aizhen; Wen, Ning; Nurushev, Teamour; Burmeister, Jay; Chetty, Indrin J 2013-01-01 A commercial electron Monte Carlo (eMC) dose calculation algorithm has become available in the Eclipse treatment planning system. The purpose of this work was to evaluate the eMC algorithm and investigate the clinical implementation of this system. The beam modeling of the eMC algorithm was performed for beam energies of 6, 9, 12, 16, and 20 MeV for a Varian Trilogy and all available applicator sizes in the Eclipse treatment planning system. The accuracy of the eMC algorithm was evaluated in a homogeneous water phantom, solid water phantoms containing lung and bone materials, and an anthropomorphic phantom. In addition, dose calculation accuracy was compared between the pencil beam (PB) and eMC algorithms in the same treatment planning system for heterogeneous phantoms. The overall agreement between eMC calculations and measurements was within 3%/2 mm, while the PB algorithm had large errors (up to 25%) in predicting dose distributions in the presence of inhomogeneities such as bone and lung. The clinical implementation of the eMC algorithm was investigated by performing treatment planning for 15 patients with lesions in the head and neck, breast, chest wall, and sternum. The dose distributions were calculated using the PB and eMC algorithms with no smoothing and with all three levels of 3D Gaussian smoothing for comparison. Based on a routine electron beam therapy prescription method, the number of eMC-calculated monitor units (MUs) was found to increase with increased 3D Gaussian smoothing levels. 3D Gaussian smoothing greatly improved the visual usability of dose distributions and produced better target coverage. Differences in calculated MUs and dose distributions between the eMC and PB algorithms could be significant when oblique beam incidence, surface irregularities, and heterogeneous tissues were present in the treatment plans. In our patient cases, monitor unit differences of up to 7% were observed between the PB and eMC algorithms. Monitor unit calculations were also performed...

15. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization Directory of Open Access Journals (Sweden) S. J. Noh 2011-04-01 Full Text Available Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is implemented for the sequential data assimilation through the updating of state variables.
Particle filtering is parallelized and implemented in a multi-core computing environment via the open message passing interface (MPI). We compare performance results of the particle filters in terms of model efficiency, predictive QQ plots and particle diversity. The improvement of model efficiency and the preservation of particle diversity are found in the lagged regularized particle filter.

16. XBRL implementation methods in COREP reporting OpenAIRE Kettula, Teemu 2015-01-01 Objectives of the Study: The main objective of this study is to find out the XBRL adoption methods for European banks to submit COREP reports to local FSAs and to explore transitions in these methods. Thus, the goal is to find patterns in the transitions of XBRL implementation methods. The study is exploratory, as there is no earlier literature about XBRL implementation methods in COREP reporting or about XBRL implementation method transitions in any field. Additionally, this thesis h...

17. Implementation of mathematical phantom of hand and forearm in GEANT4 Monte Carlo code International Nuclear Information System (INIS) This work presents the implementation of a hand and forearm phantom in the Geant4 code for the evaluation of the occupational exposure of the extremities to decaying radionuclides manipulated during procedures involving the use of injection syringes. The simulation model offered by Geant4 includes a full set of features, with the reconstruction of trajectories, geometries and physical models. For this work, the values calculated in the simulation are compared with the rates measured by thermoluminescent dosimeters (TLDs) in the physical phantom REMAB®. From the analysis of the data obtained through simulation and experimentation, for the 14 points studied there was a discrepancy of only 8.2% in the kerma values found, and these figures are considered compatible. The geometric phantom implemented in the Geant4 Monte Carlo code was validated and can later be used for the evaluation of doses to the extremities.

18. Comparison of the TEP method for neutral particle transport in the plasma edge with the Monte Carlo method International Nuclear Information System (INIS) The transmission/escape probability (TEP) method for neutral particle transport has recently been introduced and implemented for the calculation of 2-D neutral atom transport in the edge plasma and divertor regions of tokamaks. The results of an evaluation of the accuracy of the approximations made in the calculation of the basic TEP transport parameters are summarized. Comparisons of the TEP and Monte Carlo calculations for model problems using tokamak experimental geometries and for the analysis of measured neutral densities in DIII-D are presented. The TEP calculations are found to agree rather well with Monte Carlo results, for the most part, but the need for a few extensions of the basic TEP transport methodology and for the inclusion of molecular effects and a better wall reflection model in the existing code is suggested by the study. (author)
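The first-flight transmission probabilities at the core of the TEP method of entry 18 have closed forms in slab geometry, which makes a Monte Carlo cross-check nearly a one-liner. For an isotropic flux incident on a purely absorbing slab of optical thickness tau, the transmission is 2*E3(tau); the setup below is illustrative only and assumes scipy for the exponential integral:

```python
# Monte Carlo check of a first-flight transmission probability against its
# closed form, 2*E3(tau), for isotropic incidence on an absorbing slab.
import numpy as np
from scipy.special import expn

rng = np.random.default_rng(7)
tau, n = 0.8, 1_000_000

mu = np.sqrt(rng.random(n))        # cosine-weighted incident directions
mc = np.mean(np.exp(-tau / mu))    # expected first-flight transmission
print(f"MC transmission:    {mc:.5f}")
print(f"analytic 2*E3(tau): {2.0 * expn(3, tau):.5f}")
```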
Applied mathematics is concerned with the construction, analysis and interpretation of mathematical models that can shed light on significant problems of the natural sciences as well as our daily lives. To this set of problems belongs the description of the collective behaviours of complex systems composed of a large enough number of individuals. Examples of such systems are interacting agents in a financial market, potential voters during political elections, or groups of animals with a tendency to flock or herd. Among other possible approaches, this book provides a step-by-step introduction to mathematical modelling based on a mesoscopic description and the construction of efficient simulation algorithms by Monte Carlo methods. The ar...

20. Quasi Monte Carlo methods for optimization models of the energy industry with pricing and load processes International Nuclear Information System (INIS) We discuss progress in quasi Monte Carlo methods for the numerical calculation of integrals or expected values and justify why these methods are more efficient than classic Monte Carlo methods. Quasi Monte Carlo methods are found to be particularly efficient if the integrands have a low effective dimension. That is why we also discuss the concept of effective dimension and prove, on the example of a stochastic optimization model of the energy industry, that such models can possess a low effective dimension. Modern quasi Monte Carlo methods are therefore very promising for such models.

1. On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems Energy Technology Data Exchange (ETDEWEB) Walsh, Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)] 2015-08-31 The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested, with results comparing favorably to the continuous phase-space on-the-fly approach.

2. Evaluation of uncertainty in grating pitch measurement by optical diffraction using Monte Carlo methods International Nuclear Information System (INIS) Measurement of grating pitch by optical diffraction is one of the few methods currently available for establishing traceability to the definition of the meter on the nanoscale; therefore, understanding all aspects of the measurement is imperative for accurate dissemination of the SI meter. A method for evaluating the component of measurement uncertainty associated with coherent scattering in the diffractometer instrument is presented. The model equation for grating pitch calibration by optical diffraction is an example where Monte Carlo (MC) methods can vastly simplify evaluation of measurement uncertainty. This paper includes discussion of the practical aspects of implementing MC methods for evaluation of measurement uncertainty in grating pitch calibration by diffraction. Downloadable open-source software is demonstrated. (technical design note)
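Entry 20 above argues that quasi Monte Carlo beats plain Monte Carlo when the integrand has low effective dimension. A self-contained two-dimensional illustration using a hand-rolled Halton sequence (radical inverse in bases 2 and 3); the smooth test integrand is arbitrary:

```python
# Plain Monte Carlo vs. a 2-D Halton quasi-random sequence on a smooth
# integrand over the unit square (exact value (e - 1)^2).
import numpy as np

def radical_inverse(i, base):
    x, f = 0.0, 1.0 / base
    while i > 0:
        x += f * (i % base)
        i //= base
        f /= base
    return x

def halton(n):
    """First n 2-D Halton points, bases 2 and 3."""
    return np.array([[radical_inverse(i, 2), radical_inverse(i, 3)]
                     for i in range(1, n + 1)])

def g(u):
    return np.exp(u[:, 0] + u[:, 1])    # smooth test integrand

exact = (np.e - 1.0) ** 2
rng = np.random.default_rng(8)
n = 4096
print(f"MC  error: {abs(g(rng.random((n, 2))).mean() - exact):.2e}")
print(f"QMC error: {abs(g(halton(n)).mean() - exact):.2e}")
```

On smooth, low-dimensional integrands like this one, the quasi-random error at equal sample count is typically one to two orders of magnitude below the pseudo-random error.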
3. Earthquake Forecasting Based on Data Assimilation: Sequential Monte Carlo Methods for Renewal Processes CERN Document Server Werner, M J; Sornette, D 2009-01-01 In meteorology, engineering and computer science, data assimilation is routinely employed as the optimal way to combine noisy observations with prior model information to obtain better estimates of a state, and thus better forecasts, than can be achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant to the seismic gap hypothesis, models of characteristic earthquakes and to recurrence statistics of large quakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating ar...

4. First Numerical Implementation of the Loop-Tree Duality Method CERN Document Server Buchta, Sebastian 2015-01-01 The Loop-Tree Duality (LTD) is a novel perturbative method in QFT that establishes a relation between loop-level and tree-level amplitudes, which gives rise to the idea of treating them simultaneously in a common Monte Carlo. Initially introduced for one-loop scalar integrals, the applicability of the LTD has been expanded to higher-order loops and Feynman graphs beyond simple poles. For the first time, a numerical implementation relying on the LTD was realized in the form of a computer program that calculates one-loop scattering amplitudes. We present details on the employed contour deformation as well as results for scalar and tensor integrals.

5. Synchronous parallel Kinetic Monte Carlo: Implementation and results for object and lattice approaches International Nuclear Information System (INIS) An adaptation of the synchronous parallel Kinetic Monte Carlo (spKMC) algorithm developed by Martinez et al. (2008) to the existing KMC code MMonCa (Martin-Bragado et al. 2013) is presented in this work. Two cases, general enough to provide an idea of the current state of the art in parallel KMC, are presented: Object KMC simulations of the evolution of damage in irradiated iron, and Lattice KMC simulations of epitaxial regrowth of amorphized silicon. The results allow us to state that (a) the parallel overhead is critical and severely degrades the performance of the simulator when it is comparable to the CPU time consumed per event, (b) the balance between domains is important, but not critical, (c) the algorithm and its implementation are correct, and (d) further improvements are needed for spKMC to become a general, all-working solution for KMC simulations.
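Entries 5 and 6 parallelize the classic serial kinetic Monte Carlo loop, i.e. the residence-time (BKL/Gillespie) algorithm: advance the clock by an exponential deviate governed by the total rate, then pick which event fires in proportion to its rate. A toy two-event version with invented rates:

```python
# Residence-time (BKL/Gillespie) kinetic Monte Carlo loop; the two event
# types and their rates are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(9)
rates = np.array([5.0, 0.5])            # e.g. defect hop vs. recombination, 1/s
counts = np.zeros(2, dtype=int)
t, t_end = 0.0, 100.0

while t < t_end:
    total = rates.sum()
    t += rng.exponential(1.0 / total)               # time to next event
    event = rng.choice(len(rates), p=rates / total) # which event fires
    counts[event] += 1

print(f"events fired in {t_end:.0f} s: {counts}")   # roughly [500, 50]
```

The parallel overhead issue raised in entries 5 and 6 arises precisely because this loop is inherently sequential: every event advances a single global clock.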
6. Synchronous parallel Kinetic Monte Carlo: Implementation and results for object and lattice approaches Energy Technology Data Exchange (ETDEWEB) Martin-Bragado, Ignacio, E-mail: [email protected] [IMDEA Materials Institute, C/ Eric Kandel 2, 28906 Getafe, Madrid (Spain)]; Abujas, J.; Galindo, P.L.; Pizarro, J. [Departamento de Ingeniería Informática, Universidad de Cádiz, Puerto Real, Cádiz (Spain)] 2015-06-01 An adaptation of the synchronous parallel Kinetic Monte Carlo (spKMC) algorithm developed by Martinez et al. (2008) to the existing KMC code MMonCa (Martin-Bragado et al. 2013) is presented in this work. Two cases, general enough to provide an idea of the current state of the art in parallel KMC, are presented: Object KMC simulations of the evolution of damage in irradiated iron, and Lattice KMC simulations of epitaxial regrowth of amorphized silicon. The results allow us to state that (a) the parallel overhead is critical and severely degrades the performance of the simulator when it is comparable to the CPU time consumed per event, (b) the balance between domains is important, but not critical, (c) the algorithm and its implementation are correct, and (d) further improvements are needed for spKMC to become a general, all-working solution for KMC simulations.

7. A Comparison of Advanced Monte Carlo Methods for Open Systems: CFCMC vs CBMC NARCIS (Netherlands) A. Torres-Knoop; S.P. Balaji; T.J.H. Vlugt; D. Dubbeldam 2014-01-01 Two state-of-the-art simulation methods for computing adsorption properties in porous materials like zeolites and metal-organic frameworks are compared: the configurational bias Monte Carlo (CBMC) method and the recently proposed continuous fractional component Monte Carlo (CFCMC) method. We show th...

8. Formulation and Application of Quantum Monte Carlo Method to Fractional Quantum Hall Systems OpenAIRE Suzuki, Sei; Nakajima, Tatsuya 2003-01-01 The quantum Monte Carlo method is applied to fractional quantum Hall systems. The use of the linear programming method enables us to avoid the negative-sign problem in the quantum Monte Carlo calculations. The formulation of this method and the technique for avoiding the sign problem are described. Some numerical results on static physical quantities are also reported.

9. Radiation transport in random disperse media implemented in the Monte Carlo code PRIZMA International Nuclear Information System (INIS) The paper describes PRIZMA capabilities for modeling radiation transport in random disperse media by the Monte Carlo method. It proposes a method for simulating radiation transport in binary media with variable volume fractions. The method models the medium consecutively from one grain crossed by a particle trajectory to another. As in the Limited Chord Length Sampling (LCLS) method, particles in grains are tracked in the actual grain geometry, but unlike LCLS, the medium is modeled using only Matrix Chord Length Sampling (MCLS) from the exponential distribution, and it is not necessary to know the grain chord length distribution. This helped us extend the method to media with randomly oriented, arbitrarily shaped convex grains. Other extensions include multicomponent media (grains of several sorts) and polydisperse media (grains of different sizes).
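The chord-length sampling idea behind entry 9 can be caricatured in one dimension: between grains, the matrix chord to the next grain is drawn from an exponential distribution, while inside a grain (idealized here as a fixed-radius sphere crossed at a random impact parameter) the chord follows from geometry. All material data below are invented, and grain overlap with the slab boundary is ignored for brevity:

```python
# Simplified chord-length sampling through a binary stochastic slab:
# exponential matrix chords alternate with geometric sphere chords.
import numpy as np

rng = np.random.default_rng(10)
mean_matrix_chord, R = 2.0, 0.3      # cm; mean gap between grains, grain radius
sig_matrix, sig_grain = 0.1, 2.0     # absorption cross sections, 1/cm
width, n = 10.0, 20_000

transmitted = 0
for _ in range(n):
    x, tau = 0.0, 0.0
    while x < width:
        step = rng.exponential(mean_matrix_chord)  # matrix chord to next grain
        x += step
        tau += sig_matrix * step
        if x >= width:
            break
        b = R * np.sqrt(rng.random())              # random impact parameter
        chord = 2.0 * np.sqrt(R * R - b * b)       # chord through the sphere
        x += chord
        tau += sig_grain * chord
    if rng.random() < np.exp(-tau):                # survive total optical depth
        transmitted += 1

print(f"transmission through binary medium: {transmitted / n:.4f}")
```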
10. Seriation in paleontological data using Markov chain Monte Carlo methods. Directory of Open Access Journals (Sweden) Kai Puolamäki 2006-02-01 Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for the sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95.

11. Limit theorems for weighted samples with applications to sequential Monte Carlo methods OpenAIRE Douc, R.; Moulines, E. 2008-01-01 In the last decade, sequential Monte Carlo methods (SMC) emerged as a key tool in computational statistics [see, e.g., Sequential Monte Carlo Methods in Practice (2001) Springer, New York; Monte Carlo Strategies in Scientific Computing (2001) Springer, New York; Complex Stochastic Systems (2001) 109-173]. These algorithms approximate a sequence of distributions by a sequence of weighted empirical measures associated to a weighted population of particles, which are generated recursively. ...

12. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for peta scale platforms and beyond International Nuclear Information System (INIS) Various strategies to implement efficiently quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10 to 80k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)

13. Continuous-energy Monte Carlo methods for calculating generalized response sensitivities using TSUNAMI-3D International Nuclear Information System (INIS) This work introduces a new approach for calculating the sensitivity of generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The GEneralized Adjoint Responses in Monte Carlo (GEAR-MC) method has enabled the calculation of high-resolution sensitivity coefficients for multiple generalized neutronic responses in a single Monte Carlo calculation, with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by calculating sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications. (author)
14. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models Science.gov (United States) Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C. 2010-10-01 Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.

15. Direct simulation Monte Carlo calculation of rarefied gas drag using an immersed boundary method Science.gov (United States) Jin, W.; Kleijn, C. R.; van Ommen, J. R. 2016-06-01 For simulating rarefied gas flows around a moving body, an immersed boundary method is presented here in conjunction with the Direct Simulation Monte Carlo (DSMC) method in order to allow the movement of a three-dimensional immersed body on top of a fixed background grid. The simulated DSMC particles are reflected exactly at the landing points on the surface of the moving immersed body, while the effective cell volumes are taken into account for calculating the collisions between molecules. The effective cell volumes are computed by utilizing the Lagrangian intersection points between the immersed boundary and the fixed background grid with a simple polyhedra regeneration algorithm. This method has been implemented in OpenFOAM and validated by computing the drag forces exerted on steady and moving spheres and comparing the results to those from conventional body-fitted mesh DSMC simulations and to analytical approximations.
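Entry 14 above traces MCMC pathologies back to coarse fixed-step explicit integration of the water balance. The effect is already visible on a linear reservoir dS/dt = P - kS, where a large explicit Euler step badly distorts the solution that finer stepping recovers; the parameter values below are arbitrary:

```python
# Fixed-step explicit Euler on a linear reservoir: integration error as a
# function of step size, against the analytic solution.
import numpy as np

def euler(k, P, S0, t_end, dt):
    S = S0
    for _ in range(int(round(t_end / dt))):
        S += dt * (P - k * S)          # first-order explicit update
    return S

k, P, S0, t_end = 1.8, 1.0, 5.0, 10.0
exact = P / k + (S0 - P / k) * np.exp(-k * t_end)
for dt in (1.0, 0.1, 0.001):
    print(f"dt={dt:5.3f}: S(10) = {euler(k, P, S0, t_end, dt):8.4f}"
          f"   (exact {exact:.4f})")
```

With dt = 1.0 the update factor (1 - k*dt) is negative and the numerical storage oscillates toward a visibly wrong value; such dt-dependent distortion is what feeds the artificial posterior bimodality reported in the paper.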
15. Direct simulation Monte Carlo calculation of rarefied gas drag using an immersed boundary method
Science.gov (United States)
Jin, W.; Kleijn, C. R.; van Ommen, J. R.
2016-06-01
For simulating rarefied gas flows around a moving body, an immersed boundary method is presented here in conjunction with the Direct Simulation Monte Carlo (DSMC) method in order to allow the movement of a three-dimensional immersed body on top of a fixed background grid. The simulated DSMC particles are reflected exactly at the landing points on the surface of the moving immersed body, while the effective cell volumes are taken into account for calculating the collisions between molecules. The effective cell volumes are computed by utilizing the Lagrangian intersecting points between the immersed boundary and the fixed background grid with a simple polyhedra-regeneration algorithm. This method has been implemented in OpenFOAM and validated by computing the drag forces exerted on steady and moving spheres and comparing the results to those from conventional body-fitted-mesh DSMC simulations and to analytical approximations.

16. A Monte Carlo simulation based inverse propagation method for stochastic model updating
Science.gov (United States)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters were selected via F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) was then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization methods, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
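A minimal sketch of the surrogate idea, with a stand-in "expensive" model and hypothetical parameter statistics: a small design of experiments feeds a polynomial response surface, and the Monte Carlo sampling is then performed on the cheap surrogate (here a full fourth-order fit in one variable rather than the paper's incomplete multivariate polynomial).

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(theta):
    """Stand-in for a costly finite-element run (e.g., a frequency vs. stiffness)."""
    return np.sqrt(theta) * (1.0 + 0.05 * np.sin(theta))

theta_doe = np.linspace(0.5, 2.0, 9)                          # small design of experiments
coef = np.polyfit(theta_doe, expensive_model(theta_doe), 4)   # 4th-order RSM fit

theta_mc = rng.normal(1.2, 0.1, 100_000)                      # rapid MCS on the surrogate
response = np.polyval(coef, theta_mc)
print("mean, std of response:", response.mean(), response.std())
```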
17. Diffusion Monte Carlo methods applied to Hamaker Constant evaluations
CERN Document Server
Hongo, Kenta
2016-01-01
We applied diffusion Monte Carlo (DMC) methods to evaluate Hamaker constants of liquids for wettabilities, with a practical size of liquid molecule, Si$_6$H$_{12}$ (cyclohexasilane). The evaluated constant would be justified in the sense that it lies within the expected dependence on molecular weights among similar kinds of molecules, though there are no reference experimental values available for this molecule. Comparing the DMC with vdW-DFT evaluations, we clarified that some of the vdW-DFT evaluations could not describe the correct asymptotic decays, and hence Hamaker constants, even though they gave reasonable binding lengths and energies, and vice versa for the rest of the vdW-DFTs. We also found the advantage of DMC for this practical purpose over CCSD(T), because of the large amount of BSSE/CBS corrections required for the latter under the limitation of basis-set size applicable to the practical size of a liquid molecule, while the former is free from such limitations to the extent that only the nodal structure of...

18. Dose calculation of 6 MV Truebeam using Monte Carlo method
International Nuclear Information System (INIS)
The purpose of this work is to simulate the dosimetric characteristics of a 6 MV Varian Truebeam linac using the Monte Carlo method, and to investigate the availability of the phase space file and the accuracy of the simulation. With the phase space file at the linac window supplied by Varian as a source, the patient-dependent part was simulated. Dose distributions in a water phantom with a 10 cm × 10 cm field were calculated and compared with measured data for validation. An evident time reduction was obtained, from the 4-5 h that a whole simulation cost on the same computer to around 48 minutes. Good agreement between simulations and measurements in water was observed. Dose differences are less than 3% for depth doses in the build-up region and for dose profiles inside the 80% field size, and agreement in the penumbra is good. This demonstrates that simulation using an existing phase space file as the EGSnrc source is efficient. Dose differences between calculated data and measured data meet the requirements for dose calculation. (authors)

19. Medical Imaging Image Quality Assessment with Monte Carlo Methods
Science.gov (United States)
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners through a thin-layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed by using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated by a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed by using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration, whereas they remain almost constant thereafter. MTF improves by using lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.

20. Gas Swing Options: Introduction and Pricing using Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Václavík Tomáš
2016-02-01
Motivated by the changing nature of the natural gas industry in the European Union, driven by the liberalisation process, we focus on the introduction and pricing of gas swing options. These options are embedded in typical gas sales agreements in the form of offtake flexibility concerning volume and time. The gas swing option is actually a set of several American puts on a spread between the prices of two or more energy commodities. This fact, together with the fact that the energy markets are fundamentally different from traditional financial security markets, is important for our choice of valuation technique. Due to the specific features of the energy markets, the existing analytic approximations for spread option pricing are hardly applicable to our framework. That is why we employ Monte Carlo methods to model the spot price dynamics of the underlying commodities. The price of an arbitrarily chosen gas swing option is then computed in accordance with the concept of risk-neutral expectations. Finally, our result is compared with the real payoff from the option realised at the time of the option execution and the maximum ex-post payoff that the buyer could generate in case he knew the future, discounted to the original time of the option pricing.
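As background for the valuation approach, the sketch below prices the simplest building block, a European option on the spread between two correlated commodities under risk-neutral geometric Brownian motion, by plain Monte Carlo. All market parameters are invented, and the swing option itself adds American-style exercise and volume constraints that this sketch deliberately omits.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, r = 200_000, 1.0, 0.02          # paths, maturity (years), risk-free rate

s1, s2 = 25.0, 22.0                   # spot prices of the two commodities
v1, v2, rho = 0.30, 0.25, 0.6         # volatilities and correlation
K = 1.0                               # strike on the spread

z1 = rng.standard_normal(n)           # correlated terminal shocks
z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
S1 = s1 * np.exp((r - 0.5 * v1 ** 2) * T + v1 * np.sqrt(T) * z1)
S2 = s2 * np.exp((r - 0.5 * v2 ** 2) * T + v2 * np.sqrt(T) * z2)

payoff = np.maximum(S1 - S2 - K, 0.0)          # European spread-option payoff
price = np.exp(-r * T) * payoff.mean()         # risk-neutral expectation
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n)
print(f"price = {price:.3f} +/- {stderr:.3f}")
```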
1. Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters]
Energy Technology Data Exchange (ETDEWEB)
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self-consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance-sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity to the anisotropic harmonic oscillator model shape for the given number of valence electrons.

3. Development of 3D reactor burnup code based on Monte Carlo method and exponential Euler method
International Nuclear Information System (INIS)
Burnup analysis plays a key role in fuel breeding, transmutation and post-processing in nuclear reactors. Burnup codes based on one-dimensional and two-dimensional transport methods have difficulties in meeting the accuracy requirements. A three-dimensional burnup analysis code based on the Monte Carlo method and the exponential Euler method has been developed. The coupled code combines the advantage of the Monte Carlo method in complex-geometry neutron transport calculations with that of FISPACT in fast and precise inventory calculations, while resonance self-shielding effects in the inventory calculation can also be considered. The IAEA benchmark test problem was adopted for code validation. Good agreement was shown in the comparison with other participants' results. (authors)
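The exponential Euler idea can be sketched on a toy transmutation chain: over each burnup step the Bateman equations dN/dt = A·N are advanced with a matrix exponential, while a full coupling would update the flux (and hence A) from a Monte Carlo transport solve between steps. All cross sections and rates below are illustrative, and the real code uses FISPACT rather than scipy.

```python
import numpy as np
from scipy.linalg import expm

phi = 3e14                        # one-group flux from the MC transport step (n/cm2/s)
r1 = 600e-24 * phi                # removal rate of nuclide 1 (1/s), illustrative
r2 = 1000e-24 * phi               # removal rate of nuclide 2 (1/s)
f = 0.4                           # fraction of nuclide-1 removals producing nuclide 2

A = np.array([[-r1, 0.0],
              [f * r1, -r2]])     # Bateman matrix: dN/dt = A N
N = np.array([1.0e21, 0.0])       # initial atom densities (1/cm3)

dt = 30 * 86400                   # one 30-day burnup step (s)
for _ in range(12):
    # a full coupling would re-run the MC transport here to update phi (and A)
    N = expm(A * dt) @ N          # matrix-exponential depletion step
print("densities after one year:", N)
```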
4. Applications of Monte Carlo methods in nuclear science and engineering
International Nuclear Information System (INIS)

5. Simple recursive implementation of fast multipole method
International Nuclear Information System (INIS)
In this paper we present an implementation of the well known 'fast multipole' method (FMM) for the efficient calculation of dipole fields. The main advantage of the present implementation is simplicity: we believe that a major reason for the lack of use of FMMs is their complexity. One of the simplifications is the use of polynomials in the Cartesian coordinates rather than spherical harmonics. We have implemented it in the context of an arbitrary hierarchical system of cells; no periodic mesh is required, as it is for FFT (fast Fourier transform) methods. The implementation is in terms of recursive functions. Results are given for application to micromagnetic simulation. Complete source code is provided for an open-source implementation of this method, as well as an installer for the resulting program.

6. Theory and applications of the fission matrix method for continuous-energy Monte Carlo
International Nuclear Information System (INIS)
Highlights:
• The fission matrix method is implemented into the MCNP Monte Carlo code.
• Eigenfunctions and eigenvalues of power distributions are shown and studied.
• Source convergence acceleration is demonstrated for a fuel storage vault problem.
• Forward flux eigenmodes and relative uncertainties are shown for a reactor problem.
• Eigenmode expansions are performed during source convergence for a reactor problem.
Abstract: The fission matrix method can be used to provide estimates of the fundamental mode fission distribution, the dominance ratio, the eigenvalue spectrum, and higher-mode forward and adjoint eigenfunctions of the fission distribution. It can also be used to accelerate the convergence of power method iterations and to provide basis functions for higher-order perturbation theory. The higher-mode fission sources can be used to determine higher-mode forward fluxes and tallies, and work is underway to provide higher-mode adjoint-weighted fluxes and tallies. These aspects of the method are here both theoretically justified and demonstrated, and then used to investigate fundamental properties of the transport equation for a continuous-energy physics treatment. Implementation into the MCNP6 Monte Carlo code is also discussed, including a sparse representation of the fission matrix, which permits much larger and more accurate representations. Properties of the calculated eigenvalue spectrum of a 2D PWR problem are discussed: for a fine enough mesh and a sufficient degree of sampling, the spectrum both converges and has a negligible imaginary component. Calculation of the fundamental mode of the fission matrix for a fuel storage vault problem shows how convergence can be accelerated by over a factor of ten given a flat initial distribution. Forward fluxes and the relative uncertainties for a 2D PWR are shown, both of which qualitatively agree with expectation. Lastly, eigenmode expansions are performed during source convergence of the 2D PWR
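The core of the fission matrix method is a small eigenproblem. Assuming a fission matrix F has already been tallied (here it is synthesized rather than computed from transport), power iteration yields the fundamental source mode and the k-eigenvalue, and the ratio of the two largest eigenvalues gives the dominance ratio:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 20                                     # spatial mesh cells

# hypothetical fission matrix: F[i, j] = expected fission neutrons born in cell i
# per fission neutron born in cell j (in practice tallied during the MC run)
d = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
F = np.exp(-0.5 * d) * (1.0 + 0.05 * rng.random((m, m)))

s = np.ones(m) / m                         # flat initial fission source
for _ in range(200):                       # power iteration
    s_new = F @ s
    k = s_new.sum() / s.sum()              # k-eigenvalue estimate
    s = s_new / s_new.sum()                # fundamental source mode

lams = np.sort(np.abs(np.linalg.eigvals(F)))[::-1]
print("k =", k, "dominance ratio =", lams[1] / lams[0])
```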
7. Monte Carlo methods for direct calculation of 3D dose distributions for photon fields in radiotherapy
International Nuclear Information System (INIS)
Even with state-of-the-art treatment planning systems, the photon dose calculation can be erroneous under certain circumstances. In these cases Monte Carlo methods promise a higher accuracy. We have used the photon transport code CHILD of the GSF-Forschungszentrum, which was developed to calculate dose in diagnostic radiation protection matters. The code was refined for application in radiotherapy for high-energy photon irradiation and should serve for dose verification in individual cases. The irradiation phantom can be entered as any desired 3D matrix or be generated automatically from an individual CT database. The particle transport takes into account pair production, the photoelectric effect, and the Compton effect, with certain approximations. Efficiency is increased by the method of 'fractional photons'. The generated secondary electrons are followed by the unscattered continuous-slowing-down approximation (CSDA). The developed Monte Carlo code Monaco Matrix was tested with simple homogeneous and heterogeneous phantoms through comparisons with simulations of the well-known but slower EGS4 code. The use of a point source with a direction-independent energy spectrum as the simplest model of the radiation field from the accelerator head is shown to be sufficient for simulation of actual accelerator depth dose curves. Good agreement (<2%) was found for depth dose curves in water and in bone. With complex test phantoms and comparisons with EGS4-calculated dose profiles, some drawbacks in the code were found. Thus, the implementation of electron multiple scattering should lead us to step-by-step improvement of the algorithm. (orig.)

8. Simulating Compton scattering using Monte Carlo method: COSMOC library
Czech Academy of Sciences Publication Activity Database
Opava: Silesian University, 2014 - (Stuchlík, Z.), s. 1-10. (Publications of the Institute of Physics. 7). ISBN 9788075101266. ISSN 2336-5668. [RAGtime /14.-16./. Opava (CZ), 18.09.2012-22.09.2012]. Institutional support: RVO:67985815. Keywords: Monte Carlo * Compton scattering * C++. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

9. Analysis of some splitting and roulette algorithms in shield calculations by the Monte Carlo method
International Nuclear Information System (INIS)
Different schemes of using the splitting and roulette methods in calculations of radiation transport in nuclear facility shields by the Monte Carlo method are considered. The efficiency of the considered schemes is estimated on the example of test calculations.
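A minimal example of one such scheme, for a 1D rod with a single splitting surface (the geometry, cross sections and the 1:2 splitting ratio are all invented): particles crossing into the deeper, more important half are split, and particles crossing back are subjected to Russian roulette, which biases the population toward the region that contributes to the transmission tally. Restarting a flight at the surface is unbiased because exponential free paths are memoryless.

```python
import numpy as np

rng = np.random.default_rng(1)
SIG_T, L = 1.0, 5.0        # total cross section (1/cm) and rod length (cm)
P_ABS = 0.3                # absorption probability per collision
PLANE, F = 2.5, 2          # splitting surface and splitting/roulette factor

def transmission(n_hist):
    """Estimate transmission through a 1D rod with splitting/roulette at PLANE."""
    score = 0.0
    for _ in range(n_hist):
        bank = [(0.0, 1.0, 1)]                        # (position, weight, direction)
        while bank:
            x, w, mu = bank.pop()
            while True:
                x_new = x + mu * (-np.log(rng.random()) / SIG_T)
                if mu > 0 and x < PLANE <= x_new:     # into important region: split 1:F
                    for _ in range(F - 1):            # banked copies restart at PLANE
                        bank.append((PLANE, w / F, mu))
                    w /= F
                elif mu < 0 and x_new < PLANE <= x:   # back out: Russian roulette
                    if rng.random() < 1.0 / F:
                        w *= F                        # survivor carries more weight
                    else:
                        break                         # killed by roulette
                x = x_new
                if x >= L:                            # transmitted: tally and stop
                    score += w
                    break
                if x <= 0.0 or rng.random() < P_ABS:  # leaked backward or absorbed
                    break
                mu = 1 if rng.random() < 0.5 else -1  # isotropic scatter in a rod
    return score / n_hist

print(transmission(20_000))
```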
10. Review of quantum Monte Carlo methods and results for Coulombic systems
Energy Technology Data Exchange (ETDEWEB)
Ceperley, D.
1983-01-27
The various Monte Carlo methods for calculating ground state energies are briefly reviewed. Then a summary of the charged systems that have been studied with Monte Carlo is given. These include the electron gas, small molecules, a metal slab and many-body hydrogen.

11. Continuous-energy Monte Carlo methods for calculating generalized response sensitivities using TSUNAMI-3D
Energy Technology Data Exchange (ETDEWEB)
Perfetti, Christopher M. [ORNL]; Rearden, Bradley T. [ORNL]
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

12. Parallel implementation of the Monte Carlo transport code EGS4 on the hypercube
International Nuclear Information System (INIS)
Monte Carlo transport codes are commonly used in the study of particle interactions. The CALOR89 code system is a combination of several Monte Carlo transport and analysis programs. In order to produce good results, a typical Monte Carlo run will have to produce many particle histories. On a single-processor computer, the transport calculation can take a huge amount of time. However, if the transport of particles is divided among several processors in a multiprocessor machine, the time can be drastically reduced.
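The division of histories across processors is straightforward to sketch with Python's standard library (the "history" here is a trivial stand-in, and the batch sizes and seeds are arbitrary): each worker transports an independent batch with its own random stream, and the batch tallies are combined at the end exactly as a single long run would be.

```python
import numpy as np
from multiprocessing import Pool

def run_batch(args):
    """Transport one independent batch of histories; return partial tallies."""
    n, seed = args
    rng = np.random.default_rng(seed)            # independent stream per worker
    # toy 'history': does a pencil-beam photon cross 3 mean free paths?
    tally = (-np.log(rng.random(n)) > 3.0).astype(float)
    return tally.sum(), (tally ** 2).sum()

if __name__ == "__main__":
    n_proc, n_per = 8, 250_000
    with Pool(n_proc) as pool:
        parts = pool.map(run_batch, [(n_per, 1000 + i) for i in range(n_proc)])
    n = n_proc * n_per
    s1 = sum(p[0] for p in parts)
    s2 = sum(p[1] for p in parts)
    mean = s1 / n
    sem = np.sqrt((s2 / n - mean ** 2) / n)      # standard error of the mean
    print(f"transmission = {mean:.5f} +/- {sem:.5f}")
```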
13. BREESE-II: auxiliary routines for implementing the albedo option in the MORSE Monte Carlo code
International Nuclear Information System (INIS)
The routines in the BREESE package implement the albedo option in the MORSE Monte Carlo code by providing (1) replacements for the default routines ALBIN and ALBDO in the MORSE code, (2) an estimating routine ALBDOE compatible with the SAMBO package in MORSE, and (3) a separate program that writes a tape of albedo data in the proper format for ALBIN. These extensions of the package initially reported in 1974 were performed jointly by ORNL, Bechtel Power Corporation, and Science Applications, Inc. The first version of BREESE had a fixed number of outgoing polar angles, and the number of outgoing azimuthal angles was a function of the value of the outgoing polar angle only. An examination of differential albedo data led to this modified version, which allows the number of outgoing polar angles to depend upon the value of the incoming polar angle, and the number of outgoing azimuthal angles to be a function of the value of both incoming and outgoing polar angles.

14. The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy
CERN Document Server
Parodi, K.; Cerutti, F.; Ferrari, A.; Mairani, A.; Paganetti, H.; Sommerer, F.
2010-01-01
Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC applications are outlined. The value of MC modeling to promote accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH), Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron-Emission-Tomography / Computed-Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the β+ activity measurement in order to infer indirect infor...

15. Monte Carlo Method for Calculating Oxygen Abundances and Their Uncertainties from Strong-Line Flux Measurements
CERN Document Server
Bianco, Federica B.; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or
2015-01-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions. In additi...
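The core sampling loop is simple to sketch. Assuming Gaussian flux errors, hypothetical measured fluxes, and a single calibrator (the Pettini & Pagel 2004 O3N2 scale; pyMCZ itself handles many calibrators plus reddening correction), the Monte Carlo distribution of the abundance and its confidence region follow directly:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# measured line fluxes and 1-sigma uncertainties (hypothetical values, erg/s/cm2)
f_oiii, df_oiii = 3.2e-15, 0.3e-15   # [OIII] 5007
f_nii,  df_nii  = 1.1e-15, 0.2e-15   # [NII] 6584
f_ha,   df_ha   = 4.0e-15, 0.3e-15   # Halpha
f_hb,   df_hb   = 1.3e-15, 0.2e-15   # Hbeta

# synthetic flux sets consistent with the measurement errors
oiii = rng.normal(f_oiii, df_oiii, n)
nii = rng.normal(f_nii, df_nii, n)
ha = rng.normal(f_ha, df_ha, n)
hb = rng.normal(f_hb, df_hb, n)

# O3N2 indicator and the Pettini & Pagel (2004) linear calibration
o3n2 = np.log10((oiii / hb) / (nii / ha))
oh = 8.73 - 0.32 * o3n2              # 12 + log(O/H)

lo, med, hi = np.percentile(oh, [16, 50, 84])
print(f"12+log(O/H) = {med:.3f} (+{hi - med:.3f} / -{med - lo:.3f})")
```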
16. A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport
International Nuclear Information System (INIS)
Residual Monte Carlo provides exponential convergence of statistical error with respect to the number of particle histories. In the past, residual Monte Carlo has been applied to a variety of angularly discrete radiation-transport problems. Here, we apply residual Monte Carlo to spatially discrete, angularly continuous transport. By maintaining angular continuity, our method avoids the deficiencies of angular discretizations, such as ray effects. For planar geometry and step differencing, we use the corresponding integral transport equation to calculate an angularly independent residual from the scalar flux in each stage of residual Monte Carlo. We then demonstrate that the resulting residual Monte Carlo method does indeed converge exponentially to within machine precision of the exact step-differenced solution.

17. Monte Carlo method for calculating oxygen abundances and their uncertainties from strong-line flux measurements
Science.gov (United States)
Bianco, F. B.; Modjaz, M.; Oh, S. M.; Fierroz, D.; Liu, Y. Q.; Kewley, L.; Graur, O.
2016-07-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity calibrators, based on the original IDL code of Kewley and Dopita (2002) with updates from Kewley and Ellison (2008), and expanded to include more recently developed calibrators. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios (referred to as indicators) in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo sampling, better characterizes the statistical oxygen abundance confidence region, including the effect due to the propagation of observational uncertainties. These uncertainties are likely to dominate the error budget in the case of distant galaxies, hosts of cosmic explosions. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 15 metallicity calibrators simultaneously, as well as for E(B-V), and estimates their median values and their 68% confidence regions. We provide the option of outputting the full Monte Carlo distributions and their kernel density estimates. We test our code on emission line measurements from a sample of nearby supernova host galaxies (z ...). https://github.com/nyusngroup/pyMCZ

18. Genetic algorithms: An evolution from Monte Carlo Methods for strongly non-linear geophysical optimization problems
Science.gov (United States)
Gallagher, Kerry; Sambridge, Malcolm; Drijkoningen, Guy
In providing a method for solving non-linear optimization problems, Monte Carlo techniques avoid the need for linearization but, in practice, are often prohibitive because of the large number of models that must be considered. A new class of methods known as Genetic Algorithms has recently been devised in the field of Artificial Intelligence. We outline the basic concept of genetic algorithms and discuss three examples. We show that, in locating an optimal model, the new technique is far superior in performance to Monte Carlo techniques in all cases considered. However, Monte Carlo integration is still regarded as an effective method for the subsequent model appraisal.

19. Gamma ray energy loss spectra simulation in NaI detectors with the Monte Carlo method
International Nuclear Information System (INIS)
With the aim of studying and applying the Monte Carlo method, a computer code was developed to calculate the pulse height spectra and detector efficiencies for gamma rays incident on NaI(Tl) crystals. The basic detector processes in NaI(Tl) detectors are given, together with an outline of Monte Carlo methods and a general review of relevant published works. A detailed description of the application of Monte Carlo methods to γ-ray detection in NaI(Tl) detectors is given. Comparisons are made with published calculated and experimental data. (Author)

20. Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues
International Nuclear Information System (INIS)
The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point-estimate approach to risk. This report provides an overview of the probabilistic technique called Monte Carlo analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point-estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL.
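The point-estimate versus Monte Carlo contrast is easy to illustrate with a toy ingestion-dose model (all distributions, parameter values and the slope factor below are hypothetical, not INEL data): instead of one deterministic risk number, the sampled inputs yield a full risk distribution whose percentiles can be reported.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# hypothetical exposure-model inputs
conc = rng.lognormal(np.log(2.0), 0.6, n)            # contaminant in soil (mg/kg)
intake = rng.triangular(20.0, 50.0, 100.0, n)        # soil ingestion (mg/day)
bw = rng.normal(70.0, 10.0, n).clip(min=30.0)        # body weight (kg)
sf = 1.5e-3                                          # slope factor, held fixed

dose = conc * intake * 1e-6 / bw                     # mg/kg-day
risk = sf * dose                                     # incremental lifetime risk

point = sf * 2.0 * 50.0 * 1e-6 / 70.0                # deterministic point estimate
p50, p95 = np.percentile(risk, [50, 95])
print(f"point {point:.2e}; MC median {p50:.2e}; 95th percentile {p95:.2e}")
```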
1. SU-E-T-277: Raystation Electron Monte Carlo Commissioning and Clinical Implementation
International Nuclear Information System (INIS)
Purpose: To evaluate the Raystation v4.0 electron Monte Carlo algorithm for an Elekta Infinity linear accelerator and commission it for clinical use. Methods: A total of 199 tests were performed (75 Export and Documentation, 20 PDD, 30 Profiles, 4 Obliquity, 10 Inhomogeneity, 55 MU Accuracy, and 5 Grid and Particle History). Export and documentation tests were performed with respect to MOSAIQ (Elekta AB) and RadCalc (Lifeline Software Inc). Mechanical jaw parameters and cutout magnifications were verified. PDD and profiles for open cones and cutouts were extracted and compared with water tank measurements. Obliquity and inhomogeneity calculations for bone and air were compared to film dosimetry. MU calculations for open cones and cutouts were performed and compared to both RadCalc and simple hand calculations. Grid size and particle histories were evaluated per energy for statistical uncertainty performance. Acceptability was categorized as follows: performs as expected, negligible impact on workflow, marginal impact, critical impact or safety concern, and catastrophic impact or safety concern. Results: Overall results are: 88.8% perform as expected, 10.2% negligible, 2.0% marginal, 0% critical and 0% catastrophic. Results per test category are as follows: Export and Documentation: 100% perform as expected; PDD: 100% perform as expected; Profiles: 66.7% perform as expected, 33.3% negligible; Obliquity: 100% marginal; Inhomogeneity: 50% perform as expected, 50% negligible; MU Accuracy: 100% perform as expected; Grid and particle histories: 100% negligible. To achieve distributions with a satisfactory smoothness level, 5,000,000 particle histories were used. Calculation time was approximately 1 hour. Conclusion: Raystation electron Monte Carlo is acceptable for clinical use. All of the issues encountered have acceptable workarounds. Known issues were reported to Raysearch and will be resolved in upcoming releases.

2. An Implementation of the Frequency Matching Method
DEFF Research Database (Denmark)
Lange, Katrine; Frydendall, Jan; Hansen, Thomas Mejer
... aspects of the implementation of the Frequency Matching method and the techniques adopted to make it computationally feasible also for large-scale inverse problems. The source code is publicly available at GitHub and this paper also provides an example of how to apply the Frequency Matching method to a...
3. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method
International Nuclear Information System (INIS)
This work is based on the determination of the detection efficiency of 125I and 131I in the thyroid with the identiFINDER detector, using the Monte Carlo method. The suitability of the calibration method is analyzed by comparing the results of the direct Monte Carlo method with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed minimizing the uncertainties of the estimates. Finally, simulations of the detector and point-source geometry were performed to find the correction factors at 5 cm, 15 cm and 25 cm, and those corresponding to the detector-phantom arrangement for the method validation and the final calculation of the efficiency. It was demonstrated that if, in the implementation of the Monte Carlo method, one simulates at a greater distance than that used in the laboratory measurements, the efficiency is overestimated, while simulating at a shorter distance underestimates it; the simulation should therefore be performed at the same distance at which the real measurement will be made. The efficiency curves and the minimum detectable activity for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method represents an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)

4. Quasi-Monte Carlo methods for lattice systems: a first look
International Nuclear Information System (INIS)
We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
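The N^(-1/2) versus N^(-1) contrast can be seen on even a one-dimensional integral. The sketch below compares plain pseudo-random Monte Carlo with a scrambled Sobol sequence (via scipy.stats.qmc) on a smooth test integrand; lattice field theories are far beyond this toy, but the error-scaling mechanism is the same.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(3)
f = lambda u: np.exp(-u ** 2)            # smooth 1D test integrand on [0, 1]
exact = 0.7468241328124271               # integral of exp(-x^2) from 0 to 1

for n in (2 ** 10, 2 ** 14):             # powers of 2 suit Sobol sampling
    mc = f(rng.random(n)).mean()                       # plain Monte Carlo
    u = qmc.Sobol(d=1, scramble=True, seed=3).random(n)
    qmc_est = f(u[:, 0]).mean()                        # quasi-Monte Carlo
    print(n, abs(mc - exact), abs(qmc_est - exact))
```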
Today, two major paradigms of evaluation and implementation exist: the programmed paradigm with its approach based on the natural science model, and the adaptive paradigm with an... 7. Frequency-domain deviational Monte Carlo method for linear oscillatory gas flows Science.gov (United States) 2015-10-01 Oscillatory non-continuum low Mach number gas flows are often generated by nanomechanical devices in ambient conditions. These flows can be simulated using a range of particle based Monte Carlo techniques, which in their original form operate exclusively in the time-domain. Recently, a frequency-domain weight-based Monte Carlo method was proposed [D. R. Ladiges and J. E. Sader, "Frequency-domain Monte Carlo method for linear oscillatory gas flows," J. Comput. Phys. 284, 351-366 (2015)] that exhibits superior statistical convergence when simulating oscillatory flows. This previous method used the Bhatnagar-Gross-Krook (BGK) kinetic model and contains a "virtual-time" variable to maintain the inherent time-marching nature of existing Monte Carlo algorithms. Here, we propose an alternative frequency-domain deviational Monte Carlo method that facilitates the use of a wider range of molecular models and more efficient collision/relaxation operators. We demonstrate this method with oscillatory Couette flow and the flow generated by an oscillating sphere, utilizing both the BGK kinetic model and hard sphere particles. We also discuss how oscillatory motion of arbitrary time-dependence can be simulated using computationally efficient parallelization. As in the weight-based method, this deviational frequency-domain Monte Carlo method is shown to offer improved computational speed compared to the equivalent time-domain technique. 8. Growing lattice animals and Monte-Carlo methods Science.gov (United States) Reich, G. R.; Leath, P. L. 1980-01-01 We consider the search problems which arise in Monte-Carlo studies involving growing lattice animals. A new periodic hashing scheme (based on a periodic cell) especially suited to these problems is presented which takes advantage both of the connected geometric structure of the animals and the traversal-oriented nature of the search. The scheme is motivated by a physical analogy and tested numerically on compact and on ramified animals. In both cases the performance is found to be more efficient than random hashing, and to a degree depending on the compactness of the animals 9. Study of the quantitative analysis approach of maintenance by the Monte Carlo simulation method International Nuclear Information System (INIS) This study is examination of the quantitative valuation by Monte Carlo simulation method of maintenance activities of a nuclear power plant. Therefore, the concept of the quantitative valuation of maintenance that examination was advanced in the Japan Society of Maintenology and International Institute of Universality (IUU) was arranged. Basis examination for quantitative valuation of maintenance was carried out at simple feed water system, by Monte Carlo simulation method. (author) 10. Spectral method and its high performance implementation KAUST Repository Wu, Zedong 2014-01-01 We have presented a new method that can be dispersion free and unconditionally stable. Thus the computational cost and memory requirement will be reduced a lot. Based on this feature, we have implemented this algorithm on GPU based CUDA for the anisotropic Reverse time migration. There is almost no communication between CPU and GPU. 
6. Implementation and the choice of evaluation methods
DEFF Research Database (Denmark)
Flyvbjerg, Bent
1984-01-01
The development of evaluation and implementation processes has been closely interrelated in both theory and practice. Today, two major paradigms of evaluation and implementation exist: the programmed paradigm, with its approach based on the natural science model, and the adaptive paradigm, with an approach founded more in phenomenology and social science. The role of analytical methods is viewed very differently in the two paradigms, as is the conception of the policy process in general. Although analytical methods have come to play a prominent (and often dominant) role in transportation evaluation ... the programmed paradigm. By emphasizing the importance of the process of social interaction and subordinating analysis to this process, the adaptive paradigm reduces the likelihood of analytical methods narrowing and biasing implementation. To fulfil this subordinate role and to aid social interaction ...

7. Frequency-domain deviational Monte Carlo method for linear oscillatory gas flows
Science.gov (United States)
2015-10-01
Oscillatory non-continuum low Mach number gas flows are often generated by nanomechanical devices in ambient conditions. These flows can be simulated using a range of particle-based Monte Carlo techniques, which in their original form operate exclusively in the time domain. Recently, a frequency-domain weight-based Monte Carlo method was proposed [D. R. Ladiges and J. E. Sader, "Frequency-domain Monte Carlo method for linear oscillatory gas flows," J. Comput. Phys. 284, 351-366 (2015)] that exhibits superior statistical convergence when simulating oscillatory flows. This previous method used the Bhatnagar-Gross-Krook (BGK) kinetic model and contains a "virtual time" variable to maintain the inherent time-marching nature of existing Monte Carlo algorithms. Here, we propose an alternative frequency-domain deviational Monte Carlo method that facilitates the use of a wider range of molecular models and more efficient collision/relaxation operators. We demonstrate this method with oscillatory Couette flow and the flow generated by an oscillating sphere, utilizing both the BGK kinetic model and hard sphere particles. We also discuss how oscillatory motion of arbitrary time-dependence can be simulated using computationally efficient parallelization. As in the weight-based method, this deviational frequency-domain Monte Carlo method is shown to offer improved computational speed compared to the equivalent time-domain technique.

8. Growing lattice animals and Monte-Carlo methods
Science.gov (United States)
Reich, G. R.; Leath, P. L.
1980-01-01
We consider the search problems which arise in Monte-Carlo studies involving growing lattice animals. A new periodic hashing scheme (based on a periodic cell) especially suited to these problems is presented which takes advantage both of the connected geometric structure of the animals and the traversal-oriented nature of the search. The scheme is motivated by a physical analogy and tested numerically on compact and on ramified animals. In both cases the performance is found to be more efficient than random hashing, to a degree depending on the compactness of the animals.

9. Study of the quantitative analysis approach of maintenance by the Monte Carlo simulation method
International Nuclear Information System (INIS)
This study examines the quantitative evaluation of maintenance activities of a nuclear power plant by the Monte Carlo simulation method. The concept of quantitative evaluation of maintenance, whose examination has been advanced by the Japan Society of Maintenology and the International Institute of Universality (IUU), is outlined. A basic examination of the quantitative evaluation of maintenance was then carried out for a simple feed-water system by the Monte Carlo simulation method. (author)
10. Spectral method and its high performance implementation
KAUST Repository
Wu, Zedong
2014-01-01
We have presented a new method that is dispersion-free and unconditionally stable, so that the computational cost and memory requirements are greatly reduced. Based on this feature, we have implemented the algorithm on GPUs with CUDA for anisotropic reverse time migration. There is almost no communication between CPU and GPU. For prestack wavefield extrapolation, all shots can be combined in the migration; however, this requires solving a larger-dimensional problem whose memory demands cannot fit into one GPU card. In this situation, we implement it with a domain decomposition method and MPI for distributed-memory systems.

11. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
Science.gov (United States)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and a 10-group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.

12. Implementation of Mobility Management Methods for MANET
Directory of Open Access Journals (Sweden)
Jiri Hosek
2012-12-01
Mobile ad hoc networks represent a very promising way of communication. Mobility management is one of the most often discussed research issues in these networks. Many methods and algorithms have been designed to control and predict the movement of mobile nodes, but each method has a different functional principle and is suitable for different environments and network circumstances. Therefore, it is advantageous to use a simulation tool in order to model and evaluate a mobile network together with the mobility management method. The aim of this paper is to present the implementation process of movement control methods in the simulation environment OPNET Modeler based on the TRJ file. The described trajectory control procedure utilizes the route information stored in a GPX file, which is used to store GPS coordinates. The developed conversion tool, the implementation of the proposed method into OPNET Modeler, and the final evaluation are presented in this paper.

13. An irreversible Markov-chain Monte Carlo method with skew detailed balance conditions
International Nuclear Information System (INIS)
An irreversible Markov-chain Monte Carlo (MCMC) method based on a skew detailed balance condition is discussed. Some recent theoretical works concerned with the irreversible MCMC method are reviewed and the irreversible Metropolis-Hastings algorithm for the method is described. We apply the method to ferromagnetic Ising models in two and three dimensions. Relaxation dynamics of the order parameter and the dynamical exponent are studied in comparison to those with the conventional reversible MCMC method with the detailed balance condition. We also examine how the efficiency of the exchange Monte Carlo method is affected by the combined use of the irreversible MCMC method.
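As a baseline for what the skew-detailed-balance variant is compared against, here is the conventional reversible Metropolis update for a 2D ferromagnetic Ising model (the lattice size, temperature and sweep counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
L, T = 32, 2.3                         # lattice size and temperature (J = kB = 1)
beta = 1.0 / T
spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(spins):
    """One reversible Metropolis sweep satisfying ordinary detailed balance."""
    for _ in range(spins.size):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb            # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

for _ in range(300):                           # relaxation sweeps
    metropolis_sweep(spins)
print("magnetization per spin:", spins.mean())
```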
14. Buildup factors for multilayer shieldings in deterministic methods and their comparison with Monte Carlo
International Nuclear Information System (INIS)
In general, there are two ways to calculate effective doses. The first is to use deterministic methods, such as the point kernel method implemented in Visiplan or Microshield. These calculations are very fast, but they are not very precise for complex geometries with shielding composed of more than one material. In spite of this, such programs are sufficient for ALARA optimisation calculations. On the other side, there are Monte Carlo methods, which are quite precise in comparison with reality, but the calculation time is usually very long. Deterministic programs have one disadvantage: there is usually an option to choose a buildup factor (BUF) for only one material in multilayer stratified-slab shielding problems, even if the shielding is composed of different materials. In the literature, different formulas have been proposed for multilayer BUF approximation. The aim of this paper was to examine these different formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled: a point source behind single- and double-slab shielding. For the buildup calculations the Geometric Progression method (a feature of the newest version of Visiplan) was chosen, because it gives lower deviations in comparison with Taylor fitting. (authors)

15. Verification of the spectral history correction method with fully coupled Monte Carlo code BGCore
International Nuclear Information System (INIS)
Recently, a new method for accounting for burnup history effects on few-group cross sections was developed and implemented in the reactor dynamics code DYN3D. The method relies on tracking the local Pu-239 density, which serves as an indicator of burnup spectral history. The validity of the method was demonstrated in PWR and VVER applications. However, the spectrum variation in a BWR core is more pronounced due to the stronger coolant density change. Therefore, the purpose of the current work is to further investigate the applicability of the method to BWR analysis. The proposed methodology was verified against the recently developed BGCore system, which couples Monte Carlo neutron transport with depletion and thermal-hydraulic solvers and is thus capable of providing a reference solution for 3D simulations. The results clearly show that neglecting the spectral history effects leads to a very large deviation (e.g. 2000 pcm in reactivity) from the reference solution. However, a very good agreement between DYN3D and BGCore is observed (on the order of 200 pcm in reactivity) when the Pu-correction method is applied. (author)

16. MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks
Directory of Open Access Journals (Sweden)
Zhaoyan Jin
2013-10-01
Hyperlink Induced Topic Search (HITS) is the most authoritative and most widely used personalized ranking algorithm on networks. The HITS algorithm ranks nodes on networks using power iteration, and has a high computational complexity. This paper models the HITS algorithm with the Monte Carlo method and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that the Monte Carlo based approximate computation of the HITS ranking greatly reduces computing resources while keeping high accuracy, and is significantly better than related works.
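For reference, the deterministic power iteration that the Monte Carlo algorithms approximate looks as follows on a toy directed graph (the adjacency matrix is invented; the paper's contribution is replacing this iteration with random sampling on large networks):

```python
import numpy as np

# toy directed adjacency matrix: A[i, j] = 1 if page i links to page j
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

h = np.ones(A.shape[0])            # initial hub scores
for _ in range(50):                # HITS power iteration
    a = A.T @ h                    # authority = sum of hub scores of in-links
    a /= np.linalg.norm(a)
    h = A @ a                      # hub = sum of authority scores of out-links
    h /= np.linalg.norm(h)

print("authorities:", np.round(a, 3), "hubs:", np.round(h, 3))
```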
17. Analysis of possibility to apply new mathematical methods (R-function theory) in Monte Carlo simulation of complex geometry
International Nuclear Information System (INIS)
This analysis is part of the report on 'Implementation of geometry module of 05R code in another Monte Carlo code', chapter 6.0: establishment of future activity related to geometry in the Monte Carlo method. The introduction points out some problems in solving complex three-dimensional models, which induce the need for developing more efficient geometry modules in Monte Carlo calculations. The second part includes the formulation of the problem and the geometry module. Two fundamental questions to be solved are defined: (1) for a given point, it is necessary to determine the material region or boundary where it belongs; and (2) for a given direction, all cross-section points with material regions should be determined. The third part deals with a possible connection with Monte Carlo calculations for computer simulation of geometry objects. R-function theory enables creation of a geometry module based on the same logic (complex regions are constructed by set operations on elementary regions) as the construction geometry codes. R-functions can efficiently replace functions of three-valued logic in all significant models. They are even more appropriate for application since three-valued logic is not typical for digital computers, which operate in two-valued logic. This shows that there is a need for work in this field. It is shown that there is a possibility to develop an interactive code for computer modeling of geometry objects in parallel with the development of a geometry module.

18. Radiation-hydrodynamical simulations of massive star formation using Monte Carlo radiative transfer - I. Algorithms and numerical methods
Science.gov (United States)
Harries, Tim J.
2015-04-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelization method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion on to, and the growth of, the protostars. We detail the results of extensive testing and benchmarking of the new algorithms.

19. Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method
Institute of Scientific and Technical Information of China (English)
XI Jia-mi; YANG Geng-she
2008-01-01
The advantages of an improved Monte-Carlo method and the feasibility of applying the proposed approach to reliability analysis of tunnel surrounding rock stability are discussed. On the basis of deterministic parsing for the tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte-Carlo method. The computing method considers the randomness of the related parameters and therefore satisfies the relativity among parameters. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific method for discriminating and checking surrounding rock stability.

20. Correlation between vacancies and magnetoresistance changes in FM manganites using the Monte Carlo method
International Nuclear Information System (INIS)
The Metropolis algorithm and the classical Heisenberg approximation were implemented by the Monte Carlo method to design a computational approach to the magnetization and resistivity of La2/3Ca1/3MnO3, which depends on the Mn-ion vacancies, as the external magnetic field increases. This compound is ferromagnetic, and it exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, with L=30 umc (units of magnetic cells) for its dimension in the x-y plane and d=12 umc in thickness. The Hamiltonian that was used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the response to the external applied magnetic field. The system that was considered contains mixed-valence bonds: Mn3+eg'-O-Mn3+eg, Mn3+eg-O-Mn4+d3 and Mn3+eg'-O-Mn4+d3. The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies, the transitions TC (Curie temperature) and TMI (metal-insulator temperature) are similar, whereas with an increasing vacancy percentage, TMI presents lower values than TC. This situation is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below TMI. Resistivity loops were also observed, which show a direct correlation with the hysteresis loops of magnetization at temperatures below TC.
Highlights:
• Changes in the resistivity of FM materials as a function of temperature and external magnetic field can be obtained by the Monte Carlo method, using the Metropolis algorithm, the classical Heisenberg approximation and the Kronig-Penney approximation for magnetic clusters.
• Increases in the magnetoresistive effect were observed at temperatures below TMI due to the vacancy effect.
• The resistive hysteresis loop presents two peaks that are directly associated with the coercive field in the magnetic ...

1. Application of a Monte Carlo method for modeling debris flow run-out
Science.gov (United States)
Luna, B. Quan; Cepeda, J.; Stumpf, A.; van Westen, C. J.; Malet, J.-P.; van Asch, T. W. J.
2012-04-01
A probabilistic framework based on a Monte Carlo method for the modeling of debris flow hazards is presented. The framework is based on a dynamic model, which is combined with an explicit representation of the different parameter uncertainties. The probability distribution of these parameters is determined from an extensive database of back-calibrated past events collected from different authors. The uncertainty in these inputs can be simulated and used to increase confidence in certain extreme run-out distances. In the Monte Carlo procedure, the input parameters of the numerical models simulating propagation and stoppage of debris flows are randomly selected. Model runs are performed using the randomly generated input values. This allows estimating the probability density function of the output variables characterizing the destructive power of a debris flow (for instance depth, velocities and impact pressures) at any point along the path. To demonstrate the implementation of this method, a continuum two-dimensional dynamic simulation model that solves the conservation equations of mass and momentum was applied (MassMov2D). This general methodology facilitates the consistent combination of physical models with the available observations. The probabilistic model presented can be considered as a framework to accommodate any existing one- or two-dimensional dynamic model. The resulting probabilistic spatial model can serve as a basis for hazard mapping and spatial risk assessment. The outlined procedure provides a useful way for experts to produce hazard or risk maps for the typical case where historical records are either poorly documented or even completely lacking, as well as to derive confidence limits on the proposed zoning.
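The probabilistic wrapper itself is independent of the dynamic model. The sketch below replaces MassMov2D with a deliberately crude one-line runout proxy of Voellmy type, just to show the mechanics: input distributions (here invented lognormal and uniform choices) are sampled, the model is evaluated per sample, and percentiles of the output characterize the hazard.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# invented input distributions, standing in for the back-calibrated database
mu = rng.uniform(0.05, 0.25, n)                 # basal friction coefficient
xi = rng.lognormal(np.log(500.0), 0.4, n)       # turbulence coefficient (m/s2)
vol = rng.lognormal(np.log(2e4), 0.5, n)        # released volume (m3)

# crude Voellmy-type runout proxy (NOT MassMov2D): low friction and large
# volumes travel farther; only the Monte Carlo wrapper is the point here
h0 = 2.0                                        # initial flow depth (m)
runout = (h0 / mu) * (1.0 + 0.1 * np.log(vol / 1e4)) * (xi / 500.0) ** 0.25

p50, p90, p99 = np.percentile(runout, [50, 90, 99])
print(f"runout (m): median {p50:.0f}, 90% {p90:.0f}, 99% {p99:.0f}")
```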
2. Calculation of gamma-ray families by Monte Carlo method
International Nuclear Information System (INIS)
An extensive Monte Carlo calculation on gamma-ray families was carried out under appropriate model parameters which are currently used in high energy cosmic ray phenomenology. Characteristics of gamma-ray families are systematically investigated by comparing calculated results with experimental data obtained at mountain altitudes. The main point of discussion is devoted to examining the validity of Feynman scaling in the fragmentation region of multiple meson production. It is concluded that experimental data cannot be reproduced under the assumption of the scaling law when primary cosmic rays are dominated by protons. Other possibilities concerning the primary composition and an increase of the interaction cross section are also examined. These assumptions are consistent with experimental data only when we introduce an intense dominance of heavy primaries in the E0 > 10^15 eV region and a very strong increase of the interaction cross section (say, sigma varying as E0^0.06) simultaneously.

3. New methods for the Monte Carlo simulation of neutron noise experiments in ADS
International Nuclear Information System (INIS)
This paper presents two improvements to speed up the Monte Carlo simulation of neutron noise experiments. The first one is to separate the actual Monte Carlo transport calculation from the digital signal processing routines, while the second is to introduce non-analogue techniques to improve the efficiency of the Monte Carlo calculation. For the latter method, adaptations to the theory of neutron noise experiments were made to account for the distortion of the higher moments of the calculated neutron noise. Calculations were performed to test the feasibility of the above outlined scheme and to demonstrate the advantages of applying the track length estimator. It is shown that the modifications improve the efficiency of these calculations to a high extent, which turns the Monte Carlo method into a powerful tool for the development and design of on-line reactivity measurement systems for ADS.
4. Quantum trajectory Monte Carlo method describing the coherent dynamics of highly charged ions
International Nuclear Information System (INIS)
We present a theoretical framework for studying the dynamics of open quantum systems. Our formalism gives a systematic path from Hamiltonians constructed by first principles to a Monte Carlo algorithm. Our Monte Carlo calculation can treat the build-up and time evolution of coherences. We employ a reduced density matrix approach in which the total system is divided into a system of interest and its environment. An equation of motion for the reduced density matrix is written in the Lindblad form using an additional approximation to the Born-Markov approximation. The Lindblad form allows the solution of this multi-state problem in terms of Monte Carlo sampling of quantum trajectories. The Monte Carlo method is advantageous in terms of computer storage compared to direct solutions of the equation of motion. We apply our method to discuss coherence properties of the internal state of a Kr35+ ion subject to spontaneous radiative decay. Simulations exhibit clear signatures of coherent transitions.

5. Convex-based void filling method for CAD-based Monte Carlo geometry modeling
International Nuclear Information System (INIS)
Highlights:
• We present a new void filling method named CVF for CAD based MC geometry modeling.
• We describe the convex-based void description and the quality-based space subdivision.
• The results showed improvements provided by CVF for both modeling and MC calculation efficiency.
Abstract: CAD based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions in the CAD based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled while MC codes such as MCNP need all the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides all the problem space into disjointed regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting with that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time.

6. An energy transfer method for 4D Monte Carlo dose calculation
Science.gov (United States)
Siebers, Jeffrey V.; Zhong, Hualiang
2008-09-01
This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy deposited per unit mass in the reference image. ETM has been implemented into DOSXYZnrc and compared with a conventional dose interpolation method (DIM) on deformable phantoms. For voxels whose contents merge in the deforming phantom, the doses calculated by ETM are exactly the same as an analytical solution, in contrast to the DIM, which has an average 1.1% dose discrepancy in the beam direction with a maximum error of 24.9% found in the penumbra of a 6 MV beam. The DIM error observed persists even if voxel subdivision is used. The ETM is computationally efficient and will be useful for 4D dose addition and for benchmarking alternative 4D dose addition algorithms. PMID:18841862
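The scoring idea can be reduced to a few lines. In this toy 1D version (the voxel counts, deformation mapping, energies and masses are all made up), energy deposited in transport voxels is transferred to reference voxels through the registration mapping and only then divided by the reference voxel mass, which is what distinguishes ETM from interpolating dose directly:

```python
import numpy as np

# toy 1D example: 6 transport voxels map onto 4 reference voxels
n_ref = 4
# deformable registration, abstracted to an index map: the reference voxel
# receiving each source voxel's energy (hypothetical mapping)
ref_of_src = np.array([0, 0, 1, 2, 2, 3])

edep_src = np.array([1.0, 0.8, 1.2, 0.9, 1.1, 0.7])   # deposited energy (J) per source voxel
mass_ref = np.array([0.03, 0.02, 0.04, 0.02])         # reference voxel masses (kg)

# ETM-style scoring: transfer energy to the reference grid, then divide by mass
edep_ref = np.zeros(n_ref)
np.add.at(edep_ref, ref_of_src, edep_src)             # accumulate merged contributions
dose_ref = edep_ref / mass_ref                        # dose in Gy = J/kg
print(dose_ref)
```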
6. An energy transfer method for 4D Monte Carlo dose calculation
Science.gov (United States)
Siebers, Jeffrey V; Zhong, Hualiang
2008-09-01
This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy deposited per unit mass in the reference image. ETM has been implemented into DOSXYZnrc and compared with a conventional dose interpolation method (DIM) on deformable phantoms. For voxels whose contents merge in the deforming phantom, the doses calculated by ETM are exactly the same as an analytical solution, in contrast to the DIM, which has an average 1.1% dose discrepancy in the beam direction, with a maximum error of 24.9% found in the penumbra of a 6 MV beam. The DIM error observed persists even if voxel subdivision is used. The ETM is computationally efficient and will be useful for 4D dose addition and for benchmarking alternative 4D dose addition algorithms (see the R sketch below). PMID:18841862

7. The all particle method: coupled neutron, photon, electron, charged particle Monte Carlo calculations
International Nuclear Information System (INIS)
At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of neutrons, photons, electrons and light charged particles, as well as the coupling between all species of particles, e.g., photon-induced electron emission. Since this code is being designed to handle all particles, this approach is called the ''All Particle Method''. The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models ''hard wired'' into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to directly control the execution of the program. In addition, this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig.

8. Consideration of convergence judgment method with source acceleration in Monte Carlo criticality calculation
International Nuclear Information System (INIS)
Theoretical consideration is given to the possibility of accelerating, and judging the convergence of, a conventional Monte Carlo iterative calculation when it is used for a weak neutron interaction problem. Clues for this consideration are provided by several application analyses using the OECD/NEA source convergence benchmark problems. Some practical procedures are proposed to realize these acceleration and judgment methods in practical applications of a Monte Carlo code. (author)
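The distinction ETM draws in entry 6 between mapping energy and interpolating dose can be reduced to a one-dimensional toy case. The numbers below are invented: two source voxels of different mass merge into a single reference voxel, where summing energy and mass (ETM) is exact while averaging dose (a naive DIM-like interpolation) is biased.

```
# Toy 1D illustration of ETM vs. dose interpolation (entry 6).
E <- c(4, 2)          # energy deposited in two source voxels [J]
m <- c(1, 2)          # voxel masses [kg]
dose_src <- E / m     # dose scored in the source geometry: 4.0 and 1.0 Gy

# Both source voxels deform onto the same reference voxel:
dose_etm <- sum(E) / sum(m)   # energy transfer method: 6/3 = 2.0 Gy (exact)
dose_dim <- mean(dose_src)    # naive dose interpolation: 2.5 Gy (biased)
c(ETM = dose_etm, DIM = dose_dim)
```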
9. Hybrid Monte-Carlo method for simulating neutron and photon radiography
International Nuclear Information System (INIS)
We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation, for calculating incident film radiation, and a statistical post-processing routine, to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs.

10. Hybrid Monte-Carlo method for simulating neutron and photon radiography
Science.gov (United States)
Wang, Han; Tang, Vincent
2013-11-01
We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation, for calculating incident film radiation, and a statistical post-processing routine, to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs.

11. Combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation
Energy Technology Data Exchange (ETDEWEB)
Saleur, H.; Derrida, B.
1985-07-01
In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite-size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations (see the R sketch below).

12. Spin-orbit interactions in electronic structure quantum Monte Carlo methods
Science.gov (United States)
Melton, Cody A.; Zhu, Minyi; Guo, Shi; Ambrosetti, Alberto; Pederiva, Francesco; Mitas, Lubos
2016-04-01
We develop a generalization of the fixed-phase diffusion Monte Carlo method for Hamiltonians which explicitly depend on particle spins, such as for spin-orbit interactions. The method is formulated in a zero-variance manner and is similar to the treatment of nonlocal operators in commonly used static-spin calculations. Tests on atomic and molecular systems show that it is very accurate, on par with the fixed-node method. This opens electronic structure quantum Monte Carlo methods to a vast research area of quantum phenomena in which spin-related interactions play an important role.
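A plain Monte Carlo estimate of the 2D site-percolation spanning probability gives a feel for the Monte Carlo half of entry 11 (the transfer-matrix machinery and the 3D bars are not reproduced). Occupied sites are drawn at random on an L x L lattice and a flood fill checks whether the top row connects to the bottom row.

```
# Monte Carlo spanning probability for 2D site percolation (entry 11).
spans <- function(L, p) {
  occ <- matrix(runif(L * L) < p, L, L)
  front <- which(occ[1, ])                       # occupied top-row sites
  seen <- matrix(FALSE, L, L); seen[1, front] <- TRUE
  queue <- matrix(c(rep(1L, length(front)), front), ncol = 2)
  while (nrow(queue) > 0) {
    i <- queue[1, 1]; j <- queue[1, 2]; queue <- queue[-1, , drop = FALSE]
    if (i == L) return(TRUE)                     # reached the bottom row
    for (d in list(c(1, 0), c(-1, 0), c(0, 1), c(0, -1))) {
      ni <- i + d[1]; nj <- j + d[2]
      if (ni >= 1 && ni <= L && nj >= 1 && nj <= L &&
          occ[ni, nj] && !seen[ni, nj]) {
        seen[ni, nj] <- TRUE
        queue <- rbind(queue, c(ni, nj))
      }
    }
  }
  FALSE
}
set.seed(7)
p <- seq(0.5, 0.7, by = 0.02)
prob <- sapply(p, function(pp) mean(replicate(200, spans(16, pp))))
rbind(p, prob)   # spanning probability rises steeply near p_c ~ 0.593
```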
13. Automating methods to improve precision in Monte-Carlo event generation for particle colliders
Energy Technology Data Exchange (ETDEWEB)
Gleisberg, Tanju
2008-07-01
The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key to the systematic improvement of precision and confidence for theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models, such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Further, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter, the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation (see the R sketch below). Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from those new developments, improving the precision and the efficiency. Part II was addressed to the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method, the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove ...
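The unweighting efficiency mentioned in entry 13 is easy to demonstrate in miniature: events drawn from a proposal density carry weights w and are accepted with probability w / w_max, so the efficiency is <w> / w_max. The integrand below is a made-up stand-in for a matrix element, not anything from SHERPA or AMEGIC++.

```
# Hit-or-miss unweighting and its efficiency (entry 13).
set.seed(3)
n <- 1e5
x <- runif(n)                         # proposal: uniform phase-space point
w <- 1 / sqrt(x + 0.01)               # toy "matrix element" weight
wmax <- max(w)
accepted <- runif(n) < w / wmax       # hit-or-miss unweighting
mean(w) / wmax                        # unweighting efficiency
sum(accepted)                         # number of unit-weight events kept
# A proposal that better matches the integrand raises this efficiency, which
# is the point of revising the phase-space integration techniques.
```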
14. Automating methods to improve precision in Monte-Carlo event generation for particle colliders
International Nuclear Information System (INIS)
The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key to the systematic improvement of precision and confidence for theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models, such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Further, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter, the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from those new developments, improving the precision and the efficiency. Part II was addressed to the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method, the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove ...

15. The S_N/Monte Carlo response matrix hybrid method
International Nuclear Information System (INIS)
A hybrid method has been developed to iteratively couple S_N and Monte Carlo regions of the same problem. This technique avoids many of the restrictions and limitations of previous attempts to do the coupling, and results in a general and relatively efficient method. We demonstrate the method with some simple examples.

16. Acceptance and implementation of a computerized planning system based on Monte Carlo
International Nuclear Information System (INIS)
Acceptance for clinical use of the Monaco computerized planning system has been carried out. The system is based on a virtual model of the energy yield of the head of the linear electron accelerator, and it performs the dose calculation with an X-ray algorithm (XVMC) based on the Monte Carlo method. (Author)

17. Progress on burnup calculation methods coupling Monte Carlo and depletion codes
Energy Technology Data Exchange (ETDEWEB)
Leszczynski, Francisco [Comision Nacional de Energia Atomica, San Carlos de Bariloche, RN (Argentina). Centro Atomico Bariloche]. E-mail: [email protected]
2005-07-01
Several methods of burnup calculation coupling Monte Carlo and depletion codes, investigated and applied by the author over recent years, are described here, and some benchmark results and future possibilities are analyzed as well. The methods are: depletion calculations at cell level with WIMS or other cell codes, using the resulting concentrations of fission products, poisons and actinides in a Monte Carlo calculation for fixed burnup distributions obtained from diffusion codes; the same as the first, but using a method of coupling Monte Carlo (MCNP) and a depletion code (ORIGEN) at cell level to obtain the concentrations of nuclides to be used in a full-reactor calculation with a Monte Carlo code; and full calculation of the system with Monte Carlo and depletion codes, in several steps (see the R sketch below). All these methods were used for different problems for research reactors, and some comparisons with experimental results of regular lattices were performed. In this work, a summary of all these works is presented, and the advantages and problems found are discussed. Also, a brief description of the methods adopted and of the MCQ system for coupling the MCNP and ORIGEN codes is included. (author)
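The coupling pattern in entry 17 alternates a transport solve, which produces a flux for the current composition, with a depletion step, which updates nuclide densities. The sketch below is only the control flow: the "transport solve" is a trivial Monte Carlo stub and the depletion is a single-nuclide exponential burn, whereas real workflows couple codes such as MCNP and ORIGEN.

```
# Toy Monte Carlo / depletion coupling loop (entry 17).
set.seed(11)
mc_flux <- function(N, nhist = 1e4) {
  # stand-in MC estimate: flux decreases with absorber density, with
  # statistical noise shrinking like 1/sqrt(nhist)
  phi0 <- 1e14 / (1 + N / 1e21)
  phi0 * (1 + rnorm(1, sd = 1 / sqrt(nhist)))
}
sigma_a <- 1e-22            # absorption cross section [cm^2] (illustrative)
N  <- 1e21                  # initial nuclide density [1/cm^3]
dt <- 86400 * 30            # one-month burnup steps [s]
hist_N <- N
for (step in 1:12) {
  phi <- mc_flux(N)                    # (i) transport with current composition
  N   <- N * exp(-sigma_a * phi * dt)  # (ii) depletion over the step
  hist_N <- c(hist_N, N)
}
round(hist_N / 1e21, 3)                # relative nuclide density after each step
```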
18. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
CERN Document Server
Wollaeger, Ryan T; Graziani, Carlo; Couch, Sean M; Jordan, George C; Lamb, Donald Q; Moses, Gregory A
2013-01-01
We explore the application of Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) to radiation transport in strong fluid outflows with structured opacity. The IMC method of Fleck & Cummings is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking Monte Carlo particles through optically thick materials. The DDMC method of Densmore accelerates an IMC computation where the domain is diffusive. Recently, Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent neutrino transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally grey DDMC method. In this article we rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. The method described is suitable for a large variety of non-mono...

19. Calculation of extended shields in the Monte Carlo method using importance function (BRAND and DD code systems)
International Nuclear Information System (INIS)
Consideration is given to a technique and algorithms for constructing neutron trajectories in the Monte Carlo method taking into account data on the adjoint transport equation solution. When simulating the transport part of the transfer kernel, use is made of a piecewise-linear approximation of the free path length density along the particle motion direction. The approach has been implemented in programs within the framework of the BRAND code system. The importance is calculated in the multigroup P1 approximation within the framework of the DD-30 code system. The efficiency of the developed computation technique is demonstrated by means of the solution of two model problems. 4 refs.; 2 tabs.

20. MCVIEW: a radiation view factor computer program for three dimensional geometries using Monte Carlo method
International Nuclear Information System (INIS)
The computer program MCVIEW calculates the radiation view factor between surfaces for three-dimensional geometries. MCVIEW was developed to calculate view factors as input data for heat transfer analysis programs such as TRUMP, HEATING-5 and HEATING-6. The paper gives a brief illustration of the Monte Carlo calculation method for view factors (see the R sketch below). The second section presents comparisons between the Monte Carlo method and other methods, such as area integration, line integration and cross string, concerning calculation error and computer execution time. The third section provides a user's input guide for MCVIEW. (author)
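The Monte Carlo view-factor technique behind MCVIEW (entry 20) amounts to firing cosine-weighted rays from one surface and counting hits on the other. The sketch below uses a configuration with a known analytical answer, two directly opposed unit squares one unit apart; the geometry is chosen for checkability, not taken from the report.

```
# Monte Carlo view factor between two parallel square plates (entry 20).
set.seed(5)
n <- 2e5
a <- 1; h <- 1                      # plate side length and separation (illustrative)
x0 <- runif(n, 0, a); y0 <- runif(n, 0, a)   # emission points on plate 1
theta <- asin(sqrt(runif(n)))       # cosine-weighted polar angle
phi   <- runif(n, 0, 2 * pi)
# Propagate each ray from z = 0 to the plane z = h of plate 2:
x1 <- x0 + h * tan(theta) * cos(phi)
y1 <- y0 + h * tan(theta) * sin(phi)
F12 <- mean(x1 >= 0 & x1 <= a & y1 >= 0 & y1 <= a)
F12   # analytical value for unit squares a unit apart is ~0.1998
```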
1. Metric conjoint segmentation methods: a Monte Carlo comparison
NARCIS (Netherlands)
Vriens, M; Wedel, M; Wilms, T
1996-01-01
The authors compare nine metric conjoint segmentation methods. Four methods concern two-stage procedures in which the estimation of conjoint models and the partitioning of the sample are performed separately; in five, the estimation and segmentation stages are integrated. The methods are compared co...

2. Methods Used in Criticality Calculations; Monte Carlo Method, Neutron Interaction, Programmes for IBM-7094
International Nuclear Information System (INIS)
Computer development has a bearing on the choice of methods and their possible uses. The authors discuss the possible uses of the diffusion and transport theories and their limitations. Most of the problems encountered in regard to criticality involve fissile materials in simple or multiple assemblies. These entail the use of methods of calculation based on different principles. There are approximate methods of calculation, but very often, for economic reasons or with a view to practical application, a high degree of accuracy is required in determining the reactivity of the assemblies in question, and the methods based on the Monte Carlo principle are then the most valid. When these methods are used, accuracy is linked with the calculation time, so that the usefulness of the codes derives from their speed. With a view to carrying out the work under the best conditions, depending on the geometry and the nature of the materials involved, various codes must be used. Four principal codes are described, as are their variants; some typical possibilities and certain fundamental results are presented. Finally, the accuracies of the various methods are compared. (author)

3. The factorization method for Monte Carlo simulations of systems with a complex action
Science.gov (United States)
Ambjørn, J.; Anagnostopoulos, K. N.; Nishimura, J.; Verbaarschot, J. J. M.
2004-03-01
We propose a method for Monte Carlo simulations of systems with a complex action. The method has the advantages of being in principle applicable to any such system and of providing a solution to the overlap problem. In some cases, as in the IKKT matrix model, a finite-size scaling extrapolation can provide results for systems whose size would make it prohibitive to simulate directly.

4. Remarkable moments in the history of neutron transport Monte Carlo methods
International Nuclear Information System (INIS)
I highlight a few results from the past of the neutron and photon transport Monte Carlo methods which have caused me great pleasure for their ingenuity and wittiness and which certainly merit to be remembered even when tricky methods are not needed anymore. (orig.)

5. Implementation of 3D Lattice Monte Carlo Simulation on a Cluster of Symmetric Multiprocessors
Institute of Scientific and Technical Information of China (English)
雷咏梅; 蒋英; et al.
2002-01-01
This paper presents a new approach to parallelizing 3D lattice Monte Carlo algorithms used in the numerical simulation of polymers on ZiQiang 2000, a cluster of symmetric multiprocessors (SMPs). The combined load for cell and energy calculations over the time step is balanced together to form a single spatial decomposition. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are studied. The different steps involved in porting the software to a parallel architecture based on ZiQiang 2000 running under Linux and MPI are described briefly. It is found that parallelization becomes more advantageous when either the lattice is very large or the model contains many cells and chains.

6. A GPU-based Large-scale Monte Carlo Simulation Method for Systems with Long-range Interactions
CERN Document Server
Liang, Yihao; Li, Yaohang
2016-01-01
In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture. It adopts the sequential updating scheme of the Metropolis algorithm, and makes no approximation in the computation of energy. It reaches a remarkable 440-fold speedup compared with the serial implementation on CPU. We use this method to simulate primitive model electrolytes. We measure very precisely all ion-ion pair correlation functions at high concentrations, and extract renormalized Debye lengths, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory (see the R sketch below).
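A serial, drastically scaled-down version of the sequential-updating Metropolis scheme that entry 6 runs on GPUs looks as follows: single-particle trial moves for charges in a periodic box with a bare Coulomb pair energy under the minimum-image convention. Particle number and all parameters are illustrative, and neither Ewald summation nor any GPU parallelism is attempted.

```
# Serial Metropolis Monte Carlo for a toy Coulomb system (entry 6).
set.seed(9)
N <- 20; L <- 10; beta <- 1; q <- rep(c(1, -1), N / 2)   # charges +1/-1
pos <- matrix(runif(3 * N, 0, L), N, 3)

pair_energy <- function(i, p) {
  d <- sweep(p[-i, , drop = FALSE], 2, p[i, ])   # separation vectors
  d <- d - L * round(d / L)                      # minimum-image convention
  r <- sqrt(rowSums(d^2))
  sum(q[i] * q[-i] / r)
}

acc <- 0; nsweep <- 500
for (s in 1:(nsweep * N)) {
  i <- sample(N, 1)
  old <- pos[i, ]; eold <- pair_energy(i, pos)
  pos[i, ] <- (old + runif(3, -0.5, 0.5)) %% L   # trial displacement
  if (runif(1) >= exp(-beta * (pair_energy(i, pos) - eold))) {
    pos[i, ] <- old                              # reject: restore old position
  } else acc <- acc + 1
}
acc / (nsweep * N)    # acceptance ratio; used to tune the step size
```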
7. The Metropolis Monte Carlo method with CUDA-enabled Graphic Processing Units
International Nuclear Information System (INIS)
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as a container for simulation data stored on the graphics card and as a floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To take full advantage of this mechanism, efficient GPU realizations of the algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables, for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading to avoid bus bandwidth and communication costs. The testbed molecular system used here is a condensed-phase system of oligopyrrole chains. A benchmark shows a size-scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect several CPU–GPU duets in parallel. -- Highlights: • We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. • The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. • Our benchmark shows a size-scaling speedup of 62 for systems with 225,000 particles. • The testbed involves a polymeric system of oligopyrroles in the condensed phase. • The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.

8. ANALYSIS OF NEIGHBORHOOD IMPACTS ARISING FROM IMPLEMENTATION OF SUPERMARKETS IN CITY OF SÃO CARLOS
OpenAIRE
Pedro Silveira Gonçalves Neto; José Augusto de Lollo
2010-01-01
The study included supermarkets of different sizes (small, medium and large, defined based on the area occupied by the project and the volume of activity) located in São Carlos (São Paulo state, Brazil) to evaluate the influence of the size of the project on the neighborhood impacts generated by these supermarkets. The location of the enterprises, the size of the buildings, and the areas of influence were considered as factors contributing to increased population density and change of use of ...

9. Zone modeling of radiative heat transfer in industrial furnaces using adjusted Monte-Carlo integral method for direct exchange area calculation
International Nuclear Information System (INIS)
This paper proposes, for the first time, the Monte-Carlo Integral method for the direct exchange area calculation in the zone method. The method is simple and able to handle complex-geometry zone problems and the self-zone radiation problem. The Monte-Carlo Integral method is adjusted to improve the efficiency, so that an acceptable accuracy within a reasonable computation time can be achieved. The zone method with the adjusted Monte-Carlo Integral method is used for the modeling and simulation of radiative transfer in an industrial furnace. The simulation result is compared with the industrial data and shows close agreement.
It also shows that the high-temperature flue gas heats the furnace wall, which reflects radiant heat back to the reactor tubes. The highest temperature of the flue gas and of the side wall appears at roughly one third of the furnace height from the bottom, which corresponds with the industrial measurement data. The simulation results indicate that the zone method is comprehensive and easy to implement for radiative phenomena in the furnace. - Highlights: • The Monte Carlo Integral method for evaluating direct exchange areas. • Adjustment from the MCI method to the AMCI method for efficiency. • Examination of the performance of the MCI and AMCI methods. • Development of the 3D zone model with the AMCI method. • The simulation results show close agreement with the industrial data.

10. Improving Power System Risk Evaluation Method Using Monte Carlo Simulation and Gaussian Mixture Method
Directory of Open Access Journals (Sweden)
GHAREHPETIAN, G. B.
2009-06-01
The analysis of the risk of partial and total blackouts plays a crucial role in determining safe limits in power system design, operation and upgrade. Due to the huge cost of blackouts, it is very important to improve risk assessment methods. In this paper, Monte Carlo simulation (MCS) was used to analyze the risk, and the Gaussian Mixture Method (GMM) was used to estimate the probability density function (PDF) of the load curtailment, in order to improve the power system risk assessment method (see the R sketch below). In this improved method, the PDF and a suggested index have been used to analyze the risk of loss of load. The effect of considering the number of generation units of power plants in the risk analysis has been studied too. The improved risk assessment method has been applied to the IEEE 118-bus system and to the network of the Khorasan Regional Electric Company (KREC), and the PDF of the load curtailment has been determined for both systems. The effect of various network loadings, transmission unavailability, transmission capacity and generation unavailability conditions on blackout risk has been investigated as well.

11. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
International Nuclear Information System (INIS)
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for the Monte Carlo solution of the time-dependent transport equations. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
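A rough sketch of the pipeline in entry 10: Monte Carlo sampling of generator outages produces load-curtailment samples, and a two-component Gaussian mixture is fitted to them with a small hand-rolled EM loop. The system size, outage rate and load model are invented for illustration, and the tiny EM is a stand-in for a proper GMM fit.

```
# Monte Carlo load-curtailment samples plus a 2-component GMM fit (entry 10).
set.seed(13)
nunit <- 12; cap <- rep(100, nunit); p_out <- 0.08
n <- 5000
avail <- matrix(runif(n * nunit) > p_out, n, nunit)   # unit up/down states
load  <- rnorm(n, 1000, 60)                           # uncertain system load [MW]
x <- pmax(load - as.numeric(avail %*% cap), 0)        # unserved load per sample

# Two-component Gaussian mixture via EM (weights w, means m, sds s):
w <- c(0.5, 0.5); m <- c(0, quantile(x, 0.95)); s <- c(sd(x), sd(x)) + 1e-6
for (it in 1:100) {
  d1 <- w[1] * dnorm(x, m[1], s[1]); d2 <- w[2] * dnorm(x, m[2], s[2])
  r  <- d1 / (d1 + d2)                                # E-step: responsibilities
  w  <- c(mean(r), 1 - mean(r))                       # M-step updates
  m  <- c(sum(r * x) / sum(r), sum((1 - r) * x) / sum(1 - r))
  s  <- sqrt(c(sum(r * (x - m[1])^2) / sum(r),
               sum((1 - r) * (x - m[2])^2) / sum(1 - r))) + 1e-6
}
list(weights = w, means = m, sds = s)                 # fitted PDF of curtailment
```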
12. External individual monitoring: experiments and simulations using the Monte Carlo method
International Nuclear Information System (INIS)
In this work, we have evaluated the possibility of applying the Monte Carlo simulation technique to photon dosimetry in external individual monitoring. The GEANT4 toolkit was employed to simulate experiments with radiation monitors containing TLD-100 and CaF2:NaCl thermoluminescent detectors. As a first step, X-ray spectra were generated by impinging electrons on a tungsten target. Then, the produced photon beam was filtered through a beryllium window and additional filters to obtain radiation with the desired qualities. This procedure, used to simulate the radiation fields produced by an X-ray tube, was validated by comparing characteristics such as the half-value layer, which was also experimentally measured, the mean photon energy and the spectral resolution of the simulated spectra with those of reference spectra established by international standards. In the construction of the thermoluminescent dosimeter, two approaches for improvement have been introduced. The first one was the inclusion of 6% of air in the composition of the CaF2:NaCl detector, due to the difference between measured and calculated values of its density. Also, comparison between simulated and experimental results showed that the self-attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account. Then, in the second approach, the light attenuation coefficient of the CaF2:NaCl compound, estimated by simulation to be 2.20(25) mm^-1, was introduced. Conversion coefficients Cp from air kerma to personal dose equivalent were calculated using a slab water phantom with polymethyl-methacrylate (PMMA) walls, for the reference narrow and wide X-ray spectrum series [ISO 4037-1], and also for the wide spectra implemented and used routinely at the Laboratorio de Dosimetria. Simulations of radiation backscattered by the PMMA slab water phantom and by a slab phantom of ICRU tissue-equivalent material produced very similar results. Therefore, the PMMA slab water phantom, which can be easily constructed at low cost, can...

13. Quasi-Monte Carlo methods for lattice systems: a first look
Energy Technology Data Exchange (ETDEWEB)
Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics]; Leovey, H.; Griewank, A. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik]; Nube, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik]; Mueller-Preussker, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik]
2013-02-15
We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics, in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^-1/2, where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to up to N^-1. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillators and verified an improved error scaling (see the R sketch below).
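The error-scaling claim in entry 13 is easy to see on a one-dimensional integral: pseudo-random points give an error of order N^-1/2, while a low-discrepancy van der Corput sequence gets much closer to N^-1. The integrand below is an arbitrary smooth test function, not a lattice action.

```
# MC vs. quasi-MC error scaling on a simple integral (entry 13).
vdc <- function(n, base = 2) {        # radical-inverse (van der Corput) points
  sapply(1:n, function(k) {
    x <- 0; f <- 1 / base
    while (k > 0) { x <- x + f * (k %% base); k <- k %/% base; f <- f / base }
    x
  })
}
f <- function(x) exp(x)               # true integral over [0,1] is e - 1
truth <- exp(1) - 1
set.seed(17)
for (N in c(1e2, 1e3, 1e4)) {
  err_mc  <- abs(mean(f(runif(N))) - truth)
  err_qmc <- abs(mean(f(vdc(N))) - truth)
  cat(sprintf("N=%5d  MC error %.2e   QMC error %.2e\n", N, err_mc, err_qmc))
}
```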
14. Monte Carlo boundary methods for RF-heating of fusion plasma
International Nuclear Information System (INIS)
A fusion plasma can be heated by launching an electromagnetic wave into the plasma with a frequency close to the cyclotron frequency of a minority ion species. This heating process creates a non-Maxwellian distribution function that is difficult to compute numerically in toroidal geometry. Solutions have previously been found using the Monte Carlo code FIDO, but the computations are rather time consuming. Therefore, methods to speed up the computations using Monte Carlo boundary methods have been studied. The ion cyclotron frequency heating mainly perturbs the high-velocity distribution, while the low-velocity distribution remains approximately Maxwellian. A hybrid model is therefore proposed, assuming a Maxwellian at low velocities and calculating the high-velocity distribution with a Monte Carlo method. Three different methods to treat the boundary between the low- and high-velocity regimes are presented. A Monte Carlo code HYBRID has been developed to test the most promising method, the 'modified differential equation' method, for a one-dimensional problem. The results show good agreement with analytical solutions.

15. Implementation of SMED method in wood processing
Directory of Open Access Journals (Sweden)
Vukićević Milan R.
2007-01-01
The solution of problems in production is mainly sought by management through the hardware component, i.e. by the introduction of work centres of the latest generation. In this way, it ensures continuity of quality, reduced consumption of energy, humanization of work, etc. However, the interaction between the technical-technological and the organizational-economic aspects of production is neglected. This means that new-generation equipment requires a modern approach to the planning, organization, and management of production, as well as to the economics of production. Consequently, it is very important to ensure the implementation of modern organizational methods in wood processing. This paper deals with the implementation of the SMED method (Single-digit Minute Exchange of Die) with the aim of rationalizing set-up and end-up operations. It is known that, under conditions of discontinuous production, set-up and end-up time is a significant limiting factor in increasing the flexibility of production systems.

16. Correlation between vacancies and magnetoresistance changes in FM manganites using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Agudelo-Giraldo, J.D. [PCM Computational Applications, Universidad Nacional de Colombia-Sede Manizales, Km. 9 vía al aeropuerto, Manizales (Colombia)]; Restrepo-Parra, E., E-mail: [email protected] [PCM Computational Applications, Universidad Nacional de Colombia-Sede Manizales, Km. 9 vía al aeropuerto, Manizales (Colombia)]; Restrepo, J. [Grupo de Magnetismo y Simulación, Instituto de Física, Universidad de Antioquia, A.A. 1226, Medellín (Colombia)]
2015-10-01
The Metropolis algorithm and the classical Heisenberg approximation were implemented by the Monte Carlo method to design a computational approach to the magnetization and resistivity of La2/3Ca1/3MnO3, which depend on the Mn-ion vacancies as the external magnetic field increases. This compound is ferromagnetic and exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, with L=30 umc (units of magnetic cells) in the x–y plane and d=12 umc in thickness. The Hamiltonian used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the response to the external applied magnetic field. The system considered contains mixed-valence bonds: Mn3+(eg')–O–Mn3+(eg), Mn3+(eg)–O–Mn4+(d3) and Mn3+(eg')–O–Mn4+(d3). The vacancies were placed randomly in the sample, replacing any type of Mn ion.
The main result shows that without vacancies the transitions at T_C (Curie temperature) and T_MI (metal–insulator temperature) coincide, whereas with an increasing vacancy percentage T_MI takes values below T_C. This situation is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below T_MI. Resistivity loops were also observed, showing a direct correlation with the magnetization hysteresis loops at temperatures below T_C. - Highlights: • Changes in the resistivity of FM materials as a function of temperature and external magnetic field can be obtained by the Monte Carlo method, using the Metropolis algorithm, the classical Heisenberg model and the Kronig–Penney approximation for magnetic clusters. • Increases in the magnetoresistive effect were observed at temperatures below T_MI due to the vacancy effect. • The resistive hysteresis

17. Calibration of the identiFINDER detector for iodine measurement in the thyroid using the Monte Carlo method; Calibracion del detector identiFINDER para la medicion de yodo en tiroides utilizando el metodo Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Ramos M., D.; Yera S., Y.; Lopez B., G. M.; Acosta R., N.; Vergara G., A., E-mail: [email protected] [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)]
2014-08-15
This work is based on the determination of the detection efficiency of ^125I and ^131I in the thyroid with the identiFINDER detector, using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed minimizing the uncertainties of the estimates. Finally, detector/point-source geometries were simulated to find the correction factors at 5 cm, 15 cm and 25 cm, and those corresponding to the detector-simulator arrangement for the validation of the method and the final calculation of the efficiency. It was shown that if, in the implementation of the Monte Carlo method, one simulates at a greater distance than that used in the laboratory measurements, the efficiency is overestimated, while simulating at a shorter distance underestimates it; the simulation should therefore be carried out at the same distance at which the measurements will actually be made (see the R sketch below). Efficiency curves and the minimum detectable activity for the measurement of ^131I and ^125I were also obtained. In general, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method represents an ideal way to compensate for the lack of standard solutions and simulators, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for iodine measurement in the thyroid. (author)
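The distance sensitivity discussed in entry 17 already appears at the purely geometric level. The toy below samples isotropic photon directions from an on-axis point source and counts hits on a circular detector face: only solid-angle geometry is modeled, attenuation, scattering and detector response are ignored, and the 2 cm radius is an invented stand-in rather than the identiFINDER geometry.

```
# Geometric detection efficiency vs. source distance (entry 17, toy version).
set.seed(19)
geom_eff <- function(dist, radius = 2, n = 1e6) {
  cost <- runif(n, -1, 1)               # isotropic emission: cos(theta) uniform
  hit  <- cost > dist / sqrt(dist^2 + radius^2)  # inside cone subtended by face
  mean(hit)
}
for (d in c(5, 15, 25))                 # source distances from entry 17 [cm]
  cat(sprintf("d = %2d cm  efficiency = %.4f  (analytic %.4f)\n",
      d, geom_eff(d), 0.5 * (1 - d / sqrt(d^2 + 2^2))))
```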
18. TH-A-19A-08: Intel Xeon Phi implementation of a fast multi-purpose Monte Carlo simulation for proton therapy
International Nuclear Information System (INIS)
Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms adapted to specific tasks. Our new, fast and multi-purpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast and yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first one is the most conservative and accurate. The method of fictitious interactions handles the interfaces, and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth-dose and transversal profiles computed by MCsquare and Geant4 agree within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare, but this is unlikely to have any clinical impact. The computation time varies between 90 seconds for the most conservative settings and merely 59 seconds in the fastest configuration. Finally, prompt gamma profiles are also in very good agreement with PENH results. Conclusion: Our new, fast and multi-purpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time constraints.

19. Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO
International Nuclear Information System (INIS)
The Karlsruhe Monte Carlo code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed-source option, including time dependence. A continuous-energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from the random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O operations except in the input and output stages. 7 references. (U.S.)

20. Methods of Monte Carlo biasing using two-dimensional discrete ordinates adjoint flux
Energy Technology Data Exchange (ETDEWEB)
Tang, J.S.; Stevens, P.N.; Hoffman, T.J.
1976-06-01
Methods of biasing three-dimensional deep-penetration Monte Carlo calculations using importance functions obtained from a two-dimensional discrete ordinates adjoint calculation have been developed.
The important distinction was made between applying the point value and applying the event value to alter the random walk in Monte Carlo analyses of radiation transport. The biasing techniques developed are angular probability biasing, which alters the collision kernel using the point value as the importance function, and path length biasing, which alters the transport kernel using the event value as the importance function (see the R sketch after entry 4 below). Source location biasing using the step importance function and the scalar adjoint flux obtained from the two-dimensional discrete ordinates adjoint calculation was also investigated. The effects of the biasing techniques on Monte Carlo calculations have been investigated for neutron transport through a thick concrete shield with a penetrating duct. Source location biasing, angular probability biasing and path length biasing were employed individually and in various combinations. Results of the biased Monte Carlo calculations were compared with standard Monte Carlo and discrete ordinates calculations.

1. Markov Chain Monte Carlo methods in computational statistics and econometrics
Czech Academy of Sciences Publication Activity Database
Volf, Petr
Plzeň: University of West Bohemia in Pilsen, 2006 - (Lukáš, L.), s. 525-530. ISBN 978-80-7043-480-2. [Mathematical Methods in Economics 2006, Plzeň (CZ), 13.09.2006-15.09.2006]
R&D Projects: GA ČR GA402/04/1294. Institutional research plan: CEZ:AV0Z10750506. Keywords: random search * MCMC * optimization. Subject RIV: BB - Applied Statistics, Operational Research

2. The application of Monte Carlo method to electron and photon beams transport
International Nuclear Information System (INIS)
The application of a Monte Carlo method to the study of electron and photon beam transport in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. It was assumed that the electron beam is monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculation. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The use of the calculation for therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs.

3. Infinite dimensional integrals beyond Monte Carlo methods: yet another approach to normalized infinite dimensional integrals
International Nuclear Information System (INIS)
An approach to (normalized) infinite dimensional integrals, including normalized oscillatory integrals, through a sequence of evaluations in the spirit of the Monte Carlo method for probability measures is proposed. In this approach the normalization through the partition function is included in the definition. For suitable sequences of evaluations, the ('classical') expectation values of cylinder functions are recovered.

4. Infinite dimensional integrals beyond Monte Carlo methods: yet another approach to normalized infinite dimensional integrals
OpenAIRE
Magnot, Jean-Pierre
2012-01-01
An approach to (normalized) infinite dimensional integrals, including normalized oscillatory integrals, through a sequence of evaluations in the spirit of the Monte Carlo method for probability measures is proposed. In this approach the normalization through the partition function is included in the definition. For suitable sequences of evaluations, the ('classical') expectation values of cylinder functions are recovered.
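Why the biasing in entry 20 pays off can be shown in the simplest possible setting: estimating the probability, of order exp(-10), that a particle crosses a 10-mean-free-path absorber without colliding. The analog estimator almost never scores; an exponentially stretched path-length density with an attached weight scores on a large fraction of histories. The stretching used here is a generic variance-reduction trick standing in for the paper's adjoint-driven angular and path-length biasing.

```
# Analog vs. importance-sampled deep-penetration estimate (entry 20).
set.seed(23)
tau <- 10; n <- 1e5
exact <- exp(-tau)

analog <- as.numeric(rexp(n, rate = 1) > tau)          # scores almost never
s <- rexp(n, rate = 1 / tau)                           # stretched path lengths
biased <- (dexp(s, 1) / dexp(s, 1 / tau)) * (s > tau)  # weight x indicator

se <- function(x) sd(x) / sqrt(length(x))
rbind(analog = c(estimate = mean(analog), std.err = se(analog)),
      biased = c(mean(biased), se(biased)),
      exact  = c(exact, 0))
```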
5. Lowest-order relativistic corrections of helium computed using Monte Carlo methods
International Nuclear Information System (INIS)
We have calculated the lowest-order relativistic effects for the three lowest states of the helium atom with symmetry 1S, 1P, 1D, 3S, 3P and 3D, using variational Monte Carlo methods and compact, explicitly correlated trial wave functions. Our values are in good agreement with the best results in the literature.

6. The information-based complexity of approximation problem by adaptive Monte Carlo methods
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this paper, we study the complexity of information of the approximation problem on the multivariate Sobolev space with bounded mixed derivative MW^r_{p,α}(T^d), 1 < p < ∞, in the norm of L_q(T^d), 1 < q < ∞, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of the pseudo-s-scale, we determine the exact asymptotic orders of this problem.

7. On the use of the continuous-energy Monte Carlo method for lattice physics applications
International Nuclear Information System (INIS)
This paper is a general overview of the Serpent Monte Carlo reactor physics burnup calculation code. The Serpent code is a project carried out at VTT Technical Research Centre of Finland, in an effort to extend the use of the continuous-energy Monte Carlo method to lattice physics applications, including group constant generation for coupled full-core reactor simulator calculations. The main motivation for going from deterministic transport methods to Monte Carlo simulation is the capability to model any fuel or reactor type using the same fundamental neutron interaction data without major approximations. This capability is considered important especially for the development of next-generation reactor technology, which often lies beyond the modeling capabilities of conventional LWR codes. One of the main limiting factors for the Monte Carlo method is still today the prohibitively long computing time, especially in burnup calculations. The Serpent code uses certain dedicated calculation techniques to overcome this limitation. The overall running time is reduced significantly, in some cases by almost two orders of magnitude. The main principles of the calculation methods and the general capabilities of the code are introduced. The results section presents a collection of validation cases in which Serpent calculations are compared to reference MCNP4C and CASMO-4E results. (author)
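A toy variational Monte Carlo run for the helium ground state illustrates the basic technique named in entry 5; the paper's relativistic corrections and compact correlated trial functions are not reproduced. For the simple product trial function psi = exp(-alpha (r1 + r2)) the local energy has the closed form E_L = -alpha^2 + (alpha - Z)(1/r1 + 1/r2) + 1/r12 in atomic units, and Metropolis sampling of |psi|^2 gives its average.

```
# Minimal VMC for the helium ground state (entry 5, no relativistic terms).
set.seed(29)
Z <- 2; alpha <- 27 / 16               # variationally optimal for this psi
nstep <- 20000; step <- 0.4
R <- rnorm(6)                          # (x1,y1,z1,x2,y2,z2)
EL <- numeric(nstep)
logpsi2 <- function(R) {               # log |psi|^2
  r1 <- sqrt(sum(R[1:3]^2)); r2 <- sqrt(sum(R[4:6]^2))
  -2 * alpha * (r1 + r2)
}
for (s in 1:nstep) {
  Rnew <- R + runif(6, -step, step)
  if (log(runif(1)) < logpsi2(Rnew) - logpsi2(R)) R <- Rnew   # Metropolis step
  r1 <- sqrt(sum(R[1:3]^2)); r2 <- sqrt(sum(R[4:6]^2))
  r12 <- sqrt(sum((R[1:3] - R[4:6])^2))
  EL[s] <- -alpha^2 + (alpha - Z) * (1 / r1 + 1 / r2) + 1 / r12
}
mean(EL[-(1:2000)])   # ~ -2.85 hartree (exact ground state: -2.9037)
```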
8. A Monte Carlo Green's function method for three-dimensional neutron transport
International Nuclear Information System (INIS)
This paper describes a Monte Carlo transport kernel capability which has recently been incorporated into the RACER continuous-energy Monte Carlo code. The kernels represent a Green's function method for neutron transport from a fixed-source volume out to a particular volume of interest. This is a very powerful transport technique. Also, since the kernels are evaluated numerically by Monte Carlo, the problem geometry can be arbitrarily complex, yet exact. The method is intended for problems where an ex-core neutron response must be determined for a variety of reactor conditions; two examples are ex-core neutron detector response and vessel critical-weld fast flux. The response is expressed in terms of neutron transport kernels weighted by a core fission source distribution. In these types of calculations, the response must be computed for hundreds of source distributions, but the kernels only need to be calculated once. The advance described in this paper is that the kernels are generated with a highly accurate three-dimensional Monte Carlo transport calculation, instead of an approximate method such as line-of-sight attenuation theory or a synthesized three-dimensional discrete ordinates solution.

9. Transport properties of electrons in GaAs using random techniques (Monte Carlo method)
International Nuclear Information System (INIS)
We study the transport properties of electrons in GaAs using random techniques (the Monte Carlo method). With a simple non-parabolic band model for this semiconductor, we obtain the stationary electron transport characteristics as a function of the electric field in this material, checking these theoretical results against the experimental ones given by several authors. (Author)

10. An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model
Science.gov (United States)
Kim, Seock-Ho
2001-01-01
Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…

11. An NCME Instructional Module on Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods
Science.gov (United States)
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…

12. Stability of few-body systems and quantum Monte-Carlo methods
International Nuclear Information System (INIS)
Quantum Monte-Carlo methods are well suited to studying the stability of few-body systems. Their capabilities are illustrated by studying the critical stability of the hydrogen molecular ion, whose nuclei and electron interact through the Yukawa potential, and the stability of small helium clusters. Refs. 16 (author)

13. A Monte-Carlo-Based Network Method for Source Positioning in Bioluminescence Tomography
OpenAIRE
Zhun Xu; Xiaolei Song; Xiaomeng Zhang; Jing Bai
2007-01-01
We present an approach based on the improved Levenberg-Marquardt (LM) algorithm of the backpropagation (BP) neural network to estimate the light source position in bioluminescent imaging. For solving the forward problem, the table-based random sampling algorithm (TBRS), a fast Monte Carlo simulation method ...

14. Analysis of the distribution of X-ray characteristic production using the Monte Carlo methods
International Nuclear Information System (INIS)
The Monte Carlo method has been applied to the simulation of electron trajectories in a bulk sample, and therefore to the distribution of signals produced in an electron microprobe. Results for the function φ(ρz) are compared with experimental data. Some conclusions are drawn with respect to the parameters involved in the Gaussian model. (Author)

15. A variance-reduced electrothermal Monte Carlo method for semiconductor device simulation
Energy Technology Data Exchange (ETDEWEB)
Muscato, Orazio; Di Stefano, Vincenza [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica];
Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS), Leibniz-Institut im Forschungsverbund Berlin e.V., Berlin (Germany)]
2012-11-01
This paper is concerned with electron transport and heat generation in semiconductor devices. An improved version of the electrothermal Monte Carlo method is presented. This modification has better approximation properties due to reduced statistical fluctuations. The corresponding transport equations are provided and results of numerical experiments are presented.

16. Detailed balance method for chemical potential determination in Monte Carlo and molecular dynamics simulations
International Nuclear Information System (INIS)
We present a new, nondestructive method for determining chemical potentials in Monte Carlo and molecular dynamics simulations. The method estimates a value for the chemical potential such that one has a balance between fictitious successful creation and destruction trials, in which the Monte Carlo method is used to determine the success or failure of the creation/destruction attempts; we thus call the method a detailed balance method. The method allows one to obtain estimates of the chemical potential for a given species in any closed-ensemble simulation; the closed ensemble is paired with a 'natural' open ensemble for the purpose of obtaining creation and destruction probabilities. We present results for the Lennard-Jones system and also for an embedded-atom model of liquid palladium, and compare to previous results in the literature for these two systems. We are able to obtain an accurate estimate of the chemical potential for the Lennard-Jones system at higher densities than reported in the literature.

17. Sequential Monte Carlo methods for nonlinear discrete-time filtering
CERN Document Server
Bruno, Marcelo GS
2013-01-01
In these notes, we introduce particle filtering as a recursive importance sampling method that approximates the minimum-mean-square-error (MMSE) estimate of a sequence of hidden state vectors in scenarios where the joint probability distribution of the states and the observations is non-Gaussian and, therefore, closed-form analytical expressions for the MMSE estimate are generally unavailable (see the R sketch below). We begin the notes with a review of Bayesian approaches to static (i.e., time-invariant) parameter estimation. In the sequel, we describe the solution to the problem of sequential state estimation in line...

18. Markov chain Monte Carlo methods in directed graphical models
DEFF Research Database (Denmark)
Højbjerre, Malene
Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models... tendency to foetal loss is heritable. The data possess a complicated dependence structure, due to replicate pregnancies for the same woman and a given family pattern. We conclude that a tendency to foetal loss is heritable. The model is of great interest in genetic epidemiology, because it considers both...
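A minimal bootstrap particle filter of the kind introduced in entry 17 fits in a screenful of R. It is applied here to a standard nonlinear benchmark state-space model (x_t = 0.5 x_{t-1} + 25 x_{t-1}/(1+x_{t-1}^2) + 8 cos(1.2 t) + v_t, y_t = x_t^2/20 + w_t); sampling from the transition prior and resampling by likelihood weights approximates the MMSE estimate E[x_t | y_1:t].

```
# Bootstrap particle filter on a nonlinear benchmark model (entry 17).
set.seed(31)
Tn <- 50; np <- 1000; sv <- sqrt(10); sw <- 1
x <- numeric(Tn); y <- numeric(Tn); xt <- 0.1
for (t in 1:Tn) {                       # simulate the true system
  xt <- 0.5 * xt + 25 * xt / (1 + xt^2) + 8 * cos(1.2 * t) + rnorm(1, 0, sv)
  x[t] <- xt; y[t] <- xt^2 / 20 + rnorm(1, 0, sw)
}
p <- rnorm(np, 0.1, 1); xhat <- numeric(Tn)
for (t in 1:Tn) {
  p <- 0.5 * p + 25 * p / (1 + p^2) + 8 * cos(1.2 * t) + rnorm(np, 0, sv)
  w <- dnorm(y[t], p^2 / 20, sw)        # likelihood of each particle
  w <- w / sum(w)
  xhat[t] <- sum(w * p)                 # MMSE estimate of x_t
  p <- sample(p, np, replace = TRUE, prob = w)   # multinomial resampling
}
sqrt(mean((xhat - x)^2))                # filter RMSE
```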
19. An energy transfer method for 4D Monte Carlo dose calculation
OpenAIRE
Siebers, Jeffrey V; Zhong, Hualiang
2008-01-01
This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy ...

20. Constrained-Realization Monte-Carlo Method for Hypothesis Testing
CERN Document Server
Theiler, J; Theiler, James; Prichard, Dean
1996-01-01
We compare two theoretically distinct approaches to generating artificial (or 'surrogate') data for testing hypotheses about a given data set. The first and more straightforward approach is to fit a single 'best' model to the original data, and then to generate surrogate data sets that are 'typical realizations' of that model. The second approach concentrates not on the model but directly on the original data; it attempts to constrain the surrogate data sets so that they exactly agree with the original data for a specified set of sample statistics. Examples of these two approaches are provided for two simple cases: a test for deviations from a Gaussian distribution, and a test for serial dependence in a time series. Additionally, we consider tests for nonlinearity in time series based on a Fourier transform (FT) method and on more conventional autoregressive moving-average (ARMA) fits to the data (see the R sketch below). The comparative performance of hypothesis-testing schemes based on these two approaches is found to depend ...

1. The future of new calculation concepts in dosimetry based on the Monte Carlo methods; Avenir des nouveaux concepts des calculs dosimetriques bases sur les methodes de Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France)]; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)]
2009-01-15
Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to radiation physics, radiation protection and dosimetry. A discussion about some other computing solutions is carried out; solutions not only based on the enhancement of computer power, or on the 'biasing' used for relative acceleration of these codes (in the case of photons), but on more efficient methods (ANN - artificial neural networks, CBR - case-based reasoning - or other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only in radiation protection or medical dosimetry. (authors)
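The constrained-realization idea from entry 20 is most easily seen with Fourier-transform surrogates: they keep the sample power spectrum of the original series exactly and randomize only the phases, so they form a null ensemble for tests of nonlinearity. The series and the time-asymmetry statistic below are common illustrative choices, not the paper's data.

```
# FT-surrogate Monte Carlo hypothesis test for nonlinearity (entry 20).
set.seed(37)
n <- 512
x <- numeric(n); x[1] <- 0.1
for (t in 2:n) x[t] <- 3.8 * x[t - 1] * (1 - x[t - 1])   # a nonlinear series

ft_surrogate <- function(x) {
  n <- length(x); X <- fft(x)
  half <- 2:ceiling(n / 2)                  # randomize phases, keep |X|
  ph <- runif(length(half), 0, 2 * pi)
  X[half] <- Mod(X[half]) * exp(1i * ph)
  X[n + 2 - half] <- Conj(X[half])          # enforce Hermitian symmetry
  Re(fft(X, inverse = TRUE)) / n
}
stat <- function(x) mean(diff(x)^3)         # time-asymmetry statistic
s0 <- stat(x)
snull <- replicate(999, stat(ft_surrogate(x)))
mean(abs(snull) >= abs(s0))                 # Monte Carlo p-value
```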
1. The future of new calculation concepts in dosimetry based on the Monte Carlo methods; Avenir des nouveaux concepts des calculs dosimetriques bases sur les methodes de Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)
2009-01-15
Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion about some other computing solutions is carried out; solutions not only based on the enhancement of computer power, or on the 'biasing' used for relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural network, C.B.R. - case-based reasoning - or other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)

2. MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM
Directory of Open Access Journals (Sweden)
Gabriela Ižaríková
2015-12-01
The article is an example of using the @Risk simulation software, designed for simulation in a Microsoft Excel spreadsheet, and demonstrates its use as a universal method of solving problems. Simulation means experimenting with computer models based on a real production process in order to optimize the production processes or the system. A simulation model allows one to perform a number of experiments, analyse them, evaluate, optimize and afterwards apply the results to the real system. A simulation model in general represents the modelled system by means of mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance investment costs) and random inputs (for instance demand), which are transformed by the model into outputs (for instance the mean value of profit). In a simulation experiment the controlled inputs are chosen at the beginning, and the random (stochastic) inputs are then generated randomly. Simulations belong among the quantitative tools which can be used as a support for decision making.

3. Application of Monte Carlo methods for dead time calculations for counting measurements; Anwendung von Monte-Carlo-Methoden zur Berechnung der Totzeitkorrektion fuer Zaehlmessungen
Energy Technology Data Exchange (ETDEWEB)
Henniger, Juergen; Jakobi, Christoph [Technische Univ. Dresden (Germany). Arbeitsgruppe Strahlungsphysik (ASP)
2015-07-01
From a mathematical point of view, Monte Carlo methods are the numerical solution of certain integrals and integral equations using a random experiment. There are several advantages compared to classical stepwise integration. The time required for computing increases for multi-dimensional problems only moderately with increasing dimension. The only requirements for the integral kernel are its capability of being integrated over the considered integration area and the possibility of an algorithmic representation. These are the important properties of Monte Carlo methods that allow their application in every scientific area. Besides that, Monte Carlo algorithms are often more intuitive than conventional numerical integration methods. The contribution demonstrates these facts using the example of dead time corrections for counting measurements.
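The point about dimension-insensitive convergence is easy to see in a few lines: a crude Monte Carlo estimate of a two-dimensional integral, with its standard error read off the same sample. The integrand is an arbitrary smooth example, not one from the contribution above.

```r
# Crude Monte Carlo estimate of I = integral over [0,1]^2 of exp(-x*y) dx dy.
# The standard error decays as n^(-1/2) regardless of the dimension.
set.seed(3)
n <- 1e5
x <- runif(n); y <- runif(n)
g <- exp(-x*y)                       # evaluate the kernel at random points
c(estimate = mean(g), std.error = sd(g)/sqrt(n))
```

It is this n^(-1/2) behaviour, independent of dimension, that makes the approach attractive once stepwise quadrature grids become unaffordable.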
4. ANALYSIS OF NEIGHBORHOOD IMPACTS ARISING FROM IMPLEMENTATION OF SUPERMARKETS IN CITY OF SÃO CARLOS
Directory of Open Access Journals (Sweden)
Pedro Silveira Gonçalves Neto
2010-12-01
The study included supermarkets of different sizes (small, medium and large, defined based on the area occupied by the project and the volume of activity) located in São Carlos (São Paulo state, Brazil), to evaluate the influence of project size on the neighborhood impacts generated by these supermarkets. It considered how factors such as the location of the enterprises, the size of the building and the areas of influence contribute to increased population density and changes in the use of buildings, since it was a post-deployment analysis. Relating the variables of the spatial impacts was made possible by the use of a geographic information system. It was noted that the legislation does not provide suitable conditions to guide studies of urban impacts, due to the complex integration between the urban and impacting components.

5. GPU-accelerated inverse identification of radiative properties of particle suspensions in liquid by the Monte Carlo method
Science.gov (United States)
Ma, C. Y.; Zhao, J. M.; Liu, L. H.; Zhang, L.; Li, X. C.; Jiang, B. C.
2016-03-01
Inverse identification of radiative properties of participating media is usually time consuming. In this paper, a GPU-accelerated inverse identification model is presented to obtain the radiative properties of particle suspensions. The sample medium is placed in a cuvette and a narrow light beam is irradiated normally from the side. The forward three-dimensional radiative transfer problem is solved using a massively parallel Monte Carlo method implemented on a graphics processing unit (GPU), and a particle swarm optimization algorithm is applied to inversely identify the radiative properties of particle suspensions based on the measured bidirectional scattering distribution function (BSDF). The GPU-accelerated Monte Carlo simulation significantly reduces the solution time of the radiative transfer simulation and hence greatly accelerates the inverse identification process. A speedup of several hundred times is achieved compared to the CPU implementation. It is demonstrated, using both simulated BSDF and experimentally measured BSDF of microalgae suspensions, that the radiative properties of particle suspensions can be effectively identified based on the GPU-accelerated algorithm with three-dimensional radiative transfer modelling.

6. A Method for Estimating Annual Energy Production Using Monte Carlo Wind Speed Simulation
Directory of Open Access Journals (Sweden)
Birgir Hrafnkelsson
2016-04-01
A novel Monte Carlo (MC) approach is proposed for the simulation of wind speed samples to assess the wind energy production potential of a site. The Monte Carlo approach is based on historical wind speed data and preserves the effect of autocorrelation and seasonality in wind speed observations. No distributional assumptions are made, and this approach is relatively simple in comparison to simulation methods that aim at including the autocorrelation and seasonal effects. Annual energy production (AEP) is simulated by transforming the simulated wind speed values via the power curve of the wind turbine at the site. The proposed Monte Carlo approach is generic and is applicable to all sites, provided that a sufficient amount of wind speed data and information on the power curve are available. The simulated AEP values based on the Monte Carlo approach are compared to both actual AEP and to simulated AEP values based on a modified Weibull approach for wind speed simulation, using data from the Burfell site in Iceland. The comparison reveals that the simulated AEP values based on the proposed Monte Carlo approach have a distribution that is in close agreement with actual AEP from two test wind turbines at the Burfell site, while the simulated AEP of the Weibull approach is such that the P50 and the scale are substantially lower and the P90 is higher. Thus, the Weibull approach yields AEP that is not in line with the actual variability in AEP, while the Monte Carlo approach gives a realistic estimate of the distribution of AEP.
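The chain from simulated wind speeds through a power curve to an AEP distribution can be sketched quickly. The version below only resamples within calendar months (a cheap way to keep seasonality; unlike the paper's method, it ignores autocorrelation), and both the synthetic "observations" and the power-curve numbers are invented for illustration.

```r
# Monte Carlo AEP sketch: month-wise resampling of wind speeds, pushed through
# an idealized 2 MW power curve; the quantiles summarize the AEP distribution.
set.seed(4)
hours <- 8760
month <- rep(1:12, each = 730)
v_obs <- rweibull(hours, shape = 2, scale = 8 + 2*sin(2*pi*month/12))  # stand-in data
power <- function(v) {               # illustrative power curve [kW], cut-in 3, cut-out 25
  ifelse(v < 3 | v > 25, 0,
    ifelse(v > 12, 2000, 2000 * ((v - 3)/(12 - 3))^3))
}
aep_one <- function() {
  v_sim <- unlist(lapply(split(v_obs, month),
                         function(vm) sample(vm, length(vm), replace = TRUE)))
  sum(power(v_sim)) / 1e6            # GWh per year (kW over hourly samples)
}
aep <- replicate(500, aep_one())
quantile(aep, c(0.5, 0.1))           # P50, and P90 as the 10th percentile
```

In wind-energy convention the P90 is the production exceeded with 90% probability, hence the 10th percentile of the simulated AEP distribution.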
7. Modeling radiation from the atmosphere of Io with Monte Carlo methods
Science.gov (United States)
Gratiy, Sergey
Conflicting observations regarding the dominance of either sublimation or volcanism as the source of the atmosphere on Io, and disparate reports on the extent of its spatial distribution and the absolute column abundance, invite the development of detailed computational models capable of improving our understanding of Io's unique atmospheric structure and origin. Validating a global numerical model of Io's atmosphere against astronomical observations requires a 3-D spherical-shell radiative transfer (RT) code to simulate disk-resolved images and disk-integrated spectra from the ultraviolet to the infrared spectral region. In addition, comparison of simulated and astronomical observations provides important information to improve existing atmospheric models. In order to achieve this goal, a new 3-D spherical-shell forward/backward photon Monte Carlo code, capable of simulating radiation from absorbing/emitting and scattering atmospheres with an underlying emitting and reflecting surface, was developed. A new implementation of calculating atmospheric brightness in scattered sunlight is presented, utilizing the notion of an "effective emission source" function. This allows for the accumulation of the scattered contribution along the entire path of a ray and the calculation of the atmospheric radiation when both scattered sunlight and thermal emission contribute to the observed radiation, which was not possible in previous models. A "polychromatic" algorithm was developed for application with the backward Monte Carlo method and was implemented in the code. It allows one to calculate radiative intensity at several wavelengths simultaneously, even when the scattering properties of the atmosphere are a function of wavelength. The application of the "polychromatic" method improves the computational efficiency because it reduces the number of photon bundles traced during the simulation. A 3-D gas dynamics model of Io's atmosphere, including both sublimation and volcanic …

8. A recursive Monte Carlo method for estimating importance functions in deep penetration problems
International Nuclear Information System (INIS)
A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems.

9. Quantile Mechanics II: Changes of Variables in Monte Carlo methods and GPU-Optimized Normal Quantiles
OpenAIRE
Shaw, W. T.; Luu, T.; Brickman, N.
2009-01-01
With financial modelling requiring a better understanding of model risk, it is helpful to be able to vary assumptions about underlying probability distributions in an efficient manner, preferably without the noise induced by resampling distributions managed by Monte Carlo methods. This paper presents differential equations and solution methods for functions of the form Q(x) = F^{-1}(G(x)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of Mont…
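The change of variables Q(x) = F^{-1}(G(x)) is directly available in base R through the quantile and distribution functions. The sketch recycles one fixed panel of standard normal draws into Student-t draws with arbitrary degrees of freedom, so the distributional assumption can be varied with no resampling noise; the degrees-of-freedom values are arbitrary examples.

```r
# Recycle a single N(0,1) sample into Student-t samples via Q = F_t^{-1}(Phi(z)).
set.seed(5)
z <- rnorm(1e4)                                    # drawn once, reused everywhere
to_t <- function(z, df) qt(pnorm(z), df = df)      # monotone map, no new randomness
sapply(c(3, 5, 30), function(df) sd(to_t(z, df)))  # tails heavier for small df
```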
10. Construction of the Jacobian matrix for fluorescence diffuse optical tomography using a perturbation Monte Carlo method
Science.gov (United States)
Zhang, Xiaofeng
2012-03-01
Image formation in fluorescence diffuse optical tomography is critically dependent on construction of the Jacobian matrix. For clinical and preclinical applications, because of the highly heterogeneous characteristics of the medium, Monte Carlo methods are frequently adopted to construct the Jacobian. Conventional adjoint Monte Carlo methods typically compute the Jacobian by multiplying the photon density fields radiated from the source at the excitation wavelength and from the detector at the emission wavelength. Nonetheless, this approach assumes that the source and the detector in Green's function are reciprocal, which is invalid in general. This assumption is particularly questionable in small animal imaging, where the mean free path length of photons is typically only one order of magnitude smaller than the representative dimension of the medium. We propose a new method that does not rely on the reciprocity of the source and the detector, by tracing photon propagation entirely from the source to the detector. This method relies on the perturbation Monte Carlo theory to account for the differences in optical properties of the medium at the excitation and the emission wavelengths. Compared to the adjoint methods, the proposed method is more valid in reflecting the physical process of photon transport in diffusive media and is more efficient in constructing the Jacobian matrix for densely sampled configurations.

11. A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport
Science.gov (United States)
Tautz, R. C.
2016-05-01
A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version was developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test-particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.

12. Estimation of magnetocaloric properties by using Monte Carlo method for AMRR cycle
Science.gov (United States)
Arai, R.; Tamura, R.; Fukuda, H.; Li, J.; Saito, A. T.; Kaji, S.; Nakagome, H.; Numazawa, T.
2015-12-01
In order to achieve a wide refrigerating temperature range in magnetic refrigeration, it is effective to layer multiple materials with different Curie temperatures. It is crucial to have a detailed understanding of the physical properties of the materials in order to optimize the material selection and the layered structure. In the present study, we discuss methods for estimating the change in physical properties, particularly the Curie temperature, when some of the Gd atoms are substituted by non-magnetic elements, taking Gd, a typical magnetocaloric material, as the base ferromagnetic material. For this purpose, while making calculations using the S = 7/2 Ising model and the Monte Carlo method, we made specific heat and magnetization measurements of Gd-R alloys (R = Y, Zr) to compare experimental values and calculated ones. The results showed that the magnetic entropy change, specific heat, and Curie temperature can be estimated with good accuracy using the Monte Carlo method.
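A Metropolis Monte Carlo determination of a transition temperature can be sketched on the much simpler spin-1/2 two-dimensional Ising model (the record's S = 7/2 model and Gd-R parameters are not reproduced here): the peak of the heat capacity locates the transition.

```r
# Toy Metropolis Monte Carlo for a 2D Ising lattice; the peak of
# C(T) = (<E^2> - <E>^2)/(T^2 N) marks the transition temperature.
set.seed(6)
L <- 16; J <- 1
sweep_once <- function(s, beta) {
  for (k in 1:(L*L)) {
    i <- sample(L, 1); j <- sample(L, 1)
    nb <- s[i %% L + 1, j] + s[(i - 2) %% L + 1, j] +
          s[i, j %% L + 1] + s[i, (j - 2) %% L + 1]      # periodic neighbours
    dE <- 2 * J * s[i, j] * nb                           # energy cost of a flip
    if (dE <= 0 || runif(1) < exp(-beta * dE)) s[i, j] <- -s[i, j]
  }
  s
}
energy <- function(s) -J * sum(s * (s[c(2:L, 1), ] + s[, c(2:L, 1)]))  # each bond once
heat_capacity <- function(Temp, n_eq = 200, n_meas = 300) {
  s <- matrix(sample(c(-1, 1), L*L, TRUE), L, L); beta <- 1/Temp
  for (k in 1:n_eq) s <- sweep_once(s, beta)             # equilibration sweeps
  E <- replicate(n_meas, { s <<- sweep_once(s, beta); energy(s) })
  (mean(E^2) - mean(E)^2) / (Temp^2 * L^2)
}
Ts <- seq(1.8, 3.0, by = 0.2)
plot(Ts, sapply(Ts, heat_capacity), type = "b", xlab = "T", ylab = "C per spin")
```

For this lattice the exact critical temperature is 2/ln(1 + sqrt(2)) ≈ 2.269 in units of J/k_B, so the position of the peak is a quick sanity check on the sampler.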
13. Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method
International Nuclear Information System (INIS)
The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the σ-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, g_p(E_p), in the configurational space, in terms of the potential energy of the system, with good and well controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called "corrected EAM" (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S_ij are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone, but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature T_m is plotted in terms of the cluster atom number N_at. The standard N_at^(-1/3) linear dependence (Pawlow law) is observed for N_at > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For N_at < 150, a strong divergence is observed compared to the Pawlow law. The melting transition, which begins at the surface, is characterized by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.

14. Sequential Monte Carlo Methods for Joint Detection and Tracking of Multiaspect Targets in Infrared Radar Images
Directory of Open Access Journals (Sweden)
Bruno, Marcelo G. S.
2008-01-01
We present in this paper a sequential Monte Carlo methodology for joint detection and tracking of a multiaspect target in image sequences. Unlike the traditional contact/association approach found in the literature, the proposed methodology enables integrated, multiframe target detection and tracking, incorporating the statistical models for target aspect, target motion, and background clutter. Two implementations of the proposed algorithm are discussed using, respectively, a resample-move (RS) particle filter and an auxiliary particle filter (APF). Our simulation results suggest that the APF configuration slightly outperforms the RS filter in scenarios of stealthy targets.

15. Research of Monte Carlo method used in simulation of different maintenance processes
International Nuclear Information System (INIS)
The paper introduces two kinds of Monte Carlo methods used in equipment life process simulation under the least-maintenance condition: the method of producing the lifetime interval and the method of time scale conversion. The paper also analyzes the characteristics and the scope of use of the two methods. By using the concept of a service age reduction factor, the model of the equipment life process under the incomplete-maintenance condition is established, and a life process simulation method applicable to this situation is developed. (authors)
16. Contributon Monte Carlo
International Nuclear Information System (INIS)
The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables.

17. Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe (Application of the subgroup method to multigroup Monte Carlo calculations)
Science.gov (United States)
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach for lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation. In addition, this model preserves the quality of the physical laws present in the ENDF format. Due to its cheap computational cost, the multigroup Monte Carlo approach usually forms the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes. This is generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables on the whole energy range permits self-shielding effects to be taken into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) the consistent computation of probability tables with an energy grid comprising only 295 or 361 groups, where the CALENDF moment approach led to probability tables suitable for a Monte Carlo code; (2) the combination of the probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm; (3) the derivation of a model for taking into account anisotropic …

18. Determining the optimum confidence interval based on the hybrid Monte Carlo method and its application in financial calculations
OpenAIRE
Kianoush Fathi Vajargah
2014-01-01
The accuracy of Monte Carlo and quasi-Monte Carlo methods decreases in problems of high dimension. Therefore, the objective of this study was to present an optimum method to increase the accuracy of the answer; as the problem gets larger, the resulting accuracy will be higher. In this respect, this study combined the two previous methods, QMC and MC, and presented a hybrid method with efficiency higher than that of those two methods.
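The MC/QMC contrast behind such hybrids is easy to demonstrate. Below, a crude Monte Carlo estimate is compared with a quasi-random Halton-sequence estimate of the same smooth two-dimensional integral; the integrand is an arbitrary example, and the Halton generator is written out to keep the sketch dependency-free.

```r
# Crude MC versus a Halton quasi-random rule on a smooth 2-D integral.
radical_inverse <- function(i, base) {     # van der Corput radical inverse
  r <- 0; f <- 1/base
  while (i > 0) { r <- r + f * (i %% base); i <- i %/% base; f <- f/base }
  r
}
halton <- function(n, base) sapply(1:n, radical_inverse, base = base)
f <- function(x, y) exp(-x*y)              # target: I over the unit square
n <- 2^12
set.seed(8)
mc  <- mean(f(runif(n), runif(n)))
qmc <- mean(f(halton(n, 2), halton(n, 3))) # bases 2 and 3 for the two axes
truth <- integrate(function(x) sapply(x, function(xx)
           integrate(function(y) exp(-xx*y), 0, 1)$value), 0, 1)$value
c(mc_error = abs(mc - truth), qmc_error = abs(qmc - truth))
```

On smooth low-dimensional integrands the low-discrepancy points typically gain one to two orders of magnitude in accuracy at equal n; the advantage erodes as the dimension grows, which is the regime such hybrid methods target.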
19. The application of Monte Carlo method to electron and photon beams transport; Zastosowanie metody Monte Carlo do analizy transportu elektronow i fotonow
Energy Technology Data Exchange (ETDEWEB)
Zychor, I. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)
1994-12-31
The application of a Monte Carlo method to study the transport of electron and photon beams in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. It was assumed that the electron beam is monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculation. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The use of the calculation for therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs.

20. A step beyond the Monte Carlo method in economics: Application of multivariate normal distribution
Science.gov (United States)
Kabaivanov, S.; Malechkova, A.; Marchev, A.; Milev, M.; Markovska, V.; Nikolova, K.
2015-11-01
In this paper we discuss the numerical algorithm of Milev-Tagliani [25] used for pricing of discrete double barrier options. The problem can be reduced to accurate valuation of an n-dimensional path integral with the probability density function of a multivariate normal distribution. The efficient solution of this problem with the Milev-Tagliani algorithm is a step beyond the classical application of Monte Carlo for option pricing. We explore continuous and discrete monitoring of asset path pricing, compare the error of frequently applied quantitative methods such as the Monte Carlo method, and finally analyze the accuracy of the Milev-Tagliani algorithm by presenting the profound research and important results of Hong, Lee and Li [16].
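For contrast with such specialized algorithms, here is the classical Monte Carlo baseline the record refers to: a discretely monitored (here single, up-and-out) barrier call under geometric Brownian motion, valued by sampling the n-dimensional Gaussian path integral directly. All contract parameters are invented for illustration.

```r
# Plain Monte Carlo for a discretely monitored up-and-out call under GBM.
set.seed(9)
S0 <- 100; K <- 100; B <- 120; r <- 0.05; sig <- 0.2; Tm <- 1
m <- 12                                   # monitoring dates
n <- 2e5
dt <- Tm/m
Z <- matrix(rnorm(n*m), n, m)             # the n-dimensional Gaussian sample
logS <- log(S0) + t(apply((r - sig^2/2)*dt + sig*sqrt(dt)*Z, 1, cumsum))
S <- exp(logS)                            # simulated price paths at the dates
alive <- apply(S < B, 1, all)             # knocked out if any monitored price >= B
payoff <- ifelse(alive, pmax(S[, m] - K, 0), 0)
c(price = exp(-r*Tm) * mean(payoff),
  std.error = exp(-r*Tm) * sd(payoff)/sqrt(n))
```

The slow n^(-1/2) convergence combined with the discontinuous knock-out payoff is precisely what makes deterministic algorithms such as Milev-Tagliani attractive for this problem class.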
1. Polarization imaging of multiply-scattered radiation based on integral-vector Monte Carlo method
International Nuclear Information System (INIS)
A new integral-vector Monte Carlo method (IVMCM) is developed to analyze the transfer of polarized radiation in 3D multiple-scattering particle-laden media. The method is based on a "successive order of scattering series" expression of the integral formulation of the vector radiative transfer equation (VRTE) for application of efficient statistical tools to improve convergence of Monte Carlo calculations of integrals. After validation against reference results in plane-parallel layer backscattering configurations, the model is applied to a cubic container filled with uniformly distributed monodispersed particles and irradiated by a monochromatic narrow collimated beam. 2D lateral images of effective Mueller matrix elements are calculated in the case of spherical and fractal aggregate particles. Detailed analysis of multiple scattering regimes, which are very similar for unpolarized radiation transfer, allows identifying the sensitivity of polarization imaging to size and morphology.

2. Monte Carlo Methods Development and Applications in Conformational Sampling of Proteins
DEFF Research Database (Denmark)
Tian, Pengfei
… sampling methods to address these two problems. First of all, a novel technique has been developed for reliably estimating diffusion coefficients for use in the enhanced sampling of molecular simulations. A broad applicability of this method is illustrated by studying various simulation problems such as … sufficient to provide an accurate structural and dynamical description of certain properties of proteins; (2) it is difficult to obtain correct statistical weights of the samples generated, due to lack of equilibrium sampling. In this dissertation I present several new methodologies based on Monte Carlo … protein folding and aggregation. Second, by combining Monte Carlo sampling with a flexible probabilistic model of NMR chemical shifts, a series of simulation strategies are developed to accelerate the equilibrium sampling of free energy landscapes of proteins. Finally, a novel approach is presented to …

3. Monte Carlo method of macroscopic modulation of small-angle charged particle reflection from solid surfaces
CERN Document Server
Bratchenko, M. I.
2001-01-01
A novel method of Monte Carlo simulation of small-angle reflection of charged particles from solid surfaces has been developed. Instead of atomic-scale simulation of particle-surface collisions, the method treats the reflection macroscopically as a "condensed history" event. Statistical parameters of reflection are sampled from theoretical distributions over energy and angles. An efficient sampling algorithm based on a combination of the inverse probability distribution function method and the rejection method has been proposed and tested. As an example of application, the results of statistical modeling of particle flux enhancement near the bottom of a vertical Wehner cone are presented and compared with a simple geometrical model of specular reflection.

4. A vectorized Monte Carlo method with pseudo-scattering for neutron transport analysis
International Nuclear Information System (INIS)
A vectorized Monte Carlo method has been developed for neutron transport analysis on the vector supercomputer HITAC S810. In this method, a multi-particle tracking algorithm is adopted and fundamental processing such as pseudo-random number generation is modified to use the vector processor effectively. The flight analysis of this method is characterized by a new algorithm with pseudo-scattering. This algorithm was verified by comparing its results with those of the conventional one. The method achieved a speed-up of a factor of 10: about 7 times by vectorization and 1.5 times by the new algorithm for flight analysis.

5. Monte-Carlo method for electron transport in a material with an electron field
International Nuclear Information System (INIS)
The precise mathematical and physical foundations of the Monte-Carlo method for electron transport with the electromagnetic field are established. The condensed-histories method given by M. J. Berger is generalized to the case where an electromagnetic field exists in the material region. The full continuous-slowing-down method and the method coupling continuous slowing down and catastrophic collisions are compared. Using the approximation of a homogeneous electronic field, the thickness of material required for shielding the supra-thermal electrons produced by a laser-irradiated target is evaluated.

6. A study of orientational disorder in ND4Cl by the reverse Monte Carlo method
International Nuclear Information System (INIS)
The total structure factor for deuterated ammonium chloride measured by neutron diffraction has been modeled using the reverse Monte Carlo method. The results show that the orientational disorder of the ammonium ions consists of a local librational motion with an average angular amplitude α = 17 deg and reorientations of ammonium ions by 90 deg jumps around two-fold axes.
Reorientations around three-fold axes have a very low probability.

7. The massive Schwinger model on the lattice studied via a local Hamiltonian Monte-Carlo method
International Nuclear Information System (INIS)
A local Hamiltonian Monte-Carlo method is used to study the massive Schwinger model. A non-vanishing quark condensate is found, and the dependence of the condensate and the string tension on the background field is calculated. These results reproduce well the expected continuum results. We also study the first-order phase transition which separates the weak and strong coupling regimes and find evidence for the behaviour conjectured by Coleman. (author)

8. Study of the tritium production in a 1-D blanket model with Monte Carlo methods
OpenAIRE
Cubí Ricart, Álvaro
2015-01-01
In this work a method to collapse a 3D geometry into a one-dimensional model of a fusion reactor blanket is developed and tested. Using this model, neutron and photon fluxes and their energy deposition are obtained with a Monte Carlo code. These results allow the TBR and the thermal power of the blanket to be calculated, and can be integrated into the AINA code.

9. Application of Monte Carlo method in determination of secondary characteristic X radiation in XFA
International Nuclear Information System (INIS)
Secondary characteristic radiation is excited by primary radiation from the X-ray tube and by secondary radiation of other elements, so that excitations of several orders result. The Monte Carlo method was used to consider all these possibilities, and the resulting flux of characteristic radiation was simulated for samples of silicate raw materials. A comparison of the results of these computations with experiments allows the effect of sample preparation on the characteristic radiation flux to be determined. (M.D.)

10. R and D on automatic modeling methods for Monte Carlo codes FLUKA
International Nuclear Information System (INIS)
FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create the geometry models before calculation. However, it is time-consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which can automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

11. Multilevel Markov chain Monte Carlo method for high-contrast single-phase flow problems
KAUST Repository
Efendiev, Yalchin R.
2014-12-19
In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
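The multilevel idea separates cleanly from the PDE setting. A two-level toy version: write E[P_fine] = E[P_coarse] + E[P_fine - P_coarse], spend many samples on the cheap coarse level and few on the coupled correction, whose variance is small because both discretizations share the same random input. The SDE and payoff below are stand-ins for the flow problem, not anything from the paper.

```r
# Two-level Monte Carlo sketch on a GBM Euler discretization.
set.seed(10)
payoff_pair <- function(mf = 64) {     # fine path with mf steps, coarse with mf/2
  Tm <- 1; mu <- 0.05; sig <- 0.2; dt <- Tm/mf
  dW <- rnorm(mf, 0, sqrt(dt))         # one Brownian path drives both levels
  Xf <- 100; Xc <- 100
  for (k in 1:mf)     Xf <- Xf + mu*Xf*dt + sig*Xf*dW[k]
  dWc <- dW[seq(1, mf, 2)] + dW[seq(2, mf, 2)]
  for (k in 1:(mf/2)) Xc <- Xc + mu*Xc*(2*dt) + sig*Xc*dWc[k]
  c(fine = max(Xf - 100, 0), coarse = max(Xc - 100, 0))
}
Nc <- 2e4; Nf <- 2e3                   # many coarse samples, few correction samples
# (a real MLMC would generate the coarse-only samples with the coarse solver alone)
coarse_only <- replicate(Nc, payoff_pair()[["coarse"]])
pair_samp <- replicate(Nf, payoff_pair())
mean(coarse_only) + mean(pair_samp["fine", ] - pair_samp["coarse", ])
```

Giles' MLMC generalizes this telescoping sum over a whole hierarchy of levels and chooses the per-level sample sizes from the observed variances.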
12. Calculation of neutron cross-sections in the unresolved resonance region by the Monte Carlo method
International Nuclear Information System (INIS)
The Monte Carlo method is used to produce neutron cross-sections and cross-section probability functions in the unresolved energy region, and a corresponding Fortran programme (ONERS) is described. Using average resonance parameters, the code generates statistical distributions of level widths and spacings between resonances for s and p waves. Some neutron cross-sections for U238 and U235 are shown as examples.

13. A "local" exponential transform method for global variance reduction in Monte Carlo transport problems
International Nuclear Information System (INIS)
Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, these methods, while reducing the variance in the problem area of interest, tend to increase the variance in other, presumably less important, regions. As such, these methods tend not to be as effective in Monte Carlo calculations which require the minimization of the variance everywhere. Recently, "local" exponential transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine "local" biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the local exponential transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the local exponential transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation.
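The exponential transform itself fits in a few lines. In the toy deep-penetration problem below, a particle streams through a 1-D rod (unit total cross-section, 50% survival per collision) and we score transmission past depth d = 10, an event of probability exp(-5). The biased game combines implicit capture with free paths stretched toward the detector, carrying the likelihood-ratio weight; the stretching parameter is a hand-picked global value, whereas the paper's point is deriving such parameters locally from a low-order transport solution.

```r
# Exponential transform + implicit capture on a 1-D deep-penetration toy.
set.seed(11)
d <- 10; ps <- 0.5                   # slab depth; survival probability per collision
run_analog <- function() {
  x <- 0
  repeat {
    x <- x + rexp(1, 1)              # analog free path, total cross-section 1
    if (x >= d) return(1)            # transmitted
    if (runif(1) > ps) return(0)     # absorbed
  }
}
run_biased <- function(p = 0.5) {
  x <- 0; w <- 1
  repeat {
    s <- rexp(1, 1 - p)              # stretched free path
    w <- w * exp(-p * s) / (1 - p)   # weight = analog density / biased density
    x <- x + s
    if (x >= d) return(w)
    w <- w * ps                      # implicit capture instead of absorption
  }
}
n <- 1e4
a <- replicate(n, run_analog()); b <- replicate(n, run_biased())
rbind(analog = c(mean(a), sd(a)/sqrt(n)),
      biased = c(mean(b), sd(b)/sqrt(n)))
```

Both rows agree with exp(-5) ≈ 0.0067 within error, but the biased estimator's weight is nearly constant for this choice of stretching, so its standard error is dramatically smaller at equal sample size.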
14. Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
Science.gov (United States)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01

15. Quantifying and reducing uncertainty in life cycle assessment using the Bayesian Monte Carlo method
International Nuclear Information System (INIS)
The traditional life cycle assessment (LCA) does not perform quantitative uncertainty analysis. However, without characterizing the associated uncertainty, the reliability of assessment results cannot be understood or ascertained. In this study, the Bayesian method, in combination with the Monte Carlo technique, is used to quantify and update the uncertainty in LCA results. A case study of applying the method to a comparison of alternative waste treatment options, in terms of global warming potential due to greenhouse gas emissions, is presented. In the case study, the prior distributions of the parameters used for estimating the emission inventory and environmental impact in LCA were based on expert judgment from the Intergovernmental Panel on Climate Change (IPCC) guideline, and were subsequently updated using the likelihood distributions resulting from both national statistics and site-specific data. The posterior uncertainty distribution of the LCA results was generated using Monte Carlo simulations with posterior parameter probability distributions. The results indicated that the incorporation of quantitative uncertainty analysis into LCA revealed more information than the deterministic LCA method, and the resulting decision may thus be different. In addition, in combination with the Monte Carlo simulation, calculation of correlation coefficients facilitated the identification of important parameters that had a major influence on the LCA results. Finally, by using national statistical data and site-specific information to update the prior uncertainty distribution, the resultant uncertainty associated with the LCA results could be reduced. A better informed decision can therefore be made based on the clearer and more complete comparison of options.
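The prior-update-propagate loop can be sketched with a conjugate update. Everything numeric below is invented: a lognormal expert-judgment prior on an emission factor, a handful of hypothetical site measurements, and a toy one-parameter impact model standing in for the LCA calculation.

```r
# Bayesian Monte Carlo sketch: update a prior emission factor with data,
# then propagate prior and posterior through a toy impact model.
set.seed(12)
n <- 1e4
activity <- 1000                          # t of waste treated (illustrative)
prior_ef <- rlnorm(n, log(0.5), 0.4)      # prior emission factor [kg CO2e/t]
obs <- c(0.42, 0.47, 0.55, 0.44, 0.50)    # hypothetical site measurements
# conjugate normal update on log(ef), assuming known measurement sdlog = 0.1
post_mu <- (log(0.5)/0.4^2 + sum(log(obs))/0.1^2) / (1/0.4^2 + length(obs)/0.1^2)
post_sd <- sqrt(1 / (1/0.4^2 + length(obs)/0.1^2))
post_ef <- rlnorm(n, post_mu, post_sd)
impact <- function(ef) ef * activity      # toy LCA model [kg CO2e]
rbind(prior     = quantile(impact(prior_ef), c(.05, .5, .95)),
      posterior = quantile(impact(post_ef),  c(.05, .5, .95)))
```

The posterior 5-95% band is tighter than the prior one, which is the "reducing uncertainty" half of the record's title; the parameter screening mentioned there amounts to correlating sampled inputs with sampled outputs.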
16. Investigation of neutral particle leakages in lacunary media to speed up Monte Carlo methods
International Nuclear Information System (INIS)
This research aims at optimizing the calculation methods used for long-duration penetration problems in radiation protection when vacuum media are involved. After recalling the main notions of transport theory, the various numerical methods used to solve the transport equations, the fundamentals of the Monte Carlo method, and the problems related to long-duration penetration, the report focuses on the problem of leaks through vacuum. It describes the bias introduced in the TRIPOLI code and reports the search for an optimal bias in cylindrical configurations using the JANUS code. It also reports the application to a simple straight tube.

17. Mass attenuation coefficient calculations of different detector crystals by means of FLUKA Monte Carlo method
Science.gov (United States)
Ermis, Elif Ebru; Celiktas, Cuneyt
2015-07-01
Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were highly in accordance with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.

18. Analysis over Critical Issues of Implementation or Non-implementation of the ABC Method in Romania
Directory of Open Access Journals (Sweden)
Sorinel Cãpusneanu
2009-12-01
This article analyses the critical issues regarding implementation or non-implementation of the Activity-Based Costing (ABC) method in Romania. It highlights the views of specialists in the field and the authors' own point of view regarding the informational, technical, behavioral, financial, managerial, property and competitive issues surrounding implementation or non-implementation of the ABC method in Romania.

19. Numerical methods design, analysis, and computer implementation of algorithms
CERN Document Server
Greenbaum, Anne
2012-01-01
Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or c…

20. TH-A-19A-11: Validation of GPU-Based Monte Carlo Code (gPMC) Versus Fully Implemented Monte Carlo Code (TOPAS) for Proton Radiation Therapy: Clinical Cases Study
International Nuclear Information System (INIS)
Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to that of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) versus the fully implemented proton therapy based MCS code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases, due to anatomical geometrical complexity (air cavities and density heterogeneities) making dose calculation very challenging, and prostate cases, due to the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS were used to calculate 3-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed by less than 2% between TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed a target gamma passing rate of more than 99%, with the fifth case at 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and the fully implemented proton therapy based MC code for a group of dosimetrically challenging patient cases.
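The gamma test used in the Results can be reproduced in miniature. The sketch computes a global 2%/2 mm gamma passing rate on two 1-D dose profiles; clinical tools do the same on 3-D grids. The profiles are synthetic stand-ins, not TOPAS or gPMC output.

```r
# Global gamma index (2% of prescription, 2 mm DTA) on 1-D dose profiles.
dta <- 2; dd <- 0.02; presc <- 2.0                  # mm, fractional dose, Gy
x <- seq(0, 100, by = 1)                            # positions [mm]
ref  <- 2 / (1 + exp((x - 70)/5))                   # reference dose profile [Gy]
eval <- 2 / (1 + exp((x - 70.8)/5)) * 1.005         # evaluated profile, slightly off
gamma_at <- function(i) {
  # minimum over evaluated positions of sqrt((dr/DTA)^2 + (dD/DD)^2)
  min(sqrt(((x - x[i])/dta)^2 + ((eval - ref[i])/(dd*presc))^2))
}
g <- sapply(seq_along(x), gamma_at)
mean(g[ref > 0.1*presc] <= 1)                       # passing rate inside the field
```

A point passes when some evaluated point within the distance-to-agreement and dose tolerances yields gamma <= 1; the rate is usually reported only over points receiving a clinically relevant dose, as in the threshold above.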
1. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Chen Chaobin; Huang Qunying; Wu Yican
2005
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone when calculating the dose distribution with the Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of the cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.

3. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method
International Nuclear Information System (INIS)
Quantitative scintigraphic images, obtained by NaI(Tl) scintillation cameras, are limited by photon attenuation and the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulation of parameters in scintillation camera systems, stationary as well as in SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57Co flood source, and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies.
The SLSF-correction method was also evaluated by simulation studies for (1) a myocardial source, (2) a uniform source in the lungs and (3) a tumour located in the lungs, in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)

4. Acceptance and implementation of a computerized planning system based on Monte Carlo; Aceptacion y puesta en marcha de un sistema de planificacion computarizada basado en Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.
2013-07-01
Acceptance for clinical use of the Monaco computerized planning system has been carried out. The system is based on a virtual model of the energy yield of the head of the linear electron accelerator, and it performs the dose calculation with an X-ray algorithm (XVMC) based on the Monte Carlo algorithm. (Author)

5. An implementation of Runge's method for Diophantine equations
OpenAIRE
Beukers, F.; Tengely, Sz.
2005-01-01
In this paper we suggest an implementation of Runge's method for solving Diophantine equations satisfying Runge's condition. In this implementation we avoid the use of Puiseux series and algebraic coefficients.

6. Ant colony algorithm implementation in electron and photon Monte Carlo transport: Application to the commissioning of radiosurgery photon beams
Energy Technology Data Exchange (ETDEWEB)
Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario "Carlos Haya", Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2010-07-15
Purpose: In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ∼3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. Conclusions: The new approach is competitive with those previously used in this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
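Russian roulette and splitting, the first two techniques on that list, fit in a short weight-window sketch. It reuses the 1-D rod from the exponential-transform example further up; the importance function is a fixed exponential guess rather than a map learned by an ant colony, and the window widths are arbitrary.

```r
# Weight-window splitting and Russian roulette on the 1-D deep-penetration toy.
set.seed(15)
d <- 10; ps <- 0.5
transmit_ww <- function() {
  stack <- list(c(0, 1)); score <- 0       # each stack entry: (position, weight)
  while (length(stack) > 0) {
    cur <- stack[[1]]; stack <- stack[-1]
    x <- cur[1] + rexp(1); w <- cur[2]
    if (x >= d) { score <- score + w; next }  # transmitted: score the weight
    if (runif(1) > ps) next                   # absorbed at the collision
    cw <- exp(-0.5 * x)                       # window centre ~ 1/importance
    if (w > 2 * cw) {                         # too heavy: split
      k <- ceiling(w / cw)
      stack <- c(stack, rep(list(c(x, w / k)), k))
    } else if (w < cw / 2) {                  # too light: Russian roulette
      if (runif(1) < w / cw) stack <- c(stack, list(c(x, cw)))
    } else stack <- c(stack, list(c(x, w)))
  }
  score
}
est <- replicate(5e3, transmit_ww())
c(mean = mean(est), std.error = sd(est)/sqrt(5e3))
```

Splitting keeps the deep, "important" region populated while roulette kills off low-weight histories; both games preserve the expected score, here exp(-5).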
8. A Monte-Carlo method for calculations of the distribution of angular deflections due to multiple scattering
International Nuclear Information System (INIS)
A Monte Carlo method for calculating the distribution of angular deflections of fast charged particles passing through a thin layer of matter is described on the basis of the Moliere theory of multiple scattering. The distribution of the angular deflections obtained as the result of the calculations is compared with the Moliere theory. The proposed method is useful for calculating electron transport in matter by the Monte Carlo method. (author)

9. Monte Carlo simulations of Higgs-boson production at the LHC with the KrkNLO method
CERN Document Server
Jadach, S.; Placzek, W.; Sapeta, S.; Siodmok, A.; Skrzypek, M.
2016-01-01
We present numerical tests and predictions of the KrkNLO method for matching of NLO QCD corrections to hard processes with LO parton shower Monte Carlo generators. This method was described in detail in our previous publications, where its advantages over other approaches, such as MCatNLO and POWHEG, were pointed out. Here we concentrate on presenting some numerical results (cross sections and distributions) for $Z/\gamma^*$ (Drell-Yan) and Higgs-boson production processes at the LHC. The Drell-Yan process is used mainly to validate the KrkNLO implementation in the Herwig 7 program with respect to the previous implementation in Sherpa.
We also show predictions for this process with the new, complete, MC-scheme parton distribution functions and compare them with our previously published results. Then, we present the first results of the KrkNLO method for Higgs production in gluon-gluon fusion at the LHC and compare them with the predictions of other programs, such as MCFM, MCatNLO, POWHEG and HNNLO, as w…

10. Simulation of clinical X-ray tube using the Monte Carlo Method - PENELOPE code
International Nuclear Information System (INIS)
Breast cancer is the most common type of cancer among women. The main strategy to increase the long-term survival of patients with this disease is the early detection of the tumor, and mammography is the most appropriate method for this purpose. Despite the reduction of cancer deaths, there is great concern about the damage caused by ionizing radiation to the breast tissue. To evaluate these effects, a mammography unit was modeled and depth spectra were obtained using the Monte Carlo method - PENELOPE code. The average energies of the spectra at depth and the half-value layer of the mammography output spectrum were determined. (author)

11. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems
International Nuclear Information System (INIS)
The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.

12. Comparing Subspace Methods for Closed Loop Subspace System Identification by Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
David Di Ruscio
2009-10-01
A novel promising bootstrap subspace system identification algorithm for both open and closed loop systems is presented. An outline of the SSARX algorithm by Jansson (2003) is given and a modified SSARX algorithm is presented. Some methods which are consistent for closed loop subspace system identification presented in the literature are discussed and compared to a recently published subspace algorithm which works for both open as well as for closed loop data, i.e., the DSR_e algorithm, as well as the bootstrap method. Experimental comparisons are performed by Monte Carlo simulations.

13. Experimental results and Monte Carlo simulations of a landmine localization device using the neutron backscattering method
Energy Technology Data Exchange (ETDEWEB)
Datema, C. P. E-mail: [email protected]; Bom, V. R.; Eijk, C. W. E. van
2002-08-01
Experiments were carried out to investigate the possible use of neutron backscattering for the detection of landmines buried in the soil. Several landmines, buried in a sand-pit, were positively identified. A series of Monte Carlo simulations were performed to study the complexity of the neutron backscattering process and to optimize the geometry of a future prototype.
The results of these simulations indicate that this method shows great potential for the detection of non-metallic landmines (with a plastic casing), for which so far no reliable method has been found.

15. Comparison of approximative Markov and Monte Carlo simulation methods for reliability assessment of crack containing components
International Nuclear Information System (INIS)
Reliability assessments based on probabilistic fracture mechanics can give insight into the effects of changes in design parameters, operational conditions and maintenance schemes. Although they are often not capable of providing absolute reliability values, these methods at least allow the ranking of different solutions among alternatives. Due to the variety of possible solutions for design, operation and maintenance problems, numerous probabilistic reliability assessments have to be carried out. This is a laborious task, especially for crack-containing welds of nuclear pipes subjected to fatigue. The objective of this paper is to compare the Monte Carlo simulation method and a newly developed approximative approach using the Markov process ansatz for this task.
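The Monte Carlo side of such a comparison is a direct sampling estimate of a failure probability. In the sketch, a service-life crack depth and a load-derived critical depth are both sampled and P(failure) = P(a > a_crit) is estimated; the lognormal parameters are invented, and a real fatigue assessment would grow the crack cycle by cycle.

```r
# Monte Carlo structural reliability sketch: P(failure) = P(a > a_crit).
set.seed(17)
n <- 1e6
a      <- rlnorm(n, log(2.0), 0.5)   # crack depth after service [mm], illustrative
a_crit <- rlnorm(n, log(6.0), 0.3)   # critical depth from load/toughness [mm]
pf <- mean(a > a_crit)
c(p_failure = pf, std.error = sqrt(pf*(1 - pf)/n))
```

With p of the order of a few percent, a million samples give a relative error below 1%; it is this sampling cost for small probabilities that motivates approximative alternatives such as the Markov approach.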
17. On the Calculation of Reactor Time Constants Using the Monte Carlo Method International Nuclear Information System (INIS) Full-core reactor dynamics calculation involves the coupled modelling of thermal hydraulics and the time-dependent behaviour of core neutronics. The reactor time constants include prompt neutron lifetimes, neutron reproduction times, effective delayed neutron fractions and the corresponding decay constants, typically divided into six or eight precursor groups. The calculation of these parameters is traditionally carried out using deterministic lattice transport codes, which also produce the homogenised few-group constants needed for resolving the spatial dependence of neutron flux. In recent years, there has been a growing interest in the production of simulator input parameters using the stochastic Monte Carlo method, which has several advantages over deterministic transport calculation. This paper reviews the methodology used for the calculation of reactor time constants. The calculation techniques are put into practice using two codes, the PSG continuous-energy Monte Carlo reactor physics code and MORA, a new full-core Monte Carlo neutron transport code entirely based on homogenisation. Both codes are being developed at the VTT Technical Research Centre of Finland. The results are compared to other codes and experimental reference data in the CROCUS reactor kinetics benchmark calculation. (author) 18. Uncertainty Assessment of the Core Thermal-Hydraulic Analysis Using the Monte Carlo Method Energy Technology Data Exchange (ETDEWEB) Choi, Sun Rock; Yoo, Jae Woon; Hwang, Dae Hyun; Kim, Sang Ji [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of) 2010-10-15 In the core thermal-hydraulic design of a sodium cooled fast reactor, the uncertainty factor analysis is a critical issue in order to assure safe and reliable operation. The deviations from the nominal values need to be quantitatively considered by statistical thermal design methods. The hot channel factors (HCF) were employed to evaluate the uncertainty in the early design such as the CRBRP. The improved thermal design procedure (ISTP) calculates the overall uncertainty based on the Root Sum Square technique and sensitivity analyses of each design parameter. Another way to consider the uncertainties is to use the Monte Carlo method (MCM). In this method, all the input uncertainties are randomly sampled according to their probability density functions and the resulting distribution for the output quantity is analyzed. It is able to directly estimate the uncertainty effects and propagation characteristics for the present thermal-hydraulic model. However, it requires a huge computation time to get a reliable result because the accuracy is dependent on the sampling size. In this paper, the analysis of uncertainty factors using the Monte Carlo method is described. As a benchmark model, the ORNL 19 pin test is employed to validate the current uncertainty analysis method. The thermal-hydraulic calculation is conducted using the MATRA-LMR program which was developed at KAERI based on the subchannel approach. The results are compared with those of the hot channel factors and the improved thermal design procedure 19. A CNS calculation line based on a Monte-Carlo method International Nuclear Information System (INIS) The neutronic design of the moderator cell of a Cold Neutron Source (CNS) involves many different considerations regarding geometry, location, and materials.
The decisions taken in this sense affect not only the neutron flux in the source neighbourhood, which can be evaluated by a standard deterministic method, but also the neutron flux values in experimental positions far away from the neutron source. At long distances from the CNS, very time consuming 3D deterministic methods or Monte Carlo transport methods are necessary in order to get accurate figures of standard and typical magnitudes such as average neutron flux, neutron current, angular flux, and luminosity. The Monte Carlo method is a unique and powerful tool to calculate the transport of neutrons and photons. Its use in a bootstrap scheme appears to be an appropriate solution for this type of system. The use of MCNP as the main neutronic design tool leads to a fast and reliable method to perform calculations in a relatively short time with low statistical errors, if the proper scheme is applied. The design goal is to evaluate the performance of the CNS, its beam tubes and neutron guides, at specific experimental locations in the reactor hall and in the neutron or experimental hall. In this work, the calculation methodology used to design a CNS and its associated Neutron Beam Transport Systems (NBTS), based on the use of the MCNP code, is presented. (author) 20. Research on Reliability Modelling Method of Machining Center Based on Monte Carlo Simulation Directory of Open Access Journals (Sweden) Chuanhai Chen 2013-03-01 Full Text Available The aim of this study is to obtain the reliability of a series system and to analyze the reliability of a machining center. To this end, a modified method of reliability modelling based on Monte Carlo simulation for series systems is proposed. The reliability function, which is built by the classical statistics method based on the assumption that machine tools were repaired as good as new, may be biased in the real case. The reliability functions of the subsystems are established respectively, and then the reliability model is built according to the reliability block diagram. Then the fitted reliability function of the machine tools is established using the failure data of a sample generated by Monte Carlo simulation, whose inverse reliability function is solved by a linearization technique based on radial basis functions. Finally, an example of the machining center is presented using the proposed method to show its potential application. The analysis results show that the proposed method can provide an accurate reliability model compared with the conventional method. 1. Online Health Management for Complex Nonlinear Systems Based on Hidden Semi-Markov Model Using Sequential Monte Carlo Methods Directory of Open Access Journals (Sweden) Qinming Liu 2012-01-01 Full Text Available Health management for a complex nonlinear system is becoming more important for condition-based maintenance and for minimizing the related risks and costs over its entire life. However, a complex nonlinear system often operates under dynamically changing operational and environmental conditions, and it is subject to high levels of uncertainty and unpredictability, so that effective methods for its online health management are still few. This paper combines the hidden semi-Markov model (HSMM) with sequential Monte Carlo (SMC) methods.
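The sequential Monte Carlo ingredient named in the record above can be illustrated with a minimal bootstrap particle filter. The sketch below is not the authors' HSMM implementation; the linear-Gaussian "health" model, the noise levels and the particle count are all invented for illustration.

```r
## Minimal bootstrap particle filter (assumed toy model, not the paper's HSMM).
set.seed(1)
T_len <- 50; Np <- 1000
## Simulated "health" state drifting with noise, observed noisily.
x_true <- numeric(T_len)
x_true[1] <- 10
for (t in 2:T_len) x_true[t] <- 0.98 * x_true[t - 1] + rnorm(1, 0, 0.3)
y_obs <- x_true + rnorm(T_len, 0, 0.5)

particles <- rnorm(Np, 10, 1)      # initial particle cloud
x_filt <- numeric(T_len)
for (t in 1:T_len) {
  if (t > 1) particles <- 0.98 * particles + rnorm(Np, 0, 0.3)  # propagate
  w <- dnorm(y_obs[t], mean = particles, sd = 0.5)              # likelihood weights
  w <- w / sum(w)
  x_filt[t] <- sum(w * particles)                               # filtered mean
  particles <- sample(particles, Np, replace = TRUE, prob = w)  # resample
}
plot(x_true, type = "l"); lines(x_filt, col = "red")
```

Propagation, likelihood weighting and resampling are the three moves every SMC variant builds on; an HSMM version would replace the propagation step with semi-Markov state and duration dynamics.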
HSMM is used to obtain the transition probabilities among health states and the health state durations of a complex nonlinear system, while the SMC method is adopted to decrease the computational and space complexity and to describe the probability relationships between multiple health states and monitored observations of a complex nonlinear system. This paper proposes a novel method of multistep-ahead health recognition based on the joint probability distribution for health management of a complex nonlinear system. Moreover, a new online health prognostic method is developed. A real case study is used to demonstrate the implementation and potential applications of the proposed methods for online health management of complex nonlinear systems. 2. Towards testing a two-Higgs-doublet model with maximal CP symmetry at the LHC: Monte Carlo event generator implementation International Nuclear Information System (INIS) A Monte Carlo event generator is implemented for a two-Higgs-doublet model with maximal CP symmetry, the MCPM. The model contains five physical Higgs bosons: the ρ', behaving similarly to the standard-model Higgs boson, two extra neutral bosons h' and h'', and a charged pair H±. The special feature of the MCPM is that, concerning the Yukawa couplings, the bosons h', h'' and H± couple directly only to the second-generation fermions, but with strengths given by the third-generation-fermion masses. Our event generator allows the simulation of the Drell-Yan-type production processes of h', h'' and H± in proton-proton collisions at LHC energies. Also the subsequent leptonic decays of these bosons into the μ+μ-, μ+νμ and μ- anti-νμ channels are studied, as well as the dominant background processes. We estimate the integrated luminosities needed in pp collisions at center-of-mass energies of 8 and 14 TeV for significant observations of the Higgs bosons h', h'' and H± in these muonic channels. (orig.) 3. Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems Science.gov (United States) Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark 2016-03-01 The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to more effectively locally explore the configuration space of the posterior measure. However, obtaining such geometric quantities usually requires extensive computational effort, and this, despite their effectiveness, limits the applicability of these geometrically-based Monte Carlo methods. In this paper we explore one way to address this issue by the construction of an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian Process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator.
To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper provide a demonstration of the significant improvement possible in terms of computational loading, suggesting this is a promising avenue of further development. 4. A 'local' exponential transform method for global variance reduction in Monte Carlo transport problems International Nuclear Information System (INIS) We develop a 'Local' Exponential Transform method which distributes the particles nearly uniformly across the system in Monte Carlo transport calculations. An exponential approximation to the continuous transport equation is used in each mesh cell to formulate biasing parameters. The biasing parameters, which resemble those of the conventional exponential transform, tend to produce a uniform sampling of the problem geometry when applied to a forward Monte Carlo calculation, and thus they help to minimize the maximum variance of the flux. Unlike the conventional exponential transform, the biasing parameters are spatially dependent, and are automatically determined from a forward diffusion calculation. We develop two versions of the forward Local Exponential Transform method, one with spatial biasing only, and one with spatial and angular biasing. The method is compared to conventional geometry splitting/Russian roulette for several sample one-group problems in X-Y geometry. The forward Local Exponential Transform method with angular biasing is found to produce better results than geometry splitting/Russian roulette in terms of minimizing the maximum variance of the flux. (orig.) 5. EVALUATION OF AGILE METHODS AND IMPLEMENTATION OpenAIRE Hossain, Arif 2015-01-01 The concepts of agile development were introduced when programmers were experiencing various obstacles in building software. The waterfall model had become obsolete and was no longer a suitable process for developing software. Consequently, other development methods were introduced to mitigate its defects. The purpose of this thesis is to study different agile methods and find out the best one for software development. Each important agile method offers ... 6. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method Directory of Open Access Journals (Sweden) Shaoyun Ge 2014-01-01 Full Text Available In this paper we have treated the reliability assessment problem for low and high DG penetration levels in an active distribution system using the Monte Carlo simulation method. The problem is formulated as a two-case program, a program of low penetration simulation and a program of high penetration simulation. The load shedding strategy and the simulation process were introduced in detail during each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively. 7. Application of direct simulation Monte Carlo method for analysis of AVLIS evaporation process International Nuclear Information System (INIS) A computation code based on the direct simulation Monte Carlo (DSMC) method was developed in order to analyze the atomic vapor evaporation in atomic vapor laser isotope separation (AVLIS). The atomic excitation temperatures of the gadolinium atom were calculated for a model with five low-lying states.
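The excitation temperature extraction that closes the AVLIS record above is usually done with a Boltzmann plot: level populations scale as g_i exp(-E_i/kT), so a linear fit of log(n_i/g_i) against E_i recovers T. The sketch below is a toy check of that procedure; the five level energies and degeneracies are stand-in values chosen for illustration, not taken from the paper.

```r
## Boltzmann-plot sketch for an excitation temperature (assumed level data).
kB <- 0.695                      # Boltzmann constant in cm^-1 per K
E  <- c(0, 215, 533, 999, 1719)  # level energies (cm^-1), assumed
g  <- c(5, 7, 9, 11, 13)         # degeneracies 2J+1, assumed
T_true <- 2500                   # "unknown" temperature to recover (K)

pop <- g * exp(-E / (kB * T_true))
pop <- pop / sum(pop)
pop_noisy <- pop * exp(rnorm(length(pop), 0, 0.02))  # mimic MC statistical noise

fit <- lm(log(pop_noisy / g) ~ E)   # slope = -1 / (kB * T)
T_est <- -1 / (kB * coef(fit)[["E"]])
c(T_true = T_true, T_est = T_est)
```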
Calculation results were compared with experimental results obtained by laser absorption spectroscopy. Two types of DSMC simulations, which differed in the inelastic collision procedure, were carried out. It was concluded that the energy transfer was forbidden unless the total energy of the colliding atoms exceeds a threshold value. (author) 8. Integration of the adjoint gamma quantum transport equation by the Monte Carlo method International Nuclear Information System (INIS) A comparative description and analysis of the direct and adjoint algorithms for calculating gamma-quantum transmission in shielding using the Monte Carlo method have been carried out. Adjoint estimations for a number of monoenergetic sources have been considered. A brief description of the ''COMETA'' program for the BESM-6 computer, realizing the direct and adjoint algorithms, is presented. The program has a modular structure, which allows it to be extended by joining new modules. Results of solution by the adjoint branch of two analog problems, as compared to the analytical data, are presented. These results confirm the high efficiency of the ''COMETA'' program 9. Microlens assembly error analysis for light field camera based on Monte Carlo method Science.gov (United States) Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping 2016-08-01 This paper describes numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images with the coupling distance error, movement error and rotation error that could appear during microlens installation. By examining these images, sub-aperture images and refocus images, we found that the images present different degrees of fuzziness and deformation for different microlens assembly errors, while the subaperture image presents aliasing, obscured images and other distortions that result in unclear refocus images. 10. Using Markov Chain Monte Carlo methods to solve full Bayesian modeling of PWR vessel flaw distributions International Nuclear Information System (INIS) We present a hierarchical Bayesian method for estimating the density and size distribution of subclad-flaws in French Pressurized Water Reactor (PWR) vessels. This model takes into account in-service inspection (ISI) data, a flaw size-dependent probability of detection (different functions are considered) with a threshold of detection, and a flaw sizing error distribution (different distributions are considered). The resulting model is identified through a Markov Chain Monte Carlo (MCMC) algorithm. The article includes discussion for choosing the prior distribution parameters and an illustrative application is presented highlighting the model's ability to provide good parameter estimates even when a small number of flaws are observed 11. Percolation conductivity of Penrose tiling by the transfer-matrix Monte Carlo method Science.gov (United States) Babalievski, Filip V. 1992-03-01 A generalization of the Derrida and Vannimenus transfer-matrix Monte Carlo method has been applied to calculations of the percolation conductivity in a Penrose tiling. Strips with a length of ~10^4 and widths from 3 to 19 have been used. Disregarding the differences for smaller strip widths (up to 7), the results show that the percolative conductivity of a Penrose tiling has a value very close to that of a square lattice.
The estimate for the percolation transport exponent once more confirms the universality conjecture for the 0-1 distribution of resistors. 12. Forward-walking Green's function Monte Carlo method for correlation functions International Nuclear Information System (INIS) The forward-walking Green's Function Monte Carlo method is used to compute expectation values for the transverse Ising model in (1 + 1)D, and the results are compared with exact values. The magnetisation Mz and the correlation function pz(n) are computed. The algorithm reproduces the exact results, and convergence for the correlation functions seems almost as rapid as for local observables such as the magnetisation. The results are found to be sensitive to the trial wavefunction, however, especially at the critical point. Copyright (1999) CSIRO Australia 13. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy International Nuclear Information System (INIS) Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for the prescribed dose is made manually. A Monte-Carlo method Python library written at Madagascar INSTN is experimentally used to calculate the dose distribution on the tumour and around it. The first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs is set up with a listener patch run on each PC. The library will be used to model the dose distribution in the patient's CT image, for individualized and more accurate treatment time calculation for a prescribed dose. 14. Linewidth of Cyclotron Absorption in Band-Gap Graphene: Relaxation Time Approximation vs. Monte Carlo Method Directory of Open Access Journals (Sweden) S.V. Kryuchkov 2015-03-01 Full Text Available The power of the elliptically polarized electromagnetic radiation absorbed by band-gap graphene in the presence of a constant magnetic field is calculated. The linewidth of cyclotron absorption is shown to be non-zero even if the scattering is absent. The calculations are performed analytically with the Boltzmann kinetic equation and confirmed numerically with the Monte Carlo method. The dependence of the linewidth of the cyclotron absorption on temperature, applicable for a band-gap graphene in the absence of collisions, is determined analytically. 15. Investigation of the optimal parameters for laser treatment of leg telangiectasia using the Monte Carlo method Science.gov (United States) Kienle, Alwin; Hibst, Raimund 1996-05-01 Treatment of leg telangiectasia with a pulsed laser is investigated theoretically. The Monte Carlo method is used to calculate light propagation and absorption in the epidermis, dermis and the ectatic blood vessel. Calculations are made for different diameters and depths of the vessel in the dermis. In addition, the scattering and the absorption coefficients of the dermis are varied. On the basis of the considered damage model it is found that for vessels with diameters between 0.3 mm and 0.5 mm, wavelengths of about 600 nm are optimal to achieve selective photothermolysis. 16. Enhanced least squares Monte Carlo method for real-time decision optimizations for evolving natural hazards DEFF Research Database (Denmark) Anders, Annett; Nishijima, Kazuyoshi The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering.
The approach is based on the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option prices. In Anders & Nishijima (2011) the LSM is adapted for a real-time operational decision problem; however it is found that further improvement is required in regard to the computational efficiency, in order to make it practical. This is the focus in the present paper. The idea behind the... 17. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy Science.gov (United States) Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui 2014-06-01 The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation time. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce the accurate radiotherapy dose calculation time on a normal computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by FDS Team, a fast MC method for electron-photon coupled transport was presented with focus on two aspects: firstly, through simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with a slight reduction of calculation accuracy; secondly, a variety of MC calculation acceleration methods were used, for example, making use of information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, cervical carcinoma, etc. The result shows that the fast MC method for electron-photon transport was fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System ARTS as an MC dose verification module. 18. NASA astronaut dosimetry: Implementation of scalable human phantoms and benchmark comparisons of deterministic versus Monte Carlo radiation transport Science.gov (United States) 19. Numerical simulation of C/O spectroscopy in logging by Monte-Carlo method International Nuclear Information System (INIS) Numerical simulation of C/O spectroscopy in logging by the Monte-Carlo method is made in this paper. Agreeing well with the measured spectra, the simulated spectra can meet the requirements of logging practice. Various kinds of C/O ratios affected by different formation oil saturations, borehole oil fractions, casing sizes and concrete ring thicknesses are investigated. In order to achieve accurate results when processing the spectra, this paper presents a new method for unfolding the C/O inelastic gamma spectroscopy and analyses the spectra using this method; the results agree with the facts. These rules and the method can be used as calibration tools and for logging interpretation. (authors)
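Since the Longstaff & Schwartz (2001) regression step is the core of the LSM named in the enhanced-LSM record above, a compact sketch may help. The code below prices a Bermudan-style put on simulated geometric Brownian motion paths; the market parameters are arbitrary illustrative choices, and the quadratic regression basis is one common option among many.

```r
## Minimal Longstaff-Schwartz least squares Monte Carlo sketch (toy parameters).
set.seed(7)
S0 <- 36; K <- 40; r <- 0.06; sigma <- 0.2; T_mat <- 1; M <- 50; N <- 1e4
dt <- T_mat / M
## Simulate N geometric Brownian motion paths on an M-step grid.
Z <- matrix(rnorm(N * M), N, M)
incr <- (r - 0.5 * sigma^2) * dt + sigma * sqrt(dt) * Z
S <- S0 * exp(t(apply(incr, 1, cumsum)))
payoff <- function(s) pmax(K - s, 0)

cash <- payoff(S[, M])              # cash flow if held to maturity
for (m in (M - 1):1) {
  cash <- cash * exp(-r * dt)       # discount everything back one step
  itm <- payoff(S[, m]) > 0         # regress only on in-the-money paths
  x <- S[itm, m]
  fit <- lm(cash[itm] ~ x + I(x^2)) # continuation value regression: 1, S, S^2
  ex <- payoff(x) > fitted(fit)     # exercise where immediate payoff wins
  idx <- which(itm)[ex]
  cash[idx] <- payoff(x)[ex]
}
price <- mean(cash * exp(-r * dt))
price   # should land in the vicinity of 4.4-4.5 for these inputs
```

The same backward-regression pattern carries over to the decision problems above: "exercise" becomes an operational intervention and the regression approximates the expected value of waiting.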
20. Spin kinetic Monte Carlo method for nanoferromagnetism and magnetization dynamics of nanomagnets with large magnetic anisotropy Institute of Scientific and Technical Information of China (English) LIU Bang-gui; ZHANG Kai-cheng; LI Ying 2007-01-01 The Kinetic Monte Carlo (KMC) method based on transition-state theory, powerful and famous for simulating atomic epitaxial growth of thin films and nanostructures, was used recently to simulate the nanoferromagnetism and magnetization dynamics of nanomagnets with giant magnetic anisotropy. We present a brief introduction to the KMC method and show how to reformulate it for nanoscale spin systems. Large enough magnetic anisotropy, observed experimentally and shown theoretically in terms of first-principles calculation, is not only essential to stabilize spin orientation but also necessary in making the transition-state barriers during spin reversals for spin KMC simulation. We show two applications of the spin KMC method to monatomic spin chains and spin-polarized-current controlled composite nanomagnets with giant magnetic anisotropy. This spin KMC method can be applied to other anisotropic nanomagnets and composite nanomagnets as long as their magnetic anisotropy energies are large enough. 1. Differential Monte Carlo method for computing seismogram envelopes and their partial derivatives Science.gov (United States) Takeuchi, Nozomu 2016-05-01 We present an efficient method that is applicable to waveform inversions of seismogram envelopes for structural parameters describing scattering properties in the Earth. We developed a differential Monte Carlo method that can simultaneously compute synthetic envelopes and their partial derivatives with respect to structural parameters, which greatly reduces the required CPU time. Our method has no theoretical limitations in applying to problems with anisotropic scattering in a heterogeneous background medium. The effects of S wave polarity directions and phase differences between SH and SV components are taken into account. Several numerical examples are presented to show that the intrinsic and scattering attenuation at the depth range of the asthenosphere have different impacts on the observed seismogram envelopes, thus suggesting that our method can potentially be applied to inversions for scattering properties in the deep Earth. 2. Paediatric CT exposures: comparison between CTDIvol and SSDE methods using measurements and Monte Carlo simulations International Nuclear Information System (INIS) Computed tomography (CT) is one of the most used techniques in medical diagnosis, and its use has become one of the main sources of exposure of the population to ionising radiation. This work concentrates on paediatric patients, since children exhibit higher radiosensitivity than adults. Nowadays, patient doses are estimated through two standard CT dose index (CTDI) phantoms as a reference to calculate CTDI volume (CTDIvol) values. This study aims at improving the knowledge about the radiation exposure of children and at better assessing the accuracy of the CTDIvol method. The effectiveness of the CTDIvol method for patient dose estimation was then investigated through a sensitivity study, taking into account the doses obtained by three methods: measured CTDIvol, CTDIvol values simulated with the Monte Carlo (MC) code MCNPX, and the recently proposed Size-Specific Dose Estimate (SSDE) method. In order to assess organ doses, MC simulations were executed with paediatric voxel phantoms. (authors)
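A toy version of the spin KMC scheme in entry 20 fits in a few lines. The rejection-free (Gillespie-type) event loop below flips single spins in an Ising chain over an anisotropy barrier; the Arrhenius barrier form and every parameter are assumptions made for illustration, not values from the paper.

```r
## Toy spin kinetic Monte Carlo for an Ising chain (all parameters assumed).
set.seed(3)
L <- 50; J <- 1; Eb <- 3; kT <- 0.5; nu0 <- 1; steps <- 5000
spin <- sample(c(-1, 1), L, replace = TRUE)
neigh_sum <- function(s, i) s[ifelse(i == 1, L, i - 1)] + s[ifelse(i == L, 1, i + 1)]

t_now <- 0; mag <- numeric(steps)
for (k in 1:steps) {
  dE <- sapply(1:L, function(i) 2 * J * spin[i] * neigh_sum(spin, i)) # flip cost
  rates <- nu0 * exp(-(Eb + pmax(dE, 0)) / kT)  # barrier plus uphill energy, assumed form
  R <- sum(rates)
  t_now <- t_now - log(runif(1)) / R            # exponential waiting time
  i <- sample.int(L, 1, prob = rates)           # pick one event proportional to rate
  spin[i] <- -spin[i]
  mag[k] <- mean(spin)
}
plot(mag, type = "l", xlab = "KMC step", ylab = "magnetization")
```

The essential point the abstract makes is visible in the rate formula: without a sizeable barrier Eb the transition-state rates are ill-defined for a spin system, which is why large magnetic anisotropy is a precondition for spin KMC.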
3. Biases in approximate solution to the criticality problem and alternative Monte Carlo method International Nuclear Information System (INIS) The solution to the problem of criticality for the neutron transport equation using the source iteration method is addressed. In particular, the question of convergence of the iterations is examined. It is concluded that slow convergence problems will occur in cases where the optical thickness of the space region in question is large. Furthermore it is shown that, in general, the final result of the iterative process is strongly affected by an insufficient accuracy of the individual iterations. To avoid these problems, a modified method of solution is suggested. This modification is based on the results of the theory of positive operators. The criticality problem is solved by means of the Monte Carlo method by constructing special random variables so that the differences between the observed and exact results are arbitrarily small. The efficiency of the method is discussed and some numerical results are presented 4. Recent advances in the microscopic calculations of level densities by the shell model Monte Carlo method International Nuclear Information System (INIS) The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (1) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (2) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes 59-64Ni and of a heavy deformed rare-earth nucleus 162Dy and found them to be in close agreement with various experimental data sets. (author) 5. On solution to the problem of criticality by alternative MONTE CARLO method International Nuclear Information System (INIS) The contribution deals with the solution to the problem of criticality for the neutron transport equation. The problem is transformed to an equivalent one in a suitable set of complex functions, and the existence and uniqueness of its solution is shown. Then the source iteration method of solution is discussed. It is pointed out that the final result of the iterative process is strongly affected by the fact that individual iterations are not computed with sufficient accuracy. To avoid this problem a modified method of solution is suggested and presented. The modification is based on results of the theory of positive operators, and the problem of criticality is solved by the Monte Carlo method, constructing a special random process and variable so that the differences between the results obtained and the exact ones would be arbitrarily small. The efficiency of this alternative method is analysed as well (Author) 6. A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry International Nuclear Information System (INIS) The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by FDS Team, China, is an advanced modeling tool aiming to solve the modeling challenges for multi-physics coupling simulation.
The automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Process, was recently developed and integrated in MCAM5.2. This method can convert in both directions between a CAD model and a SuperMC input file. When converting from a CAD model to a SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC convex basic solids are generated and output. When converting from a SuperMC model back to a CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. This method was benchmarked with the ITER benchmark model. The results showed that the method was correct and effective. (author) 7. Recent Advances in the Microscopic Calculations of Level Densities by the Shell Model Monte Carlo Method CERN Document Server Alhassid, Y; Liu, S; Mukherjee, A; Nakada, H 2014-01-01 The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes $^{59-64}$Ni and of a heavy deformed rare-earth nucleus $^{162}$Dy and found them to be in close agreement with various experimental data sets. 8. International Nuclear Information System (INIS) Light transfer in gradient-index media generally follows curved ray trajectories, which will cause a light beam to converge or diverge during transfer and induce rotation of the polarization ellipse even when the medium is transparent. Furthermore, the combined process of scattering and transfer along curved ray paths makes the problem more complex. In this paper, a Monte Carlo method is presented to simulate polarized radiative transfer in gradient-index media that only support planar ray trajectories. The ray equation is solved to the second order to address the effect induced by curved ray trajectories. Three types of test cases are presented to verify the performance of the method, which include a transparent medium, a Mie scattering medium with an assumed gradient-index distribution, and Rayleigh scattering with a realistic atmospheric refractive index profile. It is demonstrated that atmospheric refraction has a significant effect on long-distance polarized light transfer. - Highlights: • A Monte Carlo method for polarized radiative transfer in gradient index media. • Effect of curved ray paths on polarized radiative transfer is considered. • Importance of atmospheric refraction for polarized light transfer is demonstrated 9. The applicability of certain Monte Carlo methods to the analysis of interacting polymers Energy Technology Data Exchange (ETDEWEB) Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States) 1998-05-01 The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium.
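Entry 9's baseline task, estimating mean equilibrium properties of self-avoiding walks, can be sketched with plain rejection sampling before the record's findings continue below. For simplicity the sketch uses a square lattice instead of the study's hexagonal lattice and omits the interaction energy, so it only illustrates the sampling problem, not the physics of the phase transition.

```r
## Simple sampling of short self-avoiding walks (square lattice, no interactions).
set.seed(11)
saw <- function(n) {
  steps <- matrix(c(1, 0, -1, 0, 0, 1, 0, -1), 4, 2, byrow = TRUE)
  pos <- matrix(0, n + 1, 2)
  keys <- "0,0"
  for (k in 1:n) {
    pos[k + 1, ] <- pos[k, ] + steps[sample.int(4, 1), ]
    key <- paste(pos[k + 1, ], collapse = ",")
    if (key %in% keys) return(NULL)   # self-intersection: reject the walk
    keys <- c(keys, key)
  }
  pos
}
n <- 12; R2 <- c()
while (length(R2) < 200) {
  w <- saw(n)
  if (!is.null(w)) R2 <- c(R2, sum(w[n + 1, ]^2))   # squared end-to-end distance
}
mean(R2)   # expected to grow roughly as n^(2*3/4) in 2D
```

The rejection rate grows exponentially with walk length, which is exactly why the study resorts to pivot moves with Metropolis acceptance for longer chains.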
Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metropolis rejection to locate the phase transition, which is known to occur at β_crit ≈ 0.99, and to recalculate the known value of the critical exponent ν ≈ 0.58 of the system for β = β_crit. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of β_crit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions can in practice fail to sample phase space accurately and thus not allow accurate estimations of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results. 10. Analysis of uncertainty quantification method by comparing Monte-Carlo method and Wilks' formula International Nuclear Information System (INIS) An analysis of the uncertainty quantification related to LBLOCA using the Monte-Carlo calculation has been performed and compared with the tolerance level determined by the Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LOCA phenomena were determined based on previous PIRT results and documentation during the BEMUSE project. Calculations were conducted on 3,500 cases within a 2-week CPU time on a 14-PC cluster system. The Monte-Carlo exercise shows that the 95% upper limit PCT value can be obtained well, with a 95% confidence level using the Wilks' formula, although we have to endure a 5% risk of PCT under-prediction. The results also show that the statistical fluctuation of the limit value using Wilks' first-order formula is as large as the uncertainty value itself. It is therefore desirable to increase the order of the Wilks' formula to be higher than the second order to estimate the reliable safety margin of the design features. It is also shown that, with its ever increasing computational capability, the Monte-Carlo method is accessible for a nuclear power plant safety analysis within a realistic time frame. 11. Simulation of the nucleation of the precipitate Al3Sc in an aluminum scandium alloy using the kinetic monte carlo method OpenAIRE Moura, Alfredo de; Esteves, António 2013-01-01 This paper describes the simulation of the phenomenon of nucleation of the precipitate Al3Sc in an Aluminum Scandium alloy using the kinetic Monte Carlo (kMC) method and the density-based clustering with noise (DBSCAN) method to filter the simulation data. To conduct this task, kMC and DBSCAN algorithms were implemented in C language.
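The first-order Wilks' formula invoked in entry 10 can be made concrete in a few lines: with N independent runs, the sample maximum bounds the 95th percentile of the output with confidence 1 - 0.95^N, so N = 59 is the smallest sample giving the 95%/95% statement. The toy "PCT" distribution below is an arbitrary stand-in used only to verify the coverage empirically.

```r
## First-order Wilks 95/95 sample size and an empirical coverage check.
N <- ceiling(log(1 - 0.95) / log(0.95))   # = 59 runs
conf <- 1 - 0.95^N                        # nominal confidence, about 0.952

set.seed(5)
covered <- replicate(10000,
  max(rnorm(N, 1000, 50)) >= qnorm(0.95, 1000, 50))  # toy output distribution
c(N = N, nominal_conf = conf, empirical_conf = mean(covered))
```

The abstract's caveat is also visible here: the maximum of 59 samples is a very noisy bound, which is why higher-order Wilks statements (larger N, using the second-largest value and beyond) give more stable margins.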
The study covers a range of temperatures, concentrations, and dimensions, going from 573 K to 873 K, 0.25% to 5%, and 50x50x50 to 100x100x100. The Al3Sc precipita... 12. Self-optimizing Monte Carlo method for nuclear well logging simulation Science.gov (United States) Liu, Lianyan 1997-09-01 In order to increase the efficiency of Monte Carlo simulation for nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated in the regular Monte Carlo calculation as a by-product, and the importance map is later used to conduct the splitting and Russian roulette for particle population control. By adopting a spatial mesh system, which is independent of the physical geometrical configuration, the method allows superior user-friendliness. This new method is incorporated into the general-purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test the performance of this new method. The calculations are sped up over analog simulation by 120 and 2600 times, for the neutron porosity tool and for the gamma-ray lithology density log, respectively. The new method performs better than MCNP's cell-based weight window by a factor of 4~6, as measured by the converged figures of merit. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes quite some time to generate a reasonable importance map from an analog run, a good initial map can create significant CPU time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for the neutron and gamma-ray cases. The learning ability towards a correct importance map is also demonstrated. Although false learning may happen, physical judgement can help diagnose it with contributon maps. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Due to the fact that a very 13. Monte Carlo simulation methods of determining red bone marrow dose from external radiation International Nuclear Information System (INIS) Objective: To provide evidence for a more reasonable method of determining red bone marrow dose by analyzing and comparing existing simulation methods. Methods: By utilizing the Monte Carlo simulation software MCNPX, the absorbed doses of red bone marrow of the Rensselaer Polytechnic Institute (RPI) adult female voxel phantom were calculated through 4 different methods: direct energy deposition, dose response function (DRF), King-Spiers factor method, and mass-energy absorption coefficient (MEAC). The radiation sources were defined as infinite plate sources with the energy ranging from 20 keV to 10 MeV, and 23 sources with different energies were simulated in total. The source was placed right next to the front of the RPI model to achieve a homogeneous anteroposterior radiation scenario. The results of different simulated photon energy sources through different methods were compared. Results: When the photon energy was lower than 100 keV, the direct energy deposition method gave the highest result while the MEAC and King-Spiers factor methods showed more reasonable results.
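The deep-penetration biasing that motivates schemes like entry 12 (and the exponential transform of entry 4 further above) reduces to one idea: sample from a stretched density and compensate with weights. The sketch below is a deliberately stripped-down, absorption-only version with invented numbers, not any of the cited codes.

```r
## Toy exponential-transform flavour importance sampling for deep penetration.
set.seed(9)
d <- 10        # slab depth in mean free paths, assumed
n <- 1e5
p_true <- exp(-d)              # analytic penetration probability, ~4.5e-5

## Analog: almost every history scores zero.
x <- rexp(n, rate = 1)
analog <- mean(x > d)

## Biased sampling with stretched free paths (rate 1/d) and weight
## correction w(x) = f(x) / g(x) for f = Exp(1), g = Exp(1/d).
x_b <- rexp(n, rate = 1 / d)
w <- dexp(x_b, 1) / dexp(x_b, 1 / d)
biased <- mean(w * (x_b > d))

c(true = p_true, analog = analog, transformed = biased)
```

With 1e5 analog histories the analog estimate is usually exactly zero, while the weighted estimator resolves the 1e-5-scale answer; the spatially varying biasing parameters of the cited methods generalize this one-dimensional trick.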
When the photon energy was higher than 150 keV, taking into account the higher absorption ability of red bone marrow at higher photon energies, the result of the King-Spiers factor method was larger than those of the other methods. Conclusions: The King-Spiers factor method might be the most reasonable method to estimate the red bone marrow dose from external radiation. (authors) 14. Wind Turbine Placement Optimization by means of the Monte Carlo Simulation Method Directory of Open Access Journals (Sweden) S. Brusca 2014-01-01 Full Text Available This paper defines a new procedure for optimising wind farm turbine placement by means of the Monte Carlo simulation method. To verify the algorithm's accuracy, an experimental wind farm was tested in a wind tunnel. On the basis of experimental measurements, the error on wind farm power output was less than 4%. The optimization maximises the energy production criterion; the wind turbines' ground positions were used as independent variables. Moreover, the mathematical model takes into account annual wind intensities and directions and wind turbine interaction. The optimization of a wind farm on a real site was carried out using measured wind data, dominant wind direction, and intensity data as inputs to run the Monte Carlo simulations. There were 30 turbines in the wind park, each rated at 20 kW. This choice was based on wind farm economics. The site was proportionally divided into 100 square cells, taking into account a minimum windward and crosswind distance between the turbines. The results highlight that the dominant wind intensity factor tends to overestimate the annual energy production by about 8%. Thus, the proposed method leads to a more precise annual energy evaluation and to a more optimal placement of the wind turbines. 15. Monteray Mark-I: Computer program (PC-version) for shielding calculation with Monte Carlo method International Nuclear Information System (INIS) A computer program for gamma-ray shielding calculation using the Monte Carlo method has been developed. The program is written in the WATFOR77 language. MONTERAY MARK-1 was originally developed by James Wood; the program was modified by the authors so that the modified version is easily executed. Applying the Monte Carlo method, the program follows gamma photon transport in infinite planar shields of various thicknesses. A gamma photon is followed until it escapes from the shield or its energy falls below the cut-off energy. The pair production process is treated as a pure absorption process, in that the annihilation photons generated in the process are neglected in the calculation. The output data calculated by the program are the total albedo, the build-up factor, and photon spectra. The calculated build-up factors for lead slab and water media with a 6 MeV parallel-beam gamma source are in agreement with published data. Hence the program is adequate as a shielding design tool for following gamma radiation transport in various media 16. Inconsistencies in widely used Monte Carlo methods for precise calculation of radial resonance captures in uranium fuel rods International Nuclear Information System (INIS) Although resonance neutron captures for 238U in water-moderated lattices are known to occur near moderator-fuel interfaces, the sharply attenuated spatial captures here have not been calculated by multigroup transport or Monte Carlo methods.
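The random-search core of the wind-farm procedure in entry 14 can be caricatured in a few lines: propose random layouts on a cell grid, score each with an energy criterion, and keep the best. The wake-penalty model and all numbers below are invented for illustration and bear no relation to the paper's calibrated wind-tunnel model.

```r
## Toy Monte Carlo layout search on a 10x10 cell grid (invented wake penalty).
set.seed(13)
n_turb <- 30
grid <- expand.grid(x = 1:10, y = 1:10)
energy <- function(idx) {
  pts <- grid[idx, ]
  d <- as.matrix(dist(pts))
  diag(d) <- Inf
  ## base yield per turbine minus a wake loss decaying with nearest spacing
  sum(1 - 0.25 * exp(-apply(d, 1, min) / 2))
}
best <- -Inf; best_idx <- NULL
for (trial in 1:2000) {
  idx <- sample(nrow(grid), n_turb)   # one random layout
  e <- energy(idx)
  if (e > best) { best <- e; best_idx <- idx }
}
best                                   # best annual-energy proxy found
plot(grid[best_idx, ], pch = 19, xlim = c(1, 10), ylim = c(1, 10))
```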
Advances in computer speed and capacity have restored interest in applying Monte Carlo methods to evaluate spatial resonance captures in fueled lattices. Recently published studies have placed complete reliance on the ostensible precision of the Monte Carlo approach without auxiliary confirmation that resonance processes were followed adequately or that the Monte Carlo method was applied appropriately. Other methods of analysis that have evolved from early resonance integral theory have provided a basis for an alternative approach to determine radial resonance captures in fuel rods. A generalized method has been formulated and confirmed by comparison with published experiments of high spatial resolution for radial resonance captures in metallic uranium rods. The same analytical method has been applied to uranium-oxide fuels. The generalized method defined a spatial effective resonance cross section that is a continuous function of distance from the moderator-fuel interface and enables direct calculation of precise radial resonance capture distributions in fuel rods. This generalized method is used as a reference for comparison with two recent independent studies that have employed different Monte Carlo codes and cross-section libraries. Inconsistencies in the Monte Carlo application or in how pointwise cross-section libraries are sampled may exist. It is shown that refined Monte Carlo solutions with improved spatial resolution would not asymptotically approach the reference spatial capture distributions 17. Derivation of a Monte Carlo method for modeling heterodyne detection in optical coherence tomography systems DEFF Research Database (Denmark) Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E. 2002-01-01 A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach to this...... from the sample will have a finite spatial coherence that cannot be accounted for by MC simulation. To estimate this intensity distribution adequately we have developed a novel method for modeling a focused Gaussian beam in MC simulation. This approach is valid for a softly as well as for a strongly...... focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently...
The simulations are compared with a continuum model for the on-axis density, temperature and velocity; the rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH$_3$ by He. 19. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters. Directory of Open Access Journals (Sweden) Kaarina Matilainen Full Text Available Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of the mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing an MC algorithm. Overall, use of an MC algorithm with Newton-type methods proved feasible, and the results encourage testing of these methods with different kinds of large-scale problem settings. 20. Concerned items on variance reduction method of monte carlo calculation written in published literatures. A logic of monte carlo calculation=from experience to science International Nuclear Information System (INIS) In fixed-source problems such as neutron deep penetration calculations with the Monte Carlo method, the application of variance reduction is most important for a high figure of merit (FOM) and the most reliable calculation. However, the MCNP calculation inputs found in the published literature are not always the best solutions. The items of most concern are the method for setting the lower weight bound in the weight window method and the exclusion radius for a point estimator. In those publications, the lower weight bound is estimated by engineering judgment or by the weight window generator in MCNP. In the latter case, the lower weight bound is often used without any tuning. Because of abnormally large lower weight bounds, many neutrons are killed to no purpose by Russian roulette. The adjoint flux method for setting the lower weight bound should be adopted as a standard variance reduction method.
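The weight-window concern in entry 20 has a simple mechanical core: Russian roulette with a survival weight of w/p keeps the expected tally unchanged, but an overly high lower weight bound kills a large fraction of histories for nothing. The sketch below demonstrates both facts on invented weights; it is not MCNP's implementation.

```r
## Russian roulette: unbiased in expectation, wasteful if the bound is too high.
set.seed(21)
n <- 1e5
w <- runif(n, 0, 1)               # incoming particle weights, assumed uniform
tally_exact <- sum(w)

roulette <- function(w, w_low, p_surv = 0.5) {
  out <- w
  low <- w < w_low                # particles below the lower weight bound
  survive <- runif(sum(low)) < p_surv
  out[low] <- 0
  out[low][survive] <- w[low][survive] / p_surv  # weight boost on survival
  out
}
w1 <- roulette(w, w_low = 0.2)    # modest lower bound
w2 <- roulette(w, w_low = 0.9)    # abnormally large lower bound
c(exact = tally_exact, modest = sum(w1), aggressive = sum(w2),
  killed_modest = mean(w1 == 0), killed_aggressive = mean(w2 == 0))
```

Both roulette variants reproduce the exact tally on average, but the aggressive bound kills roughly 45% of all histories here, which is the "killed to no purpose" waste the author warns about.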
The Monte Carlo calculation should be turned from experience, such as engineering judgment, to science, such as the adjoint method. (author) 1. Use of Monte Carlo Methods for Evaluating Probability of False Positives in Archaeoastronomy Alignments Science.gov (United States) Hull, Anthony B.; Ambruster, C.; Jewell, E. 2012-01-01 Simple Monte Carlo simulations can assist both the cultural astronomy researcher while the Research Design is developed and the eventual evaluators of research products. Following the method we describe allows assessment of the probability of false positives associated with a site. Even seemingly evocative alignments may be meaningless, depending on the site characteristics and the number of degrees of freedom the researcher allows. In many cases, an observer may have to limit comments to "it is nice and it might be culturally meaningful," rather than saying "it is impressive so it must mean something". We describe a basic language with an associated set of attributes to be cataloged. These can be used to set up simple Monte Carlo simulations for a site. Without corroborating cultural evidence, or trends with similar attributes (for example a number of sites showing the same anticipatory date), the Monte Carlo simulation can be used as a filter to establish the likelihood that the observed alignment phenomena are the result of random factors. Such analysis may temper any eagerness to prematurely attribute cultural meaning to an observation. For the most complete description of an archaeological site, we urge researchers to capture the site attributes in a manner which permits statistical analysis. We also encourage cultural astronomers to record that which does not work, and that which may seem to align but has no discernible meaning. Properly reporting situational information as tenets of the research design will reduce the subjective nature of archaeoastronomical interpretation. Examples from field work will be discussed. 2. Application of Monte Carlo method for dose calculation in thyroid follicle International Nuclear Information System (INIS) The Monte Carlo method is an important tool to simulate the interaction of radiation with biological media. The principal advantage of the method, when compared with deterministic methods, is the ability to simulate complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport, and they have the capacity to simulate energy deposition in models of organs and/or tissues, as well as in models of cells of the human body. Thus, the calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is of fundamental importance in dosimetry, because these cells are radiosensitive to ionizing radiation exposure, in particular to radioisotopes of iodine, a great amount of which may be released into the environment in case of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles, for Auger electrons, internal conversion electrons and beta particles, from iodine-131 and the short-lived iodines (131, 132, 133, 134 and 135), with follicle diameters varying from 30 to 500 μm. The results obtained from simulation with the MCNP4C code showed that, in the colloid, iodine-131 accounts on average for 25% of the total absorbed dose and the short-lived iodines for 75%.
For follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from low-energy particles, such as Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare doses obtained by the codes MCNP4C, EPOTRAN, EGS4 and by deterministic methods. (author) 3. A combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation OpenAIRE Saleur, H.; Derrida, B. 1985-01-01 In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations. 5. The effect of a number of selective points in modeling of polymerization reacting Monte Carlo method: studying the initiation reaction CERN Document Server 2003-01-01 The Monte Carlo method is one of the most powerful techniques to model different processes, such as polymerization reactions. With this method, without any need to solve moment equations, very detailed information on the structure and properties of polymers is obtained. The number of algorithm repetitions (the selected volumes of reactor for modelling, which represent the number of initial molecules) is very important in this method. In the Monte Carlo method, calculations are based on random number generation and reaction probability determinations, so the number of algorithm repetitions is very important. In this paper, the initiation reaction was considered alone and the importance of the number of initiator molecules for the results was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not large enough, because in that case the selected volume would not be representative of the whole system. 6. Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings. Science.gov (United States) Sadeghi, K; Gauthier, J L; Field, G D; Greschner, M; Agne, M; Chichilnisky, E J; Paninski, L 2013-01-01 It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment.
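Entry 5's point about sample size admits a quick numerical check: a first-order initiator decomposition simulated by Gillespie-type Monte Carlo converges to the analytic exponential decay only when enough initial molecules are used. The rate constant and times below are arbitrary illustrative choices, not values from the paper.

```r
## Toy Gillespie simulation of initiator decomposition I -> products,
## showing convergence to exp(-kd*t) as the molecule count N0 grows.
set.seed(17)
kd <- 1.0; t_end <- 2
decay_mc <- function(N0) {
  t <- 0; N <- N0
  while (N > 0) {
    t <- t - log(runif(1)) / (kd * N)   # waiting time to the next event
    if (t > t_end) break
    N <- N - 1
  }
  N / N0                                # surviving fraction at t_end
}
analytic <- exp(-kd * t_end)
sapply(c(10, 100, 1000, 100000), function(N0)
  c(N0 = N0, mc = decay_mc(N0), exact = analytic))
```

With 10 molecules a single run can miss the exact value badly; with 1e5 the relative error shrinks to the percent level, which is the representativeness argument the abstract makes.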
We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier "greedy" computational approaches. PMID:23194406

7. Business Scenario Evaluation Method Using Monte Carlo Simulation on Qualitative and Quantitative Hybrid Model
Science.gov (United States) Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa
We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and determine the effect on a destination node based on the divided ranges. In combining effects, we determine the effect of each arc using a contribution degree and sum all effects. Application to practical models confirms that there are no differences between results obtained from quantitative relations and results obtained by the proposed method at the 5% risk level.

8. Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot
International Nuclear Information System (INIS)
This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot, the IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), six of which are contributed by the parallel mechanism and the rest by the serial mechanism. In this paper, a kinematic error model which involves 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of a Markov Chain Monte Carlo (MCMC) approach. The computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results of the marginal posterior distributions of the estimated model parameters indicate that our method is reliable and robust.

9. Calculation of the radiation transport in rock salt using Monte Carlo methods. Final report. HAW project
International Nuclear Information System (INIS)
This report provides absorbed dose rate and photon fluence rate distributions in rock salt around 30 testwise emplaced canisters containing high-level radioactive material (HAW project) and around a single canister containing radioactive material of a lower activity level (INHAW experiment). The site of this test emplacement was located in test galleries at the 800-m level in the Asse salt mine. The data given were calculated using a Monte Carlo method simulating photon transport in complex geometries of differently composed materials.
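As a rough aside, the kind of photon-transport sampling that such calculations rest on can be illustrated with a one-dimensional R toy. This is not the geometry or data of the report; the attenuation coefficient, scattering probability and slab thickness are invented:

```
# Toy Monte Carlo photon transport through a slab (all parameters assumed).
set.seed(1)
mu_t   <- 0.2    # total attenuation coefficient (1/cm), illustrative only
p_scat <- 0.6    # probability that a collision is a scatter, illustrative
L      <- 30     # slab thickness (cm)
n      <- 2e4    # number of photon histories
transmitted <- 0
for (i in 1:n) {
  x <- 0; d <- 1                       # position and direction cosine
  repeat {
    x <- x + d * rexp(1, mu_t)         # sample distance to next collision
    if (x >= L) { transmitted <- transmitted + 1; break }
    if (x < 0) break                   # escaped backwards out of the slab
    if (runif(1) > p_scat) break       # absorbed at the collision site
    d <- runif(1, -1, 1)               # isotropic scatter: new cosine
  }
}
transmitted / n                        # estimated transmission fraction
```

Real codes differ mainly in tracking three-dimensional geometry and material-dependent cross-sections, but the sampling loop has this shape.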
The aim of these calculations was to enable the dose absorbed in any arbitrary salt sample to be further examined in the future to be determined with sufficient reliability. The geometry of the test arrangement, the materials involved and the calculational method are characterized, the results are briefly described and some figures presenting selected results are shown. In the appendices, the results for the emplacement of the highly radioactive canisters are given in tabular form. (orig.)

10. Using a neutron source to distinguish mustard gas bombs from other bombs with the Monte Carlo simulation method
International Nuclear Information System (INIS)
After Japan's defeat, the chemical weapons left behind in China have continued to injure people, causing grave losses to the Chinese because of the population's unfamiliarity with them. Among these accidents, mustard gas bombs are the most common. It is difficult to distinguish a mustard gas bomb from an ordinary bomb in the field, because it has been buried in the earth for a long time and leakage, erosion and rust are severe. A non-contact measurement method, neutron-source-induced γ spectroscopy, is therefore very important. In this paper the Monte Carlo method was used to compute the γ spectrum produced when a neutron source irradiates a mustard gas bomb. The characteristic radiation of Cl, S, Fe and the other elements can be picked out clearly. The results provide a useful reference for analyzing γ spectra. (authors)

11. Heat-Flux Analysis of Solar Furnace Using the Monte Carlo Ray-Tracing Method
International Nuclear Information System (INIS)
An understanding of the concentrated solar flux is critical for the analysis and design of solar-energy-utilization systems. The current work focuses on the development of an algorithm that uses the Monte Carlo ray-tracing method, with excellent flexibility and expandability; this method considers both solar limb darkening and the surface slope error of the reflectors in analyzing the solar flux. A comparison of the modeling results with measurements at the solar furnace of the Korea Institute of Energy Research (KIER) shows good agreement within a measurement uncertainty of 10%. The model evaluates the concentration performance of the KIER solar furnace, with a tracking accuracy of 2 mrad and a maximum attainable concentration ratio of 4400 suns. Flux variations according to measurement position and flux distributions depending on acceptance angles provide detailed information for the design of chemical reactors or secondary concentrators.

12. Intra-operative radiation therapy optimization using the Monte Carlo method
International Nuclear Information System (INIS)
The problem addressed with reference to the treatment head optimization has been the choice of the proper design of the head of a new 12 MeV linear accelerator in order to obtain the required dose uniformity on the target volume while keeping the dose rate sufficiently high and the photon production and the beam impact with the head walls within acceptable limits. The second part of the optimization work, concerning the TPS, is based on the rationale that the TPSs generally used in radiotherapy use semi-empirical algorithms whose accuracy can be inadequate, particularly when irregular surfaces and/or inhomogeneities, such as air cavities or bone, are present. The Monte Carlo method, on the contrary, is capable of accurately calculating the dose distribution under almost all circumstances.
Furthermore, it offers the advantage of allowing the simulation of radiation transport in the patient to start from the beam data obtained by transport through the specific treatment head used. Therefore the Monte Carlo simulations, which at present are not yet widely used for routine treatment planning due to the required computing time, can be employed as a benchmark and as an optimization tool for conventional TPSs. (orig.)

13. Intra-operative radiation therapy optimization using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB) Rosetti, M. [ENEA, Bologna (Italy); Benassi, M.; Bufacchi, A.; D'Andrea, M. [Ist. Regina Elena, Rome (Italy); Bruzzaniti, V. [ENEA, S. Maria di Galeria (Rome) (Italy)] 2001-07-01
The problem addressed with reference to the treatment head optimization has been the choice of the proper design of the head of a new 12 MeV linear accelerator in order to obtain the required dose uniformity on the target volume while keeping the dose rate sufficiently high and the photon production and the beam impact with the head walls within acceptable limits. The second part of the optimization work, concerning the TPS, is based on the rationale that the TPSs generally used in radiotherapy use semi-empirical algorithms whose accuracy can be inadequate, particularly when irregular surfaces and/or inhomogeneities, such as air cavities or bone, are present. The Monte Carlo method, on the contrary, is capable of accurately calculating the dose distribution under almost all circumstances. Furthermore, it offers the advantage of allowing the simulation of radiation transport in the patient to start from the beam data obtained by transport through the specific treatment head used. Therefore the Monte Carlo simulations, which at present are not yet widely used for routine treatment planning due to the required computing time, can be employed as a benchmark and as an optimization tool for conventional TPSs. (orig.)

14. Improvement of the neutron flux calculations in thick shield by conditional Monte Carlo and deterministic methods
Energy Technology Data Exchange (ETDEWEB) Ghassoun, Jillali; Jehoauni, Abdellatif [Nuclear physics and Techniques Lab., Faculty of Science, Semlalia, Marrakech (Morocco)] 2000-01-01
In practice, estimating the flux from the Fredholm integral equation requires truncating the Neumann series. The order N of the truncation must be large in order to get a good estimate, but a large N induces a very long computation time. The conditional Monte Carlo method is therefore used to reduce the computation time without affecting the quality of the estimate. In previous works, in order to obtain rapid convergence of the calculations, only weakly diffusing media were considered, which permitted truncating the Neumann series after about 20 terms. But in most practical shields, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms gives a bad estimate of the flux, so higher orders become necessary for a good estimate. We suggest two simple techniques based on conditional Monte Carlo: a simple density for sampling the steps of the random walk, and a modified stretching-factor density depending on a biasing parameter, which stretches or shrinks the original random walk so that the chain ends at a given point of interest.
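The weighting logic behind such biased step densities can be illustrated with a generic importance-sampling toy in R. The biased rate b and the rare event being estimated are arbitrary stand-ins, not the stretching density of the paper:

```
# Estimate P(sum of k exponential steps > d) by sampling the steps from a
# biased rate b and correcting with the likelihood-ratio weight.
set.seed(1)
k <- 20; d <- 30; b <- 0.7; n <- 1e4    # all values illustrative
biased <- replicate(n, {
  s <- rexp(k, rate = b)                # longer-than-nominal step lengths
  w <- prod(dexp(s, 1) / dexp(s, b))    # weight back to the true rate 1
  w * (sum(s) > d)                      # weighted score of the event
})
analog <- replicate(n, sum(rexp(k, 1)) > d)
c(biased = mean(biased), analog = mean(analog))  # means agree; the biased
# estimator concentrates samples on the rare event and typically has a
# much smaller variance for the same n
```

Stretching the walk toward the region of interest and repairing the bias with a weight is exactly the trade the abstract describes, in miniature.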
We also obtained a simple empirical formula which gives the neutron flux for a medium characterized only by its scattering probability. The results are compared to the exact analytic solution; good agreement and a good acceleration of the convergence of the calculations were obtained. (author)

15. Improvement of the neutron flux calculations in thick shield by conditional Monte Carlo and deterministic methods
International Nuclear Information System (INIS)
In practice, estimating the flux from the Fredholm integral equation requires truncating the Neumann series. The order N of the truncation must be large in order to get a good estimate, but a large N induces a very long computation time. The conditional Monte Carlo method is therefore used to reduce the computation time without affecting the quality of the estimate. In previous works, in order to obtain rapid convergence of the calculations, only weakly diffusing media were considered, which permitted truncating the Neumann series after about 20 terms. But in most practical shields, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms gives a bad estimate of the flux, so higher orders become necessary for a good estimate. We suggest two simple techniques based on conditional Monte Carlo: a simple density for sampling the steps of the random walk, and a modified stretching-factor density depending on a biasing parameter, which stretches or shrinks the original random walk so that the chain ends at a given point of interest. We also obtained a simple empirical formula which gives the neutron flux for a medium characterized only by its scattering probability. The results are compared to the exact analytic solution; good agreement and a good acceleration of the convergence of the calculations were obtained. (author)

16. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method
KAUST Repository Li, Jun 2014-01-01
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble averages over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the number of cycles between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press. 17.
Advantages and weaknesses of the Monte Carlo method used in studies for safety-criticality in nuclear installations
International Nuclear Information System (INIS)
The choice of the Monte Carlo method by the criticality service of the CEA is justified by the advantages of this method over analytical codes. In this paper the authors present the advantages and the weaknesses of this method. Some studies to remedy these weaknesses are presented.

18. Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
Energy Technology Data Exchange (ETDEWEB) Peplow, Douglas E. [ORNL]; Miller, Thomas Martin [ORNL]; Patton, Bruce W [ORNL]; Wagner, John C [ORNL] 2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach to SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for the design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.

19. Coherent-wave Monte Carlo method for simulating light propagation in tissue
Science.gov (United States) Kraszewski, Maciej; Pluciński, Jerzy 2016-03-01
Simulating the propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require a large amount of computer memory and long computation times. This makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, allows simulating only the propagation of light averaged over the ensemble of turbid-medium realizations. This makes them unsuitable for simulating phenomena connected to the coherence properties of light.
We propose a new method for simulating the propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. This method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for the simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.

20. Treatment of the Schrödinger equation through a Monte Carlo method based upon the generalized Feynman-Kac formula
International Nuclear Information System (INIS)
We present a new Monte Carlo method based upon the theoretical proposal of Claverie and Soto. In contrast with other quantum Monte Carlo methods used so far, the present approach uses a pure diffusion process without any branching. The many-fermion problem (with the specific constraint due to the Pauli principle) receives a natural solution in the framework of this method: in particular, there is neither the fixed-node approximation nor the nodal release problem which occur in other approaches (see, e.g., Ref. 8 for a recent account). We give some numerical results concerning simple systems in order to illustrate the numerical feasibility of the proposed algorithm.

1. Development of synthetic velocity - depth damage curves using a Weighted Monte Carlo method and Logistic Regression analysis
Science.gov (United States) Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A. 2014-05-01
Damage curves are the most significant component of flood loss estimation models. Their development is quite complex. Two types of damage curves exist: historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be developed as an alternative. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model, was developed and presented in this work. A questionnaire-based survey took place among practicing and research agronomists in order to generate rural loss data based on the respondents' loss estimates for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors.
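The pairing of weighted Monte Carlo resampling with a logistic fit is easy to sketch in R. The depths, damage rates and weights below are fabricated placeholders, not the questionnaire data, and the original WMCLR code is in Python:

```
# Toy weighted-resampling + logistic-regression damage curve.
set.seed(1)
depth <- c(0.2, 0.5, 0.8, 1.2, 1.8, 2.5)        # floodwater depth (m), assumed
pdam  <- c(0.05, 0.20, 0.50, 0.80, 0.95, 0.99)  # expert damage rates, made up
w     <- c(3, 2, 1, 2, 3, 1)                    # hypothetical response weights
idx  <- sample(seq_along(depth), 500, replace = TRUE, prob = w)  # weighted MC
boot <- data.frame(depth = depth[idx],
                   damage = rbinom(500, 1, pdam[idx]))  # synthetic loss data
fit  <- glm(damage ~ depth, family = binomial, data = boot)  # logistic curve
predict(fit, data.frame(depth = 1.0), type = "response")     # curve at 1 m
```

Evaluating `predict` over a grid of depths then traces out the full synthetic damage curve.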
A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution

2. Application of multi-stage Monte Carlo method for solving machining optimization problems
Directory of Open Access Journals (Sweden) 2014-08-01
Full Text Available Enhancing the overall machining performance implies optimization of machining processes, i.e. determination of the optimal combination of machining parameters. Optimization of machining processes is an active field of research where different optimization methods are used to determine the optimal combination of different machining parameters. In this paper, the multi-stage Monte Carlo (MC) method was employed to determine optimal combinations of machining parameters for six machining processes: drilling, turning, turn-milling, abrasive waterjet machining, electrochemical discharge machining and electrochemical micromachining. The optimization solutions obtained by the multi-stage MC method were compared with the solutions of past researchers obtained using meta-heuristic optimization methods, e.g. the genetic algorithm, the simulated annealing algorithm, the artificial bee colony algorithm and the teaching-learning-based optimization algorithm. The obtained results prove the applicability and suitability of the multi-stage MC method for solving machining optimization problems with up to four independent variables. Specific features, merits and drawbacks of the MC method are also discussed.

3. Calculation of neutron importance function in fissionable assemblies using Monte Carlo method
International Nuclear Information System (INIS)
The purpose of the present work is to develop an efficient solution method for calculating the neutron importance function in fissionable assemblies, for all criticality conditions, using the Monte Carlo method. The neutron importance function plays an important role in perturbation theory and reactor dynamics calculations. Usually this function is determined by calculating the adjoint flux, solving the adjoint-weighted transport equation with deterministic methods; however, in complex geometries these calculations are very difficult. In this article, considering the capabilities of the MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance has been introduced for calculating the neutron importance function in subcritical, critical and supercritical conditions. For this purpose, a computer program has been developed. The results of the method have been benchmarked against ANISN code calculations in 1- and 2-group modes for simple geometries, and their correctness has been confirmed for all three criticality conditions. Ultimately, the efficiency of the method for complex geometries has been shown by the calculation of neutron importance in the MNSR research reactor.

4. Generation of organic scintillators response function for fast neutrons using the Monte Carlo method
International Nuclear Information System (INIS)
A computer program (DALP), written in Fortran-4-G, has been developed using the Monte Carlo method to simulate the experimental techniques leading to the distribution of pulse heights produced by monoenergetic neutrons reaching an organic scintillator.
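As a toy version of the physics such programs sample: the dominant light-producing process for fast neutrons in organic scintillators is n-p elastic scattering, for which the proton recoil energy is, to a good approximation, uniform between zero and the neutron energy. A first-collision sketch in R, with an invented resolution model (this is not the DALP program):

```
# Toy pulse-height spectrum for monoenergetic neutrons on an organic
# scintillator: single n-p scatter, recoil energy ~ Uniform(0, En),
# smeared by a crude Gaussian resolution (resolution model assumed).
set.seed(1)
En <- 2.5                          # neutron energy (MeV), example value
n  <- 1e5                          # number of simulated neutrons
Ep <- runif(n, 0, En)              # proton recoil energies
sigma <- 0.05 * sqrt(Ep + 0.01)    # assumed energy-dependent resolution
pulse <- rnorm(n, mean = Ep, sd = sigma)
hist(pulse, breaks = 100, main = "Toy recoil pulse-height spectrum",
     xlab = "pulse height (MeV equivalent)")
```

A full response-function code adds multiple scattering, carbon recoils, light-output nonlinearity and detector geometry on top of this skeleton.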
The calculation of the pulse height distribution has been done for two different systems: 1) monoenergetic neutrons from a point source reaching the flat face of a cylindrical organic scintillator; 2) environmental monoenergetic neutrons randomly reaching either the flat or curved face of the cylindrical organic scintillator. The computer program has been developed to be applied to the NE-213 liquid organic scintillator, but it can be easily adapted to any other kind of organic scintillator. With this program one can determine the pulse height distribution for neutron energies ranging from 15 keV to 10 MeV. (Author)

5. Markov Chain Monte Carlo methods applied to measuring the fine structure constant from quasar spectroscopy
Science.gov (United States) King, Julian; Mortlock, Daniel; Webb, John; Murphy, Michael 2010-11-01
Recent attempts to constrain cosmological variation in the fine structure constant, α, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated not to pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for Δα/α, the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.

6. Markov Chain Monte Carlo methods applied to measuring the fine structure constant from quasar spectroscopy
CERN Document Server King, Julian A; Webb, John K; Murphy, Michael T 2009-01-01
Recent attempts to constrain cosmological variation in the fine structure constant, alpha, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated not to pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for (Delta alpha)/(alpha), the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.

7. Determination of dosimetric characteristics of 125I-103Pd brachytherapy source with Monte-Carlo method
International Nuclear Information System (INIS)
From the seed-source dose parameter formalism recommended by AAPM TG43U1, the dose parameter calculation formulas for a 125I-103Pd seed source, and for composite seed sources with a variety of radionuclides, can be obtained. The dose rate constant, radial dose function and anisotropy function of the 125I-103Pd composite seed source are calculated by the Monte Carlo method, and empirical equations are obtained for the radial dose function and anisotropy function by curve fitting. Comparisons with the corresponding data recommended by the AAPM are performed.
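The curve-fitting step mentioned above can be sketched generically in R; the tabulated g(r) values below are invented placeholders, not the 125I-103Pd results:

```
# Toy fit of a radial dose function g(r) by a polynomial, mimicking the
# empirical-equation step; the tabulated values are invented placeholders.
r <- c(0.5, 1, 1.5, 2, 3, 4, 5, 7)                       # distance (cm)
g <- c(1.04, 1.00, 0.93, 0.85, 0.67, 0.51, 0.38, 0.20)   # g(r), made up
fit <- lm(g ~ poly(r, 4, raw = TRUE))    # 4th-order polynomial fit
coef(fit)                                # coefficients of the empirical equation
rr <- seq(0.5, 7, 0.1)
plot(r, g, xlab = "r (cm)", ylab = "g(r)")
lines(rr, predict(fit, data.frame(r = rr)))
```

By convention g(r) is normalized to 1 at r = 1 cm, which the fitted values above respect approximately.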
For the single source, the dose rate constant is 0.959 cGy·h-1·U-1, deviating by 0.6093% from the AAPM value. (authors)

8. Monte Carlo study of living polymers with the bond-fluctuation method
Science.gov (United States) Rouault, Yannick; Milchev, Andrey 1995-06-01
The highly efficient bond-fluctuation method for Monte Carlo simulations of both static and dynamic properties of polymers is applied to a system of living polymers. In parallel with the stochastic movements of monomers, which result in Rouse dynamics of the macromolecules, the polymer chains break, or associate at chain ends with other chains and single monomers, in the process of equilibrium polymerization. We study the changes in equilibrium properties, such as the molecular-weight distribution, average chain length and radius of gyration, as well as the specific heat, with varying density and temperature of the system. The results of our numerical experiments indicate very good agreement with the recently suggested description in terms of the mean-field approximation. The coincidence of the specific-heat maximum position at kBT=V/4 in both theory and simulation suggests the use of calorimetric measurements for the determination of the scission-recombination energy V in real experiments.

9. Electric conduction in semiconductors: a pedagogical model based on the Monte Carlo method
International Nuclear Information System (INIS)
We present a pedagogical approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier transport in moderate electric fields. The semiconductor behaviour is described by substituting the traditional statistical approach (requiring a deep mathematical background) with microscopic models, based on the Monte Carlo method, in which simple rules applied to microscopic particles and quasi-particles determine the macroscopic properties. We compare measurements of the electric properties of matter with 'virtual experiments' built by using models in which the physical concepts can be presented at different levels of formalization.

10. Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods
CERN Document Server Ferraioli, Luigi; Plagnol, Eric 2012-01-01
We present a parameter estimation procedure based on a Bayesian framework, applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space-based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to... 11.
MAMONT program for neutron field calculation by the Monte Carlo method
International Nuclear Information System (INIS)
The MAMONT program (MAthematical MOdelling of Neutron Trajectories), designed for three-dimensional calculation of neutron transport by analogue and non-analogue Monte Carlo methods in the energy range from 15 MeV down to thermal energies, is described. The program is written in FORTRAN and implemented on the BESM-6 computer. Group constants of the library module are compiled from the ENDL-83, ENDF/B-4 and JENDL-2 files. Calculations for layered spherical, cylindrical and rectangular configurations are envisaged. Accumulation and averaging of slowing-down kinetics functionals (averaged logarithmic energy losses, slowing-down time, free paths, number of collisions, age), diffusion parameters, leakage spectra and fluxes, as well as the formation of separate isotopes over zones, are carried out in the course of the calculation. 16 tabs

12. Absorbed dose measurements in mammography using Monte Carlo method and ZrO2+PTFE dosemeters
International Nuclear Information System (INIS)
Mammography is a central tool for breast cancer diagnosis. In addition, screening programs are conducted periodically to detect asymptomatic women in certain age groups; these programs have shown a reduction in breast cancer mortality. Early detection of breast cancer is achieved through mammography, which contrasts the glandular and adipose tissue with a probable calcification. The parameters used for mammography are based on the thickness and density of the breast; their values depend on the voltage, current, focal spot and anode-filter combination. To achieve a clear image with a minimum dose, appropriate irradiation conditions must be chosen. The risk associated with mammography should not be ignored. This study was performed in the General Hospital No. 1 IMSS in Zacatecas. A glucose phantom was used, and the air kerma at the entrance of the breast was measured with ZrO2+PTFE thermoluminescent dosemeters and calculated using Monte Carlo methods; this calculation was completed by computing the absorbed dose. (author)

13. Investigation of Reliabilities of Bolt Distances for Bolted Structural Steel Connections by Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden) Ertekin Öztekin Öztekin 2015-12-01
Full Text Available The distances of bolts to each other and the distances of bolts to the edge of connection plates are designed based on minimum and maximum boundary values proposed by structural codes. In this study, the reliabilities of those distances were investigated. For this purpose, loading types, bolt types and plate thicknesses were taken as variable parameters. The Monte Carlo Simulation (MCS) method was used in the reliability computations performed for all combinations of those parameters. At the end of the study, all reliability index values for those distances are presented in graphs and tables. The results obtained were compared with the values proposed by some structural codes, and some evaluations of those comparisons were made. Finally, it was emphasized that using the same bolt distances in both traditional designs and higher-reliability designs would be incorrect. 14.
Efficiency determination of whole-body counter by Monte Carlo method, using a microcomputer
International Nuclear Information System (INIS)
The purpose of this investigation was the development of an analytical microcomputer model to evaluate the efficiency of a whole-body counter. The model is based on a modified Snyder model. A stretcher-type geometry was used, along with the Monte Carlo method and a Sinclair-type microcomputer. Experimental measurements were performed using two phantoms, one representing an adult and the other a 5-year-old child. The phantoms were made of acrylic, and 99mTc, 131I and 42K were the radioisotopes utilized. Results showed a close relationship between experimental and predicted data for energies ranging from 250 keV to 2 MeV, but some discrepancies were found for lower energies. (author)

15. Investigation of physical regularities in gamma gamma logging of oil wells by Monte Carlo method
International Nuclear Information System (INIS)
Some results of Monte Carlo calculations for specific problems of gamma-gamma density logging are given. The paper considers the influence of probe length and of the volume density of the rocks; the angular distribution of the scattered radiation incident on the instrument; the spectra of the radiation being recorded and of the source radiation; survey depths; the effect of the mud cake; the possibility of collimating the source radiation; the choice of source, initial collimation angles, the optimum angle for recording scattered gamma radiation and the radiation discrimination threshold; and the possibility of determining the mineralogical composition of rocks in sections of oil wells and of identifying once-scattered radiation. (author)

16. Application of Monte Carlo method in modelling physical and physico-chemical processes
International Nuclear Information System (INIS)
The seminar was held on September 9 and 10, 1982 at the Faculty of Nuclear Science and Technical Engineering of the Czech Technical University in Prague. The participants heard 11 papers, of which 7 were entered into INIS. The papers dealt with the use of the Monte Carlo method for modelling the transport and scattering of gamma radiation in layers of materials, the application of low-energy gamma radiation for the determination of secondary X radiation flux, the determination of self-absorption corrections for a 4π chamber, modelling the response function of a scintillation detector, and the optimization of the geometrical configuration in measuring material density using backscattered gamma radiation. The possibility of optimizing the modelling with regard to computer time was studied, and the participants were informed of computerized nuclear data libraries. (M.D.)

17. Simulation of nuclear material identification system based on Monte Carlo sampling method
International Nuclear Information System (INIS)
Background: Owing to the danger of radioactivity, nuclear material identification is sometimes a difficult problem. Purpose: To reflect the particle transport processes in nuclear fission and to demonstrate the effectiveness of the signatures of the Nuclear Materials Identification System (NMIS), based on physical principles and experimental statistical data. Methods: We established a Monte Carlo simulation model of the nuclear material identification system and then acquired three channels of time-domain pulse signals.
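The correlation signatures listed next can be illustrated on fabricated channel signals in R; this toy is unrelated to the actual simulation model of the paper, and the shared-component construction is purely for demonstration:

```
# Toy correlation signatures between two detection channels that share a
# common (delayed) source-driven component plus independent noise.
set.seed(1)
common <- rbinom(2048, 1, 0.05)                  # shared event train, toy
ch1 <- common + rnorm(2048, sd = 0.2)            # channel 1
ch2 <- c(rep(0, 3), head(common, -3)) +          # channel 2, delayed 3 bins
       rnorm(2048, sd = 0.2)
ccf(ch1, ch2, lag.max = 20)          # cross-correlation: peak near lag -3
spectrum(cbind(ch1, ch2), spans = 9) # auto and cross power spectral densities
```

The location and shape of the cross-correlation peak is the kind of feature a system like NMIS turns into a material signature.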
Results: Auto-correlation functions (AC), cross-correlation functions (CC), auto power spectral densities (APSD) and cross power spectral densities (CPSD) between channels yield several signatures, which reveal some characteristics of the nuclear material. Conclusions: The simulation results indicate that this approach can help in further studies of the features of the system. (authors)

18. An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
Science.gov (United States) Kim, Stacy 2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick disk interior, are under-sampled, while others are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations, at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.

19. Calculation of narrow beam γ ray mass attenuation coefficients of absorbing medium by Monte Carlo method
International Nuclear Information System (INIS)
A mathematical model of particle transport was built by sampling the interaction histories of narrow-beam γ photons in the medium, according to the principles of the interaction between γ photons and matter. A computer program was written in LabWindows/CVI to simulate the transport of γ photons in the medium and to record the transmission probability of γ photons and the corresponding thickness of the medium, which were used to calculate the narrow-beam γ-ray mass attenuation coefficients of the absorbing medium. The results show that the Monte Carlo method is a feasible way to calculate narrow-beam γ-ray mass attenuation coefficients of an absorbing medium. (authors)

20. A Monte Carlo method for critical systems in infinite volume: the planar Ising model
CERN Document Server Herdeiro, Victor 2016-01-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated with generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane predictions. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge. 1.
Development of a software package for solid-angle calculations using the Monte Carlo method
International Nuclear Information System (INIS)
Solid-angle calculations, which are often complicated, play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique has been integrated. The package, developed in the environment of Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface in which the visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate, without any difficulty, the solid angle subtended by a detector with different geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) to a point, circular or cylindrical source. The results obtained from the proposed software package were compared with those obtained from previous studies and calculated using Geant4. The comparison shows that the proposed software package can produce accurate solid-angle values with a greater computation speed than Geant4. -- Highlights: • This software package (SAC) can give accurate solid-angle values. • SAC calculates solid angles using the Monte Carlo method and has a higher computation speed than Geant4. • A simple but effective variance reduction technique put forward by the authors has been applied in SAC. • A visualization function and a graphical user interface are also integrated in SAC.

2. Energy conservation in radiation hydrodynamics. Application to the Monte-Carlo method used for photon transport in the fluid frame
International Nuclear Information System (INIS)
The description of the equations in the fluid frame has been done recently. A simplification of the collision term is obtained, but the streaming term now has to include angular deviation and the Doppler shift. We choose the latter description, which is more convenient for our purpose. We introduce some notation and recall some facts about stochastic kernels and the Monte Carlo method. We show how to apply the Monte Carlo method to a transport equation with an arbitrary streaming term; in particular, we show that the track-length estimator is unbiased. We review some properties of the radiation hydrodynamics equations and show how energy conservation is obtained. Then, we apply the Monte Carlo method explained in section 2 to the particular case of the transfer equation in the fluid frame. Finally, we describe a physical example and give some numerical results.

3. Method to implement the CCD timing generator based on FPGA
Science.gov (United States) Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin 2010-07-01
With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement a CCD timing generator based on FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module.
The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented in 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor, which is the controller of this generator. Some test results are presented at the end.

4. Exposure-response modeling methods and practical implementation
CERN Document Server Wang, Jixian 2015-01-01
Discover the Latest Statistical Approaches for Modeling Exposure-Response Relationships. Written by an applied statistician with extensive practical experience in drug development, Exposure-Response Modeling: Methods and Practical Implementation explores a wide range of topics in exposure-response modeling, from traditional pharmacokinetic-pharmacodynamic (PKPD) modeling to other areas in drug development and beyond. It incorporates numerous examples and software programs for implementing novel methods. The book describes using measurement

5. A Monte-Carlo Method for Estimating Stellar Photometric Metallicity Distributions
CERN Document Server Gu, Jiayin; Jing, Yingjie; Zuo, Wenbo 2016-01-01
Based on the Sloan Digital Sky Survey (SDSS), we develop a new Monte Carlo-based method to estimate the photometric metallicity distribution function (MDF) for stars in the Milky Way. Compared with other photometric calibration methods, this method enables a more reliable determination of the MDF, in particular at the metal-poor and metal-rich ends. We present a comparison of our new method with a previous polynomial-based approach, and demonstrate its superiority. As an example, we apply this method to main-sequence stars with $0.2

6. Time-Varying Noise Estimation for Speech Enhancement and Recognition Using Sequential Monte Carlo Method
Directory of Open Access Journals (Sweden) Kaisheng Yao 2004-11-01
Full Text Available We present a method for sequentially estimating time-varying noise parameters. Noise parameters are sequences of time-varying mean vectors representing the noise power in the log-spectral domain. The proposed sequential Monte Carlo method generates a set of particles in compliance with the prior distribution given by clean speech models. The noise parameters in this model evolve according to random walk functions, and the model uses extended Kalman filters to update the weight of each particle as a function of observed noisy speech signals, speech model parameters, and the evolved noise parameters in each particle. Finally, the updated noise parameter is obtained by means of minimum mean square error (MMSE) estimation on these particles. For efficient computation, residual resampling and Metropolis-Hastings smoothing are used. The proposed sequential estimation method is applied to noisy speech recognition and speech enhancement under strongly time-varying noise conditions. In both scenarios, this method outperforms some alternative methods.

7. An Evaluation of the Adjoint Flux Using the Collision Probability Method for the Hybrid Monte Carlo Radiation Shielding Analysis
International Nuclear Information System (INIS)
It is noted that the analog Monte Carlo method has low calculation efficiency in deep-penetration problems such as radiation shielding analysis. To increase the calculation efficiency, variance reduction techniques have been introduced and applied to shielding calculations.
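One of the simplest such devices, implicit capture (survival biasing), can be demonstrated on a one-speed slab toy in R; the cross-section, scattering probability and slab thickness are arbitrary, and the weight cutoff is a crude stand-in for Russian roulette:

```
# Analog vs implicit-capture estimates of slab transmission (toy problem).
set.seed(1)
mu_t <- 1.0; p_s <- 0.8; L <- 5; n <- 2e4   # all parameters assumed
run <- function(implicit) {
  sapply(seq_len(n), function(i) {
    x <- 0; d <- 1; w <- 1                  # position, cosine, weight
    repeat {
      x <- x + d * rexp(1, mu_t)            # flight to next collision
      if (x >= L) return(w)                 # score (weighted) transmission
      if (x < 0) return(0)                  # leaked backwards
      if (implicit) {
        w <- w * p_s                        # survive with reduced weight
        if (w < 1e-6) return(0)             # crude cutoff, no roulette
      } else if (runif(1) > p_s) return(0)  # analog absorption kill
      d <- runif(1, -1, 1)                  # isotropic scatter
    }
  })
}
a <- run(FALSE); b <- run(TRUE)
rbind(mean   = c(analog = mean(a), implicit = mean(b)),   # agree
      stderr = c(sd(a), sd(b)) / sqrt(n))                 # implicit smaller
```

Both estimators are unbiased for the same transmission probability; the weighted one usually shows the smaller standard error, which is the whole point of variance reduction.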
To optimize the variance reduction technique, the hybrid Monte Carlo method was introduced. To determine its parameters, the adjoint flux should be calculated by deterministic methods. In this study, the collision probability method is applied to calculate the adjoint flux. The solution of the integral transport equation in the collision probability method is modified to calculate the adjoint flux approximately, even for complex and arbitrary geometries. For the calculation, a C++ program was developed. Using the calculated adjoint flux, importance parameters for each cell in the shielding material are determined and used for variance reduction of the transport calculation. In order to evaluate the calculation efficiency of the proposed method, shielding calculations are performed with MCNPX 2.7. In summary, a method to calculate the adjoint flux for use in Monte Carlo variance reduction was proposed to improve the Monte Carlo calculation efficiency for thick shielding problems, with the importance parameter for each cell of the shielding material determined by calculating the adjoint flux with the modified collision probability method. The results show that the proposed method can efficiently increase the FOM of the transport calculation. It is expected that the proposed method can be utilized to improve calculation efficiency in thick shielding calculations.

8. Technical Note: Implementation of biological washout processes within GATE/GEANT4—A Monte Carlo study in the case of carbon therapy treatments
International Nuclear Information System (INIS)
Purpose: The imaging of positron-emitting isotopes produced during patient irradiation is the only in vivo method used for hadrontherapy dose monitoring in clinics nowadays. However, the accuracy of this method is limited by the loss of signal due to metabolic decay processes (biological washout). In this work, a generic modeling of washout was incorporated into the GATE simulation platform. Additionally, the influence of the washout on the β+ activity distributions in terms of absolute quantification and spatial distribution was studied. Methods: First, the irradiation of a human head phantom with a 12C beam, such that a homogeneous dose distribution was achieved in the tumor, was simulated. The generated 11C and 15O distribution maps were used as β+ sources in a second simulation, where the PET scanner was modeled following a detailed Monte Carlo approach. The activity distributions obtained in the presence and absence of washout processes for several clinical situations were compared. Results: The results show that activity values are strongly reduced (by a factor of 2) in the presence of washout. These processes have a significant influence on the shape of the PET distributions. Differences in the distal activity falloff position of 4 mm are observed for a tumor dose deposition of 1 Gy (Tini = 0 min). However, in the case of high doses (3 Gy), the washout processes do not have a large effect on the position of the distal activity falloff (differences lower than 1 mm). The important role of the tumor washout parameters in the activity quantification was also evaluated. Conclusions: With this implementation, GATE/GEANT4 is the only open-source code able to simulate the full chain from the hadrontherapy irradiation to the PET dose monitoring, including biological effects.
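As an aside, washout is commonly modeled as a sum of decaying exponential components multiplying the physical decay (a Mizuno-type parameterization); a minimal R sketch, where the component fractions and biological half-lives are placeholders rather than the GATE parameters:

```
# Toy washout model: detectable activity = radioactive decay times a
# biological washout factor with fast/medium/slow components.
# Fractions Mf/Mm/Ms and biological half-lives Tf/Tm/Ts are assumed.
t   <- seq(0, 30, by = 0.5)          # minutes after irradiation
lam <- log(2) / 20.4                 # 11C physical decay (T1/2 ~ 20.4 min)
Mf <- 0.35; Mm <- 0.30; Ms <- 0.35   # component fractions, illustrative
Tf <- 2; Tm <- 10; Ts <- 160         # biological half-lives (min), assumed
wash <- Mf * exp(-log(2) * t / Tf) + Mm * exp(-log(2) * t / Tm) +
        Ms * exp(-log(2) * t / Ts)
A <- exp(-lam * t) * wash            # relative detectable activity
plot(t, A, type = "l", xlab = "time (min)", ylab = "relative activity")
lines(t, exp(-lam * t), lty = 2)     # no-washout reference curve
```

The gap between the two curves is the signal loss the abstract quantifies, roughly a factor of 2 for these made-up parameters.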
Results show the strong impact of the washout processes, indicating that the development of better models and the measurement of biological washout data are essential.

9. Report of the AAPM Task Group No. 105: Issues associated with clinical implementation of Monte Carlo-based photon and electron external beam treatment planning
International Nuclear Information System (INIS)
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features are all unique components of an MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and

10. Studying stellar binary systems with the Laser Interferometer Space Antenna using delayed rejection Markov chain Monte Carlo methods
International Nuclear Information System (INIS)
Bayesian analysis of Laser Interferometer Space Antenna (LISA) data sets based on Markov chain Monte Carlo methods has been shown to be a challenging problem, in part due to the complicated structure of the likelihood function, consisting of several isolated local maxima, that dramatically reduces the efficiency of the sampling techniques. Here we introduce a new fully Markovian algorithm, a delayed rejection Metropolis-Hastings Markov chain Monte Carlo method, to efficiently explore these kinds of structures, and we demonstrate its performance on selected LISA data sets containing a known number of stellar-mass binary signals embedded in Gaussian stationary noise. 11.
Criticality analysis of thermal reactors for two energy groups applying Monte Carlo and neutron Albedo method
International Nuclear Information System (INIS)
The Albedo method applied to criticality calculations for nuclear reactors is characterized by following the neutron currents, allowing detailed analyses of the physical phenomena of neutron interaction with the core-reflector set through the determination of the probabilities of reflection, absorption and transmission, and thus a detailed assessment of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by the excellent results presented in dissertations on thermal reactors and shielding, the Albedo methodology for the criticality analysis of thermal reactors is described for two energy groups, admitting variable core coefficients for each re-entrant current. The relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons that never entered the reflector but were absorbed in the core was analyzed using the Monte Carlo KENO IV code. The one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the diffusion method were used for comparison and analysis of the results obtained by the Albedo method. The keff results determined by the Albedo method for the type of reactor analyzed showed excellent agreement: relative errors smaller than 0.78% with respect to ANISN and smaller than 0.35% with respect to the diffusion method were obtained, showing the effectiveness of the Albedo method for criticality analysis. The ease of application, simplicity and clarity of the Albedo method make it a valuable instrument for neutronic calculations in both nonmultiplying and multiplying media. (author)

12. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.
Science.gov (United States) Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J 2008-06-01
Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single-site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance.
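As an aside, the flavour of such Gibbs samplers can be conveyed by a minimal single-site sampler for a one-way random-effects model in R; this toy has no pedigree, dominance or blocking, and the inverse-gamma priors are arbitrary:

```
# Minimal Gibbs sampler for y[i,j] = a_i + e_ij, a_i ~ N(0, s2a),
# e_ij ~ N(0, s2e), with IG(0.01, 0.01) priors on both variances.
set.seed(1)
q <- 50; m <- 5                               # groups and replicates
a_true <- rnorm(q, 0, sqrt(2))                # true s2a = 2, s2e = 1
y <- matrix(rnorm(q * m, rep(a_true, each = m), 1), q, m, byrow = TRUE)
s2a <- s2e <- 1; a <- rep(0, q)
out <- matrix(NA, 2000, 2)
for (it in 1:2000) {
  v <- 1 / (m / s2e + 1 / s2a)                # conditional variance of a_i
  a <- rnorm(q, v * rowSums(y) / s2e, sqrt(v))        # sample all effects
  s2a <- 1 / rgamma(1, 0.01 + q / 2,     0.01 + sum(a^2) / 2)
  s2e <- 1 / rgamma(1, 0.01 + q * m / 2, 0.01 + sum((y - a)^2) / 2)
  out[it, ] <- c(s2a, s2e)
}
colMeans(out[-(1:500), ])   # posterior means, close to (2, 1)
```

Blocking and pretransformation, as used in the paper, are refinements of exactly this loop aimed at faster mixing.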
12. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees. Science.gov (United States) Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J 2008-06-01 Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655 13. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees Science.gov (United States) Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J. 2008-01-01 14. Report on some methods of determining the state of convergence of Monte Carlo risk estimates International Nuclear Information System (INIS) The Department of the Environment is developing a methodology for assessing potential sites for the disposal of low and intermediate level radioactive wastes. Computer models are used to simulate the groundwater transport of radioactive materials from a disposal facility back to man. Monte Carlo methods are being employed to conduct a probabilistic risk assessment (pra) of potential sites. The models calculate time histories of annual radiation dose to the critical group population. The annual radiation dose to the critical group in turn specifies the annual individual risk. The distribution of dose is generally highly skewed and many simulation runs are required to predict the level of confidence in the risk estimate, i.e. to determine whether the risk estimate is converged. This report describes some statistical methods for determining the state of convergence of the risk estimate. The methods described include the Shapiro-Wilk test, calculation of skewness and kurtosis and normal probability plots. A method for forecasting the number of samples needed before the risk estimate is converged is presented. Three case studies were conducted to examine the performance of some of these techniques. (author)
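The diagnostics named in that report are all available in base R. A minimal sketch, using a lognormal stand-in for the skewed dose output (the real distribution comes from the pra code, not from this assumption), applies them to batch means, since the risk estimate is a mean:

```
set.seed(7)
## Skewed "annual dose" output, a lognormal stand-in for the pra code output.
dose <- rlnorm(10000, meanlog = -2, sdlog = 1.5)

## Diagnose normality of batch means rather than of the raw skewed output.
batch_means <- colMeans(matrix(dose, nrow = 100))   # 100 batches of size 100

shapiro.test(batch_means)                 # Shapiro-Wilk test of normality
skew <- mean((batch_means - mean(batch_means))^3) / sd(batch_means)^3
kurt <- mean((batch_means - mean(batch_means))^4) / sd(batch_means)^4 - 3
c(skewness = skew, excess_kurtosis = kurt)
qqnorm(batch_means); qqline(batch_means)  # normal probability plot
```

When the batch means pass these checks, the central limit theorem is doing its job and the usual confidence interval on the risk estimate can be trusted.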
15. Multiple-scaling methods for Monte Carlo simulations of radiative transfer in cloudy atmosphere International Nuclear Information System (INIS) Two multiple-scaling methods for Monte Carlo simulations were derived from the integral radiative transfer equation for calculating radiance in a cloudy atmosphere accurately and rapidly. The first is to truncate the sharp forward peaks of the phase functions adaptively for each order of scattering. The truncated forward peaks are approximated as quadratic functions, and only one prescribed parameter is used to set the maximum truncation fraction for various phase functions. The second is to increase the extinction coefficients in optically thin regions adaptively for each order of scattering, which enhances the collision chance in regions where samples are rare. Several one-dimensional and three-dimensional cloud fields were selected to validate the methods. The numerical results demonstrate that the bias errors were below 0.2% for almost all directions except the glory direction (less than 0.4%), and that higher numerical efficiency could be achieved when quadratic functions were used. The second method could decrease radiance noise to 0.60% for cumulus and accelerate convergence in optically thin regions. In general, the main advantage of the proposed methods is that the atmospheric optical quantities can be modified adaptively for each order of scattering and important contributions can be sampled according to the specific atmospheric conditions. 16. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison International Nuclear Information System (INIS) Physical analyses of the LWR potential performances with regard to fuel utilization require an important part of the work to be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4 to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4 in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. This benchmark shows two main points. First, independent replicas are an appropriate method to achieve a fair variance estimation when the dominance ratio is near 1.
Secondly, the diffusion operator with two energy groups gives satisfactory results compared to TRIPOLI-4, even with a highly heterogeneous neutron flux map and a harder spectrum. 17. Non-Pilot-Aided Sequential Monte Carlo Method to Joint Signal, Phase Noise, and Frequency Offset Estimation in Multicarrier Systems Directory of Open Access Journals (Sweden) Christelle Garnier 2008-05-01 Full Text Available We address the problem of phase noise (PHN) and carrier frequency offset (CFO) mitigation in multicarrier receivers. In multicarrier systems, phase distortions cause two effects: the common phase error (CPE) and the intercarrier interference (ICI), which severely degrade the accuracy of the symbol detection stage. Here, we propose a non-pilot-aided scheme to jointly estimate PHN, CFO, and the multicarrier signal in the time domain. Unlike existing methods, non-pilot-based estimation is performed without any decision-directed scheme. Our approach to the problem is based on Bayesian estimation using sequential Monte Carlo filtering, commonly referred to as particle filtering. The particle filter is efficiently implemented by combining the principles of the Rao-Blackwellization technique and an approximate optimal importance function for phase distortion sampling. Moreover, in order to fully benefit from time-domain processing, we propose a multicarrier signal model which includes the redundancy information induced by the cyclic prefix, thus leading to a significant performance improvement. Simulation results are provided in terms of bit error rate (BER) and mean square error (MSE) to illustrate the efficiency and the robustness of the proposed algorithm. 18. A New Monte Carlo Photon Transport Code for Research Reactor Hotcell Shielding Calculation using Splitting and Russian Roulette Methods International Nuclear Information System (INIS) The Monte Carlo method was used to build a new code for the simulation of particle transport. Several calculations were then done for verification, where different sources were used; the source term was obtained using the ORIGEN-S code. Water and lead shields were used with spherical geometry, and the tally results were obtained on the external surface of the shield; afterward the results were compared with the results of MCNPX for verification of the new code. The variance reduction techniques of splitting and Russian roulette were implemented in the code to make it more efficient, reducing the amount of custom programming required, by artificially increasing the number of particles being tallied while decreasing their weight. The code shows lower results than the results of MCNPX; this can be interpreted as the effect of the secondary gamma radiation produced by electrons ejected by the primary radiation. In the future, a further study will be made on the effect of electron production and transport, either by real transport of the electrons or by an approximation such as the thick-target bremsstrahlung (TTB) option used in MCNPX.
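The weight game behind such variance reduction is easy to demonstrate. The sketch below shows the Russian roulette half of the splitting/roulette pair (together with survival biasing) on a toy deep-penetration problem; splitting is the mirror image, replacing one particle of weight w by k copies of weight w/k. All numbers here are hypothetical, not taken from the code described above.

```
set.seed(3)
## Toy deep-penetration problem: a particle must survive 10 layers, each with
## absorption probability 0.5, so analog MC rarely scores (p = 0.5^10).
## Survival biasing carries weight instead of killing; Russian roulette then
## trims low-weight histories without biasing the answer.
p_surv <- 0.5; n_layers <- 10; w_min <- 0.05
run_history <- function() {
  w <- 1
  for (l in 1:n_layers) {
    w <- w * p_surv                      # implicit capture: weight absorbs
    if (w < w_min) {                     # Russian roulette
      if (runif(1) < w / w_min) w <- w_min else return(0)
    }
  }
  w                                      # weight scored at the back face
}
est <- mean(replicate(2e4, run_history()))
c(estimate = est, exact = p_surv^n_layers)
```

Because roulette either kills a particle or boosts its weight in exact compensation, the estimator stays unbiased while the bookkeeping cost of hopeless histories disappears.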
19. A New Monte Carlo Photon Transport Code for Research Reactor Hotcell Shielding Calculation using Splitting and Russian Roulette Methods Energy Technology Data Exchange (ETDEWEB) Alnajjar, Alaaddin [Univ. of Science and Technology, Daejeon (Korea, Republic of)]; Park, Chang Je; Lee, Byunchul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)] 2013-10-15 20. Analysis of communication costs for domain decomposed Monte Carlo methods in nuclear reactor analysis International Nuclear Information System (INIS) A domain decomposed Monte Carlo communication kernel is used to carry out performance tests to establish the feasibility of using Monte Carlo techniques for practical Light Water Reactor (LWR) core analyses. The results of the prototype code are interpreted in the context of simplified performance models which elucidate key scaling regimes of the parallel algorithm. 1. Coarse-grained computation for particle coagulation and sintering processes by linking Quadrature Method of Moments with Monte-Carlo International Nuclear Information System (INIS) The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme. 2. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code International Nuclear Information System (INIS) The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code.
This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class ''u'' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V and V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to 3. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code Energy Technology Data Exchange (ETDEWEB) Morgan C. White 2000-07-01
4. Monte Carlo implementation of Schiff's approximation for estimating radiative properties of homogeneous, simple-shaped and optically soft particles: Application to photosynthetic micro-organisms Science.gov (United States) Charon, Julien; Blanco, Stéphane; Cornet, Jean-François; Dauchet, Jérémi; El Hafi, Mouna; Fournier, Richard; Abboud, Mira Kaissar; Weitz, Sebastian 2016-03-01 In the present paper, Schiff's approximation is applied to the study of light scattering by large and optically soft axisymmetric particles, with special attention to cylindrical and spheroidal photosynthetic micro-organisms. This approximation is similar to the anomalous diffraction approximation but includes a description of phase functions. The resulting formulations for the radiative properties are multidimensional integrals, the numerical resolution of which requires close attention. It is here argued that strong benefits can be expected from a statistical resolution by the Monte Carlo method. But designing such efficient Monte Carlo algorithms requires the development of non-standard algorithmic tricks using careful mathematical analysis of the integral formulations: the codes that we develop (and make available) include an original treatment of the nonlinearity in the differential scattering cross-section (squared modulus of the scattering amplitude) thanks to a double sampling procedure. This approach makes it possible to take advantage of recent methodological advances in the field of Monte Carlo methods, illustrated here by the estimation of sensitivities to parameters. Comparison with reference solutions provided by the T-Matrix method is presented whenever possible. The required geometric calculations are closely similar to those used in standard Monte Carlo codes for geometric optics by the computer-graphics community, i.e. calculation of intersections between rays and surfaces, which opens interesting perspectives for the treatment of particles with complex shapes. 5. Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks KAUST Repository Ben Hammouda, Chiheb 2015-05-12 In biochemical systems, stochastic effects can be caused by the presence of small numbers of certain reactant molecules. In this setting, discrete state-space and stochastic simulation approaches have proved to be more relevant than continuous state-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases the dynamics of fast and slow time scales can be well separated, which is characterized by what is called stiffness. For such problems, the existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap method, can be very slow. Therefore, implicit tau-leap approximations were developed to improve the numerical stability and provide more efficient simulation algorithms for these systems. One of the interesting tasks for SRNs is to approximate the expected values of some observables of the process at a certain fixed time T. This can be achieved using Monte Carlo (MC) techniques.
However, in a recent work, Anderson and Higham (2013) proposed a more computationally efficient method which combines the multi-level Monte Carlo (MLMC) technique with explicit tau-leap schemes. In this MSc thesis, we propose a new fast stochastic algorithm, particularly designed to address stiff systems, for approximating the expected values of some observables of SRNs. In fact, we take advantage of the idea of MLMC techniques and the drift-implicit tau-leap approximation to construct a drift-implicit MLMC tau-leap estimator. In addition to accurately estimating the expected values of a given observable of SRNs at a final time T, our proposed estimator ensures numerical stability with a lower cost than the MLMC explicit tau-leap algorithm, for systems including simultaneously fast and slow species. The key contribution of our work is the coupling of two drift-implicit tau-leap paths, which is the basic brick for 6. Comparison of ISO-GUM and Monte Carlo Method for Evaluation of Measurement Uncertainty International Nuclear Information System (INIS) To supplement the ISO-GUM method for the evaluation of measurement uncertainty, a simulation program using the Monte Carlo method (MCM) was developed, and the MCM and GUM methods were compared. The results are as follows: (1) Even under a non-normal probability distribution of the measurement, MCM provides an accurate coverage interval; (2) Even if a probability distribution that emerged from combining a few non-normal distributions looks normal, there are cases in which the actual distribution is not normal, and the non-normality can be determined from the probability distribution of the combined variance; and (3) If type-A standard uncertainties are involved in the evaluation of measurement uncertainty, GUM generally offers an under-valued coverage interval. However, this problem can be solved by the Bayesian evaluation of type-A standard uncertainty. In this case, the effective degrees of freedom for the combined variance are not required in the evaluation of expanded uncertainty, and the appropriate coverage factor for a 95% level of confidence was determined to be 1.96.
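The GUM-versus-MCM contrast is easy to reproduce. A minimal sketch, for an assumed toy measurement model Y = X1 + X2 with one rectangular and one skewed input (these inputs are illustrative choices, not from the paper):

```
set.seed(11)
## Toy measurement model Y = X1 + X2: X1 rectangular, X2 skewed (chi-square).
## GUM propagates standard uncertainties linearly and assumes Y is normal;
## MCM samples the inputs and reads the coverage interval off the quantiles.
n  <- 1e6
x1 <- runif(n, -1, 1)      # rectangular: u^2 = (b - a)^2 / 12 = 1/3
x2 <- rchisq(n, df = 3)    # skewed input: u^2 = 2 * df = 6
y  <- x1 + x2

u_c <- sqrt(1/3 + 6)                         # combined standard uncertainty
gum <- mean(y) + c(-1, 1) * 1.96 * u_c       # GUM-style 95% interval
mcm <- quantile(y, c(0.025, 0.975))          # MCM 95% coverage interval
rbind(GUM = gum, MCM = mcm)
```

The symmetric GUM interval visibly overshoots on the left and undershoots on the right of the skewed distribution, which is point (1) of the comparison above in miniature.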
7. Testing planetary transit detection methods with grid-based Monte-Carlo simulations. Science.gov (United States) Bonomo, A. S.; Lanza, A. F. The detection of extrasolar planets by means of the transit method is a rapidly growing field of modern astrophysics. The periodic light dips produced by the passage of a planet in front of its parent star can be used to reveal the presence of the planet itself, to measure its orbital period and relative radius, as well as to perform studies on the outer layers of the planet by analysing the light of the star passing through the planet's atmosphere. We have developed a new method to detect transits of Earth-sized planets in front of solar-like stars that allows us to reduce the impact of stellar microvariability on transit detection. A large Monte Carlo numerical experiment has been designed to test the performance of our approach in comparison with other transit detection methods for stars of different magnitudes and planets of different radius and orbital period, as will be observed by the space experiments CoRoT and Kepler. The large computational load of this experiment has been managed by means of the Grid infrastructure of the COMETA consortium. 8. Calculation of photon pulse height distribution using deterministic and Monte Carlo methods Science.gov (United States) Akhavan, Azadeh; Vosoughi, Naser 2015-12-01 Radiation transport techniques used in radiation detection systems fall into one of two categories, probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in a discrete appropriate form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA. 9. Practical implementation of hyperelastic material methods in FEA models OpenAIRE Elgström, Eskil 2014-01-01 This thesis will be focusing on studies about the hyperelastic material method and how to best implement it in a FEA model. It will look more specifically at the Mooney-Rivlin method, but also give a shorter explanation of the different methods. This is due to problems Roxtec has today: simulating rubber takes a long time, is unstable and unfortunately not completely trustworthy; therefore a deep study of the hyperelastic material method was chosen to try and address these issues. The... 10. Implementing the Open Method of Co-ordination in Pensions Directory of Open Access Journals (Sweden) Jarosław POTERAJ 2009-01-01 Full Text Available The article presents an insight into the European Union Open Method of Co-ordination (OMC) in the area of pensions. The author's goal was to present the development and the effects of implementing the OMC. The introduction is followed by three topic paragraphs: 1. the OMC – step by step, 2. the evaluation of the OMC, and 3. the effects of OMC implementation. In the summary, the author highlights that besides advantages there are also disadvantages to the implementation of the OMC, and that many doubts exist about the efficiency of performing that method in the future. 11. Implementation of the Maximum Entropy Method for Analytic Continuation CERN Document Server Levy, Ryan; Gull, Emanuel 2016-01-01 We present Maxent, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary-axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv2 and extensively documented. This paper shows the use of the programs in detail. 12.
Implementing Collaborative Learning Methods in the Political Science Classroom Science.gov (United States) Wolfe, Angela 2012-01-01 Collaborative learning is one among other active learning methods widely acclaimed in higher education. Consequently, instructors in fields that lack pedagogical training often implement new learning methods such as collaborative learning on the basis of trial and error. Moreover, even though the benefits in academic circles are broadly touted,… 13. Evaluation of the NHS R & D implementation methods programme OpenAIRE Hanney, S; Soper, B; Buxton, MJ 2010-01-01 Chapter 1: Background and introduction • Concern with research implementation was a major factor behind the creation of the NHS R&D Programme in 1991. In 1994 an Advisory Group was established to identify research priorities in this field. The Implementation Methods Programme (IMP) flowed from this and its Commissioning Group funded 36 projects. Funding for the IMP was capped before the second round of commissioning. The Commissioning Group was disbanded and eventually responsibility for t... 14. A Model Based Security Testing Method for Protocol Implementation OpenAIRE Yu Long Fu; Xiao Long Xin 2014-01-01 The security of a protocol implementation is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them together to generate suitable test cases to verify the security of ... 15. Efficiency of rejection-free methods for dynamic Monte Carlo studies of off-lattice interacting particles KAUST Repository Guerra, Marta L. 2009-02-23 We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^(-p). Theoretically we find that the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^((p+2)/2) T^(-d/2), with ρ the particle density and T the temperature. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.
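The defining move of a rejection-free scheme is to pick the next event from the full rate catalogue, so no proposal is ever wasted. A minimal n-fold-way-style sketch on a toy event catalogue (the rates are hypothetical and this is not the off-lattice algorithm analyzed above):

```
set.seed(5)
## Rejection-free kinetic Monte Carlo: every step selects an event in
## proportion to its rate, and time advances by an exponential with the
## total rate, so no move is ever rejected.
rates <- c(hop_left = 0.1, hop_right = 0.1, desorb = 0.01)  # toy catalogue
kmc <- function(n_steps) {
  t <- 0; events <- character(n_steps)
  for (i in 1:n_steps) {
    R <- sum(rates)
    events[i] <- sample(names(rates), 1, prob = rates / R)  # pick event
    t <- t + rexp(1, R)              # stochastic time increment
  }
  list(time = t, counts = table(events))
}
kmc(1000)
```

The efficiency question studied in the paper is precisely how the cost of maintaining and sampling this rate catalogue scales with density and temperature for off-lattice particles.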
16. Verification of Transformer Restricted Earth Fault Protection by using the Monte Carlo Method Directory of Open Access Journals (Sweden) KRSTIVOJEVIC, J. P. 2015-08-01 Full Text Available The results of a comprehensive investigation of the influence of current transformer (CT) saturation on restricted earth fault (REF) protection during power transformer magnetization inrush are presented. Since the inrush current during switch-on of an unloaded power transformer is stochastic, its values are obtained by: (i) laboratory measurements and (ii) calculations based on input data obtained by Monte Carlo (MC) simulation. To make a detailed assessment of the current transformer performance, the uncertain input data for the CT model were obtained by applying the MC method. In this way, different levels of remanent flux in the CT core are taken into consideration. With the generated CT secondary currents, the algorithm for REF protection based on phase comparison in the time domain is tested. On the basis of the obtained results, a method of adjusting the triggering threshold in order to ensure safe operation during transients, and thereby improve the algorithm security, has been proposed. The obtained results indicate that power transformer REF protection would be enhanced by using the proposed adjustment of the triggering threshold in the algorithm based on phase comparison in the time domain. 17. Monte Carlo Methods for Top-k Personalized PageRank Lists and Name Disambiguation CERN Document Server Avrachenkov, Konstantin; Nemirovsky, Danil A; Smirnova, Elena; Sokol, Marina 2010-01-01 We study the problem of quick detection of top-k Personalized PageRank lists. This problem has a number of important applications, such as finding local cuts in large graphs, estimation of similarity distance and name disambiguation. In particular, we apply our results to construct efficient algorithms for the person name disambiguation problem. We argue that two observations are important when finding top-k Personalized PageRank lists. Firstly, it is crucial to detect fast the top-k most important neighbours of a node, while the exact order in the top-k list as well as the exact values of PageRank are far less crucial. Secondly, a small number of wrong elements in a top-k list does not really degrade its quality, but can lead to significant computational savings. Based on these two key observations we propose Monte Carlo methods for fast detection of top-k Personalized PageRank lists. We provide performance evaluation of the proposed methods and supply stopping criteria. Then, we apply ... 18. Use of Monte Carlo Bootstrap Method in the Analysis of Sample Sufficiency for Radioecological Data International Nuclear Information System (INIS) There are operational difficulties in obtaining samples for radioecological studies. Population data may no longer be available during the study, and obtaining new samples may not be possible. These problems sometimes force the researcher to work with a small number of data. Therefore, it is difficult to know whether the number of samples will be sufficient to estimate the desired parameter, which makes the analysis of sample sufficiency critical. Classical statistical methods are not well suited to analyzing sample sufficiency in radioecology, because naturally occurring radionuclides have a random distribution in soil, and outliers and gaps with missing values usually arise. The present work was developed with the aim of applying the Monte Carlo bootstrap method to the analysis of sample sufficiency, with quantitative estimation of a single variable such as the specific activity of a natural radioisotope present in plants. The pseudo-population was a small sample with 14 values of specific activity of 226Ra in forage palm (Opuntia spp.). Using the R software, a computational procedure was performed to calculate the number of sample values. The resampling process with replacement took the 14 values of the original sample and produced 10,000 bootstrap samples for each round. Then the estimated average θ was calculated for samples with 2, 5, 8, 11 and 14 values randomly selected. The results showed that if the researcher works with only 11 sample values, the average parameter will be within a confidence interval with 90% probability. (Author)
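Since that study was itself done in R, the procedure is easy to sketch. The activity values below are a simulated stand-in (the published 226Ra measurements are not reproduced here); the resampling structure follows the description above.

```
set.seed(226)
## Hypothetical stand-in for the 14 measured 226Ra specific activities (Bq/kg).
activity <- rlnorm(14, meanlog = 3, sdlog = 0.4)

## For each subsample size m, draw 10,000 bootstrap samples with replacement
## and record the spread of the bootstrap mean.
sizes <- c(2, 5, 8, 11, 14)
sufficiency <- sapply(sizes, function(m) {
  boot_means <- replicate(10000, mean(sample(activity, m, replace = TRUE)))
  quantile(boot_means, c(0.05, 0.95))      # 90% interval of the mean
})
colnames(sufficiency) <- paste0("m=", sizes)
round(sufficiency, 1)
```

Watching the 90% interval stop narrowing as m grows is the sample-sufficiency criterion in action: once extra samples no longer tighten the interval, the sample size is adequate for the target parameter.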
19. Systematic hierarchical coarse-graining with the inverse Monte Carlo method International Nuclear Information System (INIS) We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at a less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as a similar position fluctuation profile. 20. Statistical Modification Analysis of Helical Planetary Gears based on Response Surface Method and Monte Carlo Simulation Institute of Scientific and Technical Information of China (English) ZHANG Jun; GUO Fan 2015-01-01 Tooth modification is widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on the modification effects. In order to investigate the uncertainties of tooth modification amount variations on the dynamic behavior of a helical planetary gear, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. By using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
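The response-surface-plus-Monte-Carlo pattern used there generalizes well beyond gears. A hedged sketch of the idea, with a hypothetical quadratic "expensive model" standing in for the dynamic gear simulation and invented error levels:

```
set.seed(8)
## Hypothetical "expensive" model: DTE fluctuation as a function of two
## tooth-modification amounts (a stand-in for the dynamic gear model).
dte <- function(m1, m2) 5 - 2*m1 - 1.5*m2 + 0.8*m1^2 + 0.6*m2^2 + 0.4*m1*m2

## 1) A few deterministic runs at design points, then a quadratic surface fit
grid   <- expand.grid(m1 = seq(0, 2, 0.5), m2 = seq(0, 2, 0.5))
grid$y <- with(grid, dte(m1, m2))
rs <- lm(y ~ poly(m1, 2, raw = TRUE) * poly(m2, 2, raw = TRUE), data = grid)

## 2) Monte Carlo: random errors shifted onto the modification amounts,
##    propagated through the cheap surrogate instead of the full model
n  <- 1e5
mc <- data.frame(m1 = rnorm(n, 1.0, 0.1), m2 = rnorm(n, 0.8, 0.1))
y  <- predict(rs, newdata = mc)
c(mean = mean(y), sd = sd(y))
hist(y, breaks = 60, main = "DTE fluctuation under random modification errors")
```

Even in this toy version the histogram is visibly non-normal despite normally distributed inputs, echoing the paper's observation that the quadratic response destroys normality.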
1. Systematic hierarchical coarse-graining with the inverse Monte Carlo method Science.gov (United States) Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto 2015-12-01 2. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods Directory of Open Access Journals (Sweden) Qian Liu 2015-01-01 Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This largely traded financial product allows us to well identify the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the interest rate and a mean-reverting model for the default intensity have proven to correspond well to reality and to be well suited for financial institutions. Besides, we find that the least square Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA, for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA to avoid the double counting present in the existing bibliographies, where several copula functions are adopted to describe the dependence of the two first-to-default times.
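The "avoids inner scenarios" remark is the heart of least squares Monte Carlo: the conditional expectation at an exposure date is estimated by regressing discounted future values on the current state, rather than launching a nested simulation at every node. A minimal sketch under invented dynamics (the rate model and contract below are toy stand-ins, not the paper's Hull-White setup):

```
set.seed(12)
## Least squares Monte Carlo in one picture: simulate outer paths of a state
## variable, then regress the future value on the current state instead of
## running inner simulations at each node.
n   <- 5e4
r_t <- 0.03 + 0.01 * rnorm(n)                 # toy one-factor rate at date t
v_T <- (r_t - 0.03) * 100 + rnorm(n, sd = 0.5) # noisy contract value at maturity

fit <- lm(v_T ~ poly(r_t, 3))   # regression step: E[V_T | r_t] by least squares
v_t <- fitted(fit)              # pathwise continuation values at date t
epe <- mean(pmax(v_t, 0))       # expected positive exposure, the CVA ingredient
epe
```

One regression across all paths replaces tens of thousands of inner simulations, which is exactly where the convergence speed-up comes from.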
3. Simulation of Watts Bar initial startup tests with continuous energy Monte Carlo methods International Nuclear Information System (INIS) The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominately as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients. (author) 4. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models KAUST Repository 2011-06-06 Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical one. A molecular simulation approach, particularly Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the critical temperature of methane. Results were collected and compared to experimental data existing in the literature; both models showed close agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be, hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures. 5. Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions Science.gov (United States) Ricketson, Lee 2013-10-01 We discuss the use of multi-level Monte Carlo (MLMC) schemes--originally introduced by Giles for financial applications--for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε^-3)--for standard Langevin methods and binary collision algorithms--to the theoretically optimal scaling O(ε^-2) for the Milstein discretization, and to O(ε^-2 (log ε)^2) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
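Both MLMC records above rest on the same telescoping identity, which a two-level toy example makes concrete. The sketch below couples a coarse and a fine Euler-Maruyama discretization of a geometric Brownian motion by reusing the same Brownian increments; the SDE, step counts, and sample sizes are illustrative assumptions, not the plasma or reaction-network settings of the papers.

```
set.seed(21)
## Two-level MLMC for E[S_T] of a GBM dS = mu*S dt + sig*S dW, with coupled
## coarse (M steps) and fine (2M steps) Euler-Maruyama paths.
mu <- 0.05; sig <- 0.2; T <- 1; S0 <- 1
euler_pair <- function(M) {
  dt <- T / (2 * M)
  dW <- rnorm(2 * M, sd = sqrt(dt))      # fine-level Brownian increments
  Sf <- S0; Sc <- S0
  for (k in 1:M) {
    ## two fine steps...
    Sf <- Sf * (1 + mu * dt + sig * dW[2*k - 1])
    Sf <- Sf * (1 + mu * dt + sig * dW[2*k])
    ## ...and one coarse step driven by the summed increment (the coupling)
    Sc <- Sc * (1 + mu * 2 * dt + sig * (dW[2*k - 1] + dW[2*k]))
  }
  c(fine = Sf, coarse = Sc)
}
## Telescoping estimator: E[fine] = E[coarse] + E[fine - coarse];
## the correction term has small variance, so it needs far fewer samples.
P0 <- replicate(1e4, euler_pair(8)["coarse"])
dP <- replicate(2e3, { p <- euler_pair(8); p["fine"] - p["coarse"] })
c(mlmc = mean(P0) + mean(dP), exact = S0 * exp(mu * T))
```

Because the coupled difference is tightly concentrated, most of the budget goes to the cheap coarse level, which is where the O(ε⁻³) to O(ε⁻²) cost reduction quoted above comes from.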
6. Adjoint-based deviational Monte Carlo methods for phonon transport calculations Science.gov (United States) Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G. 2015-06-01 In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail. 7. Systematic hierarchical coarse-graining with the inverse Monte Carlo method Energy Technology Data Exchange (ETDEWEB) Lyubartsev, Alexander P., E-mail: [email protected] [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); Naômé, Aymeric, E-mail: [email protected] [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); UCPTS Division, University of Namur, 61 Rue de Bruxelles, B 5000 Namur (Belgium); Vercauteren, Daniel P., E-mail: [email protected] [UCPTS Division, University of Namur, 61 Rue de Bruxelles, B 5000 Namur (Belgium); Laaksonen, Aatto, E-mail: [email protected] [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); Science for Life Laboratory, 17121 Solna (Sweden)] 2015-12-28 8.
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods Energy Technology Data Exchange (ETDEWEB) Godfrey, Andrew T [ORNL]; Gehin, Jess C [ORNL]; Bekar, Kursat B [ORNL]; Celik, Cihangir [ORNL] 2014-01-01 9. Application of the Monte Carlo method for investigation of dynamical parameters of rotors supported by magnetorheological squeeze film damping devices Czech Academy of Sciences Publication Activity Database Zapoměl, Jaroslav; Ferfecki, Petr; Kozánek, Jan 2014-01-01 Roč. 8, č. 1 (2014), s. 129-138. ISSN 1802-680X Institutional support: RVO:61388998 Keywords: uncertain parameters of rigid rotors * magnetorheological dampers * force transmission * Monte Carlo method Subject RIV: BI - Acoustics http://www.kme.zcu.cz/acm/acm/article/view/247/275 10. Studies of criticality Monte Carlo method convergence: use of a deterministic calculation and automated detection of the transient International Nuclear Information System (INIS) Monte Carlo criticality calculation allows one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burn-up profile, complete reactor core, ...) may induce biased estimations of keff or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is modified using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to the initialization in an output series, applied here to keff and Shannon entropy. It relies on modeling stationary series by an order-1 autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to every output of an iterative Monte Carlo. The methods developed in this thesis are tested on different test cases. (author)
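A simplified stand-in for that automated transient detection can be written in a few lines. The sketch below models each window of a keff-like series as an AR(1) process and compares successive window means with an autocorrelation-corrected t-test; the thesis's actual test uses a Student-bridge statistic, and every number here (series length, decay, windows) is an illustrative assumption.

```
set.seed(17)
## keff-like series with an initialization transient (toy stand-in).
n <- 2000
keff <- 1.000 + 0.02 * exp(-(1:n) / 150) +
        arima.sim(list(ar = 0.5), n, sd = 5e-4)

## Fit AR(1) per window, then t-test successive window means using the
## AR(1)-corrected effective sample size.
detect_cut <- function(x, win = 200) {
  for (s in seq(1, length(x) - 2 * win, by = win)) {
    a <- x[s:(s + win - 1)]; b <- x[(s + win):(s + 2 * win - 1)]
    rho  <- ar(a, order.max = 1, aic = FALSE)$ar
    neff <- win * (1 - rho) / (1 + rho)        # effective sample size
    tval <- (mean(a) - mean(b)) / sqrt(var(a)/neff + var(b)/neff)
    if (abs(tval) < 2) return(s)   # first window consistent with stationarity
  }
  NA
}
detect_cut(keff)
```

Everything before the returned index is discarded as transient, exactly the role the automated detector plays for keff and Shannon entropy above.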
11. Monte-Carlo Methods for Pricing European-style Options Institute of Scientific and Technical Information of China (English) ZHANG Lihong (张丽虹) 2015-01-01 We discuss Monte-Carlo methods for pricing various European-style options. Based on the Black-Scholes option pricing model and risk-neutral valuation, we first discuss in detail how to compute standard European option prices by Monte-Carlo simulation. We then discuss how control variates and antithetic variates can be introduced to improve the accuracy of the Monte-Carlo method. Finally, we apply the proposed Monte-Carlo methods to price standard European options, European binary options, European lookback options and European Asian options, and discuss the strengths and weaknesses of each method.
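Both variance-reduction tricks named in that abstract fit in a short R script. This is a generic textbook sketch (parameters are illustrative, not taken from the paper), with the closed-form Black-Scholes price as the benchmark:

```
set.seed(30)
## European call under Black-Scholes by Monte Carlo, with antithetic and
## control variates; the closed-form price is the exact benchmark.
S0 <- 100; K <- 100; r <- 0.05; sig <- 0.2; T <- 1; n <- 1e5
Z       <- rnorm(n)
ST      <- S0 * exp((r - sig^2/2) * T + sig * sqrt(T) * Z)
ST_anti <- S0 * exp((r - sig^2/2) * T - sig * sqrt(T) * Z)  # antithetic twin
disc    <- exp(-r * T)
pay     <- disc * pmax(ST - K, 0)
pay_av  <- (pay + disc * pmax(ST_anti - K, 0)) / 2          # antithetic pairs

## Control variate: ST has known mean S0 * exp(r*T)
beta   <- cov(pay, ST) / var(ST)
pay_cv <- pay - beta * (ST - S0 * exp(r * T))

bs <- S0 * pnorm((log(S0/K) + (r + sig^2/2)*T) / (sig*sqrt(T))) -
      K * disc * pnorm((log(S0/K) + (r - sig^2/2)*T) / (sig*sqrt(T)))
rbind(plain      = c(price = mean(pay),    se = sd(pay)    / sqrt(n)),
      antithetic = c(mean(pay_av), sd(pay_av) / sqrt(n)),
      control    = c(mean(pay_cv), sd(pay_cv) / sqrt(n)),
      exact      = c(bs, 0))
```

The standard-error column shows both corrected estimators beating the plain one at identical cost, which is the whole point of the two techniques.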
12. Algorithms for modeling radioactive decays of π- and μ-mesons by the Monte-Carlo method International Nuclear Information System (INIS) Effective algorithms for modeling the decays μ → eννγ and π → eνγ by the Monte-Carlo method are described. The algorithms developed made it possible to considerably reduce the time needed to calculate the efficiency of decay detection. They were used for modeling in experiments on the study of rare decays of pions and muons. 13. SEMI-BLIND CHANNEL ESTIMATION OF MULTIPLE-INPUT/MULTIPLE-OUTPUT SYSTEMS BASED ON MARKOV CHAIN MONTE CARLO METHODS Institute of Scientific and Technical Information of China (English) Jiang Wei; Xiang Haige 2004-01-01 This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. The Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms can work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness. 14. Verification of Burned Core Modeling Method for Monte Carlo Simulation of HANARO International Nuclear Information System (INIS) The reactor core has been managed well by the HANARO core management system called HANAFMS. The heterogeneity of the irradiation device and core made the neutronic analysis difficult and sometimes questionable. To overcome the deficiency, MCNP was utilized in the neutron transport calculation of HANARO. For the most part, an MCNP model with the assumption that all fuels are filled with fresh fuel assemblies showed acceptable analysis results for the design of experimental devices and facilities. However, it sometimes revealed insufficient results in designs which require good accuracy, like neutron transmutation doping (NTD), because it didn't consider the flux variation induced by depletion of the fuel. In this study, a previously proposed depleted-core modeling method was applied to build a burned core model of HANARO and verified through a comparison of the calculated results from the depleted-core model with those from an experiment. The modeling method used to establish a depleted-core model for the Monte Carlo simulation was verified by comparing the neutron flux distribution obtained by the zirconium activation method and the reaction rate of 30Si(n, γ)31Si obtained by a resistivity measurement method. As a result, the reaction rate of 30Si(n, γ)31Si also agreed well, with about a 3% difference. It was therefore concluded that the modeling method and resulting depleted-core model developed in this study can be a very reliable tool for the design of the planned experimental facility and a prediction of its performance in HANARO. 15. Verification of Burned Core Modeling Method for Monte Carlo Simulation of HANARO Energy Technology Data Exchange (ETDEWEB) Cho, Dongkeun; Kim, Myongseop [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)] 2014-05-15 16. Application of the measurement-based Monte Carlo method in nasopharyngeal cancer patients for intensity modulated radiation therapy International Nuclear Information System (INIS) This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for nasopharyngeal carcinoma (NPC) patients treated with intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open-field phase-space file for IMRT applications. Dose differences were observed at the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01).
In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting.
Highlights:
► The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses.
► 3D dose distributions for NPC patients have been verified by the Monte Carlo method.
► Doses predicted by the Monte Carlo method matched closely with those by the TPS.
► The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS.
► Critical organ doses should be confirmed to avoid overdosing normal organs.

17. Enhancing Dissemination and Implementation Research Using Systems Science Methods
Science.gov (United States) Lich, Kristen Hassmiller; Neal, Jennifer Watling; Meissner, Helen I.; Yonas, Michael; Mabry, Patricia L. 2015-01-01 PURPOSE: Dissemination and implementation (D&I) research seeks to understand and overcome barriers to the adoption of behavioral interventions that address complex problems, specifically interventions that arise from multiple interacting influences crossing socio-ecological levels. It is often difficult for research to accurately represent and address the complexities of the real world, and traditional methodological approaches are generally inadequate for this task. Systems science methods, expressly designed to study complex systems, can be effectively employed for an improved understanding of the dissemination and implementation of evidence-based interventions. METHODS: Case examples of three systems science methods – system dynamics modeling, agent-based modeling, and network analysis – are used to illustrate how each method can be used to address D&I challenges. RESULTS: The case studies feature relevant behavioral topical areas: chronic disease prevention, community violence prevention, and educational intervention. To emphasize consistency with D&I priorities, the discussion of the value of each method is framed around the elements of the established Reach Effectiveness Adoption Implementation Maintenance (RE-AIM) framework. CONCLUSIONS: Systems science methods can help researchers, public health decision makers and program implementers to understand the complex factors influencing successful D&I of programs in community settings, and to identify D&I challenges imposed by system complexity. PMID:24852184

18. Particle behavior simulation in thermophoresis phenomena by the direct simulation Monte Carlo method
Science.gov (United States) 2014-07-01 Particle motion under a thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of about 1 μm, are treated in this paper. The main difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when the motion of a large particle is considered. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with the particle in one collision event. A large time step interval can then be adopted by means of the collision weight factor.
Furthermore, the large time step interval is about a million times longer than the conventional time step interval of the DSMC method when the particle size is 1 μm, so the computation time becomes about one-millionth of what it would otherwise be. We simulate the motion of a graphite particle under a thermophoretic force by DSMC-Neutrals (Particle-PLUS neutral module) with the collision weight factor described above, where DSMC-Neutrals is commercial software adopting the DSMC method. The particle is a sphere of size 1 μm, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result in the continuum limit is the same as Waldmann's result.

19. Quantifying uncertainties in pollutant mapping studies using the Monte Carlo method
Science.gov (United States) Tan, Yi; Robinson, Allen L.; Presto, Albert A. 2014-12-01 Routine air monitoring provides accurate measurements of annual average concentrations of air pollutants, but the low density of monitoring sites limits its capability to capture intra-urban variation. Pollutant mapping studies measure air pollutants at a large number of sites during short periods. However, their short duration can cause substantial uncertainty in reproducing annual mean concentrations. In order to quantify this uncertainty for existing sampling strategies and investigate methods to improve future studies, we conducted Monte Carlo experiments with nationwide monitoring data from the EPA Air Quality System. Typical fixed sampling designs have much larger uncertainties than previously assumed, and produce accurate estimates of annual average pollution concentrations approximately 80% of the time. Mobile sampling has difficulties in estimating long-term exposures for individual sites, but performs better for groups of sites. The accuracy and precision of a given design decrease when data variation increases, indicating challenges at sites intermittently impacted by local sources such as traffic. Correcting measurements with reference sites does not completely remove the uncertainty associated with short-duration sampling. Using reference sites with the addition method can better account for temporal variations than the multiplication method. We propose feasible methods for future mapping studies to reduce uncertainties in estimating annual mean concentrations. Future fixed sampling studies should conduct two separate week-long sampling periods in all 4 seasons. Mobile sampling studies should estimate annual mean concentrations for exposure groups of five or more sites. Fixed and mobile sampling designs have comparable probabilities of correctly ordering two sites, so they may have similar capabilities in predicting pollutant spatial variations. Simulated sampling designs have large uncertainties in reproducing seasonal and diurnal variations at individual

20. Analysis of the Tandem Calibration Method for Kerma Area Product Meters Via Monte Carlo Simulations
International Nuclear Information System (INIS) The IAEA recommends that uncertainties of dosimetric measurements in diagnostic radiology for risk assessment and quality assurance should be less than 7% at the 95% confidence level. This accuracy is difficult to achieve with kerma area product (KAP) meters currently used in clinics.
The reasons range from the high energy dependence of KAP meters to the wide variety of configurations in which KAP meters are used and calibrated. The tandem calibration method introduced by Poeyry, Komppa and Kosunen in 2005 has the potential to make the calibration procedure simpler and more accurate compared to the traditional beam-area method. In this method, two positions of the reference KAP meter are of interest: (a) a position close to the field KAP meter and (b) a position 20 cm above the couch. In the close position, the distance between the two KAP meters should be at least 30 cm to reduce the effect of backscatter. For the other position, which is recommended for the beam-area calibration method, a distance of 70 cm between the KAP meters was used in this study. The aim of this work was to complement existing experimental data comparing the two configurations with Monte Carlo (MC) simulations. In a geometry consisting of a simplified model of the VacuTec 70157 type KAP meter, the MCNP code was used to simulate the kerma area product, P_KA, for the two (close and distant) reference planes. It was found that P_KA values for a tube voltage of 40 kV were about 2.5% lower for the distant plane than for the close one. For higher tube voltages, the difference was smaller. The difference was mainly caused by attenuation of the X-ray beam in air. Since the problem of high uncertainties in P_KA measurements is also caused by the current design of X-ray machines, possible solutions are discussed. (author)
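The variance-reduction ideas mentioned in the first abstract above are easy to demonstrate. Below is a minimal R sketch (not the paper's code; the Black-Scholes parameters are arbitrary choices for illustration) pricing a standard European call by plain Monte-Carlo and with antithetic variates:

```
# Monte-Carlo pricing of a European call under Black-Scholes dynamics.
# Parameters are illustrative only.
set.seed(1)
S0 <- 100; K <- 100; r <- 0.05; sigma <- 0.2; Tm <- 1; n <- 1e5
Z  <- rnorm(n)
ST <- S0*exp((r - sigma^2/2)*Tm + sigma*sqrt(Tm)*Z)
plain <- exp(-r*Tm)*pmax(ST - K, 0)
# antithetic variates: reuse -Z and average the two payoffs pathwise
ST2  <- S0*exp((r - sigma^2/2)*Tm + sigma*sqrt(Tm)*(-Z))
anti <- exp(-r*Tm)*(pmax(ST - K, 0) + pmax(ST2 - K, 0))/2
c(price_plain = mean(plain), price_anti = mean(anti))
c(se_plain = sd(plain)/sqrt(n), se_anti = sd(anti)/sqrt(n))
```

For the same number of normal draws, the antithetic estimator shows a noticeably smaller standard error, which is exactly the kind of improvement the abstract refers to.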
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8175802826881409, "perplexity": 1407.4959226655521}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00459-ip-10-171-10-70.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/440557/for-what-integers-n-does-phi2n-phin/440561
# For what integers $n$ does $\phi(2n) = \phi(n)$?

For what integers $n$ does $\phi(2n) = \phi(n)$? Could anyone help me start this problem off? I'm new to elementary number theory and such, and I can't really get a grasp of the totient function. I know that $$\phi(n) = n\left(1-\frac1{p_1}\right)\left(1-\frac1{p_2}\right)\cdots\left(1-\frac1{p_k}\right)$$ but I don't know how to apply this to the problem. I also know that $$\phi(n) = (p_1^{a_1} - p_1^{a_1-1})(p_2^{a_2} - p_2^{a_2 - 1})\cdots$$ Help -

Euler's $\phi$ function is multiplicative: if $a,b\in \mathbb{N}$ with $(a,b)=1$, then $\phi(ab)=\phi(a)\phi(b)$. So let $n=2^km$ with $m$ odd. Then, if $k\ge 1$, \begin{align} \phi (n)&=\phi(2^k)\phi(m)=2^{k-1}\phi(m) \\ \phi(2n)&=\phi(2^{k+1})\phi(m)=2^{k}\phi(m)\end{align} So $\phi (n)\ne \phi(2n)$. Hence $k<1\Rightarrow k=0\Rightarrow n$ must be odd. Another easy proof: Let $n=2^k\prod_{i=1}^{r}p_i^{\alpha_i}$ with $k\ge 1$ and the $p_i\ne 2$ prime. Then we have $\phi (n)=\frac{n}{2}\prod_{i=1}^{r}\left(1-\frac{1}{p_i}\right)$ and $\phi (2n)=\frac{2n}{2}\prod_{i=1}^{r}\left(1-\frac{1}{p_i}\right)$. Can $\phi (n)$ be equal to $\phi(2n)$? Now consider $n=2k+1$ and find $\phi (n)$ and $\phi (2n)$. What do you see? -

I know that the function is multiplicative, but I don't understand how to use that information. Sorry. – Ozera Jul 10 '13 at 15:46
I guess the 2nd proof will clear things up. – Abhra Abir Kundu Jul 10 '13 at 15:47
Actually I'm not really sure what it says, but I'm going to continue thinking about the importance of it being multiplicative. – Ozera Jul 10 '13 at 15:56
Now is it clear @Ozera – Abhra Abir Kundu Jul 10 '13 at 16:08

Hint: If $n$ is odd, then $\gcd(n,2)=1$, thus $$\phi(2n)=\phi(2) \phi(n) \,.$$ If $n$ is even, write $n=2^km$ with $m$ odd and $k \geq 1$. Then $$\phi(n)=\phi(2^k) \phi(m) \,.$$ $$\phi(2n)=\phi(2^{k+1}) \phi(m) \,.$$ -

Hint: You may also prove in general that $$\varphi(mn)=\frac{d\varphi(m)\varphi(n)}{\varphi(d)}$$ where $d=\gcd(m,n).$ -

+1 Nice remarking the fact. :-) – Babak S. Aug 22 '13 at 7:36
The formula you gave is incorrect. It should have been $$\phi(mn) = \frac{d\phi(m)\phi(n)}{\phi(d)}$$ – Balarka Sen Jan 17 at 15:32
Dear @BalarkaSen, yes, thanks for the correction. – Ehsan M. Kermani Jan 18 at 20:15

$\displaystyle{\left\lfloor \frac{n + 1}{2}\right\rfloor}$ is a solution. -

@TMM Sorry. I was not careful when I checked it. I'll delete it after you read this comment. – Felix Marin Jan 18 at 21:50
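A quick numerical check of the conclusion is easy to run; here is a small R sketch (my own, not from the thread; R has no built-in totient, so a naive one is defined inline):

```
# naive Euler totient: count the k in 1..n that are coprime to n
gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)
phi <- function(n) sum(sapply(1:n, function(k) gcd(k, n) == 1))
# n with phi(2n) == phi(n) among 1..50: exactly the odd ones
which(sapply(1:50, function(n) phi(2*n) == phi(n)))
```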
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9833847284317017, "perplexity": 463.46036940835876}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772398.133/warc/CC-MAIN-20141217075252-00064-ip-10-231-17-201.ec2.internal.warc.gz"}
http://mathhelpforum.com/business-math/187253-annuity-simple-interest-fractions-year-print.html
# Annuity with Simple Interest for Fractions of a Year • September 3rd 2011, 10:38 PM Diamondlance Annuity with Simple Interest for Fractions of a Year Problem: Deposits of 500 each are made into an account on the first day of every January and July beginning on January 1, 1999. Suppose that the effective annual interest rate is i=0.04, and interest is credited only on December 31 each year, with simple interest credited for fractions of a year. On what date should the account be closed in order that the closing balance be nearest 10000? My approach was probably pretty naive, and a bit off since I didn't seem to get exactly the right answer. For 1999, I took 500(1.04)+500(1.02)=1030 to get the balance at the end of that year. To get the balance at the end of 2000, I took 1030(1.04)+1030=2101.20, where 1030(1.04) is the accumulated value of the 1999 deposits and 1030 is the value of the new deposits in 2000. Proceeding similarly, I got a balance of 9490.65 at the end of 2006, hence 9990.65 on January 1, 2007. I found that $9990.65(1+0.04*\frac{9}{365})=10000.50$, and this is the closest one gets to 10000 using a whole number of days. So I answered January 10, 2007. But the back of the book says January 11, 2007. Can anyone see where I may have gone wrong? • September 5th 2011, 09:01 PM Wilmer Re: Annuity with Simple Interest for Fractions of a Year Yours seems correct.
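The poster's recursion is easy to reproduce numerically; here is an R sketch of it (the deposit pattern and day count are the poster's assumptions):

```
# Jan deposit earns a full year of simple interest, the Jul deposit half
# a year; interest is credited each Dec 31 and then compounds annually
bal <- 0
for (year in 1999:2006) bal <- bal*1.04 + 500*1.04 + 500*1.02
bal              # 9490.65 at the end of 2006
bal <- bal + 500 # 9990.65 after the deposit on Jan 1, 2007
# balance d days after Jan 1, 2007, with simple interest for the fraction
d <- 1:12
cbind(d, value = round(bal*(1 + 0.04*d/365), 2))
```

The tabulated values confirm the computation in the question: 9 days of simple interest give 10000.50, the balance nearest 10000, which corresponds to January 10.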
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6816298961639404, "perplexity": 588.9960429828851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096579.52/warc/CC-MAIN-20150627031816-00155-ip-10-179-60-89.ec2.internal.warc.gz"}
https://planetmath.org/alternativedefinitionsofcountable
# alternative definitions of countable

The following are alternative ways of characterizing a countable set.

###### Proposition 1.

Let $A$ be a set and $\mathbb{N}$ the set of natural numbers. The following are equivalent:

1. there is a surjection from $\mathbb{N}$ to $A$.
2. there is an injection from $A$ to $\mathbb{N}$.
3. either $A$ is finite or there is a bijection between $A$ and $\mathbb{N}$.

###### Proof.

First notice that if $A$ were the empty set, then any map to or from $A$ is empty, so $(1)\Leftrightarrow(2)\Leftrightarrow(3)$ vacuously. Now, suppose that $A\neq\varnothing$.

$(1)\Rightarrow(2)$. Suppose $f:\mathbb{N}\to A$ is a surjection. For each $a\in A$, let $f^{-1}(a)$ be the set $\{n\in\mathbb{N}\mid f(n)=a\}$. Since $f^{-1}(a)$ is a subset of $\mathbb{N}$, which is well-ordered, $f^{-1}(a)$ itself is well-ordered, and thus has a least element (keep in mind $A\neq\varnothing$; the existence of $a\in A$ is guaranteed, so that $f^{-1}(a)\neq\varnothing$ as well). Let $g(a)$ be this least element. Then $a\mapsto g(a)$ is a well-defined mapping from $A$ to $\mathbb{N}$. It is one-to-one, for if $g(a)=g(b)=n$, then $a=f(n)=b$.

$(2)\Rightarrow(1)$. Suppose $g:A\to\mathbb{N}$ is one-to-one. So $g^{-1}(n)$ is at most a singleton for every $n\in\mathbb{N}$. If it is a singleton, identify $g^{-1}(n)$ with that element. Otherwise, identify $g^{-1}(n)$ with a designated element $a_{0}\in A$ (remember $A$ is non-empty). Define a function $f:\mathbb{N}\to A$ by $f(n):=g^{-1}(n)$. By the discussion above, $g^{-1}(n)$ is a well-defined element of $A$, and therefore $f$ is well-defined. $f$ is onto because for every $a\in A$, $f(g(a))=a$.

$(3)\Rightarrow(2)$ is clear.

$(2)\Rightarrow(3)$. Let $g:A\to\mathbb{N}$ be an injection. Then $g(A)$ is either finite or infinite. If $g(A)$ is finite, so is $A$, since they are equinumerous. Suppose $g(A)$ is infinite. Since $g(A)\subseteq\mathbb{N}$, it is well-ordered. The (induced) well-ordering on $g(A)$ implies that $g(A)=\{n_{1},n_{2},\ldots\}$, where $n_{1}<n_{2}<\cdots$. Now, define $h:\mathbb{N}\to A$ as follows: for each $i\in\mathbb{N}$, $h(i)$ is the element in $A$ such that $g(h(i))=n_{i}$. So $h$ is well-defined. Next, $h$ is injective, for if $h(i)=h(j)$, then $n_{i}=g(h(i))=g(h(j))=n_{j}$, implying $i=j$. Finally, $h$ is a surjection, for if we pick any $a\in A$, then $g(a)\in g(A)$, meaning that $g(a)=n_{i}$ for some $i$, so $h(i)=g(a)$. ∎

Therefore, countability can be defined in terms of any of the above three statements. Note that the axiom of choice is not needed in the proof of $(1)\Rightarrow(2)$, since the selection of an element in $f^{-1}(a)$ is definite, not arbitrary.

For example, we show that $\mathbb{N}^{2}$ is countable. By the proposition above, we either need to find a surjection $f:\mathbb{N}\to\mathbb{N}^{2}$, or an injection $g:\mathbb{N}^{2}\to\mathbb{N}$. Actually, in this case, we can find both:

1. the function $f:\mathbb{N}\to\mathbb{N}^{2}$ given by $f(a)=(m,n)$, where $a=2^{m}(2n+1)$, is surjective. First, the function is well-defined, for every positive integer has a unique representation as the product of a power of $2$ and an odd number. It is surjective because for every $(m,n)$, we see that $f(2^{m}(2n+1))=(m,n)$.
2. the function $g:\mathbb{N}^{2}\to\mathbb{N}$ given by $g(m,n)=2^{m}3^{n}$ is clearly injective.

Note that the injectivity of $g$, as well as $f$ being well-defined, rely on the unique factorization of integers into prime numbers.
In this entry (http://planetmath.org/ProductOfCountableSets), we actually find a bijection between $\mathbb{N}$ and $\mathbb{N}^{2}$. As a corollary, we record the following:

###### Corollary 1.

Let $A,B$ be sets and $f:A\to B$ a function.

• If $f$ is an injection and $B$ is countable, so is $A$.
• If $f$ is a surjection and $A$ is countable, so is $B$.

The proof is left to the reader.
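The explicit surjection $f(a)=(m,n)$ from the example is also easy to compute; the following small R sketch (an illustration, not part of the entry) decomposes $a=2^{m}(2n+1)$:

```
# decompose a = 2^m * (2n + 1): strip factors of 2, then read off n
f <- function(a) {
  m <- 0
  while (a %% 2 == 0) { a <- a/2; m <- m + 1 }
  c(m = m, n = (a - 1)/2)
}
t(sapply(1:10, f))     # distinct a give distinct (m, n) pairs
f(2^3 * (2*5 + 1))     # recovers m = 3, n = 5 from a = 88
```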
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 98, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985612034797668, "perplexity": 188.83020515403007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077810.20/warc/CC-MAIN-20210414095300-20210414125300-00374.warc.gz"}
http://mathhelpforum.com/advanced-math-topics/30845-infinity-equation-help-print.html
infinity equation,,,,help!

• March 12th 2008, 07:05 PM cgoplin
infinity equation,,,,help!
I don't know how to show this equation on screen: an E with a small x2 in the right-hand corner, then a space, dx, a + infinity symbol on top and a - infinity symbol on the bottom, with two opposing fishhooks? What is it? Please help!

• March 12th 2008, 07:11 PM TheEmptySet
Quote: Originally Posted by cgoplin (the question above)
$\int_{\gamma-i \infty}^{\gamma+i \infty}f(z)\,dz$
If this is the case, it is called a contour integral. It requires the use of complex variables.

• March 12th 2008, 07:36 PM ThePerfectHacker
Fishhooks (Rofl)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9824597835540771, "perplexity": 3887.4075527428968}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275437.19/warc/CC-MAIN-20160524002115-00239-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.bitdefender.com/business/support/en/77211-151127-connection-rules.html
### Connection Rules

To access Connection Rules actions, go to Products > Connection Rules. In this screen you can create, edit, delete and reorder connection rules. A connection rule applies an action to a message if specific conditions are met. Use Connection Rules to limit access to a specific mailbox or to preemptively reject emails that fit certain criteria (such as emails of a specific size, or emails from a specific IP address).

### Note

Connection Rules are processed before any message rules. Connection rules will be processed until a Final Action is taken or all active rules have been verified.

### Note

Each company starts out with a set of default system and standard rules. In some cases these rules are sufficient for an organization and no further action is needed; however, we recommend that you familiarize yourself with them.

Connection rules are composed of:

• Conditions - one or more conditions can be applied to each rule. If all conditions are met, the assigned actions will be taken.
• Final Actions - a final action is triggered once all the conditions are met and stops all subsequent rules from being processed.

### Note

A rule will only be triggered if its Active status is set to On.

1. Priority - Connection rules are executed in the order they appear in this list. Organize the rules to establish the order in which they will be applied to each individual message.
2. Direction - A rule may be processed only against incoming connections, only against outgoing connections, or against both. The Direction column indicates when the rule will be processed:
• this rule will only be processed against incoming connections.
• this rule will only be processed against outgoing connections.
• this rule will be processed against all connections.

### Note

Click on the column header to display a drop-down menu that will allow you to filter out columns from the display below.

3. Rule Name - Displays the name of each rule.
• A lock icon indicates that the rule cannot be edited or deleted.
• System rules are marked by a gray background and a (Default) tag.
4. Final Action - Shows whether a Final Action is applied to a specific rule and what that final action is. The available final actions are:
• Permanent Reject - reject the connection. Any future attempts that meet the conditions of the rule will result in another reject.
• Accept - accept the connection without any other rules being processed.
5. Active - This column indicates whether a rule is active or not. Clicking on the indicator will activate/deactivate the rule.
6. Actions - there are two buttons available in this column:
• Change Rule - edit a specific rule.
• Delete Rule - delete a specific rule.
7. View System Rules - toggle this button to display or hide system rules in the list below.

### Note

You can use the Refresh button to refresh the list.

8. Add Rule - create a new rule.

#### Creating a new Connection Rule

To create a new rule, follow the steps below:

1. Click the Add Rule button at the upper right side of the screen.
2. Enter a descriptive rule name and click the Add button. This will open the Rule Builder screen.
3. Set the Active button on or off.

### Note

• Newly created rules are inactive by default.
• Inactive rules will not be processed.

4. (optional) Rename the rule.
5. (optional) Add a description to the rule - this is only visible from the Rule Builder screen and can be used to add a short explanation of what the rule is intended to do.
6. Add the conditions.
You can add one or more conditions to your rule. You can find a list of all available conditions here.

### Note

• Be as specific as possible when creating conditions to avoid accidental triggering.
• If more than one condition is added to the rule, all of them have to be passed for the action(s) assigned to the rule to be taken.
• When creating a condition, you can set it to either Match or Does not Match when comparing against a specific value or data set.
• Conditions can be system defaults or custom. Custom conditions can be accessed from the Custom Rule Data screen.

7. Add the final action. You can find a list of all available final actions here.
8. Click the Save button.

#### Re-ordering connection rules

To change the order in which your connection rules are processed, drag and drop a rule to a new position.

### Note

You cannot re-order system rules.

#### Editing a connection rule

To edit a rule, double-click the rule's title in the Connection Rules screen or click the Change Rule button. This will open the Rule Builder screen. Once the modifications are complete, click the Save button.

#### Deleting a connection rule

To delete a rule:

• Click the delete button next to the rule you want to delete in the Connection Rules screen.
• Click the delete button while editing a rule.
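The processing semantics described above (a rule fires only when all of its conditions match, and its final action stops further processing) can be summarized in a few lines of code. The sketch below is purely illustrative R pseudocode, not Bitdefender's actual rule format or API:

```
# illustrative only: evaluate connection rules in priority order
evaluate <- function(rules, connection) {
  for (rule in rules) {
    if (!rule$active) next                    # inactive rules are skipped
    hits <- sapply(rule$conditions, function(cond) cond(connection))
    if (all(hits)) return(rule$final_action)  # a final action stops processing
  }
  "Accept"  # assumed behavior when no rule fires
}
size_rule <- list(active = TRUE,
                  conditions = list(function(c) c$size_mb > 25),
                  final_action = "Permanent Reject")
evaluate(list(size_rule), list(size_mb = 40))  # "Permanent Reject"
```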
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2206476628780365, "perplexity": 2127.777524511684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00242.warc.gz"}
https://hackage.haskell.org/package/gi-gio-2.0.20/docs/GI-Gio-Objects-TlsCertificate.html
gi-gio-2.0.20: Gio bindings Copyright Will Thompson Iñaki García Etxebarria and Jonas Platte LGPL-2.1 Iñaki García Etxebarria ([email protected]) None Haskell2010 GI.Gio.Objects.TlsCertificate Description A certificate used for TLS authentication and encryption. This can represent either a certificate only (eg, the certificate received by a client from a server), or the combination of a certificate and a private key (which is needed when acting as a TlsServerConnection). Since: 2.28 Synopsis # Exported types newtype TlsCertificate Source # Memory-managed wrapper type. Constructors TlsCertificate (ManagedPtr TlsCertificate) Instances Source # Instance detailsDefined in GI.Gio.Objects.TlsCertificate Methods Source # Instance detailsDefined in GI.Gio.Objects.TlsCertificate Source # Instance detailsDefined in GI.Gio.Objects.TlsCertificate type ParentTypes TlsCertificate = Object ': ([] :: [Type]) Type class for types which can be safely cast to TlsCertificate, for instance with toTlsCertificate. Instances Source # Instance detailsDefined in GI.Gio.Objects.TlsCertificate toTlsCertificate :: (MonadIO m, IsTlsCertificate o) => o -> m TlsCertificate Source # Cast to TlsCertificate, for types for which this is known to be safe. For general casts, use castTo. A convenience alias for Nothing :: Maybe TlsCertificate. # Methods ## getIssuer Arguments :: (HasCallStack, MonadIO m, IsTlsCertificate a) => a cert: a TlsCertificate -> m TlsCertificate Returns: The certificate of cert's issuer, or Nothing if cert is self-signed or signed with an unknown certificate. Gets the TlsCertificate representing cert's issuer, if known Since: 2.28 ## isSame Arguments :: (HasCallStack, MonadIO m, IsTlsCertificate a, IsTlsCertificate b) => a certOne: first certificate to compare -> b certTwo: second certificate to compare -> m Bool Returns: whether the same or not Check if two TlsCertificate objects represent the same certificate. The raw DER byte data of the two certificates are checked for equality. This has the effect that two certificates may compare equal even if their TlsCertificate:issuer, TlsCertificate:private-key, or TlsCertificate:private-key-pem properties differ. Since: 2.34 ## listNewFromFile Arguments :: (HasCallStack, MonadIO m) => [Char] file: file containing PEM-encoded certificates to import -> m [TlsCertificate] Returns: a List containing TlsCertificate objects. You must free the list and its contents when you are done with it. (Can throw GError) Creates one or more GTlsCertificates from the PEM-encoded data in file. If file cannot be read or parsed, the function will return Nothing and set error. If file does not contain any PEM-encoded certificates, this will return an empty list and not set error. Since: 2.28 ## newFromFile Arguments :: (HasCallStack, MonadIO m) => [Char] file: file containing a PEM-encoded certificate to import -> m TlsCertificate Returns: the new certificate, or Nothing on error (Can throw GError) Creates a TlsCertificate from the PEM-encoded data in file. The returned certificate will be the first certificate found in file. As of GLib 2.44, if file contains more certificates it will try to load a certificate chain. All certificates will be verified in the order found (top-level certificate should be the last one in the file) and the TlsCertificate:issuer property of each certificate will be set accordingly if the verification succeeds. If any certificate in the chain cannot be verified, the first certificate in the file will still be returned. 
If file cannot be read or parsed, the function will return Nothing and set error. Otherwise, this behaves like tlsCertificateNewFromPem. Since: 2.28 ## newFromFiles Arguments :: (HasCallStack, MonadIO m) => [Char] certFile: file containing one or more PEM-encoded certificates to import -> [Char] keyFile: file containing a PEM-encoded private key to import -> m TlsCertificate Returns: the new certificate, or Nothing on error (Can throw GError) Creates a TlsCertificate from the PEM-encoded data in certFile and keyFile. The returned certificate will be the first certificate found in certFile. As of GLib 2.44, if certFile contains more certificates it will try to load a certificate chain. All certificates will be verified in the order found (top-level certificate should be the last one in the file) and the TlsCertificate:issuer property of each certificate will be set accordingly if the verification succeeds. If any certificate in the chain cannot be verified, the first certificate in the file will still be returned. If either file cannot be read or parsed, the function will return Nothing and set error. Otherwise, this behaves like tlsCertificateNewFromPem. Since: 2.28 ## newFromPem Arguments :: (HasCallStack, MonadIO m) => Text data: PEM-encoded certificate data -> Int64 length: the length of data, or -1 if it's 0-terminated. -> m TlsCertificate Returns: the new certificate, or Nothing if data is invalid (Can throw GError) Creates a TlsCertificate from the PEM-encoded data in data. If data includes both a certificate and a private key, then the returned certificate will include the private key data as well. (See the TlsCertificate:private-key-pem property for information about supported formats.) The returned certificate will be the first certificate found in data. As of GLib 2.44, if data contains more certificates it will try to load a certificate chain. All certificates will be verified in the order found (top-level certificate should be the last one in the file) and the TlsCertificate:issuer property of each certificate will be set accordingly if the verification succeeds. If any certificate in the chain cannot be verified, the first certificate in the file will still be returned. Since: 2.28 ## verify Arguments :: (HasCallStack, MonadIO m, IsTlsCertificate a, IsSocketConnectable b, IsTlsCertificate c) => a cert: a TlsCertificate -> Maybe b identity: the expected peer identity -> Maybe c trustedCa: the certificate of a trusted authority -> m [TlsCertificateFlags] Returns: the appropriate TlsCertificateFlags This verifies cert and returns a set of TlsCertificateFlags indicating any problems found with it. This can be used to verify a certificate outside the context of making a connection, or to check a certificate against a CA that is not part of the system CA database. If identity is not Nothing, cert's name(s) will be compared against it, and TlsCertificateFlagsBadIdentity will be set in the return value if it does not match. If identity is Nothing, that bit will never be set in the return value. If trustedCa is not Nothing, then cert (or one of the certificates in its chain) must be signed by it, or else TlsCertificateFlagsUnknownCa will be set in the return value. If trustedCa is Nothing, that bit will never be set in the return value. (All other TlsCertificateFlags values will always be set or unset as appropriate.) Since: 2.28 # Properties ## certificate The DER (binary) encoded representation of the certificate. 
This property and the TlsCertificate:certificate-pem property represent the same data, just in different forms. Since: 2.28 Construct a GValueConstruct with valid value for the “certificate” property. This is rarely needed directly, but it is used by new. getTlsCertificateCertificate :: (MonadIO m, IsTlsCertificate o) => o -> m (Maybe ByteString) Source # Get the value of the “certificate” property. When overloading is enabled, this is equivalent to get tlsCertificate #certificate ## certificatePem The PEM (ASCII) encoded representation of the certificate. This property and the TlsCertificate:certificate property represent the same data, just in different forms. Since: 2.28 Construct a GValueConstruct with valid value for the “certificate-pem” property. This is rarely needed directly, but it is used by new. getTlsCertificateCertificatePem :: (MonadIO m, IsTlsCertificate o) => o -> m (Maybe Text) Source # Get the value of the “certificate-pem” property. When overloading is enabled, this is equivalent to get tlsCertificate #certificatePem ## issuer A TlsCertificate representing the entity that issued this certificate. If Nothing, this means that the certificate is either self-signed, or else the certificate of the issuer is not available. Since: 2.28 Construct a GValueConstruct with valid value for the “issuer” property. This is rarely needed directly, but it is used by new. getTlsCertificateIssuer :: (MonadIO m, IsTlsCertificate o) => o -> m TlsCertificate Source # Get the value of the “issuer” property. When overloading is enabled, this is equivalent to get tlsCertificate #issuer ## privateKey The DER (binary) encoded representation of the certificate's private key, in either PKCS1 format or unencrypted PKCS8 format. This property (or the TlsCertificate:private-key-pem property) can be set when constructing a key (eg, from a file), but cannot be read. PKCS8 format is supported since 2.32; earlier releases only support PKCS1. You can use the openssl rsa tool to convert PKCS8 keys to PKCS1. Since: 2.28 Construct a GValueConstruct with valid value for the “private-key” property. This is rarely needed directly, but it is used by new. ## privateKeyPem The PEM (ASCII) encoded representation of the certificate's private key in either PKCS1 format ("BEGIN RSA PRIVATE KEY") or unencrypted PKCS8 format ("BEGIN PRIVATE KEY"). This property (or the TlsCertificate:private-key property) can be set when constructing a key (eg, from a file), but cannot be read. PKCS8 format is supported since 2.32; earlier releases only support PKCS1. You can use the openssl rsa tool to convert PKCS8 keys to PKCS1. Since: 2.28 Construct a GValueConstruct with valid value for the “private-key-pem” property. This is rarely needed directly, but it is used by new.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19984574615955353, "perplexity": 8876.263267242777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989814.35/warc/CC-MAIN-20210513142421-20210513172421-00595.warc.gz"}
http://infinitefuture.blogspot.com/2011/01/vector-dot-products-and-cos-and.html
Vector dot products and cos and Pythagoras

\newcommand{\abs}[1]{\lvert#1\rvert} \newcommand{\norm}[1]{\lVert#1\rVert} \abs{\vec{X}}\ \ \ \ \ \ \norm{\vec{X}}

The above shows how LaTeX instantiates a new command. The format starts with "newcommand", then the name of the new command, the number of arguments, and how the command is built from previously defined ones, with #(n) standing for argument n passed in curly brackets. It is a lot like a function in shell, which uses \$1 ... as passed arguments.

$$\cos\theta = \frac{\vec{v}\cdot \vec{w}}{|\vec{v}|\,|\vec{w}|}$$

It does seem that the concepts of sin and cos are defined in a rather circular way (no pun intended). The values at an angle are defined by the relationship of dimension itself, and so long as a standard is defined, the numbers are the same in relationship to scale.
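To make the shell-function analogy concrete, here is a complete minimal LaTeX document (my own example) using those two definitions; #1 is replaced by the argument given in braces, much like \$1 in a shell function:

```
\documentclass{article}
\usepackage{amsmath}  % provides \lvert, \rvert, \lVert, \rVert
\newcommand{\abs}[1]{\lvert#1\rvert}    % one argument, referenced as #1
\newcommand{\norm}[1]{\lVert#1\rVert}
\begin{document}
\[ \cos\theta = \frac{\vec{v}\cdot\vec{w}}{\abs{\vec{v}}\,\abs{\vec{w}}},
   \qquad \norm{\vec{v}} = \sqrt{\vec{v}\cdot\vec{v}} \]
\end{document}
```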
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9076430797576904, "perplexity": 697.0053809941154}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648000.93/warc/CC-MAIN-20180322190333-20180322210333-00200.warc.gz"}
http://math.stackexchange.com/questions/120027/standard-deviation-why-divide-by-n-1-rather-than-n
# Standard Deviation: Why divide by $(N-1)$ rather than $N$?

The formula for standard deviation seems to be the square root of the sum of the squared deviations from the mean divided by $N-1$. Why isn't it simply the square root of the mean of the squared deviations from the mean, i.e., divided by $N$? Why is it divided by $N-1$ rather than $N$? -

To prevent bias, as explained here and here. – William DeMeo Mar 14 '12 at 11:43
This might help: stats.stackexchange.com/questions/3931/… – Byron Schmuland Mar 14 '12 at 12:26
The reason is that it gives you an unbiased estimator. But do not confuse this with giving the best estimator. In my time series class, my professor tells me that in time series you usually divide by n instead, because it's actually a better approximation. I couldn't explain to you why or anything. – Graphth Mar 14 '12 at 12:42
Well, one thing is that the samples are not independent in time series... I'm sure that has something to do with it. – Graphth Mar 14 '12 at 12:51

If you have $n$ samples, define $$s^2=\frac{\sum_{i=1}^n (X_i-m)^2}{n}$$ where $m$ is the sample average. For an unbiased estimator you would need $$E(s^2)=\sigma^2,$$ where $\sigma^2$ is the real unknown value of the variance. It is possible to show that $$E(s^2)=E\left(\frac{\sum_{i=1}^n (X_i-m)^2}{n} \right)=\frac{n-1}{n}\sigma^2,$$ so $s^2$ as defined above is biased. To estimate the 'real' value of $\sigma^2$ without bias, you must divide by $n-1$ instead: $$\hat{\sigma}^2=\frac{\sum_{i=1}^n (X_i-m)^2}{n-1}.$$ -
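The bias is easy to see in a simulation; here is a short R sketch (mine, not from the thread; note that R's built-in var already divides by $n-1$):

```
# compare dividing the sum of squared deviations by n vs. n-1
set.seed(1)
n <- 5; reps <- 1e5
draws <- matrix(rnorm(reps*n, mean = 0, sd = 2), nrow = reps)  # true var = 4
ss <- apply(draws, 1, function(x) sum((x - mean(x))^2))
c(mean_div_n = mean(ss/n), mean_div_n1 = mean(ss/(n - 1)), truth = 4)
# dividing by n underestimates sigma^2 by the factor (n-1)/n = 0.8
```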
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9052437543869019, "perplexity": 341.60166030183694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435376093097.69/warc/CC-MAIN-20150627033453-00300-ip-10-179-60-89.ec2.internal.warc.gz"}
http://www.ck12.org/physics/Velocity-and-Acceleration/lesson/user:a3Jvc2VuYXVAcGVyaGFtLmsxMi5tbi51cw../Velocity-and-Acceleration/
# Velocity and Acceleration

Students will learn the meaning of acceleration, how it differs from velocity, and how to calculate average acceleration.

### Key Equations

$v =$ velocity (m/s)
$v_i =$ initial velocity
$v_f =$ final velocity
$\Delta v =$ change in velocity $= v_f - v_i$
$v_{avg} = \frac{\Delta x}{\Delta t}$
$a =$ acceleration $(m/s^2)$
$a_{avg} = \frac{\Delta v}{\Delta t}$

### Guidance

• Acceleration is the rate of change of velocity. In other words, acceleration tells you how quickly the velocity is increasing or decreasing. An acceleration of $5 \ m/s^2$ indicates that the velocity is increasing by $5 \ m/s$ in the positive direction every second.
• Deceleration is the term used when an object's speed (i.e. the magnitude of its velocity) is decreasing due to acceleration in the opposite direction of its velocity.

#### Example 1

A Top Fuel dragster can accelerate from 0 to 100 mph (160 km/hr) in 0.8 seconds. What is the average acceleration in $m/s^2$?

Question: $a_{avg} = ? \ [m/s^2]$
Given: $v_i = 0 \ m/s$, $v_f = 160 \ km/hr$, $t = 0.8 \ s$
Equation: $a_{avg} = \frac{\Delta v }{t}$

Plug n' Chug:
Step 1: Convert km/hr to m/s: $v_f = \left( 160 \frac{km}{hr} \right ) \left( \frac{1,000 \ m}{1 \ km} \right ) \left ( \frac{1 \ hr}{3,600 \ s} \right ) = 44.4 \ m/s$
Step 2: Solve for average acceleration: $a_{avg} = \frac{\Delta v}{t} = \frac{v_f - v_i}{t} = \frac{44.4 \ m/s - 0 \ m/s}{0.8 \ s} = 56 \ m/s^2$
Answer: $\boxed {\mathbf{56 \ m/s^2}}$

### Time for Practice

1. Ms. Reitman's scooter starts from rest and accelerates at $2.0 \ m/s^2$. What is the scooter's velocity after 1 s? After 2 s? After 7 s?

Answer: 2 m/s, 4 m/s, 14 m/s
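The example and the practice question translate directly into a few lines of R (a quick check added here for convenience, not part of the lesson):

```
# Example 1: 0 to 160 km/hr in 0.8 s
v_f <- 160 * 1000/3600   # km/hr -> m/s: 44.4 m/s
(v_f - 0)/0.8            # average acceleration, about 56 m/s^2
# Practice 1: v = a*t with a = 2.0 m/s^2
2.0 * c(1, 2, 7)         # 2, 4, 14 m/s
```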
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 21, "texerror": 0, "math_score": 0.965247392654419, "perplexity": 2338.054025862654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115858580.32/warc/CC-MAIN-20150124161058-00125-ip-10-180-212-252.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/95442/bytes-from-integer-using-bitwise-operators
# Bytes from integer using bitwise operators

So I am extremely confused with using bitwise shifts to extract the bytes from an integer. I find that if I do something like this...

int i = 512;
i & 0xFF; // this gives me the first byte

That must mean i & 0xFF gives me the first byte that is stored in memory. In other words, i & 0xFF is the first of the four bytes i consists of, stored at the lowest memory address. Next, I think that i << 8 & 0xFF will then shift everything left by 8 bits, giving me the second byte. However, it is actually the opposite, and only i >> 8 & 0xFF gives me the next byte. I do not understand why this is true. To me, it makes the most sense to left shift to get the next byte, so why do I need to right shift by 8 to get it?

• i & 0xFF actually gives you the last byte, i.e. i % 256 Jul 20 '18 at 8:56
• Okay so if i & 0xFF gives the last byte, then i >> 8 & 0xFF would give the second last byte? Jul 20 '18 at 9:02
• Yes, i >> 8 is basically a division by 256. Jul 20 '18 at 9:16
• So how does this relate to how the bytes are actually stored in memory? For example, on my machine the bytes are ordered in little endian, so that means that taking i & 0xFF is getting the byte at the last memory address since it is the last byte? Jul 20 '18 at 9:20
• It doesn't. All those operations are independent of endianness. E.g. if i = 0x12345678, then i & 0xFF in big endian is [1, 2, 3, 4, 5, 6, 7, 8] & [0, 0, 0, 0, 0, 0, F, F] = [0, 0, 0, 0, 0, 0, 7, 8] = 0x78, and in little endian is [8, 7, 6, 5, 4, 3, 2, 1] & [F, F, 0, 0, 0, 0, 0, 0] = [8, 7, 0, 0, 0, 0, 0, 0] = 0x78. And the same with the shift operations. I even checked the C++ standard. It defines i >> x as i / 2^x. Jul 20 '18 at 9:42

Contrary to what you think, extracting bytes by shifting and masking is completely unrelated to the storage order (both little-endian and big-endian storage schemes exist and don't influence the results). An int variable is represented by 32 bits. The least significant byte is made of the eight least significant bits, and you obtain it with

i & 0xFF

0b 11111111 10101010 01010101 00000000 => 0b 00000000 00000000 00000000 00000000

The neighboring byte is made of the eight neighboring bits; you discard the eight least significant bits by shifting (>>), then mask out:

(i >> 8) & 0xFF

0b 11111111 10101010 01010101 00000000 => 0b 00000000 00000000 00000000 01010101

And similarly for the remaining two bytes, with shifts of 16 and 24 (most significant byte). Notice that I was careful not to use the terms left/right and first/last, which are meaningless as they refer to some graphical representation or memory storage order. In the examples, the numbers are understood as binary numbers written with the lsb on the right (and spacers for clarity), which is the most common convention.

• I realise the point you're trying to make but I feel this would be clearer if you did use terms such as "left" and "right" but pointed out that their meaning is in terms of the ordinary way of writing binary numbers and has nothing to do with how they're actually laid out in some computer's memory. Jul 20 '18 at 15:21
• @DavidRicherby: the terms least/most significant are more relevant. Jul 20 '18 at 15:45
• This makes sense. Also, when I assign these numbers to say a bitset or something, will all those leading 0s be ignored and only the first 8 bits be kept? Jul 20 '18 at 17:45
• @ZacheryKish: I represented what happens in the registers. What it becomes next depends on casts. Jul 20 '18 at 19:21
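The same masking and shifting works in any language with bitwise operators; for instance, this R sketch (added for illustration, using R's bitwAnd/bitwShiftR) extracts all four bytes of 0x12345678 and gives the same answer regardless of the machine's endianness, because the operators act on the value, not on the storage order:

```
i <- strtoi("12345678", base = 16L)   # 0x12345678
bytes <- sapply(c(0, 8, 16, 24), function(s) bitwAnd(bitwShiftR(i, s), 0xFF))
sprintf("%02X", bytes)                # "78" "56" "34" "12", lsb first
```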
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3940317630767822, "perplexity": 637.8649339499758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305420.54/warc/CC-MAIN-20220128043801-20220128073801-00340.warc.gz"}
https://wikivisually.com/wiki/171_%28number%29
# 171 (number)

← 170 · 171 · 172 →

Cardinal: one hundred seventy-one
Ordinal: 171st (one hundred seventy-first)
Factorization: 3² × 19
Divisors: 1, 3, 9, 19, 57, 171
Greek numeral: ΡΟΑ´
Roman numeral: CLXXI
Binary: 10101011₂
Ternary: 20100₃
Quaternary: 2223₄
Quinary: 1141₅
Senary: 443₆
Octal: 253₈
Duodecimal: 123₁₂
Vigesimal: 8B₂₀
Base 36: 4R₃₆

171 (one hundred [and] seventy-one) is the natural number following 170 and preceding 172.

## In mathematics

171 is an odd number, a composite number, and a deficient number. It is also a triangular number, a tridecagonal number[1] and a 58-gonal number. 171 is a Harshad number, a palindromic number, and an undulating number. 171 is a repdigit in base 7 (333), and also in bases 18, 56, and 170.
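Most of these properties can be verified with one-liners; a quick R sketch:

```
n <- 171
sum(1:18) == n                  # triangular: T_18 = 171
n %% (1 + 7 + 1) == 0           # Harshad: divisible by its digit sum 9
d <- as.integer(strsplit("171", "")[[1]]); identical(d, rev(d))  # palindrome
3*7^2 + 3*7 + 3 == n            # repdigit in base 7: 333
```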
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8825104832649231, "perplexity": 7134.886020970908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583829665.84/warc/CC-MAIN-20190122054634-20190122080634-00508.warc.gz"}
http://civilservicereview.com/2015/09/
## The Ratio Word Problems Tutorial Series

This is a series of tutorials regarding ratio word problems. A ratio is a relationship between two numbers that expresses how many times the first number is contained in the second. In this series of problems, we will learn about the different types of ratio word problems.

How to Solve Word Problems Involving Ratio Part 1 details the intuitive meaning of ratio. It uses arithmetic calculations in order to explain its meaning. After the explanation, the algebraic solution to the problem is also discussed.

How to Solve Word Problems Involving Ratio Part 2 is a continuation of the first part. In this part, the ratio of three quantities is described. Algebraic methods are used to solve the problem.

How to Solve Word Problems Involving Ratio Part 3: in this post, the ratio of two quantities is given. Then, both quantities are increased, resulting in another ratio.

How to Solve Word Problems Involving Ratio Part 4 involves the difference of two numbers whose ratio is given.

If you have more math word problems involving ratio that are different from the ones mentioned above, feel free to comment below and let us see if we can solve them.

## How to Solve Word Problems Involving Ratio Part 4

This is the fourth and last part of the solving problems involving ratio series. In this post, we are going to solve another ratio word problem.

Problem

The ratio of two numbers is 1:3. Their difference is 36. What is the larger number?

Solution and Explanation

Let x be the smaller number and 3x be the larger number.

3x – x = 36
2x = 36
x = 18

So, the smaller number is 18 and the larger number is 3(18) = 54.

Check: Is the ratio 18:54 equal to 1:3? Yes, 3 times 18 equals 54. Is their difference 36? Yes, 54 – 18 = 36. Therefore, we are correct.

## How to Solve Word Problems Involving Ratio Part 3

In the previous two posts, we have learned how to solve word problems involving ratio with two and three quantities. In this post, we are going to learn how to solve a slightly different problem, where both numbers are increased.

Problem

The ratio of two numbers is 3:5 and their sum is 48. What must be added to both numbers so that the ratio becomes 3:4?

Solution and Explanation

First, let us solve the first sentence. We need to find the two numbers whose ratio is 3:5 and whose sum is 48.

Now, let x be the number of sets of 3 and 5.

3x + 5x = 48
8x = 48
x = 6

Now, this means that the numbers are 3(6) = 18 and 5(6) = 30. Now, if the same number is added to both numbers, the ratio becomes 3:4.

Recall that in the previous posts, we have discussed that a ratio can also be represented by a fraction. So, we can represent 18:30 as $\frac{18}{30}$. Now, if we add the same number to both numbers (the numerator and the denominator), we get $\frac{3}{4}$. If we let that number be y, then

$\dfrac{18 + y}{30 + y} = \dfrac{3}{4}$.

Cross multiplying, we have

$4(18 + y) = 3(30 + y)$.

By the distributive property,

$72 + 4y = 90 + 3y$
$4y - 3y = 90 - 72$
$y = 18$.

So, we add 18 to both the numerator and denominator of $\frac{18}{30}$. That is,

$\dfrac{18 + 18}{30 + 18} = \dfrac{36}{48}$.

Now, to check, is $\dfrac{36}{48} = \frac{3}{4}$? Yes, it is. Divide both the numerator and the denominator by 12 to reduce the fraction to lowest terms.

## How to Solve Word Problems Involving Ratio Part 2

This is the second part of a series of posts on solving ratio problems. In the first part, we have learned how to solve intuitively and algebraically problems involving the ratio of two quantities.
In this post, we are going to learn how to solve a ratio problem involving 3 quantities.

Problem 2

The ratio of the red, green, and blue balls in a box is 2:3:1. If there are 36 balls in the box, how many green balls are there?

Solution and Explanation

From the previous post, we have already learned the algebraic solutions of problems like the one shown above. So, we can have the following:

Let $x$ be the number of groups of balls per color.

$2x + 3x + x = 36$
$6x = 36$
$x = 6$

So, there are 6 groups. Now, since we are looking for the number of green balls, we multiply x by 3. So, there are 6 groups × (3 green balls per group) = 18 green balls.

Check: From above, $x = 6(1)$ is the number of blue balls. The expression 2x represents the number of red balls, so we have 2x = 2(6) = 12 balls. Therefore, we have 12 red balls, 18 green balls, and 6 blue balls. We can check by adding them: 12 + 18 + 6 = 36. This satisfies the condition above that there are 36 balls in all. Therefore, we are correct.

## How to Solve Word Problems Involving Ratio Part 1

In a dance school, 18 girls and 8 boys are enrolled. We can say that the ratio of girls to boys is 18:8 (read as "18 is to 8"). A ratio can also be expressed as a fraction, so we can say that the ratio is 18/8. Since we can reduce fractions to lowest terms, we can also say that the ratio is 9/4 or 9:4. So, a ratio can be a relationship between two quantities. It can also be a ratio between two numbers, like 4:3, which is the ratio of the width and height of a television screen.

Problem 1

The ratio of boys and girls in a dance club is 4:5. The total number of students is 63. How many girls and boys are there in the club?

Solution and Explanation

The ratio 4:5 means that for every 4 boys, there are 5 girls. That means that if there are 2 groups of 4 boys, there are also 2 groups of 5 girls. So, by calculating them and adding, we have

4 + 5 = 9
4(2) + 5(2) = 18
4(3) + 5(3) = 27
4(4) + 5(4) = 36
4(5) + 5(5) = 45
4(6) + 5(6) = 54
4(7) + 5(7) = 63

As we can see, we are looking for the number of groups of 4 and 5, and the answer is 7 groups of each. So there are 4(7) = 28 boys and 5(7) = 35 girls.

As you can observe, the number of groups of 4 is the same as the number of groups of 5. Therefore, the question above is equivalent to finding the number of groups (of 4 and 5) whose total number of persons adds up to 63. Algebraically, if we let x be the number of groups of 4, then it is also the number of groups of 5. So, we can make the following equation:

4 × (number of groups of 4) + 5 × (number of groups of 5) = 63

or 4x + 5x = 63. Simplifying, we have

9x = 63
x = 7.

So there are 4(7) = 28 boys and 5(7) = 35 girls. As we can see, we confirmed the answer above using algebraic methods.

## How to Solve Investment Word Problems in Algebra

Investment word problems in Algebra are one of the types of problems that usually come out in the Civil Service Exam. In solving investment word problems, you should know the basic terms used. Some of these terms are the principal (P) or the money invested, the rate (R) or the percent of interest, the interest (I) or the return on investment (profit), and the time (T) or how long the money is invested. Interest is the product of the principal, the rate, and the time; therefore, we have the formula I = PRT. This tutorial series discusses the different types of problems in investment and the methods and strategies used in solving them.

How to Solve Investment Problems Part 1 discusses the common terminology used in investment problems.
It also discusses an investment problem where the principal is invested at two different interest rates.

How to Solve Investment Problems Part 2 is a discussion of another investment problem just like in Part 1. In the problem, the principal is invested at two different interest rates and the interest in one investment is larger than the other.

How to Solve Investment Problems Part 3 is very similar to Part 2, only that the smaller interest amount is described.

How to Solve Investment Problems Part 4 discusses an investment problem with a given interest in one investment and an unknown amount of investment at another rate to satisfy a percentage of interest for the entire investment.

## How to Solve Investment Problems Part 4

This is the fourth part of the Solving Investment Problems Series. In this part, we discuss a problem which is very similar to the third part. We discuss an investment at two different interest rates.

Problem

Mr. Garett invested a part of $20,000 at a bank at 4% yearly interest. How much does he have to invest at another bank at an 8% yearly interest so that the total interest is 7% of the entire investment?

Solution and Explanation

Let x be the money invested at 8%.

(1) We know that the interest on $20,000 invested at 4% yearly interest is 20,000(0.04).

(2) We also know that the interest on the money invested at 8% is (0.08)(x).

(3) The interest on the total amount of money invested is 7% of it. So, (20,000 + x)(0.07).

Now, the interest in (1) added to the interest in (2) is equal to the interest in (3). Therefore,

20,000(0.04) + (0.08)(x) = (20,000 + x)(0.07)

Simplifying, we have

800 + 0.08x = 1400 + 0.07x

To eliminate the decimal point, we multiply both sides by 100. That is,

80000 + 8x = 140000 + 7x
8x – 7x = 140000 – 80000
x = 60000

This means that he has to invest $60,000 at 8% interest in order for the total interest to be 7% of the entire investment.

Check: $20,000 × 0.04 = $800 and $60,000 × 0.08 = $4,800. Adding the two interests, we have $5,600. We check if this is really 7% of the total investment. Our total investment is $80,000. Now, $80,000 × 0.07 = $5,600.
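The same equation can be handed to a computer algebra system. Here is a short sympy sketch (the variable names are mine, not from the original post):

```python
from sympy import symbols, Eq, solve

x = symbols('x')

# 4% interest on 20,000 plus 8% interest on x must equal
# 7% interest on the combined principal (20,000 + x).
equation = Eq(20000 * 0.04 + 0.08 * x, (20000 + x) * 0.07)
print(solve(equation, x))  # [60000.0000000000]
```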
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369075894355774, "perplexity": 363.4173940657746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526064.11/warc/CC-MAIN-20190719053856-20190719075856-00474.warc.gz"}
https://brilliant.org/problems/negate-the-roots/
# Negate the Roots

Algebra Level 3

The roots of the monic polynomial $x^5 + a x^4 + b x^3 + c x^2 + d x + e$ are $$-r_1$$, $$-r_2$$, $$-r_3$$, $$-r_4$$, and $$-r_5$$, where $$r_1$$, $$r_2$$, $$r_3$$, $$r_4$$, and $$r_5$$ are the roots of the polynomial $x^5 + 9x^4 + 13x^3 - 57 x^2 - 86 x + 120.$

Find $$|a+b+c+d+e|.$$

Details and assumptions

A root of a polynomial is a number where the polynomial is zero. For example, 6 is a root of the polynomial $$2x - 12$$.

A polynomial is monic if its leading coefficient is 1. For example, the polynomial $$x^3 + 3x - 5$$ is monic but the polynomial $$-x^4 + 2x^3 - 6$$ is not.

The notation $$| \cdot |$$ denotes the absolute value. The function is given by $|x | = \begin{cases} x & x \geq 0 \\ -x & x < 0 \\ \end{cases}$ For example, $$|3| = 3, |-2| = 2$$.
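Readers who want to check their algebra can do so numerically (a sympy sketch, not part of the original problem page): for an odd-degree monic polynomial p, the monic polynomial whose roots are the negated roots of p is -p(-x).

```python
from sympy import symbols, expand, Poly

x = symbols('x')
p = x**5 + 9*x**4 + 13*x**3 - 57*x**2 - 86*x + 120

# Negating every root of p gives a polynomial proportional to p(-x);
# multiplying by -1 makes it monic again, since the degree is odd.
q = expand(-p.subs(x, -x))
coeffs = Poly(q, x).all_coeffs()   # [1, a, b, c, d, e]
print(q)
print(abs(sum(coeffs[1:])))        # |a + b + c + d + e|
```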
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9550517201423645, "perplexity": 91.34753998659893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156690.32/warc/CC-MAIN-20180920234305-20180921014705-00172.warc.gz"}
https://brilliant.org/problems/1-wont-work-2-wont-work-3-wont-work-what-is-this/
# They all don't work!

Find the largest positive integer $$n$$ such that ${1}^{2}+{2}^{2}+{3}^{2}+\ldots+{n}^{2}$ is a perfect square.
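A brute-force search makes a good sanity check here (a Python sketch, not part of the original problem page). It uses the closed form 1² + 2² + … + n² = n(n+1)(2n+1)/6:

```python
from math import isqrt

hits = []
for n in range(1, 100_000):
    s = n * (n + 1) * (2 * n + 1) // 6   # sum of the first n squares
    if isqrt(s) ** 2 == s:               # exact perfect-square test
        hits.append(n)
print(hits)  # within this range the search finds only n = 1 and n = 24
```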
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1545492261648178, "perplexity": 449.3063567976175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944677.39/warc/CC-MAIN-20180420174802-20180420194802-00342.warc.gz"}
http://mathhelpforum.com/advanced-statistics/209716-kullback-lieber-divergence.html
1. ## Kullback-Leibler divergence

The Kullback-Leibler divergence between two distributions with pdfs f(x) and g(x) is defined by

$KL(F;G) = \int_{-\infty}^{\infty} \ln \left(\frac{f(x)}{g(x)}\right)f(x)\,dx$

Compute the Kullback-Leibler divergence when F is the standard normal distribution and G is the normal distribution with mean $\mu$ and variance 1. For what value of $\mu$ is the divergence minimized?

I was never instructed on this kind of divergence, so I am a bit lost on how to solve this kind of integral. I get that I can simplify my two normal equations in the natural log, but my guess is that I should wait until after I take the integral. Any help is appreciated.

2. ## Re: Kullback-Leibler divergence

Hi WUrunner,

If we simplify what's inside the logarithm we should get

$KL(F;G)=\int_{-\infty}^{\infty}\left(\frac{\mu^{2}}{2}-\mu x\right)\left(\frac{1}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}}\right)dx.$

Now multiply this out to get

$KL(F;G)=\frac{\mu^{2}}{2}\cdot\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\frac{x^{2}}{2}}\,dx + \frac{\mu}{\sqrt{2\pi}}\int_{-\infty}^{\infty}(-x)\,e^{-\frac{x^{2}}{2}}\,dx.$

Now we know that the first integral is $\sqrt{2\pi}$ (see Gaussian integral - Wikipedia, the free encyclopedia). The second integral can be computed using a u-substitution, or by noting that we're integrating an odd function over a symmetric interval, so it vanishes. When we put all the pieces together (if I've done the computations correctly) we should get

$KL(F;G)=\frac{\mu^{2}}{2},$

which is minimized when $\mu=0$.

Does this straighten things out? Let me know if anything is unclear. Good luck!
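The closed form is easy to confirm numerically (a scipy sketch, not from the original thread):

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def kl_divergence(mu):
    # KL(F;G) with F = N(0,1) and G = N(mu,1), by numerical integration.
    # Working with logpdf keeps the integrand well behaved in the tails.
    integrand = lambda x: (norm.logpdf(x) - norm.logpdf(x, loc=mu)) * norm.pdf(x)
    value, _ = integrate.quad(integrand, -np.inf, np.inf)
    return value

for mu in (0.0, 0.5, 1.0, 2.0):
    print(mu, kl_divergence(mu), mu**2 / 2)  # the last two columns agree
```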
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9922236800193787, "perplexity": 273.5293169070606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806310.85/warc/CC-MAIN-20171121021058-20171121041058-00389.warc.gz"}
http://www.menpo.org/installation/linux/expert.html
# Linux Expert Installation

It is important to note that, as part of the installation, you will be creating an isolated environment to execute Python inside. Make sure that this environment is activated in order to be able to use Menpo!

1. Download and install Miniconda either for Python 2 or for Python 3 on Linux. Make sure to choose the correct architecture (32/64) for your copy of Linux.

2. Run the installer from a terminal:

   $ cd ~/Downloads
   $ chmod +x Miniconda3-latest-Linux-x86_64.sh
   $ ./Miniconda3-latest-Linux-x86_64.sh

3. After following the instructions you should be able to access conda from a terminal.

4. Create a fresh conda environment by using

   $ conda create -n menpo python

5. Activate the environment by executing:

   $ source activate menpo
   (menpo)$

6. Install the whole Menpo Project and all of its dependencies:

   (menpo)$ conda install -c menpo menpoproject

   If you don't need all the packages, you can explicitly install a specific package with its dependencies as:

   (menpo)$ conda install -c menpo menpo
   (menpo)$ conda install -c menpo menpofit
   (menpo)$ conda install -c menpo menpodetect
   (menpo)$ conda install -c menpo menpowidgets
   (menpo)$ conda install -c menpo menpocli
   (menpo)$ conda install -c menpo menpo3d

7. Head over to the Examples page to begin experimenting with Menpo. We strongly advise you to read the User Guides for all the packages in order to understand the basic concepts behind the Menpo Project. They can be found in:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20540694892406464, "perplexity": 5672.427069993387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123270.78/warc/CC-MAIN-20170423031203-00054-ip-10-145-167-34.ec2.internal.warc.gz"}
https://people.maths.bris.ac.uk/~matyd/GroupNames/154/C2xC8sD5.html
## G = C2×C8⋊D5, order 160 = 2^5·5

### Direct product of C2 and C8⋊D5

Series:
Derived series: C1 — C10 — C2×C8⋊D5
Chief series: C1 — C5 — C10 — C20 — C4×D5 — C2×C4×D5 — C2×C8⋊D5
Lower central series: C5 — C10 — C2×C8⋊D5
Upper central series: C1 — C2×C4 — C2×C8

Generators and relations for C2×C8⋊D5:
G = < a,b,c,d | a^2=b^8=c^5=d^2=1, ab=ba, ac=ca, ad=da, bc=cb, dbd=b^5, dcd=c^-1 >

Subgroups: 184 in 68 conjugacy classes, 41 normal (19 characteristic)
C1, C2, C2, C2, C4, C4, C2^2, C2^2, C5, C8, C8, C2×C4, C2×C4, C2^3, D5, C10, C10, C2×C8, C2×C8, M4(2), C2^2×C4, Dic5, C20, D10, D10, C2×C10, C2×M4(2), C5⋊2C8, C40, C4×D5, C2×Dic5, C2×C20, C2^2×D5, C8⋊D5, C2×C5⋊2C8, C2×C40, C2×C4×D5, C2×C8⋊D5

Quotients: C1, C2, C4, C2^2, C2×C4, C2^3, D5, M4(2), C2^2×C4, D10, C2×M4(2), C4×D5, C2^2×D5, C8⋊D5, C2×C4×D5, C2×C8⋊D5

Smallest permutation representation of C2×C8⋊D5: on 80 points.

Generators in S80:
(1 72)(2 65)(3 66)(4 67)(5 68)(6 69)(7 70)(8 71)(9 32)(10 25)(11 26)(12 27)(13 28)(14 29)(15 30)(16 31)(17 38)(18 39)(19 40)(20 33)(21 34)(22 35)(23 36)(24 37)(41 61)(42 62)(43 63)(44 64)(45 57)(46 58)(47 59)(48 60)(49 77)(50 78)(51 79)(52 80)(53 73)(54 74)(55 75)(56 76)
(1 2 3 4 5 6 7 8)(9 10 11 12 13 14 15 16)(17 18 19 20 21 22 23 24)(25 26 27 28 29 30 31 32)(33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56)(57 58 59 60 61 62 63 64)(65 66 67 68 69 70 71 72)(73 74 75 76 77 78 79 80)
(1 14 37 62 75)(2 15 38 63 76)(3 16 39 64 77)(4 9 40 57 78)(5 10 33 58 79)(6 11 34 59 80)(7 12 35 60 73)(8 13 36 61 74)(17 43 56 65 30)(18 44 49 66 31)(19 45 50 67 32)(20 46 51 68 25)(21 47 52 69 26)(22 48 53 70 27)(23 41 54 71 28)(24 42 55 72 29)
(1 75)(2 80)(3 77)(4 74)(5 79)(6 76)(7 73)(8 78)(9 61)(10 58)(11 63)(12 60)(13 57)(14 62)(15 59)(16 64)(17 21)(19 23)(25 46)(26 43)(27 48)(28 45)(29 42)(30 47)(31 44)(32 41)(34 38)(36 40)(49 66)(50 71)(51 68)(52 65)(53 70)(54 67)(55 72)(56 69)

G:=sub<Sym(80)| (1,72)(2,65)(3,66)(4,67)(5,68)(6,69)(7,70)(8,71)(9,32)(10,25)(11,26)(12,27)(13,28)(14,29)(15,30)(16,31)(17,38)(18,39)(19,40)(20,33)(21,34)(22,35)(23,36)(24,37)(41,61)(42,62)(43,63)(44,64)(45,57)(46,58)(47,59)(48,60)(49,77)(50,78)(51,79)(52,80)(53,73)(54,74)(55,75)(56,76), (1,2,3,4,5,6,7,8)(9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64)(65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80), (1,14,37,62,75)(2,15,38,63,76)(3,16,39,64,77)(4,9,40,57,78)(5,10,33,58,79)(6,11,34,59,80)(7,12,35,60,73)(8,13,36,61,74)(17,43,56,65,30)(18,44,49,66,31)(19,45,50,67,32)(20,46,51,68,25)(21,47,52,69,26)(22,48,53,70,27)(23,41,54,71,28)(24,42,55,72,29), (1,75)(2,80)(3,77)(4,74)(5,79)(6,76)(7,73)(8,78)(9,61)(10,58)(11,63)(12,60)(13,57)(14,62)(15,59)(16,64)(17,21)(19,23)(25,46)(26,43)(27,48)(28,45)(29,42)(30,47)(31,44)(32,41)(34,38)(36,40)(49,66)(50,71)(51,68)(52,65)(53,70)(54,67)(55,72)(56,69)>;

G:=Group( (1,72)(2,65)(3,66)(4,67)(5,68)(6,69)(7,70)(8,71)(9,32)(10,25)(11,26)(12,27)(13,28)(14,29)(15,30)(16,31)(17,38)(18,39)(19,40)(20,33)(21,34)(22,35)(23,36)(24,37)(41,61)(42,62)(43,63)(44,64)(45,57)(46,58)(47,59)(48,60)(49,77)(50,78)(51,79)(52,80)(53,73)(54,74)(55,75)(56,76), (1,2,3,4,5,6,7,8)(9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64)(65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80), (1,14,37,62,75)(2,15,38,63,76)(3,16,39,64,77)(4,9,40,57,78)(5,10,33,58,79)(6,11,34,59,80)(7,12,35,60,73)(8,13,36,61,74)(17,43,56,65,30)(18,44,49,66,31)(19,45,50,67,32)(20,46,51,68,25)(21,47,52,69,26)(22,48,53,70,27)(23,41,54,71,28)(24,42,55,72,29), (1,75)(2,80)(3,77)(4,74)(5,79)(6,76)(7,73)(8,78)(9,61)(10,58)(11,63)(12,60)(13,57)(14,62)(15,59)(16,64)(17,21)(19,23)(25,46)(26,43)(27,48)(28,45)(29,42)(30,47)(31,44)(32,41)(34,38)(36,40)(49,66)(50,71)(51,68)(52,65)(53,70)(54,67)(55,72)(56,69) );

G=PermutationGroup([[(1,72),(2,65),(3,66),(4,67),(5,68),(6,69),(7,70),(8,71),(9,32),(10,25),(11,26),(12,27),(13,28),(14,29),(15,30),(16,31),(17,38),(18,39),(19,40),(20,33),(21,34),(22,35),(23,36),(24,37),(41,61),(42,62),(43,63),(44,64),(45,57),(46,58),(47,59),(48,60),(49,77),(50,78),(51,79),(52,80),(53,73),(54,74),(55,75),(56,76)], [(1,2,3,4,5,6,7,8),(9,10,11,12,13,14,15,16),(17,18,19,20,21,22,23,24),(25,26,27,28,29,30,31,32),(33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56),(57,58,59,60,61,62,63,64),(65,66,67,68,69,70,71,72),(73,74,75,76,77,78,79,80)], [(1,14,37,62,75),(2,15,38,63,76),(3,16,39,64,77),(4,9,40,57,78),(5,10,33,58,79),(6,11,34,59,80),(7,12,35,60,73),(8,13,36,61,74),(17,43,56,65,30),(18,44,49,66,31),(19,45,50,67,32),(20,46,51,68,25),(21,47,52,69,26),(22,48,53,70,27),(23,41,54,71,28),(24,42,55,72,29)], [(1,75),(2,80),(3,77),(4,74),(5,79),(6,76),(7,73),(8,78),(9,61),(10,58),(11,63),(12,60),(13,57),(14,62),(15,59),(16,64),(17,21),(19,23),(25,46),(26,43),(27,48),(28,45),(29,42),(30,47),(31,44),(32,41),(34,38),(36,40),(49,66),(50,71),(51,68),(52,65),(53,70),(54,67),(55,72),(56,69)]])

52 conjugacy classes

class:  1  2A 2B 2C 2D 2E 4A 4B 4C 4D 4E 4F 5A 5B 8A 8B 8C 8D 8E 8F 8G 8H 10A···10F 20A···20H 40A···40P
order:  1  2  2  2  2  2  4  4  4  4  4  4  5  5  8  8  8  8  8  8  8  8  10···10   20···20   40···40
size:   1  1  1  1  10 10 1  1  1  1  10 10 2  2  2  2  2  2  10 10 10 10 2···2     2···2     2···2

52 irreducible representations

dim  type  image    kernel       # reps
1    +     C1       C2×C8⋊D5     1
1    +     C2       C8⋊D5        4
1    +     C2       C2×C5⋊2C8    1
1    +     C2       C2×C40       1
1    +     C2       C2×C4×D5     1
1          C4       C4×D5        4
1          C4       C2×Dic5      2
1          C4       C2^2×D5      2
2    +     D5       C2×C8        2
2          M4(2)    C10          4
2    +     D10      C8           4
2    +     D10      C2×C4        2
2          C4×D5    C4           4
2          C4×D5    C2^2         4
2          C8⋊D5    C2           16

Matrix representation of C2×C8⋊D5 in GL3(𝔽41) generated by

[40  0  0]
[ 0 40  0]
[ 0  0 40]

[ 1  0  0]
[ 0  6  2]
[ 0 39 35]

[ 1  0  0]
[ 0  0 40]
[ 0  1  6]

[40  0  0]
[ 0  6 35]
[ 0 40 35]

G:=sub<GL(3,GF(41))| [40,0,0,0,40,0,0,0,40],[1,0,0,0,6,39,0,2,35],[1,0,0,0,0,1,0,40,6],[40,0,0,0,6,40,0,35,35] >;

C2×C8⋊D5 in GAP, Magma, Sage, TeX

C_2\times C_8\rtimes D_5 % in TeX

G:=Group("C2xC8:D5"); // GroupNames label
G:=SmallGroup(160,121); // by ID
G=gap.SmallGroup(160,121); # by ID
G:=PCGroup([6,-2,-2,-2,-2,-2,-5,362,50,69,4613]); // Polycyclic
G:=Group<a,b,c,d|a^2=b^8=c^5=d^2=1,a*b=b*a,a*c=c*a,a*d=d*a,b*c=c*b,d*b*d=b^5,d*c*d=c^-1>; // generators/relations
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9993125200271606, "perplexity": 5338.527301865018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00193.warc.gz"}
https://control.com/textbook/process-dynamics-and-pid-controller-tuning/tuning-techniques-compared/
A Comparison of PID Controller Tuning Techniques

Chapter 33 - Process Dynamics and PID Controller Tuning

In this section I will show screenshots from a process loop simulation program illustrating the effectiveness of Ziegler-Nichols open-loop (“Reaction Rate”) and closed-loop (“Ultimate”) PID tuning methods, and then contrast them against the results of my own heuristic tuning. As you will see in some of these cases, the results obtained by either Ziegler-Nichols method tend toward instability (excessive oscillation of the process variable following a setpoint change). This is not necessarily an indictment of Ziegler’s and Nichols’ recommendations as much as it is a demonstration of the power of understanding. Ziegler and Nichols presented a simple step-by-step procedure for obtaining approximate PID tuning constant values based on closed-loop and open-loop process responses, which could be applied by anyone regardless of their level of understanding of PID control theory. If I were tasked with drafting a procedure to instruct anyone to quantitatively determine PID constant values without an understanding of process dynamics or process control theory, I doubt my effort would be an improvement. Ultimately, robust PID control is attainable only at the hands of someone who understands how PID works, what each mode does (and why), and is able to distinguish between intrinsic process characteristics and instrument limitations. The purpose of this section is to clearly demonstrate the limitations of ignorantly-followed procedures, and contrast this “mindless” approach against the results of simple experimentation directed by qualitative understanding.

Each of the examples illustrated in this section was a simulation run on a computer program called PC-ControLab, developed by Wade Associates, Inc. Although these are simulated processes, in general I have found similar results using both Ziegler-Nichols and heuristic tuning methods on real processes. The control criteria I used for heuristic tuning were fast response to setpoint changes, with minimal overshoot or oscillation.

Tuning a “generic” process

Ziegler-Nichols open-loop tuning procedure

The first process tuned in simulation was a “generic” process, unspecific in its nature or application. Performing an open-loop test (two 10% output step-changes made in manual mode, both increasing) on this process resulted in the following behavior:

From the trend, we can see that this process is self-regulating, with multiple lags and some dead time. The reaction rate ($$R$$) is 20% over 15 minutes, or 1.333 percent per minute. Dead time ($$L$$) appears to be approximately 2 minutes. Following the Ziegler-Nichols recommendations for PID tuning based on these process characteristics (also including the 10% step-change magnitude $$\Delta m$$):

$K_p = 1.2 {\Delta m \over {R L}} = 1.2 {10\% \over {20\% \over 15 \hbox{ min}} 2 \hbox{ min}} = 4.5$

$\tau_i = 2 L = (2)(2 \hbox{ min}) = 4 \hbox{ min}$

$\tau_d = 0.5 L = (0.5)(2 \hbox{ min}) = 1 \hbox{ min}$

Applying the PID values of 4.5 (gain), 4 minutes per repeat (integral), and 1 minute (derivative) gave the following result in automatic mode (with a 10% setpoint change):

The result is reasonably good behavior with the PID values predicted by the Ziegler-Nichols open-loop equations, and would be acceptable for applications where some setpoint overshoot were tolerable.
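For readers who want to replicate this arithmetic, here is a small Python sketch of the open-loop ("Reaction Rate") calculations (the function and variable names are mine, not from the original text):

```python
def zn_open_loop(delta_m, R, L):
    """Ziegler-Nichols open-loop (reaction-rate) PID tuning.

    delta_m -- manual output step-change magnitude (%)
    R       -- observed reaction rate (% per minute)
    L       -- observed dead time (minutes)
    """
    K_p = 1.2 * delta_m / (R * L)  # controller gain
    tau_i = 2 * L                  # integral time (minutes per repeat)
    tau_d = 0.5 * L                # derivative time (minutes)
    return K_p, tau_i, tau_d

# The "generic" process above: 10% step, R = 20%/15 min, L = 2 min
print(zn_open_loop(10, 20 / 15, 2))   # (4.5, 4, 1.0)
```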
We can tell from analyzing the phase shift between the PV and OUT waveforms that the dominant control action here is proportional: each negative peak of the PV lines up fairly closely with each positive peak of the OUT, for this reverse-acting controller. If we were interested in minimizing overshoot and oscillation, the logical choice would be to reduce the gain value somewhat.

Ziegler-Nichols closed-loop tuning procedure

Next, the closed-loop, or “Ultimate,” tuning method of Ziegler and Nichols was applied to this process. Eliminating both integral and derivative control actions from the controller, and experimenting with different gain (proportional) values until self-sustaining oscillations of consistent amplitude were obtained, gave a gain value of 11:

From the trend, we can see that the ultimate period ($$P_u$$) is approximately 7 minutes in length. Following the Ziegler-Nichols recommendations for PID tuning based on these process characteristics:

$K_p = 0.6 K_u = (0.6)(11) = 6.6$

$\tau_i = {P_u \over 2} = {7 \hbox{ min} \over 2} = 3.5 \hbox { min}$

$\tau_d = {P_u \over 8} = {7 \hbox{ min} \over 8} = 0.875 \hbox { min}$

It should be immediately apparent that these tuning parameters will yield poor control. While the integral and derivative values are close to those predicted by the open-loop (Reaction Rate) method, the gain value calculated here is even larger than what was calculated before. Since we know proportional action was excessive in the last tuning attempt, and this one recommends an even higher gain value, we can expect our next trial to oscillate even worse. Applying the PID values of 6.6 (gain), 3.5 minutes per repeat (integral), and 0.875 minute (derivative) gave the following result in automatic mode:

This time the loop stability is a bit worse than with the PID values given by the Ziegler-Nichols open-loop tuning equations, owing mostly to the increased controller gain value of 6.6 (versus 4.5). Proportional action is still the dominant mode of control here, as revealed by the minimal phase shift between PV and OUT waveforms (ignoring the 180 degrees of shift inherent to the controller’s reverse action). In all fairness to the Ziegler-Nichols technique, the excessive controller gain value probably resulted more from the saturated output waveform than anything else. This led to more controller gain being necessary to sustain oscillations, leading to an inflated $$K_p$$ value.

Heuristic tuning procedure

From the initial open-loop (manual output step-change) test, we could see this process contains multiple lags in addition to about 2 minutes of dead time. Both of these factors tend to limit the amount of gain we can use in the controller before the process oscillates. Both Ziegler-Nichols tuning attempts confirmed this fact, which led me to try much lower gain values in my initial heuristic tests. Given the self-regulating nature of the process, I knew the controller needed integral action, but once again the aggressiveness of this action would necessarily be limited by the lag and dead times. Derivative action, however, would prove to be useful in its ability to help “cancel” lags, so I suspected my tuning would consist of relatively tame proportional and integral values, with a relatively aggressive derivative value. After some experimenting, the values I arrived at were 1.5 (gain), 10 minutes (integral), and 5 minutes (derivative).
These tuning values represent a proportional action only one-third as aggressive as the least-aggressive Ziegler-Nichols recommendation, an integral action less than half as aggressive as the Ziegler-Nichols recommendations, and a derivative action five times more aggressive than the most aggressive Ziegler-Nichols recommendation. The results of these tuning values in automatic mode are shown here:

With this PID tuning, the process responded with much less overshoot of setpoint than with the results of either Ziegler-Nichols technique.

Tuning a liquid level process

Ziegler-Nichols open-loop tuning procedure

The next simulated process I attempted to tune was a liquid level-control process. Performing an open-loop test (one 10% increasing output step-change, followed by a 10% decreasing output step-change, both made in manual mode) on this process resulted in the following behavior:

From the trend, the process appears to be purely integrating, as though the control valve were throttling the flow of liquid into a vessel with a constant out-flow. The reaction rate ($$R$$) on the first step-change is 50% over 10 minutes, or 5 percent per minute. Dead time ($$L$$) appears virtually nonexistent, estimated to be 0.1 minutes simply for the sake of having a dead-time value to use in the Ziegler-Nichols equations. Following the Ziegler-Nichols recommendations for PID tuning based on these process characteristics (also including the 10% step-change magnitude $$\Delta m$$):

$K_p = 1.2 {\Delta m \over {R L}} = 1.2 {10\% \over {50\% \over 10 \hbox{ min}} 0.1 \hbox{ min}} = 24$

$\tau_i = 2 L = (2)(0.1 \hbox{ min}) = 0.2 \hbox{ min}$

$\tau_d = 0.5 L = (0.5)(0.1 \hbox{ min}) = 0.05 \hbox{ min}$

Applying the PID values of 24 (gain), 0.2 minutes per repeat (integral), and 0.05 minutes (derivative) gave the following result in automatic mode:

The process variable certainly responds rapidly to the five increasing setpoint changes and also to the one large decreasing setpoint change, but the valve action is hopelessly chaotic. Not only would this “jittery” valve motion prematurely wear out the stem packing, but it would also result in vast over-consumption of compressed air to continually stroke the valve from one extreme to the other. Furthermore, we see evidence of “overshoot” at every setpoint change, most likely from excessive integral action.

We can see from the valve’s wild behavior, even during periods when the process variable is holding at setpoint, that the problem is not a loop oscillation, but rather the effects of process noise on the controller. The extremely high gain value of 24 is amplifying PV noise by that factor, and reproducing it on the output signal.

Ziegler-Nichols closed-loop tuning procedure

Next, I attempted to perform a closed-loop “Ultimate” gain test on this process, but I was not successful. Even the controller’s maximum possible gain value would not generate oscillations, due to the extremely crisp response of the process (minimal lag and dead times) and its integrating nature (constant phase shift of $$-90^{\circ}$$).

Heuristic tuning procedure

From the initial open-loop (manual output step-change) test, we could see this process was purely integrating. This told me it could be controlled primarily by proportional action, with very little integral action required, and no derivative action whatsoever. The presence of some process noise is the only factor limiting the aggressiveness of proportional action.
With this in mind, I experimented with increasingly aggressive gain values until I reached a point where I felt the output signal noise was at a maximum acceptable limit for the control valve. Then, I experimented with integral action to ensure reasonable elimination of offset. After some experimenting, the values I arrived at were 3 (gain), 10 minutes (integral), and 0 minutes (derivative). These tuning values represent a proportional action only one-eighth as aggressive as the Ziegler-Nichols recommendation, and an integral action fifty times less aggressive than the Ziegler-Nichols recommendation. The results of these tuning values in automatic mode are shown here:

You can see on this trend five 10% increasing setpoint value changes, with crisp response every time, followed by a single 50% decreasing setpoint step-change. In all cases, the process response clearly meets the criteria of rapid attainment of new setpoint values and no overshoot or oscillation.

If it were decided that the noise in the output signal was too detrimental for the valve, we would have the option of further reducing the gain value and (possibly) compensating for slow offset recovery with more aggressive integral action. We could also attempt the insertion of a damping constant into either the level transmitter or the controller itself, so long as this added lag did not cause oscillation problems in the loop. The best solution would be to find a way to isolate the level transmitter from noise, so that the process variable signal was much “quieter.” Whether or not this is possible depends on the process and on the particular transmitter used.

Tuning a temperature process

Ziegler-Nichols open-loop tuning procedure

This next simulated process is a temperature control process. Performing an open-loop test (two 10% increasing output step-changes, both made in manual mode) on this process resulted in the following behavior:

From the trend, the process appears to be self-regulating with a slow time constant (lag) and a substantial dead time. The reaction rate ($$R$$) on the first step-change is 30% over 30 minutes, or 1 percent per minute. Dead time ($$L$$) looks to be approximately 1.25 minutes. Following the Ziegler-Nichols recommendations for PID tuning based on these process characteristics (also including the 10% step-change magnitude $$\Delta m$$):

$K_p = 1.2 {\Delta m \over {R L}} = 1.2 {10\% \over {30\% \over 30 \hbox{ min}} 1.25 \hbox{ min}} = 9.6$

$\tau_i = 2 L = (2)(1.25 \hbox{ min}) = 2.5 \hbox{ min}$

$\tau_d = 0.5 L = (0.5)(1.25 \hbox{ min}) = 0.625 \hbox{ min}$

Applying the PID values of 9.6 (gain), 2.5 minutes per repeat (integral), and 0.625 minutes (derivative) gave the following result in automatic mode:

As you can see, the results are quite poor. The PV is still oscillating with a peak-to-peak amplitude of almost 20% from the last process upset at the time of the 10% downward SP change. Additionally, the output trend is rather noisy, indicating excessive amplification of process noise by the controller.

Ziegler-Nichols closed-loop tuning procedure

Next, the closed-loop, or “Ultimate,” tuning method of Ziegler and Nichols was applied to this process. Eliminating both integral and derivative control actions from the controller, and experimenting with different gain (proportional) values until self-sustaining oscillations of consistent amplitude were obtained, gave a gain value of 15:

From the trend, we can see that the ultimate period ($$P_u$$) is approximately 5.2 minutes in length.
Following the Ziegler-Nichols recommendations for PID tuning based on these process characteristics:

$K_p = 0.6 K_u = (0.6)(15) = 9$

$\tau_i = {P_u \over 2} = {5.2 \hbox{ min} \over 2} = 2.6 \hbox { min}$

$\tau_d = {P_u \over 8} = {5.2 \hbox{ min} \over 8} = 0.65 \hbox { min}$

These PID tuning values are quite similar to those predicted by the open-loop (“Reaction Rate”) method, and so we would expect to see very similar results:

As expected, we still see excessive oscillation following a 10% setpoint change, as well as excessive “noise” in the output trend.

Heuristic tuning procedure

From the initial open-loop (manual output step-change) test, we could see this process was self-regulating with a slow lag and substantial dead time. The self-regulating nature of the process demands at least some integral control action to eliminate offset, but too much will cause oscillation given the long lag and dead times. The existence of over 1 minute of process dead time also prohibits the use of aggressive proportional action. Derivative action, which is generally useful in overcoming lag times, will cause problems here by amplifying process noise. In summary, then, we would expect to use mild proportional, integral, and derivative tuning values in order to achieve good control with this process. Anything too aggressive will cause problems for this process.

After some experimenting, the values I arrived at were 3 (gain), 5 minutes (integral), and 0.5 minutes (derivative). These tuning values represent a proportional action only one-third as aggressive as the Ziegler-Nichols recommendation, and an integral action about half as aggressive as the Ziegler-Nichols recommendation. The results of these tuning values in automatic mode are shown here:

As you can see, the system’s response has almost no overshoot (with either a 10% setpoint change or a 15% setpoint change) and very little “noise” on the output trend. Response to setpoint changes is relatively crisp considering the naturally slow nature of the process: each new setpoint is achieved within about 7.5 minutes of the step-change.

Published under the terms and conditions of the Creative Commons Attribution 4.0 International Public License
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7157438397407532, "perplexity": 2052.680089627882}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056711.62/warc/CC-MAIN-20210919035453-20210919065453-00252.warc.gz"}
http://uncyclopedia.wikia.com/wiki/Facts
# Fact

(Redirected from Facts)

Fact - A label given to one's own opinion to discourage open debate or alternate viewpoints. e.g. "It has recently been proven in a doubleblind scientific study conducted by a panel of unbiased and as-far-as-you-know unbribed scientists who were not smoking reefer at the time (for scientific evaluation of mind altering substances) that white kittens with black paws are far cuter than all other kittens. -and puppies, too."

The latest statistics show that 50% of all facts are 90% true.

“My opinion is right because my opinion is a fact, your opinion is wrong because your opinion is an opinion!”

“Facts look like squid. You could use facts to prove anything that's even remotely true!”
~ Oscar Wilde on facts

For those without comedic tastes, the so-called experts at Wikipedia have an article about Fact.

“Federation Against Copyright Theft - F.A.C.T.!”
~ The truth behind 'fact'

A fact (from the Latin phrase factoff, factface, roughly translated: "go away") is an act of truthful intercourse, such as "Bob and I are in the act of facting."

## Common Usages of Facts

Facts are commonly used in many ways, such as to signify mathematical dismissal, to spend time spreading bullshit, to get smart, to make it all up as you go along, to bungle something, or to act carefully or foolishly, as if full of cheese doodles.

"The truth is more important than the facts." - Frank Lloyd Wright (1868-1959)

So if you are a true person and you have to tell a fact, let out the truth, frankly speaking...[Citation not needed at all; thank you very much]

Uncyclopedia uses ubiquitously undisputed facts in every article, asshole.

## History

The origin of the fact may be traced to a very old taboo and has been considered shocking from the first, though it is seen in print much more often now than in the past. Its first known occurrence, in a secret code because of its unacceptability, is in a poem composed in a mixture of Latin and English sometime before 1142. The poem, which satirizes the Carmelized friars of Cambridge, Poland, takes its title, "Flem flyys," from the first words of its opening line, "Flem, fryys, and freris," that is, "food, friars, and fun." The line that contains fact reads "Non sunt in coeli, quia gxddbov xxkxzt pg ifmk." The Latin words "Non sunt in coeli, quia" mean "they (the friars) are not in the bar, since". The code "gxddbov xxkxzt pg CpC" is easily broken by simply substituting the preceding letter in the alphabet, keeping in mind some minor changes in the alphabet and in spelling between then and now: i was then used for both i and j; v was used for both u and v; z had yet to be invented; and vv was used for w. This yields "fvccant [a fake Latin form] qsivvivys of BoB." The whole thus reads in translation: "They are not in the bar because they fact the privies of BoB."

Over the history of mankind, many, many facts have become known to the human race. According to the Worldwide Fact Keeping Association, the total number of known facts is 34,3452,456 per GIGGITY! but, as everyone knows, that association has no credibility whatsoever. Amazingly, there are still some facts that remain unknown. The WFKA estimates that there are 13, and has proven that there are definitely no more than 7. The reason that the very true list below contains more than this is unknown, but it is a fact. That means there's only 6 left. Or maybe 7. Or maybe 3.

## !Disclaimers!

Contrary to popular belief, facts can never be considered fun, despite occasionally driving people insane.

Facts are generally known to be True. Except when George W. Bush says them. Then, then they are false. Which is, confusingly, true.

## Examples of facts

• Encyclopedia Dramatica is superior to Uncyclopedia.
• ED and 4chan admins are ALL awesome.
• Everyone who likes Uncyclopedia doesn’t know how the internet works.
• Alcohol is man's greatest invention.
• Anonymous will rape you if you support scientology
• Facts can still be lies paradoxes are equal to dividing by zero
• You cannot divide by zero
• If <insert name here> edits this list George W. Bush will be sent to rape <insert name here>’s mother.
• If you write articles for Encyclopedia Dramatica, you will NEVER get laid; then again, if you're on a comedy wiki, you know you would never get laid either.
• Kayla had sexual relations with <insert name here>.
• This fact is false.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6099172234535217, "perplexity": 4501.4810512913455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500820617.70/warc/CC-MAIN-20140820021340-00468-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/188402-u-2-u-any-number-u-true-false-print.html
# √u^2 = u for any number u. True or False?

• September 20th 2011, 06:07 AM
naturebuilder

√u^2 = u for any number u. True or False?

i think it's true...

• September 20th 2011, 06:14 AM
Plato

Re: √u^2 = u for any number u. True or False?

Quote: Originally Posted by naturebuilder
i think it's true...

It is false: $\sqrt{(-2)^2}\ne -2~.$

• September 20th 2011, 11:19 AM
psolaki

Re: √u^2 = u for any number u. True or False?

Quote: Originally Posted by naturebuilder
i think it's true...

$\sqrt{(u)^2}= |u|$
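The counterexample is easy to see numerically as well (a quick Python check, mine rather than the forum's):

```python
import math

u = -2
print(math.sqrt(u**2))            # 2.0, not -2
print(math.sqrt(u**2) == abs(u))  # True: sqrt(u^2) equals |u|
```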
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5209348797798157, "perplexity": 22729.06919442679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657120057.96/warc/CC-MAIN-20140914011200-00331-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://forum.allaboutcircuits.com/threads/question-dealing-with-gain-bandwidth-of-inverting-op-amp.58857/
# Question dealing with gain bandwidth of inverting op-amp

#### uofmx12 Joined Mar 8, 2011 55

In this circuit: http://i812.photobucket.com/albums/zz41/uofmx12/E31ckt.jpg

A) What two estimates of the gain bandwidth product of your circuit can you make?
B) Needed to make a circuit with bandwidth of 50kHz with gain of 1000V/V, would this circuit be chosen? -No?
C) What if requirements were 10k Hz and 100V/V, would this circuit be chosen? -Yes?

#### steveb Joined Jul 3, 2008 2,436

In this circuit: http://i812.photobucket.com/albums/zz41/uofmx12/E31ckt.jpg A) what two estimates of the gain bandwidth product of your circuit can you make B) Needed to make a circuit with bandwidth of 50kHz with gain of 1000V/V, would this circuit be chosen? -No? C) What if requirements were 10k Hz and 100V/V, would this circuit be chosen? -Yes?

Is there any more information provided? For example, did they mention which OPAMP it is, or give the gain-bandwidth product for the OPAMP?

#### uofmx12 Joined Mar 8, 2011 55

inverting op-amp circuit and 100Hz frequency. That was provided.

#### steveb Joined Jul 3, 2008 2,436

inverting op-amp circuit and 100Hz frequency. That was provided.

OK, that helps. Do you understand what that means? In other words, do you know the meaning of gain bandwidth product? We need to have a baseline of whether your trouble is that you don't understand the definition, or if you just don't know how to apply the definition to that particular problem. Your above information is still a little vague, though. So is the 100 Hz frequency referenced to a particular inverting opamp, or to the one in your circuit? Or is it referenced to the OPAMP itself, without any components (open loop)? Last edited:

#### Audioguru Joined Dec 20, 2007 11,249

The input resistors attenuate the input 101 times, then the feedback resistors cause the opamp to amplify 1001 times. The result is an amplification of only about 9.9 times. The output will probably be very noisy (hiss). An audio opamp (something better and newer than a lousy old 741 opamp) that is open loop has a gain of 33,000 to about one million at 100Hz.

#### steveb Joined Jul 3, 2008 2,436

... the feedback resistors cause the opamp to amplify 1001 times. The result is an amplification of only about 9.9 times ...

Isn't it 1000 times for the 1M and 1K feedback portion, and a net gain of -9.8 due to voltage attenuation and source resistance of the input voltage divider? Last edited:

#### Audioguru Joined Dec 20, 2007 11,249

Isn't it 1000 times for the 1M and 1K feedback portion, and net gain of -9.8 due to voltage attenuation and source resistance of the input voltage divider?

The input resistors attenuate 101 times and the feedback causes 1001 times gain. So the result is a gain of about only -9.9 times.

#### steveb Joined Jul 3, 2008 2,436

The input resistors attenuate 101 times and the feedback causes 1001 times gain. So the result is a gain of about only -9.9 times.

How are you defining your feedback portion? If it's the two resistors 1M and 1K, then this is an inverting amplifier configuration and the gain of that portion is -R2/R1 = -1000. Looking at the whole circuit, you can make a Thevenin equivalent source of Vi*10/1010 and a source resistance of 10 in parallel with 1000, which is about 9.9 Ohms. Or Vi/101 and 9.9 ohm source resistance. Now the effective input resistance on the inverting amplifier (looking in from the ideal Thevenin voltage of Vi/101) is 1009.9 ohms due to the source resistance, and the net gain of the inverting Opamp is -1000000/1009.9 = -990.
Now combine this with the attenuated voltage and the net gain is -990/101, which is about equal to -9.8. Isn't it?

Alternatively, you can analyze the full circuit with equations and you come up with Av = -R3*R2/(R4*R1 + R3*(R4 + R1)), where R2 and R1 are the OPAMP feedback resistors 1M and 1K, respectively; and R4 and R3 are the voltage divider resistors 1000 and 10, respectively. Again, the gain is about -9.8. Last edited:

#### The Electrician Joined Oct 9, 2007 2,786

Another way of calculating the signal gain is to note that Ra, Rb and R1 form a T network. Use the delta-wye transformation to convert it into a pi network. Then the shunt resistors of the pi network can be ignored, because the one across the input voltage has no effect if the input voltage source has zero output impedance, and the shunt across the - input of the opamp has no effect because that terminal is a virtual ground. We are left with the series element of the pi network, which is 102000 ohms. Then the signal gain is 1000000/102000 = 500/51 = 9.80392

#### steveb Joined Jul 3, 2008 2,436

Another way of calculating the signal gain is to note that Ra, Rb and R1 form a T network. Use the delta-wye transformation to convert it into a pi network. Then the shunt resistors of the pi network can be ignored because the one across the input voltage has no effect if the input voltage source has zero output impedance, and the shunt across the - input of the opamp has no effect because that terminal is a virtual ground. We are left with the series element of the pi network, which is 102000 ohms. Then the signal gain is 1000000/102000 = 500/51 = 9.80392

Very elegant method!

I'm still a little confused on the OP's question. I have the gist of what he's asking, but the precise question and the precise information he was given is not fully clear to me. For example, "What are the two estimates one can make?". I'm not sure what this means. It seems we need to have the OPAMP's gain-bandwidth product to make any estimates, and once we have that, shouldn't we just have one estimate for the entire circuit GB product? Maybe I'm missing something?

#### The Electrician Joined Oct 9, 2007 2,786

#### uofmx12 Joined Mar 8, 2011 55

That is all that was asked. I left off no other information. Not an important question now.

#### t_n_k Joined Mar 6, 2009 5,455

Thought I'd simulate the results for a couple of different op-amps ...

#### ftsolutions Joined Nov 21, 2009 48

Sometimes I think that professors need to actually look at a databook once in a while so that their questions are not quite so open to variable interpretation (or generate more questions). Maybe that was the intent?
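For completeness, the closed-form gain from the discussion is easy to evaluate (a quick Python check using the formula steveb posted; resistor names follow his convention):

```python
# R1, R2: opamp input and feedback resistors (1k, 1M)
# R3, R4: voltage-divider resistors (10 ohms, 1k)
R1, R2, R3, R4 = 1_000, 1_000_000, 10, 1_000

Av = -R3 * R2 / (R4 * R1 + R3 * (R4 + R1))
print(Av)  # -9.80392..., matching the -500/51 T-network result
```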
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8013553619384766, "perplexity": 1707.4547196056496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890586.57/warc/CC-MAIN-20201026061044-20201026091044-00473.warc.gz"}
http://ask.sagemath.org/question/363/a-list-of-symbolic-variables
# A list of symbolic variables

Hello, I'm new to sage so I hope that I'm asking a very basic question. I'm trying to create a list of symbolic variables. I would like to be able to set the number of variables initially and then let sage create the list. I was (sort of) able to achieve this when I realized that the input for var is a string, so I wrote the following, which produces six symbolic variables for me:

```
n = 3
for i in range(2*n):
    var('s_' + str(i))
```

In my context, the variables are actually real, and they satisfy a system of equations that I would also like sage to produce. By playing with strings, and then using eval on them so they became expressions, I was able to produce a few of the simpler equations, which sage can solve. But when I run for loops indexed by i I can never seem to actually refer to the variables indexed by i. For example, the following will not make sense to sage:

```
for i in range(2*n):
    s_i = i
```

The only way I can think to achieve the above result is to create a string with a for loop that states the command I want, turn it into an expression, save it as an equation, and then include it in a big list of equations. Even so, I can't index the equations by i either, so I can't create the 2*n equations that I would need... I have to do a lot more with these variables, so I hope someone can tell me what I am doing terribly wrong. The first thing I want to do is create a second list, w, defined as: $w_k = s_{2n-k}$

asked Feb 05 '11 by David Ferrone

You're pretty close! The problem, as you've noted, is that "s_i" is merely "s_i"; there's no rule that says that the parts of (would-be) variable names after underscores get interpolated in this way. Here's how I'd do it, assuming I've understood you correctly:

```
sage: # first make a list of the variables
sage: n = 3
sage: s = list(var('s_%d' % i) for i in range(2*n))
sage: w = list(var('w_%d' % i) for i in range(2*n))
sage: s
[s_0, s_1, s_2, s_3, s_4, s_5]
sage: w
[w_0, w_1, w_2, w_3, w_4, w_5]
sage:
sage: # then make a list of equations
sage: eqs = list(w[k] == s[2*n-k-1] for k in range(2*n))
sage: eqs
[w_0 == s_5, w_1 == s_4, w_2 == s_3, w_3 == s_2, w_4 == s_1, w_5 == s_0]
```

Note that I had to put a -1 in there to get the relations I think you were aiming at. If I've misunderstood it's easy to change.

posted Feb 05 '11 by DSM

No, that's the correct equation, I changed it for simplicity. Thanks! This is much cleaner. David Ferrone (Feb 05 '11)

Nice! Though this doesn't quite answer how to access one of these variables if one didn't make the list s in the first place, which I struggled with for a while last night before giving up. But this is cleaner than that in any case, for sure. kcrisman (Feb 06 '11)

Incidentally, DSM, you clearly are conversant with a good range of the Sage codebase, in particular much of the same stuff I care about, and it would seem to be a crying shame that you aren't more involved in development. Do you go by another 'handle' on sage-devel or Trac - perhaps Doug S. McNeil? We could definitely use your help in review, enhancements, and fixes! kcrisman (Feb 06 '11)

Yeah, that's me; and I actually started answering questions here in the first place to work up sufficient karma to convince someone to look at a bug report of mine which is driving me crazy. :^) DSM (Feb 08 '11)

Hmm, you shouldn't need karma to get someone to look at bug reports on e.g. sage-support. Or here.
Usually if no one answers, it's because no one who knows happens to have time to respond - this has happened to me more than once. kcrisman (Feb 08 '11)
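As an aside, the same indexed-variable pattern can be sketched outside Sage with SymPy; this is an illustration added here, not part of the original thread:

```
# Sketch: building indexed symbols and equations programmatically in SymPy.
from sympy import symbols, Eq, solve

n = 3
s = list(symbols('s_0:%d' % (2 * n)))   # [s_0, ..., s_5]
w = list(symbols('w_0:%d' % (2 * n)))

# w_k = s_{2n-k-1}, mirroring the equations built in the answer above
eqs = [Eq(w[k], s[2 * n - k - 1]) for k in range(2 * n)]
print(eqs)             # [Eq(w_0, s_5), ..., Eq(w_5, s_0)]
print(solve(eqs, w))   # expresses each w_k in terms of the s variables
```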
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5297290682792664, "perplexity": 816.4814508097411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
https://compass.blogs.bristol.ac.uk/2021/04/13/change-in-the-air/
# Student Perspectives: Change in the air: Tackling Bristol's nitrogen oxide problem

A post by Dom Owens, PhD student on the Compass programme.

"Air pollution kills an estimated seven million people worldwide every year" - World Health Organisation

Many particulates and chemicals are present in the air in urban areas like Bristol, and this poses a serious risk to our respiratory health. It is difficult to model how these concentrations behave over time due to the complex physical, environmental, and economic factors they depend on, but identifying if and when abrupt changes occur is crucial for designing and evaluating public policy measures, as outlined in the local Air Quality Annual Status Report. Using a novel change point detection procedure to account for dependence in time and space, we provide an interpretable model for nitrogen oxide (NOx) levels in Bristol, telling us when these structural changes occur and describing the dynamics driving them in between.

## Model and Change Point Detection

We model the data with a piecewise-stationary vector autoregression (VAR) model:

$$\boldsymbol{Y}_{t} = \boldsymbol{\mu}^{(j)} + \sum_{i=1}^{p} \boldsymbol{A}_i^{(j)} \boldsymbol{Y}_{t-i} + \boldsymbol{\varepsilon}_{t}, \qquad k_{j-1} < t \leq k_j .$$

In between change points the time series $\boldsymbol{Y}_{t}$, a $d$-dimensional vector, depends on itself linearly over $p \geq 1$ previous time steps through parameter matrices $\boldsymbol{A}_i^{(j)}, i=1, \dots, p$ with intercepts $\boldsymbol{\mu}^{(j)}$, but at unknown change points $k_j, j = 1, \dots, q$ the parameters switch abruptly. $\{ \boldsymbol{\varepsilon}_{t} \in \mathbb{R}^d : t \geq 1 \}$ are white noise errors, and we have $n$ observations.

We phrase change point detection as a test of the hypotheses $H_0: q = 0$ vs. $H_1: q \geq 1$, i.e. the null states there is no change, and the alternative supposes there are possibly multiple changes. To test this, we use moving sum (MOSUM) statistics $T_k(G)$ extracted from the model; these compare the scaled difference between prediction errors for $G$ steps before and after $k$. When $k$ is in a stationary region, $T_k(G)$ will be close to zero, but when $k$ is near or at a change point, $T_k(G)$ will be large. If the maximum of these exceeds a threshold suggested by the distribution under the null, we reject $H_0$ and estimate change point locations with local peaks of $T_k(G)$.

## Air Quality Data

With data from Open Data Bristol over the period from January 2010 to March 2021, we have hourly readings of NOx levels at five locations (with Site IDs) around the city of Bristol, UK: AURN St Paul's (452); Brislington Depot (203); Parson Street School (215); Wells Road (270); Fishponds Road (463). Taking daily averages, we control for meteorological and seasonal effects such as temperature, wind speed and direction, and the day of the week, with linear regression, then analyse the residuals. We use the bandwidth $G=280$ days to ensure estimation accuracy relative to the number of parameters, which effectively asserts that two changes cannot occur in a shorter time span.

The MOSUM procedure rejects the null hypothesis and detects ten changes, pictured above. We might attribute the first few to the Joint Local Transport Plan which began in 2011, while the later changes may be due to policy implemented after a Council-supported motion in November 2016. The image below visualises the estimated parameters around the change point in November 2016; we can see that in segment seven there is only mild cross-dependence, but in segment eight the readings at Wells Road, St. Paul's, and Fishponds Road become strongly dependent on the other series.
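To make the MOSUM idea concrete, here is a rough sketch in Python; it is simplified to a univariate mean-change statistic rather than the VAR prediction-error version used in the post, and the bandwidth and noise-scale estimate are illustrative choices, not the post's:

```
# Sketch: a univariate mean-shift MOSUM, a simplified stand-in for the
# VAR prediction-error statistic described above. G is the bandwidth.
import numpy as np

def mosum(x, G):
    x = np.asarray(x, dtype=float)
    n = len(x)
    sigma = np.std(np.diff(x)) / np.sqrt(2)   # crude noise-scale estimate
    stats = np.full(n, np.nan)
    for k in range(G, n - G):
        left = x[k - G:k].mean()              # mean of the G points before k
        right = x[k:k + G].mean()             # mean of the G points after k
        stats[k] = np.sqrt(G / 2) * abs(right - left) / sigma
    return stats

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 300), rng.normal(2, 1, 300)])
T = mosum(y, G=80)
print(np.nanargmax(T))   # peaks near the true change point at index 300
```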
Scientists have generally worked under the belief that these concentration series have long-memory behaviour, meaning values long in the past have influence on today's values, and that this explains why the autocorrelation function (ACF) decays slowly, as seen above. Perhaps the most interesting conclusion we can draw from this analysis is that structural changes explain this slow decay - the image below displays much shorter range dependence for one such stationary segment.

## Conclusion

After all our work, we have a simple, interpretable model for NOx levels. In reality, physical processes often depend on each other in a non-linear fashion, and according to the geographical distances between where they are measured; accounting for this might provide better predictions, or tell a slightly different story. Moreover, there should be interactions with the other pollutants present in the air. Could we combine these for a large-scale, comprehensive air quality model? Perhaps the most important question to ask, however, is how this analysis can be used to benefit policy makers. If we could combine this with a causal model, we might be able to identify policy outcomes, or to draw comparisons with other regions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 20, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5170860290527344, "perplexity": 1275.8703846667163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662550298.31/warc/CC-MAIN-20220522220714-20220523010714-00243.warc.gz"}
https://edurev.in/course/quiz/attempt/7813_Electromagnetic-Wave-Propagation-MCQ-Test/24b22aa6-bd8a-4c7c-8a35-a584e18cc314
# Electromagnetic Wave Propagation - MCQ Test

## 20 Questions MCQ Test | GATE ECE (Electronics) 2022 Mock Test Series

QUESTION: 1 ...

QUESTION: 2 ...

QUESTION: 3 A non-magnetic medium has an intrinsic impedance 360∠30° Ω. Que: The loss tangent is

QUESTION: 4 A non-magnetic medium has an intrinsic impedance 360∠30° Ω. Que: The dielectric constant is

QUESTION: 5 The amplitude of a wave traveling through a lossy non-magnetic medium reduces by 18% every meter. The wave operates at 10 MHz and the electric field leads the magnetic field by 24°. Que: The propagation constant is

QUESTION: 6 The amplitude of a wave traveling through a lossy non-magnetic medium reduces by 18% every meter. The wave operates at 10 MHz and the electric field leads the magnetic field by 24°. Que: The skin depth is

QUESTION: 7 A 60 m long aluminium (σ = 3.5 x 10^7 S/m, μr = 1, εr = 1) pipe with inner and outer radii 9 mm and 12 mm carries a total current of 16 sin(10^6 πt) A. The effective resistance of the pipe is

QUESTION: 8 A silver-plated brass waveguide is operating at 12 GHz. If the thickness of the silver (σ = 6.1 x 10^7 S/m, μr = εr = 1) is at least 5δ, the minimum thickness required for the waveguide is

QUESTION: 9 A uniform plane wave in a lossy non-magnetic medium has ... The magnitude of the wave at z = 4 m and t = T/8 is

QUESTION: 10 A uniform plane wave in a lossy non-magnetic medium has ... Que: The loss suffered by the wave in the interval 0 < z < 3 m is. Solution: 1 Np = 8.686 dB, so 0.6 Np = 5.21 dB.

QUESTION: 11 Region 1, z < 0, and region 2, z > 0, are both perfect dielectrics. A uniform plane wave traveling in the uz direction has a frequency of 3 x 10^10 rad/s. Its wavelengths in the two regions are λ1 = 5 cm and λ2 = 3 cm. Que: On the boundary the reflected energy is

QUESTION: 12 Region 1, z < 0, and region 2, z > 0, are both perfect dielectrics. A uniform plane wave traveling in the uz direction has a frequency of 3 x 10^10 rad/s. Its wavelengths in the two regions are λ1 = 5 cm and λ2 = 3 cm. Que: The SWR is

QUESTION: 13 A uniform plane wave is incident from region 1 (μr = 1, σ = 0) to free space. If the amplitude of the incident wave is one-half that of the reflected wave in region 1, then the value of εr is

QUESTION: 14 A 150 MHz uniform plane wave is normally incident from air onto a material. Measurements yield an SWR of 3 and the appearance of an electric field minimum at 0.3λ in front of the interface. The impedance of the material is

QUESTION: 15 A plane wave is normally incident from air onto a semi-infinite slab of perfect dielectric (εr = 3.45).
The fraction of transmitted power is

QUESTION: 16 Consider three lossless regions: ... Que: The lowest frequency, at which a uniform plane wave incident from region 1 onto the boundary at z = 0 will have no reflection, is

QUESTION: 17 Consider three lossless regions: ... Que: If the frequency is 50 MHz, the SWR in region 1 is

QUESTION: 18 A uniform plane wave in air is normally incident onto a lossless dielectric plate of thickness λ/8 and of intrinsic impedance η = 260 Ω. The SWR in front of the plate is

QUESTION: 19 The E-field of a uniform plane wave propagating in a dielectric medium is given by ... The dielectric constant of the medium is

QUESTION: 20 An electromagnetic wave from an underwater source with perpendicular polarization is incident on a water-air interface at an angle of 20° with the normal to the surface. For water assume εr = 81, μr = 1. The critical angle θc is
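As an illustration, Question 20 can be worked numerically; the sketch below is an addition, not part of the original test, and uses sin θc = 1/n with n = √εr = 9 for water:

```
# Sketch: working Question 20. For water, n = sqrt(81) = 9, so the
# critical angle at a water-air interface is arcsin(1/9).
import math

n_water = math.sqrt(81)                     # refractive index of water here
theta_c = math.degrees(math.asin(1 / n_water))
print(round(theta_c, 2))                    # about 6.38 degrees

# The wave arrives at 20 degrees > theta_c, so it is totally
# internally reflected back into the water.
print(20 > theta_c)                         # True
```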
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8190106153488159, "perplexity": 2141.418555010462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629209.28/warc/CC-MAIN-20210617041347-20210617071347-00131.warc.gz"}
https://mathoverflow.net/questions/186196/finding-a-norm-on-mathbbrx-such-that-the-natural-embedding-of-a-metric/186205
# Finding a norm on $\mathbb{R}^X$ such that the "natural" embedding of a metric space $X$ in $\mathbb{R}^X$ becomes an isometry

Let $(X,d)$ be a metric space and consider the function $T:X \to \mathbb{R}^X$ such that $T(x)(y) = 1$ if $y = x$ and $0$ for all other $y$. Is there a norm on $\mathbb{R}^X$ such that $T$ is an isometry? That is, $||T(a) - T(b)|| = d(a,b)$ for all $a,b \in X$. I'm at a loss to know how to approach this. I didn't come up with any good ideas on how to define a proper norm, and I have absolutely no clue how to begin trying to prove such a norm could not exist. Any ideas?

• It seems to me like a far more natural choice of $T$ would be $T(x)(y)=d(x,y)$. If $X$ is bounded, this is an isometry with respect to the sup norm on the space of bounded functions from $X$ to $\mathbb{R}$. In general, if $X$ is infinite, I would not expect there to be any natural norm that is well-defined on all of $\mathbb{R}^X$ (in particular, there does not exist a norm that makes every projection continuous). Nov 4, 2014 at 17:51

• Yes, $d(x,y)$ is a much better embedding of $X$ in $\mathbb{R}^X$, but my initial idea was the embedding described above, and even though it wasn't the best one, I found the question of the existence of a suitable norm (no matter how "weird" it could potentially be) quite interesting in itself. – Ormi Nov 4, 2014 at 18:17

• The answer to your question is "almost yes" but I'm curious to know in what context this question arose. Were you asked to find such a norm, or did you read somewhere that such a norm exists? Nov 4, 2014 at 18:47

• I was asked to prove that every metric space can be isometrically embedded in a Banach space, so that the image is linearly independent. My first idea was the one I described, since the linear independence is obvious there, and if a good norm could be found, any normed space can be embedded in a Banach space, so that would give the desired result. Kuratowski's embedding seems to do the job much better (though I'm still not sure about the linear independence), but I found it curious to see if it was possible to find the right norm here and what the technique for doing it would be. – Ormi Nov 4, 2014 at 19:03

• I'm not asking this question with a "please solve my homework" intention, because I'm going to just keep trying to work out the linear independence for Kuratowski's embedding. I'm just genuinely curious about what can be achieved with my initial idea, so if you could give me some hints for that, or refer me to somewhere where I can read up about it, I'd be grateful. I also thought about defining the norm on the subspace of functions with finite supports, could this be the subspace you mentioned? – Ormi Nov 4, 2014 at 19:47

Note that your embedding map $T$ actually takes values in the subspace $\newcommand{\R}{{\mathbb R}}$ $c_{00}(X;\R)$ of finitely supported functions $X\to\R$. If you merely want a norm on this subspace which makes $T$ an embedding, then this is possible via the Arens–Eells construction:

R. Arens, J. Eells, On embedding uniform and topological spaces. Pacific J. Math. 6 (1956) no. 3, 397-403.

(Arens and Eells proved a more general result: if you just want the embedding theorem for metric spaces then it is in Weaver's book Lipschitz spaces and also in some more recent work of e.g. Godefroy and Kalton. Google should provide links to various downloadable papers/preprints.)

The embedding is usually phrased in terms of sending $x\in X$ to $\delta_x \in c_{00}(X;\R)$, which is just another way of describing your map $T$.
Of course the problem is defining the norm! One can either define it as an inf over various representations or a sup when paired with another more familiar Banach space. Let me choose the second way. Start by fixing a basepoint $x_0\in X$. Given $f\in \R^X$ with $f(x_0)=0$ define its Lipschitz norm to be $$\Vert f\Vert_L = \sup_{x,y\in X; x\neq y} \frac{|f(x)-f(y)|}{d(x,y)} \in [0,\infty] .$$ Then, given $c=\sum_{x\in X} c_x \delta_x$ where only finitely many of the $c_x$ are non-zero, define $$\Vert c \Vert_{\bf AE} = \sup\left\{ \sum_{x\in X} c_x f(x) \;\colon\; f\in\R^X, \Vert f\Vert_L\leq 1, f(x_0)=0 \right\}.$$ The completion of $c_{00}(X;{\mathbb R})$ with respect to the norm $\Vert\cdot\Vert_{\bf AE}$ is the Arens–Eells space of $X$ (I'm using the terminology and borrowing the definition from Weaver's book.)

Let's check that $x\mapsto\delta_x$ is an isometry. Let $x,y\in X$ with $x\neq y$. If $f(x_0)=0$ and $\Vert f\Vert_L\leq 1$ then pairing $f$ with $\delta_x-\delta_y$ gives $f(x)-f(y)$, which is bounded in modulus by $d(x,y)$ owing to the Lipschitz condition. So $\Vert \delta_x - \delta_y \Vert_{\bf AE} \leq d(x,y)$. On the other hand, consider the function $$h(z)=d(z,y)- d(x_0,y) \quad(z\in X).$$ Clearly $h(x_0)=0$, and the triangle inequality for $d$ shows us that $\Vert h\Vert_L\leq 1$. Hence $$\Vert \delta_x -\delta_y \Vert_{\bf AE} \geq \vert h(x)-h(y)\vert = d(x,y).$$ Putting these together gives $\Vert \delta_x - \delta_y \Vert_{\bf AE} =d(x,y)$ as required.

For those who like the category-theoretic perspective: the Arens–Eells space can be viewed as a left adjoint to the functor ${\bf U}: {\sf Ban} \to {\sf Met}_0$ where:

• the first category has Banach spaces as objects and bounded linear maps as the morphisms;
• the second category has pointed metric spaces as objects, and basepoint-preserving Lipschitz maps as the morphisms;
• and given a Banach space $E$, ${\bf U}(E)$ is defined to be the underlying metric space of $E$, with $0_E$ as the basepoint.

Then the Arens–Eells embedding can be regarded as the unit of this adjunction. In more "down-to-earth" language: given a pointed metric space $(X,x_0)$ let ${\bf AE}(X,x_0)$ be the Arens–Eells space as defined above. Then for any Banach space $E$ and any Lipschitz map $f: X \to E$ satisfying $f(x_0)=0$, there is a unique extension of $f$ to a continuous linear map $F: {\bf AE}(X,x_0) \to E$. Thus ${\bf AE}(X,x_0)$ can be viewed as the "free Banach space generated by $(X,x_0)$".

• Thank you for the answer. Unfortunately I can't quite find those books you suggested available anywhere. I guess I'll just familiarise myself with uniform spaces and then will be able to read the original Arens and Eells paper. – Ormi Nov 4, 2014 at 22:20

• The Godefroy-Kalton paper is available at kaltonmemorial.missouri.edu/docs/sm2003c.pdf Nov 5, 2014 at 14:02

• But is it a norm? It seems that $\delta_{x_0}$ has norm 0. Mar 19, 2015 at 21:37

• In my definition I require $f(x_0)=0$ Mar 19, 2015 at 23:25
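On a finite metric space the supremum defining $\Vert\cdot\Vert_{\bf AE}$ is a linear program, so the isometry can be checked numerically. Here is a small sketch (an illustration added here, not from the thread) using SciPy, with a made-up 4-point distance matrix:

```
# Sketch: the Arens-Eells norm on a finite metric space as a linear
# program (maximize sum_x c_x f(x) over 1-Lipschitz f with f(x0) = 0),
# then checking that ||delta_x - delta_y|| recovers d(x, y).
import itertools
import numpy as np
from scipy.optimize import linprog

# a toy 4-point metric space (distance matrix made up for this example)
D = np.array([[0, 1, 2, 3],
              [1, 0, 1, 2],
              [2, 1, 0, 2],
              [3, 2, 2, 0]], dtype=float)
n = D.shape[0]

def ae_norm(c, D, x0=0):
    # variables: f(0), ..., f(n-1); maximize c.f, i.e. minimize -c.f
    A_ub, b_ub = [], []
    for i, j in itertools.permutations(range(n), 2):
        row = np.zeros(n); row[i], row[j] = 1, -1
        A_ub.append(row); b_ub.append(D[i, j])   # f(i) - f(j) <= d(i, j)
    A_eq = np.zeros((1, n)); A_eq[0, x0] = 1     # basepoint: f(x0) = 0
    res = linprog(-np.asarray(c), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=[0], bounds=[(None, None)] * n)
    return -res.fun

c = np.zeros(n); c[1], c[2] = 1, -1              # delta_1 - delta_2
print(ae_norm(c, D), D[1, 2])                    # both 1.0
```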
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9656480550765991, "perplexity": 147.09690176911226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337371.9/warc/CC-MAIN-20221003003804-20221003033804-00069.warc.gz"}
http://tex.stackexchange.com/questions/37574/different-headers-for-different-chapters?answertab=active
I cannot find in any latex forums a solution for my problem with chapter-specific headers. I am compiling a one-sided report with various chapters. For all the chapters, I have the header showing the chapter's number and title. But for one of them I would like to have the header showing the section's number and title instead (this is due to the nature of that particular chapter, which is longer than the others). Everything works fine and I manage to show the section in the header. However, all the following chapters will not go back to the initial settings that show the chapter in the header.

This is a minimal example with 3 chapters and you can run it to see the results. Chapters 1 and 3 should show the chapter in the header and only Chapter 2 is supposed to have the sections in its header. But Chapter 3 keeps applying the settings of the preceding chapter although I've changed them. I tried resetting to \pagestyle{plain} or \pagestyle{empty} before adding the \renewcommand{\chaptermark} but it doesn't work.

```
\documentclass[11pt,a4paper,oneside]{report}
\usepackage{fancyhdr}
\begin{document}
\pagestyle{fancy}
\renewcommand{\chaptermark}[1]{%
  \markboth{\thechapter.\ #1}{}}
\chapter{My First Chapter}
\section{Section 1} \clearpage
\section{Section 2} \clearpage
\section{Section 3} \clearpage
\renewcommand{\sectionmark}[1]{%
  \markboth{\thesection.\ #1}{}}
\chapter{My Second Chapter}
\section{Section 1} \clearpage
\section{Section 2} \clearpage
\section{Section 3} \clearpage
\renewcommand{\chaptermark}[1]{%
  \markboth{\thechapter.\ #1}{}}
\chapter{My Third Chapter}
\section{Section 1} \clearpage
\section{Section 2} \clearpage
\section{Section 3} \clearpage
\end{document}
```

You might want to increase \headheight -- see tex.stackexchange.com/questions/37585/… – lockstep Dec 8 '11 at 14:47

I'll check that, thank you. – Marie Dec 8 '11 at 16:50

You have to re-redefine \sectionmark, not \chaptermark:

```
\documentclass[11pt,a4paper,oneside]{report}
\usepackage[english]{babel}
\usepackage{blindtext}% for demo only
\usepackage{fancyhdr}
\begin{document}
\pagestyle{fancy}
\renewcommand{\chaptermark}[1]{%
  \markboth{\thechapter.\ #1}{}}
\blinddocument
\renewcommand{\sectionmark}[1]{%
  \markboth{\thesection.\ #1}{}}
\blinddocument
\renewcommand{\sectionmark}[1]{}
\blinddocument
\end{document}
```

That's because the sections will be placed at the headline by \sectionmark. This will happen as long as \sectionmark uses \markboth or \markright, and the \markboth at \sectionmark will overwrite the \markboth at \chaptermark. But after the re-redefinition of \sectionmark it no longer uses any of these commands. Because of this, the already existing \markboth at \chaptermark won't be overwritten by \sectionmark any longer.

An advantage of this solution is that you still have a head if there are several pages before the first section, e.g.,

```
\documentclass[11pt,a4paper,oneside]{report}
\usepackage[english]{babel}
\usepackage{blindtext}% for demo only
\usepackage{fancyhdr}
\begin{document}
\pagestyle{fancy}
\renewcommand{\chaptermark}[1]{%
  \markboth{\thechapter.\ #1}{}}
\blinddocument
\renewcommand{\sectionmark}[1]{%
  \markboth{\thesection.\ #1}{}}
\chapter{Test}
\blindtext[10]
\section{First Section at Test}
\blindtext
\section{Second Section at Test}
\blindtext[5]
\section{Third Section at Test}
\blindtext[5]
\renewcommand{\sectionmark}[1]{}
\blinddocument
\end{document}
```

Here at page 6 you may find the chapter at the head, and with the first section at page 7 you will get the section.

Thanks as well for this other solution!
Also works well on the small example. I'll check and run with my big file. – Marie Dec 8 '11 at 17:00

@Marie: You don't need to say "thanks", just use an upvote if the answer is useful too. – Schweinebacke Dec 9 '11 at 7:28

Define two page styles:

```
\documentclass[11pt,a4paper,oneside]{report}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancypagestyle{normal}{%
\fancypagestyle{special}{%
\renewcommand{\chaptermark}[1]{%
  \markboth{\thechapter.\ #1}{}}
\renewcommand{\sectionmark}[1]{%
  \markright{\thesection.\ #1}{}}
\begin{document}
\chapter{My First Chapter}
\pagestyle{normal}
\section{Section 1} \clearpage
\section{Section 2} \clearpage
\section{Section 3} \clearpage
\chapter{My Second Chapter}
\pagestyle{special}
\section{Section 1} \clearpage
\section{Section 2} \clearpage
\section{Section 3} \clearpage
\chapter{My Third Chapter}
\pagestyle{normal}
\section{Section 1} \clearpage
\section{Section 2} \clearpage
\section{Section 3} \clearpage
\end{document}
```

Don't forget to modify \headheight as suggested during compilation:

Package Fancyhdr Warning: \headheight is too small (12.0pt): Make it at least 13.59999pt. We now make it that large for the rest of the document. This may cause the page layout to be inconsistent, however.

Thank you @egreg! I understand now how the \fancypagestyle command works; there is only one example in the package documentation and I didn't manage to adapt it to my case. Now I also understand more the use of \leftmark and \rightmark. I'll also check the \headheight as suggested during the compilation of my big file. Thanks again. – Marie Dec 8 '11 at 16:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.756809651851654, "perplexity": 3095.7822993759637}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446248.56/warc/CC-MAIN-20151124205406-00201-ip-10-71-132-137.ec2.internal.warc.gz"}
https://findfilo.com/maths-question-answers/consider-two-curves-c-1-y-2-4-sqrt-y-x-and-c-2-x-2zly
Consider two curves $C_1 : y^2 = 4[\sqrt{y}]\,x$ and $C_2 : x^2 = 4[\sqrt{x}]\,y$, where $[\cdot]$ denotes the greatest integer function. Then the area of the region enclosed by these two curves within the square formed by the lines ... is

(Filo, Class 12 Math, Calculus, Area)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.888611912727356, "perplexity": 679.1070024334282}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00359.warc.gz"}
https://hal-cea.archives-ouvertes.fr/cea-01383761
# Multi-frequency study of the newly confirmed supernova remnant MCSNR J0512−6707 in the Large Magellanic Cloud

Abstract: Aims. We present a multi-frequency study of the supernova remnant MCSNR J0512−6707 in the Large Magellanic Cloud. Methods. We used new data from XMM-Newton to characterise the X-ray emission and data from the Australian Telescope Compact Array, the Magellanic Cloud Emission Line Survey, and Spitzer to gain a picture of the environment into which the remnant is expanding. We performed a morphological study, determined radio polarisation and magnetic field orientation, and performed an X-ray spectral analysis. Results. We estimated the remnant's size to be 24.9 (±1.5) × 21.9 (±1.5) pc, with the major axis rotated ~29° east of north. Radio polarisation images at 3 cm and 6 cm indicate a higher degree of polarisation in the northwest and southeast, tangentially oriented to the SNR shock front, indicative of an SNR compressing the magnetic field threading the interstellar medium. The X-ray spectrum is unusual as it requires a soft (~0.2 keV) collisional ionisation equilibrium thermal plasma of interstellar medium abundance, in addition to a harder component. Using our fit results and the Sedov dynamical model, we showed that the thermal emission is not consistent with a Sedov remnant. We suggested that the thermal X-rays can be explained by MCSNR J0512−6707 having initially evolved into a wind-blown cavity and now interacting with the surrounding dense shell. The origin of the hard component remains unclear. We could not determine the supernova type from the X-ray spectrum. Indirect evidence for the type is found in the study of the local stellar population and star formation history in the literature, which suggests a core-collapse origin. Conclusions. MCSNR J0512−6707 likely resulted from the core-collapse of a high mass progenitor which carved a low density cavity into its surrounding medium, with the soft X-rays resulting from the impact of the blast wave with the surrounding shell. The unusual hard X-ray component requires deeper and higher spatial resolution radio and X-ray observations to confirm its origin.

### Citation

P. J. Kavanagh, M. Sasaki, L. M. Bozzetto, S. D. Points, M. D. Filipović, et al. Multi-frequency study of the newly confirmed supernova remnant MCSNR J0512−6707 in the Large Magellanic Cloud. Astronomy and Astrophysics - A&A, EDP Sciences, 2015, 583, pp.A121. ⟨10.1051/0004-6361/201526987⟩. ⟨cea-01383761⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8824122548103333, "perplexity": 3594.0799368718613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315695.36/warc/CC-MAIN-20190821001802-20190821023802-00400.warc.gz"}
http://georeference.org/doc/transverse_mercator.htm
# Transverse Mercator

A conformal cylindrical projection: the transverse aspect of the Mercator projection. Also known as Gauss Conformal (ellipsoidal form only), Gauss-Kruger (ellipsoidal form only) and Transverse Cylindrical Orthomorphic. Shown greatly zoomed in since profound distortion occurs outside the target region.

Limitations: The accuracy of Transverse Mercator projections quickly decreases away from the central meridian. Therefore, it is strongly recommended to restrict the longitudinal extent of the projected region to +/- 10 degrees from the central meridian. [The US Army standard allows +/- 24 degrees from the central meridian.] This requirement is met within all State Plane zones that use Transverse Mercator projections.

Scale: True along the central meridian or along two straight lines on the map equidistant from and parallel to the central meridian. Scale is constant along any straight line on the map parallel to the central meridian. These lines are only approximately straight for the projection of the ellipsoid, which will be the case within Manifold when ellipsoidal Earth models (the standards) are used. Scale increases with distance from the central meridian, and becomes infinite 90° from the central meridian.

Distortion: Infinitesimally small circles of equal size on the globe appear as circles on the map (indicating conformality) but increase in size away from the central meridian (indicating area distortion).

Usage: Many of the topographic and planimetric map quadrangles throughout the world at scales of 1:24,000 to 1:250,000. Basis for the Universal Transverse Mercator (UTM) grid and projection. Basis for the State Plane Coordinate System in U.S. states having predominantly north-south extent. Recommended for conformal mapping of regions having predominantly north-south extent.

Origin: Presented by Johann Heinrich Lambert (1728-1777) of Alsace in 1772. Formulas for ellipsoidal use developed by Carl Friedrich Gauss of Germany in 1822 and by L. Kruger of Germany, L.P. Lee of New Zealand, and others in the 20th century.

Options: Specifying latitude origin and longitude origin centers the map projection.
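For the spherical case the forward equations are short enough to sketch. The following illustration is based on the standard spherical formulas; the ellipsoidal Gauss-Kruger series actually used for State Plane and UTM work is longer and omitted here, and the Earth radius and scale factor are illustrative defaults:

```
# Sketch: forward spherical Transverse Mercator (not the ellipsoidal form).
import math

def tm_forward(lat, lon, lat0=0.0, lon0=0.0, R=6371000.0, k0=1.0):
    phi, lam = math.radians(lat), math.radians(lon)
    phi0, lam0 = math.radians(lat0), math.radians(lon0)
    B = math.cos(phi) * math.sin(lam - lam0)
    # x = (k0*R/2) * ln((1+B)/(1-B)), written via atanh; blows up as the
    # point approaches 90 degrees from the central meridian
    x = k0 * R * math.atanh(B)
    y = k0 * R * (math.atan2(math.tan(phi), math.cos(lam - lam0)) - phi0)
    return x, y

# scale grows away from the central meridian: compare 1 and 30 degrees out
print(tm_forward(45.0, 1.0))
print(tm_forward(45.0, 30.0))
```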
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8563032150268555, "perplexity": 2978.689332823977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738659512.19/warc/CC-MAIN-20160924173739-00087-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.legisquebec.gouv.qc.ca/en/version/cr/q-2,%20r.%2026.1?code=se:14&history=20220704
### Q-2, r. 26.1 - Regulation respecting the operation of industrial establishments 14. A holder of an authorization to operate an industrial establishment shall keep an up-to-date record in which he shall enter any infringement of the contaminant discharge standards applicable to him and established by the Minister under the first paragraph of section 26 of the Act. The record shall contain, for each infringement, (1)  the exact time at which the holder became aware of the infringement; (2)  the exact location and time at which the infringement occurred; (3)  the causes of the infringement and the circumstances in which it occurred; and (4)  the measures taken or planned by the holder to reduce or eliminate the effects of the infringement and to eliminate and prevent its causes. A holder of an authorization shall send to the Minister, within 30 days of the end of each calendar month, a copy of the information entered in the record during the previous month. The information in the record shall be conserved by the holder for at least 2 years from the date on which the information is sent to the Minister. O.C. 601-93, s. 14; O.C. 871-2020, s. 7. 14. A holder of a depollution attestation shall keep an up-to-date record in which he shall enter any infringement of the contaminant discharge standards applicable to him and established by the Minister under the first paragraph of section 31.15 of the Act. The record shall contain, for each infringement, (1)  the exact time at which the holder became aware of the infringement; (2)  the exact location and time at which the infringement occurred; (3)  the causes of the infringement and the circumstances in which it occurred; and (4)  the measures taken or planned by the holder to reduce or eliminate the effects of the infringement and to eliminate and prevent its causes. A holder of a depollution attestation shall send to the Minister, within 30 days of the end of each calendar month, a copy of the information entered in the record during the previous month. The information in the record shall be conserved by the holder for at least 2 years from the date on which the information is sent to the Minister. O.C. 601-93, s. 14.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612118363380432, "perplexity": 1907.2043011209028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00190.warc.gz"}
http://rosa.unipr.it/FSDA/ScoreYJpn.html
# ScoreYJpn

Computes the score test for the YJ transformation separately for positive and negative observations.

## Syntax

• outSC=ScoreYJpn(y,X)
• outSC=ScoreYJpn(y,X,Name,Value)

## Description

outSC = ScoreYJpn(y, X): example in which positive and negative observations require the same lambda.

outSC = ScoreYJpn(y, X, Name, Value): example in which positive and negative observations require different lambdas.

## Examples

### Example in which positive and negative observations require the same lambda.

rng('default')
rng(1)
n=100;
y=randn(n,1);
% Transform the values to find out if we can recover the true value of
% the transformation parameter
la=0.5;
ytra=normYJ(y,[],la,'inverse',true);
% Start the analysis
X=ones(n,1);
[outSC]=ScoreYJ(ytra,X,'intercept',0);
[outSCpn]=ScoreYJpn(ytra,X,'intercept',0);
la=[-1 -0.5 0 0.5 1]';
disp([la outSCpn.Score(:,1) outSC.Score outSCpn.Score(:,2)])
% Comment: if we consider the 5 most common values of lambda,
% the value of the score test when lambda=0.5 is the only one which is not
% significant. Both values of the score test for positive and negative
% observations confirm that this value of the transformation parameter is
% OK for both sides of the distribution.

-1.0000 40.2357 24.1288 15.7149
-0.5000 20.4741 14.7964 9.9619
0 8.9009 6.9774 4.5230
0.5000 1.3042 0.2740 -1.0664
1.0000 -4.8797 -6.2978 -7.9574

### Example in which positive and negative observations require different lambdas.

rng(1000)
n=100;
y=randn(n,1);
% Transform positive and negative values in different ways
lapos=0;
ytrapos=normYJ(y(y>=0),[],lapos,'inverse',true);
laneg=1;
ytraneg=normYJ(y(y<0),[],laneg,'inverse',true);
ytra=[ytrapos; ytraneg];
% Start the analysis
X=ones(n,1);
[outSC]=ScoreYJ(ytra,X,'intercept',0);
[outSCpn]=ScoreYJpn(ytra,X,'intercept',0);
la=[-1 -0.5 0 0.5 1]';
disp([la outSCpn.Score(:,1) outSC.Score outSCpn.Score(:,2)])
% Comment: if we consider the 5 most common values of lambda,
% the value of the score test when lambda=0.5 is the only one which is not
% significant. However, when lambda=0.5 the score test for negative
% observations is highly significant. The difference between the test for
% positive and the test for negative is 2.7597+0.7744=3.5341, which is very
% large. This indicates that the two tails need a different value of the
% transformation parameter.

-1.0000 89.5466 39.6867 28.3433
-0.5000 33.4110 24.9236 19.4072
0 10.3643 11.4446 10.8674
0.5000 -0.7744 0.8272 2.7597
1.0000 -9.8327 -9.4050 -6.8708

## Related Examples

### Extended score with all default options for the wool data.

XX=load('wool.txt');
y=XX(:,end);
X=XX(:,1:end-1);
% Score test using the five most common values of lambda.
% In this case (given that all observations are positive) the extended
% score test for positive observations reduces to the standard score test,
% while that for negative observations is equal to NaN.
[outSc]=ScoreYJpn(y,X);

### Extended score test using Darwin data given by Yeo and Johnson.

y=[6.1, -8.4, 1.0, 2.0, 0.7, 2.9, 3.5, 5.1, 1.8, 3.6, 7.0, 3.0, 9.3, 7.5, -6.0]';
n=length(y);
X=ones(n,1);
% Score and extended score test in the grid of lambda 1, 1.1, ..., 2
la=[1:0.1:2];
% Given that there are no explanatory variables, the test must be
% called with intercept 0
outpn=ScoreYJpn(y,X,'intercept',0,'la',la);
out=ScoreYJ(y,X,'intercept',0,'la',la);
disp([la' outpn.Score(:,1) out.Score outpn.Score(:,2)])

## Input Arguments

### y — Response variable. Vector.

A vector with n elements that contains the response variable. It can be either a row or a column vector.
Data Types: single | double

### X — Predictor variables. Matrix.

Data matrix of explanatory variables (also called 'regressors') of dimension (n x p-1). Rows of X represent observations, and columns represent variables. Missing values (NaN's) and infinite values (Inf's) are allowed, since observations (rows) with missing or infinite values will automatically be excluded from the computations.

Data Types: single | double

### Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'intercept',false, 'la',[0 0.5], 'nocheck',1

### intercept — Indicator for constant term. true (default) | false.

Indicator for the constant term (intercept) in the fit, specified as the comma-separated pair consisting of 'Intercept' and either true to include or false to remove the constant term from the model.

Example: 'intercept',false

Data Types: boolean

### la — Transformation parameter. Vector.

It specifies for which values of the transformation parameter it is necessary to compute the score test. The default value of lambda is la=[-1 -0.5 0 0.5 1], that is, the five most common values of lambda.

Example: 'la',[0 0.5]

Data Types: double

### nocheck — Check input arguments. Scalar.

If nocheck is equal to 1, no check is performed on matrix y and matrix X. Notice that y and X are left unchanged. In other words, the additional column of ones for the intercept is not added. As default, nocheck=0.

Example: 'nocheck',1

Data Types: double

## Output Arguments

### outSC — Structure containing the following fields:

Score — score test. Matrix of size length(lambda)-by-2 which contains the value of the score test for each value of lambda specified in the optional input parameter la. The first column refers to the test for positive observations while the second column refers to the test for negative observations. If la is not specified, the number of rows of outSC.Score is equal to 5 and will contain the values of the score test for the 5 most common values of lambda.

## References

Yeo, I.K. and Johnson, R. (2000), A new family of power transformations to improve normality or symmetry, "Biometrika", Vol. 87, pp. 954-959.

Atkinson, A.C. and Riani, M. (2018), Extensions of the score test, Submitted.
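For readers outside MATLAB, the Yeo-Johnson transformation itself is available in SciPy; this sketch is an addition, not part of the FSDA documentation, and mirrors what normYJ does for a fixed lambda:

```
# Sketch: the Yeo-Johnson transformation in Python via SciPy.
import numpy as np
from scipy.stats import yeojohnson

rng = np.random.default_rng(1)
y = rng.standard_normal(100)

y_tra = yeojohnson(y, lmbda=0.5)   # transform with lambda fixed at 0.5
print(y_tra[:5])

# with lmbda omitted, SciPy instead estimates lambda by maximum likelihood
y_ml, lam_hat = yeojohnson(y)
print(lam_hat)                     # should be near 1 for Gaussian input
```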
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.88258957862854, "perplexity": 3278.326711308793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370520039.50/warc/CC-MAIN-20200404042338-20200404072338-00391.warc.gz"}
https://infoscience.epfl.ch/record/147334
### Abstract

The scaling properties of DNA knots of different complexities were studied by atomic force microscope. Following two different protocols, DNA knots are adsorbed onto a mica surface in regimes of (i) strong binding, which induces a kinetic trapping of the three-dimensional (3D) configuration, and of (ii) weak binding, which permits (partial) relaxation on the surface. In (i) the radius of gyration of the adsorbed DNA knot scales with the 3D Flory exponent $\nu = 0.60$ within error. In (ii), we find $\nu \approx 0.66$, a value between the 3D and 2D ($\nu = 3/4$) exponents. Evidence is also presented for the localization of knot crossings in 2D under weak adsorption conditions.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8336346745491028, "perplexity": 2688.5218896863944}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039375537.73/warc/CC-MAIN-20210420025739-20210420055739-00137.warc.gz"}
https://cs.stackexchange.com/questions/60774/the-class-of-languages-that-can-be-certified-in-a-small-amount-of-space
# The class of languages that can be certified in a small amount of space

NP can be characterized in two different ways; one of them is that it is the class of languages that can be certified by a witness in polynomial time. I wonder, if we consider the same notion but with space considerations to get a class $X$: is such a class known to be of any importance? Why?

To be more precise, $L\in X$ iff there exists a TM $M$ s.t. $\forall x\in \{0,1\}^*$ $(x\in L \iff \exists u \in \{0,1\}^{p(n)}$ s.t. $M(x,u)=1)$, where $M$ uses space bounded by a polynomial $P$.

Clearly, NP $\subseteq$ X. Is there any interesting thing that we can know about $X$?

L $\subseteq$ NL $\subseteq$ coNLOGLOGTIME-uniform SAC$^1$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7464661598205566, "perplexity": 328.2313467991593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316785.68/warc/CC-MAIN-20190822064205-20190822090205-00313.warc.gz"}
https://www.physicsforums.com/threads/linear-algebra-span-linear-independence-proof.640234/
# Linear Algebra: Span, Linear Independence Proof

• #1

## Homework Statement

Suppose v_1, v_2, v_3, ..., v_n are vectors such that v_1 does not equal the zero vector, and v_2 is not in span{v_1}, v_3 is not in span{v_1, v_2}, ..., v_n is not in span{v_1, v_2, ..., v_(n-1)}. Show that v_1, v_2, v_3, ..., v_n are linearly independent.

## Homework Equations

Linear independence, span.

## The Attempt at a Solution

He gave us a hint, which was to use induction. Here's what I have so far. For the base case n=1: v_1 does not equal 0, so for cv_1=0, c must equal 0, making v_1 linearly independent. Then assume v_n is linearly independent; to show v_(n+1) is linearly independent: since v_n is linearly independent, v_1, v_2, v_3, ..., v_(n-1) are all linearly independent as well (my book states this as a remark to linear independence, so I assume I can use it), and v_(n+1) is not in span{v_1, ..., v_n}. Therefore, for c_1v_1 + c_2v_2 + ... + c_nv_n + c_(n+1)v_(n+1) = 0, either c_(n+1)v_(n+1) = -c_1v_1 - c_2v_2 - ... - c_nv_n or c_(n+1)v_(n+1) = 0. The former isn't true since v_(n+1) is not in the span of all the vectors before it, so the latter must hold. This is where I started doubting myself, because then I would have to show that v_(n+1) is not zero and I'm unsure how to do that. Also, I'm a beginner with proofs, so I'm not even sure if I'm doing this correctly using induction.

• #2 Dick (Homework Helper)

You are really close. You can say v_(n+1) is not the zero vector. The zero vector is in the span of any set of vectors. Try and restate your argument knowing that.

• #3

So can I just say: since the zero vector is in the span of any set of vectors and v_(n+1) is not in the span of all the vectors before it, then v_(n+1) is not the zero vector? If that's correct, then c_(n+1) must equal zero, thus showing that all the vectors are linearly independent.

• #4 Dick (Homework Helper)

Yes, that's pretty much it. If c_(n+1) is nonzero then v_(n+1) is in the span, contradiction.
If c_(n+1) is zero, then that shows they are linearly independent. Well done. You are better at proofs than you thought.

• #5

Cool, thanks!
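For reference, here is a compact write-up of the induction argument the thread converges on; this is a summary sketch in the thread's notation, not the original posters' wording.

Claim: If $v_1 \neq 0$ and $v_k \notin \textup{span}\{v_1, \dots, v_{k-1}\}$ for $k = 2, \dots, n$, then $v_1, \dots, v_n$ are linearly independent.

Proof (induction on $n$): For $n = 1$, if $cv_1 = 0$ and $v_1 \neq 0$, then $c = 0$. Now assume $v_1, \dots, v_n$ are linearly independent, and suppose

$\displaystyle c_1 v_1 + \cdots + c_n v_n + c_{n+1} v_{n+1} = 0$

If $c_{n+1} \neq 0$, then

$\displaystyle v_{n+1} = -\frac{1}{c_{n+1}} \left ( c_1 v_1 + \cdots + c_n v_n \right ) \in \textup{span}\{v_1, \dots, v_n\}$

contradicting the hypothesis. Hence $c_{n+1} = 0$, so $c_1 v_1 + \cdots + c_n v_n = 0$, and the inductive hypothesis forces $c_1 = \cdots = c_n = 0$. $\square$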
https://jeremykun.com/tag/mathematics/page/3/
# Bayesian Ranking for Rated Items

Problem: You have a catalog of items with discrete ratings (thumbs up/thumbs down, or 5-star ratings, etc.), and you want to display them in the "right" order.

Solution: In Python

'''
score: [int], [int], [float] -> float

Return the expected value of the rating for an item with known
ratings specified by ratings, prior belief specified by rating_prior,
and a utility function specified by rating_utility, assuming the
ratings are a multinomial distribution and the prior belief is a
Dirichlet distribution.
'''
def score(ratings, rating_prior, rating_utility):
    ratings = [r + p for (r, p) in zip(ratings, rating_prior)]
    score = sum(r * u for (r, u) in zip(ratings, rating_utility))
    return score / sum(ratings)

Discussion: This deceptively short solution can lead you on a long and winding path into the depths of statistics. I will do my best to give a short, clear version of the story. As a working example (chosen merely because I recently listened to a related podcast), say you're selling mass-market romance novels—which, by all accounts, is a predictable genre. You have a list of books, each of which has been rated on a scale of 0-5 stars by some number of users. You want to display the top books first, so that time-constrained readers can experience the most titillating novels first, and newbies to the genre can get the best first-time experience and be incentivized to buy more.

The setup required to arrive at the above code is the following, which I'll phrase as a story. Users' feelings about a book, and subsequent votes, are independent draws from a known distribution (with unknown parameters). I will just call these distributions "discrete" distributions. So given a book and user, there is some unknown list $(p_0, p_1, p_2, p_3, p_4, p_5)$ of probabilities ($\sum_i p_i = 1$) for each possible rating a user could give for that book.

But how do users get these probabilities? In this story, the probabilities are the output of a randomized procedure that generates distributions. That modeling assumption is called a "Dirichlet prior," with Dirichlet meaning it generates discrete distributions, and prior meaning it encodes domain-specific information (such as the fraction of 4-star ratings for a typical romance novel). So the story is you have a book, and that book gets a Dirichlet distribution (unknown to us), and then when a user comes along they sample from the Dirichlet distribution to get a discrete distribution, which they then draw from to choose a rating. We observe the ratings, and we need to find the book's underlying Dirichlet. We start by assigning it some default Dirichlet (the prior) and update that Dirichlet as we observe new ratings. Some other assumptions:

1. Books are indistinguishable except in the parameters of their Dirichlet distribution.
2. The parameters of a book's Dirichlet distribution don't change over time, and inherently reflect the book's value.

So a Dirichlet distribution is a process that produces discrete distributions. For simplicity, in this post we will say a Dirichlet distribution is parameterized by a list of six integers $(n_0, \dots, n_5)$, one for each possible star rating. These values represent our belief in the "typical" distribution of votes for a new book. We'll discuss more about how to set the values later. Sampling a value (a book's list of probabilities) from the Dirichlet distribution is not trivial, but we don't need to do that for this program.
Rather, we need to be able to interpret a fixed Dirichlet distribution, and update it given some observed votes. The interpretation we use for a Dirichlet distribution is its expected value, which, recall, is the parameters of a discrete distribution. In particular if $n = \sum_i n_i$, then the expected value is a discrete distribution whose probabilities are $\displaystyle \left ( \frac{n_0}{n}, \frac{n_1}{n}, \dots, \frac{n_5}{n} \right )$ So you can think of each integer in the specification of a Dirichlet as “ghost ratings,” sometimes called pseudocounts, and we’re saying the probability is proportional to the count. This is great, because if we knew the true Dirichlet distribution for a book, we could compute its ranking without a second thought. The ranking would simply be the expected star rating: def simple_score(distribution): return sum(i * p for (i, p) in enumerate(distribution)) Putting books with the highest score on top would maximize the expected happiness of a user visiting the site, provided that happiness matches the user’s voting behavior, since the simple_score is just the expected vote. Also note that all the rating system needs to make this work is that the rating options are linearly ordered. So a thumbs up/down (heaving bosom/flaccid member?) would work, too. We don’t need to know how happy it makes them to see a 5-star vs 4-star book. However, because as we’ll see next we have to approximate the distribution, and hence have uncertainty for scores of books with only a few ratings, it helps to incorporate numerical utility values (we’ll see this at the end). Next, to update a given Dirichlet distribution with the results of some observed ratings, we have to dig a bit deeper into Bayes rule and the formulas for sampling from a Dirichlet distribution. Rather than do that, I’ll point you to this nice writeup by Jonathan Huang, where the core of the derivation is in Section 2.3 (page 4), and remark that the rule for updating for a new observation is to just add it to the existing counts. Theorem: Given a Dirichlet distribution with parameters $(n_1, \dots, n_k)$ and a new observation of outcome $i$, the updated Dirichlet distribution has parameters $(n_1, \dots, n_{i-1}, n_i + 1, n_{i+1}, \dots, n_k)$. That is, you just update the $i$-th entry by adding $1$ to it. This particular arithmetic to do the update is a mathematical consequence (derived in the link above) of the philosophical assumption that Bayes rule is how you should model your beliefs about uncertainty, coupled with the assumption that the Dirichlet process is how the users actually arrive at their votes. The initial values $(n_0, \dots, n_5)$ for star ratings should be picked so that they represent the average rating distribution among all prior books, since this is used as the default voting distribution for a new, unknown book. If you have more information about whether a book is likely to be popular, you can use a different prior. For example, if JK Rowling wrote a Harry Potter Romance novel that was part of the canon, you could pretty much guarantee it would be popular, and set $n_5$ high compared to $n_0$. Of course, if it were actually popular you could just wait for the good ratings to stream in, so tinkering with these values on a per-book basis might not help much. On the other hand, most books by unknown authors are bad, and $n_5$ should be close to zero. Selecting a prior dictates how influential ratings of new items are compared to ratings of items with many votes. 
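To make the prior's influence concrete, here is a quick sanity check with made-up numbers; both the prior and the two books' vote counts below are hypothetical, chosen only for illustration (simple_score is repeated from above so the snippet runs on its own):

def posterior(ratings, prior):
    # Bayesian update: add the observed counts to the prior's
    # pseudocounts, then normalize to get the expected distribution.
    counts = [r + p for (r, p) in zip(ratings, prior)]
    total = sum(counts)
    return [c / total for c in counts]

def simple_score(distribution):
    return sum(i * p for (i, p) in enumerate(distribution))

prior = [1, 1, 2, 4, 8, 4]            # 20 pseudocounts, mean rating 3.45
veteran = [20, 10, 40, 80, 160, 90]   # 400 real votes, raw mean 3.55
newbie = [0, 0, 0, 0, 2, 2]           # 4 real votes, raw mean 4.5

print(simple_score(posterior(veteran, prior)))  # ~3.55, barely moved
print(simple_score(posterior(newbie, prior)))   # ~3.63, pulled toward 3.45

The veteran book's 400 votes swamp the 20 pseudocounts, while the new book's four enthusiastic votes are dragged most of the way back to the prior's mean.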
The more pseudocounts you add to the prior, the less new votes count. This gets us to the following code for star ratings.

def score(ratings, rating_prior):
    ratings = [r + p for (r, p) in zip(ratings, rating_prior)]
    score = sum(i * r for (i, r) in enumerate(ratings))
    return score / sum(ratings)

The only thing missing from the solution at the beginning is the utilities. The utilities are useful for two reasons. First, because books with few ratings encode a lot of uncertainty, having an idea about how extreme a feeling is implied by a specific rating allows one to give better rankings of new books. Second, for many services, such as taxi rides on Lyft, the default star rating tends to be a 5-star, and 4-star or lower means something went wrong. For books, 3-4 stars is a default while 5-star means you were very happy. The utilities parameter allows you to weight rating outcomes appropriately. So if you are in a Lyft-like scenario, you might specify utilities like [-10, -5, -3, -2, 1] to denote that a 4-star rating has the same negative impact as two 5-star ratings would positively contribute. On the other hand, for books the gap between 4-star and 5-star is much less than the gap between 3-star and 4-star. The utilities simply allow you to calibrate how the votes should be valued in comparison to each other, instead of using their literal star counts.

# The Reasonable Effectiveness of the Multiplicative Weights Update Algorithm

[Image caption: Christos Papadimitriou, who studies multiplicative weights in the context of biology.]

## Hard to believe

Sanjeev Arora and his coauthors consider it "a basic tool [that should be] taught to all algorithms students together with divide-and-conquer, dynamic programming, and random sampling." Christos Papadimitriou calls it "so hard to believe that it has been discovered five times and forgotten." It has formed the basis of algorithms in machine learning, optimization, game theory, economics, biology, and more. What mystical algorithm has such broad applications? Now that computer scientists have studied it in generality, it's known as the Multiplicative Weights Update Algorithm (MWUA). Procedurally, the algorithm is simple. I can even describe the core idea in six lines of pseudocode. You start with a collection of $n$ objects, and each object has a weight.

Set all the object weights to be 1.
For some large number of rounds:
  Pick an object at random proportionally to the weights
  Some event happens
  Increase the weight of the chosen object if it does well in the event
  Otherwise decrease the weight

The name "multiplicative weights" comes from how we implement the last step: if the weight of the chosen object at step $t$ is $w_t$ before the event, and $G$ represents how well the object did in the event, then we'll update the weight according to the rule:

$\displaystyle w_{t+1} = w_t (1 + G)$

Think of this as increasing the weight by a small multiple of the object's performance on a given round. Here is a simple example of how it might be used. You have some money you want to invest, and you have a bunch of financial experts who are telling you what to invest in every day. So each day you pick an expert, and you follow their advice, and you either make a thousand dollars, or you lose a thousand dollars, or something in between. Then you repeat, and your goal is to figure out which expert is the most reliable. This is how we use multiplicative weights: if we number the experts $1, \dots, N$, we give each expert a weight $w_i$ which starts at 1.
Then, each day we pick an expert at random (where experts with larger weights are more likely to be picked) and at the end of the day we have some gain or loss $G$. Then we update the weight of the chosen expert by multiplying it by $(1 + G / 1000)$. Sometimes you have enough information to update the weights of experts you didn't choose, too. The theoretical guarantees of the algorithm say we'll find the best expert quickly ("quickly" will be concrete later). In fact, let's play a game where you, dear reader, get to decide the rewards for each expert and each day. I programmed the multiplicative weights algorithm to react according to your choices. Click the image below to go to the demo.

This core mechanism of updating weights can be interpreted in many ways, and that's part of the reason it has sprouted up all over mathematics and computer science. Just a few examples of where this has led:

1. In game theory, weights are the "belief" of a player about the strategy of an opponent. The most famous algorithm to use this is called Fictitious Play, and others include EXP3 for minimizing regret in the so-called "adversarial bandit learning" problem.
2. In machine learning, weights are the difficulty of a specific training example, so that higher weights mean the learning algorithm has to "try harder" to accommodate that example. The first result I'm aware of for this is the Perceptron (and similar Winnow) algorithm for learning hyperplane separators. The most famous is the AdaBoost algorithm.
3. Analogously, in optimization, the weights are the difficulty of a specific constraint, and this technique can be used to approximately solve linear and semidefinite programs. The approximation is because MWUA only provides a solution with some error.
4. In mathematical biology, the weights represent the fitness of individual alleles, and filtering reproductive success based on this and updating weights for successful organisms produces a mechanism very much like evolution. With modifications, it also provides a mechanism through which to understand sex in the context of evolutionary biology.
5. The TCP protocol, which basically defined the internet, uses additive and multiplicative weight updates (which are very similar in the analysis) to manage congestion.
6. You can get easy $\log(n)$-approximation algorithms for many NP-hard problems, such as set cover.

Additional, more technical examples can be found in this survey of Arora et al. In the rest of this post, we'll implement a generic Multiplicative Weights Update Algorithm, we'll prove its main theoretical guarantees, and we'll implement a linear program solver as an example of its applicability. As usual, all of the code used in the making of this post is available in a Github repository.

## The generic MWUA algorithm

Let's start by writing down pseudocode and an implementation for the MWUA algorithm in full generality. In general we have some set $X$ of objects and some set $Y$ of "event outcomes" which can be completely independent. If these sets are finite, we can write down a table $M$ whose rows are objects, whose columns are outcomes, and whose $i,j$ entry $M(i,j)$ is the reward produced by object $x_i$ when the outcome is $y_j$. We will also write this as $M(x, y)$ for object $x$ and outcome $y$. The only assumption we'll make on the rewards is that the values $M(x, y)$ are bounded by some small constant $B$ (by small I mean $B$ should not require exponentially many bits to write down as compared to the size of $X$).
In symbols, $M(x,y) \in [0,B]$. There are minor modifications you can make to the algorithm if you want negative rewards, but for simplicity we will leave that out. Note the table $M$ just exists for analysis, and the algorithm does not know its values. Moreover, while the values in $M$ are static, the choice of outcome $y$ for a given round may be nondeterministic. The MWUA algorithm randomly chooses an object $x \in X$ in every round, observing the outcome $y \in Y$, and collecting the reward $M(x,y)$ (or losing it as a penalty). The guarantee of the MWUA theorem is that the expected sum of rewards/penalties of MWUA is not much worse than if one had picked the best object (in hindsight) every single round. Let's describe the algorithm in notation first and build up pseudocode as we go. The input to the algorithm is the set of objects, a subroutine that observes an outcome, a black-box reward function, a learning rate parameter, and a number of rounds.

def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    ...

We define for object $x$ a nonnegative number $w_x$ we call a "weight." The weights will change over time so we'll also subscript a weight with a round number $t$, i.e. $w_{x,t}$ is the weight of object $x$ in round $t$. Initially, all the weights are $1$. Then MWUA continues in rounds. We start each round by drawing an object randomly with probability proportional to the weights. Then we observe the outcome for that round and the reward for that round.

# draw: [float] -> int
# pick an index from the given list of floats proportionally
# to the size of the entry (i.e. normalize to a probability
# distribution and draw according to the probabilities).
def draw(weights):
    choice = random.uniform(0, sum(weights))
    choiceIndex = 0

    for weight in weights:
        choice -= weight
        if choice <= 0:
            return choiceIndex
        choiceIndex += 1

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    weights = [1] * len(objects)

    for t in range(numRounds):
        chosenObjectIndex = draw(weights)
        chosenObject = objects[chosenObjectIndex]

        outcome = observeOutcome(t, weights, chosenObject)
        thisRoundReward = reward(chosenObject, outcome)
        ...

Sampling objects in this way is the same as associating a distribution $D_t$ to each round, where if $S_t = \sum_{x \in X} w_{x,t}$ then the probability of drawing $x$, which we denote $D_t(x)$, is $w_{x,t} / S_t$. We don't need to keep track of this distribution in the actual run of the algorithm, but it will help us with the mathematical analysis. Next comes the weight update step. Let's call our learning rate parameter $\varepsilon$. In round $t$ say we have object $x_t$ and outcome $y_t$, then the reward is $M(x_t, y_t)$. We update the weight of the chosen object $x_t$ according to the formula:

$\displaystyle w_{x_t, t+1} = w_{x_t, t} (1 + \varepsilon M(x_t, y_t) / B)$

In the more general event that you have rewards for all objects (if not, the reward-producing function can output zero), you would perform this weight update on all objects $x \in X$.
This turns into the following Python snippet, where we hide the division by $B$ into the choice of learning rate (we also track and return the outcomes and the cumulative reward, since the linear programming code later in this post unpacks them):

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    weights = [1] * len(objects)
    cumulativeReward = 0
    outcomes = []

    for t in range(numRounds):
        chosenObjectIndex = draw(weights)
        chosenObject = objects[chosenObjectIndex]

        outcome = observeOutcome(t, weights, chosenObject)
        outcomes.append(outcome)

        thisRoundReward = reward(chosenObject, outcome)
        cumulativeReward += thisRoundReward

        for i in range(len(weights)):
            weights[i] *= (1 + learningRate * reward(objects[i], outcome))

    return weights, cumulativeReward, outcomes

One of the amazing things about this algorithm is that the outcomes and rewards could be chosen adaptively by an adversary who knows everything about the MWUA algorithm (except which random numbers the algorithm generates to make its choices). This means that the rewards in round $t$ can depend on the weights in that same round! We will exploit this when we solve linear programs later in this post. But even in such an oppressive, exploitative environment, MWUA persists and achieves its guarantee. And now we can state that guarantee.

Theorem (from Arora et al): The cumulative reward of the MWUA algorithm is, up to constant multiplicative factors, at least the cumulative reward of the best object minus $\log(n)$, where $n$ is the number of objects. (Exact formula at the end of the proof)

The core of the proof, which we'll state as a lemma, uses one of the most elegant proof techniques in all of mathematics. It's the idea of constructing a potential function, and tracking the change in that potential function over time. Such a proof usually has the mysterious script:

1. Define potential function, in our case $S_t$.
2. State what seems like trivial facts about the potential function to write $S_{t+1}$ in terms of $S_t$, and hence get general information about $S_T$ for some large $T$.
3. Theorem is proved.
4. Wait, what?

Clearly, coming up with a useful potential function is a difficult and prized skill. In this proof our potential function is the sum of the weights of the objects in a given round, $S_t = \sum_{x \in X} w_{x, t}$. Now the lemma.

Lemma: Let $B$ be the bound on the size of the rewards, and $0 < \varepsilon < 1/2$ a learning parameter. Recall that $D_t(x)$ is the probability that MWUA draws object $x$ in round $t$. Write the expected reward for MWUA for round $t$ as the following (using only the definition of expected value):

$\displaystyle R_t = \sum_{x \in X} D_t(x) M(x, y_t)$

Then the claim of the lemma is:

$\displaystyle S_{t+1} \leq S_t e^{\varepsilon R_t / B}$

Proof. Expand $S_{t+1} = \sum_{x \in X} w_{x, t+1}$ using the definition of the MWUA update:

$\displaystyle \sum_{x \in X} w_{x, t+1} = \sum_{x \in X} w_{x, t}(1 + \varepsilon M(x, y_t) / B)$

Now distribute $w_{x, t}$ and split into two sums:

$\displaystyle \dots = \sum_{x \in X} w_{x, t} + \frac{\varepsilon}{B} \sum_{x \in X} w_{x,t} M(x, y_t)$

Using the fact that $D_t(x) = \frac{w_{x,t}}{S_t}$, we can replace $w_{x,t}$ with $D_t(x) S_t$, which allows us to get $R_t$

$\displaystyle \begin{aligned} \dots &= S_t + \frac{\varepsilon S_t}{B} \sum_{x \in X} D_t(x) M(x, y_t) \\ &= S_t \left ( 1 + \frac{\varepsilon R_t}{B} \right ) \end{aligned}$

And then using the fact that $(1 + x) \leq e^x$ (Taylor series), we can bound the last expression by $S_te^{\varepsilon R_t / B}$, as desired. $\square$

Now using the lemma, we can get a hold on $S_T$ for a large $T$, namely that

$\displaystyle S_T \leq S_1 e^{\varepsilon \sum_{t=1}^T R_t / B}$

If $|X| = n$ then $S_1=n$, simplifying the above.
Moreover, the sum of the weights in round $T$ is certainly greater than any single weight, and each update multiplies a weight by $(1 + \varepsilon M(x, y_t)/B) \geq (1+\varepsilon)^{M(x, y_t)/B}$ (by convexity, since the exponent is in $[0,1]$), so that for every fixed object $x \in X$,

$\displaystyle S_T \geq w_{x,T} \geq (1 + \varepsilon)^{\sum_t M(x, y_t) / B}$

Squeezing $S_T$ between these two inequalities and taking logarithms (to simplify the exponents) gives

$\displaystyle \left ( \sum_t M(x, y_t) / B \right ) \log(1+\varepsilon) \leq \log n + \frac{\varepsilon}{B} \sum_t R_t$

Multiply through by $B$, divide by $\varepsilon$, rearrange, and use the fact that when $0 < \varepsilon < 1/2$ we have $\log(1 + \varepsilon) \geq \varepsilon - \varepsilon^2$ (Taylor series) to get

$\displaystyle \sum_t R_t \geq \left [ \sum_t M(x, y_t) \right ] (1-\varepsilon) - \frac{B \log n}{\varepsilon}$

The bracketed term is the payoff of object $x$, and MWUA's payoff is at least a fraction of that minus the logarithmic term. The bound applies to any object $x \in X$, and hence to the best one. This proves the theorem. $\square$

Briefly discussing the bound itself, we see that the smaller the learning rate is, the closer you eventually get to the best object, but by contrast the more the subtracted quantity $B \log(n) / \varepsilon$ hurts you. If your target is an absolute error bound against the best performing object on average, you can do more algebra to determine how many rounds you need in terms of a fixed $\delta$. The answer is roughly: let $\varepsilon = O(\delta / B)$ and pick $T = O(B^2 \log(n) / \delta^2)$. See this survey for more.

## MWUA for linear programs

Now we'll approximately solve a linear program using MWUA. Recall that a linear program is an optimization problem whose goal is to minimize (or maximize) a linear function of many variables. The objective to minimize is usually given as a dot product $c \cdot x$, where $c$ is a fixed vector and $x = (x_1, x_2, \dots, x_n)$ is a vector of non-negative variables the algorithm gets to choose. The choices for $x$ are also constrained by a set of $m$ linear inequalities, $A_i \cdot x \geq b_i$, where $A_i$ is a fixed vector and $b_i$ is a scalar for $i = 1, \dots, m$. This is usually summarized by putting all the $A_i$ into the rows of a matrix $A$ and the $b_i$ into a vector $b$, as

$\displaystyle x_{\textup{OPT}} = \textup{argmin}_x \{ c \cdot x \mid Ax \geq b, x \geq 0 \}$

We can further simplify the constraints by assuming we know the optimal value $Z = c \cdot x_{\textup{OPT}}$ in advance, by doing a binary search (more on this later). So, if we ignore the hard constraint $Ax \geq b$, the "easy feasible region" of possible $x$'s includes $\{ x \mid x \geq 0, c \cdot x = Z \}$.

In order to fit linear programming into the MWUA framework we have to define two things.

1. The objects: the set of linear inequalities $A_i \cdot x \geq b_i$.
2. The rewards: the error of a constraint for a special input vector $x_t$.

Number 2 is curious (why would we give a reward for error?) but it's crucial and we'll discuss it momentarily. The special input $x_t$ depends on the weights in round $t$ (which is allowed, recall). Specifically, if the weights are $w = (w_1, \dots, w_m)$, we ask for a vector $x_t$ in our "easy feasible region" which satisfies

$\displaystyle (A^T w) \cdot x_t \geq w \cdot b$

For this post we call the implementation of procuring such a vector the "oracle," since it can be seen as the black-box problem of, given a vector $\alpha$ and a scalar $\beta$ and a convex region $R$, finding a vector $x \in R$ satisfying $\alpha \cdot x \geq \beta$.
This allows one to solve more complex optimization problems with the same technique, swapping in a new oracle as needed. Our choice of inputs, $\alpha = A^T w, \beta = w \cdot b$, is particular to the linear programming formulation.

Two remarks on this choice of inputs. First, the vector $A^T w$ is a weighted average of the constraints in $A$, and $w \cdot b$ is a weighted average of the thresholds. So this inequality is a "weighted average" inequality (specifically, a convex combination, since the weights are nonnegative). In particular, if no such $x$ exists, then the original linear program has no solution. Indeed, given a solution $x^*$ to the original linear program, each constraint, say $A_1 \cdot x^* \geq b_1$, is preserved when multiplied by the nonnegative weight $w_1$, and summing the weighted constraints shows that $x^*$ satisfies the averaged inequality.

Second, and more important to the conceptual understanding of this algorithm, the choice of rewards and the multiplicative updates ensure that easier constraints show up less prominently in the inequality by having smaller weights. That is, if we end up overly satisfying a constraint, we penalize that object for future rounds so we don't waste our effort on it. The byproduct of MWUA—the weights—identify the hardest constraints to satisfy, and so in each round we can put a proportionate amount of effort into solving (one of) the hard constraints. This is why it makes sense to reward error; the error is a signal for where to improve, and by over-representing the hard constraints, we force MWUA's attention on them.

At the end, our final output is an average of the $x_t$ produced in each round, i.e. $x^* = \frac{1}{T}\sum_t x_t$. This vector satisfies all the constraints to a roughly equal degree. We will skip the proof that this vector does what we want, but see these notes for a simple proof. We'll spend the rest of this post implementing the scheme outlined above.

## Implementing the oracle

Fix the convex region $R = \{ c \cdot x = Z, x \geq 0 \}$ for a known optimal value $Z$. Define $\textup{oracle}(\alpha, \beta)$ as the problem of finding an $x \in R$ such that $\alpha \cdot x \geq \beta$. For the case of this linear region $R$, we can simply find the index $i$ which maximizes $\alpha_i Z / c_i$. If this value exceeds $\beta$, we can return the vector with that value in the $i$-th position and zeros elsewhere. Otherwise, the problem has no solution.

To prove the "no solution" part, say $n=2$ and you have $x = (x_1, x_2)$ a solution to $\alpha \cdot x \geq \beta$. Then for whichever index makes $\alpha_i Z / c_i$ bigger, say $i=1$, you can increase $\alpha \cdot x$ without changing $c \cdot x = Z$ by replacing $x_1$ with $x_1 + (c_2/c_1)x_2$ and $x_2$ with zero. I.e., we're moving the solution $x$ along the line $c \cdot x = Z$ until it reaches a vertex of the region bounded by $c \cdot x = Z$ and $x \geq 0$. This must happen when all entries but one are zero. This is the same reason why optimal solutions of (generic) linear programs occur at vertices of their feasible regions.

The code for this becomes quite simple. Note we use the numpy library in the entire codebase to make linear algebra operations fast and simple to read.
def makeOracle(c, optimalValue):
    n = len(c)

    def oracle(weightedVector, weightedThreshold):
        def quantity(i):
            return weightedVector[i] * optimalValue / c[i] if c[i] > 0 else -1

        biggest = max(range(n), key=quantity)
        if quantity(biggest) < weightedThreshold:
            raise InfeasibleException

        return numpy.array([optimalValue / c[i] if i == biggest else 0 for i in range(n)])

    return oracle

## Implementing the core solver

The core solver implements the discussion from previously, given the optimal value of the linear program as input. To avoid too many single-letter variable names, we use linearObjective instead of $c$.

def solveGivenOptimalValue(A, b, linearObjective, optimalValue, learningRate=0.1):
    m, n = A.shape  # m equations, n variables
    oracle = makeOracle(linearObjective, optimalValue)

    def reward(i, specialVector):
        ...

    def observeOutcome(_, weights, __):
        ...

    numRounds = 1000
    weights, cumulativeReward, outcomes = MWUA(
        range(m), observeOutcome, reward, learningRate, numRounds
    )
    averageVector = sum(outcomes) / numRounds

    return averageVector

First we make the oracle, then the reward and outcome-producing functions, then we invoke the MWUA subroutine. Here are those two functions; they are closures because they need access to $A$ and $b$. Note that neither $c$ nor the optimal value show up here.

def reward(i, specialVector):
    constraint = A[i]
    threshold = b[i]
    return threshold - numpy.dot(constraint, specialVector)

def observeOutcome(_, weights, __):
    weights = numpy.array(weights)
    weightedVector = A.transpose().dot(weights)
    weightedThreshold = weights.dot(b)
    return oracle(weightedVector, weightedThreshold)

## Implementing the binary search, and an example

Finally, the top-level routine. Note that the binary search for the optimal value is sophisticated (though it could be more sophisticated). It takes a max range for the search, and invokes the optimization subroutine, moving the upper bound down if the linear program is feasible and moving the lower bound up otherwise.

def solve(A, b, linearObjective, maxRange=1000):
    optRange = [0, maxRange]

    while optRange[1] - optRange[0] > 1e-8:
        proposedOpt = sum(optRange) / 2
        print("Attempting to solve with proposedOpt=%G" % proposedOpt)

        # Because the binary search starts so high, it results in extreme
        # reward values that must be tempered by a slow learning rate. Exercise
        # to the reader: determine absolute bounds for the rewards, and set
        # this learning rate in a more principled fashion.
        learningRate = 1 / max(2 * proposedOpt * c for c in linearObjective)
        learningRate = min(learningRate, 0.1)

        try:
            result = solveGivenOptimalValue(A, b, linearObjective, proposedOpt, learningRate)
            optRange[1] = proposedOpt
        except InfeasibleException:
            optRange[0] = proposedOpt

    return result

Finally, a simple example:

A = numpy.array([[1, 2, 3], [0, 4, 2]])
b = numpy.array([5, 6])
c = numpy.array([1, 2, 1])

x = solve(A, b, c)
print(x)
print(c.dot(x))
print(A.dot(x) - b)

The output:

Attempting to solve with proposedOpt=500
Attempting to solve with proposedOpt=250
Attempting to solve with proposedOpt=125
Attempting to solve with proposedOpt=62.5
Attempting to solve with proposedOpt=31.25
Attempting to solve with proposedOpt=15.625
Attempting to solve with proposedOpt=7.8125
Attempting to solve with proposedOpt=3.90625
Attempting to solve with proposedOpt=1.95312
Attempting to solve with proposedOpt=2.92969
Attempting to solve with proposedOpt=3.41797
Attempting to solve with proposedOpt=3.17383
Attempting to solve with proposedOpt=3.05176
Attempting to solve with proposedOpt=2.99072
Attempting to solve with proposedOpt=3.02124
Attempting to solve with proposedOpt=3.00598
Attempting to solve with proposedOpt=2.99835
Attempting to solve with proposedOpt=3.00217
Attempting to solve with proposedOpt=3.00026
Attempting to solve with proposedOpt=2.99931
Attempting to solve with proposedOpt=2.99978
Attempting to solve with proposedOpt=3.00002
Attempting to solve with proposedOpt=2.9999
Attempting to solve with proposedOpt=2.99996
Attempting to solve with proposedOpt=2.99999
Attempting to solve with proposedOpt=3.00001
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3  # note %G rounds the printed values
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
[ 0. 0.987 1.026]
3.00000000425
[ 5.20000072e-02 8.49831849e-09]

So there we have it. A fiendishly clever use of multiplicative weights for solving linear programs.

## Discussion

One of the nice aspects of MWUA is that it's completely transparent. If you want to know why a decision was made, you can simply look at the weights and look at the history of rewards of the objects. There's also a clear interpretation of what is being optimized, as the potential function used in the proof is a measure of both quality and adaptability to change. The latter is why MWUA succeeds even in adversarial settings, and why it makes sense to think about MWUA in the context of evolutionary biology. This even makes one imagine new problems that traditional algorithms cannot solve, but which MWUA handles with grace. For example, imagine trying to solve an "online" linear program in which over time a constraint can change. MWUA can adapt to maintain its approximate solution.

The linear programming technique is known in the literature as the Plotkin-Shmoys-Tardos framework for covering and packing problems. The same ideas extend to other convex optimization problems, including semidefinite programming. If you've been reading this entire post screaming "This is just gradient descent!", then you're right and wrong.
It bears a striking resemblance to gradient descent (see this document for details about how special cases of MWUA are gradient descent by another name), but the adaptivity for the rewards makes MWUA different. Even though so many people have been advocating for MWUA over the past decade, it's surprising that it doesn't show up in the general math/CS discourse on the internet or even in many algorithms courses. The Arora survey I referenced is from 2005 and the linear programming technique I demoed is originally from 1991! I took algorithms classes wherever I could, starting undergraduate in 2007, and I didn't even hear a whisper of this technique until midway through my PhD in theoretical CS (I did, however, study fictitious play in a game theory class). I don't have an explanation for why this is the case, except maybe that it takes more than 20 years for techniques to make it to the classroom. At the very least, this is one good reason to go to graduate school. You learn the things (and where to look for the things) which haven't made it to classrooms yet. Until next time!

# Zero Knowledge Proofs for NP

Last time, we saw a specific zero-knowledge proof for graph isomorphism. This introduced us to the concept of an interactive proof, where you have a prover and a verifier sending messages back and forth, and the prover is trying to prove a specific claim to the verifier. A zero-knowledge proof is a special kind of interactive proof in which the prover has some secret piece of knowledge that makes it very easy to verify a disputed claim is true. The prover's goal, then, is to convince the verifier (a polynomial-time algorithm) that the claim is true without revealing any knowledge at all about the secret.

In this post we'll see that, using a bit of cryptography, zero-knowledge proofs capture a much wider class of problems than graph isomorphism. Basically, if you believe that cryptography exists, then every problem whose answers can be easily verified has a zero-knowledge proof (i.e., all of the class NP). Here are a bunch of examples. For each I'll phrase the problem as a question, and then say what sort of data the prover's secret could be.

• Given a boolean formula, is there an assignment of variables making it true? Secret: a satisfying assignment to the variables.
• Given a set of integers, is there a subset whose sum is zero? Secret: such a subset.
• Given a graph, does it have a 3-coloring? Secret: a valid 3-coloring.
• Given a boolean circuit, can it produce a specific output? Secret: a choice of inputs that produces the output.

The common link among all of these problems is that they are NP-hard (graph isomorphism isn't known to be NP-hard). For us this means two things: (1) we think these problems are actually hard, so the verifier can't solve them, and (2) if you show that one of them has a zero-knowledge proof, then they all have zero-knowledge proofs.

We're going to describe and implement a zero-knowledge proof for graph 3-colorability, and in the next post we'll dive into the theoretical definitions and talk about the proof that the scheme we present is zero-knowledge. As usual, all of the code used in making this post is available in a repository on this blog's Github page. In the follow up to this post, we'll dive into more nitty-gritty details about the proof that this works, and study different kinds of zero-knowledge.

## One-way permutations

In a recent program gallery post we introduced the Blum-Blum-Shub pseudorandom generator.
A pseudorandom generator is simply an algorithm that takes as input a short random string of length $s$ and produces as output a longer string, say, of length $3s$. This output string should not be random, but rather "indistinguishable" from random in a sense we'll make clear next time. The underlying function for this generator is the "modular squaring" function $x \mapsto x^2 \mod M$, for some cleverly chosen $M$. The $M$ is chosen in such a way that makes this mapping a permutation. So this function is more than just a pseudorandom generator, it's a one-way permutation. If you have a primality-checking algorithm on hand (we do), then preparing the Blum-Blum-Shub algorithm is only about 15 lines of code.

def goodPrime(p):
    return p % 4 == 3 and probablyPrime(p, accuracy=100)

def findGoodPrime(numBits=512):
    candidate = 1

    while not goodPrime(candidate):
        candidate = random.getrandbits(numBits)

    return candidate

def makeModulus(numBits=512):
    return findGoodPrime(numBits) * findGoodPrime(numBits)

def blum_blum_shub(modulusLength=512):
    modulus = makeModulus(numBits=modulusLength)

    def f(inputInt):
        return pow(inputInt, 2, modulus)

    return f

The interested reader should check out the proof gallery post for more details about this generator. For us, having a one-way permutation is the important part (and we're going to defer the formal definition of "one-way" until next time, just think "hard to get inputs from outputs"). The other concept we need, which is related to a one-way permutation, is the notion of a hardcore predicate. Let $G(x)$ be a one-way permutation, and let $f(x) = b$ be a function that produces a single bit from a string. We say that $f$ is a hardcore predicate for $G$ if you can't reliably compute $f(x)$ when given only $G(x)$.

Hardcore predicates are important because there are many one-way functions for which, when given the output, you can guess part of the input very reliably, but not the rest (e.g., if $g$ is a one-way function, $(x, y) \mapsto (x, g(y))$ is also one-way, but the $x$ part is trivially guessable). So a hardcore predicate formally measures, when given the output of a one-way function, what information derived from the input is hard to compute. In the case of Blum-Blum-Shub, one hardcore predicate is simply the parity of the input bits.

def parity(n):
    return sum(int(x) for x in bin(n)[2:]) % 2

## Bit Commitment Schemes

A core idea that will make zero-knowledge proofs work for NP is the ability for the prover to publicly "commit" to a choice, and later reveal that choice in a way that makes it infeasible to fake their commitment. This will involve not just the commitment to a single bit of information, but also the transmission of auxiliary data that is provably infeasible to fake. Our pair of one-way permutation $G$ and hardcore predicate $f$ comes in very handy. Let's say I want to commit to a bit $b \in \{ 0,1 \}$. Let's fix a security parameter that will measure how hard it is to change my commitment post-hoc, say $n = 512$. My process for committing is to draw a random string $x$ of length $n$, and send you the pair $(G(x), f(x) \oplus b)$, where $\oplus$ is the XOR operator on two bits.

The guarantee of a one-way permutation with a hardcore predicate is that if you only see $G(x)$, you can't guess $f(x)$ with any reasonable edge over random guessing. Moreover, if you fix a bit $b$, and take an unpredictably random bit $y$, the XOR $b \oplus y$ is also unpredictably random.
In other words, if $f(x)$ is hardcore, then so is $x \mapsto f(x) \oplus b$ for a fixed bit $b$. Finally, to reveal my commitment, I just send the string $x$ and let you independently compute $(G(x), f(x) \oplus b)$. Since $G$ is a permutation, that $x$ is the only $x$ that could have produced the commitment I sent you earlier. Here's a Python implementation of this scheme. We start with a generic base class for a commitment scheme.

class CommitmentScheme(object):
    def __init__(self, oneWayPermutation, hardcorePredicate, securityParameter):
        '''
            oneWayPermutation: int -> int
            hardcorePredicate: int -> {0, 1}
        '''
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate
        self.securityParameter = securityParameter

        # a random string of length self.securityParameter used only once per commitment
        self.secret = self.generateSecret()

    def generateSecret(self):
        raise NotImplementedError

    def commit(self, x):
        raise NotImplementedError

    def reveal(self):
        return self.secret

Note that the "reveal" step is always simply to reveal the secret. Here's the implementation subclass. We should also note that the security string should be chosen at random anew for every bit you wish to commit to. In this post we won't reuse CommitmentScheme objects anyway.

class BBSBitCommitmentScheme(CommitmentScheme):
    def generateSecret(self):
        # the secret is a random quadratic residue
        self.secret = self.oneWayPermutation(random.getrandbits(self.securityParameter))
        return self.secret

    def commit(self, bit):
        unguessableBit = self.hardcorePredicate(self.secret)
        return (
            self.oneWayPermutation(self.secret),
            unguessableBit ^ bit,  # python xor
        )

One important detail is that the Blum-Blum-Shub one-way permutation is only a permutation when restricted to quadratic residues. As such, we generate our secret by shooting a random string through the one-way permutation to get a random residue. In fact this produces a uniform random residue, since the Blum-Blum-Shub modulus is chosen in such a way that ensures every residue has exactly four square roots. Here's code to check the verification is correct.

class BBSBitCommitmentVerifier(object):
    def __init__(self, oneWayPermutation, hardcorePredicate):
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate

    def verify(self, securityString, claimedCommitment):
        trueBit = self.decode(securityString, claimedCommitment)
        unguessableBit = self.hardcorePredicate(securityString)  # wasteful, whatever
        return claimedCommitment == (
            self.oneWayPermutation(securityString),
            unguessableBit ^ trueBit,  # python xor
        )

    def decode(self, securityString, claimedCommitment):
        unguessableBit = self.hardcorePredicate(securityString)
        return claimedCommitment[1] ^ unguessableBit

and an example of using it

if __name__ == "__main__":
    import blum_blum_shub
    securityParameter = 10
    oneWayPerm = blum_blum_shub.blum_blum_shub(securityParameter)
    hardcorePred = blum_blum_shub.parity

    print('Bit commitment')
    scheme = BBSBitCommitmentScheme(oneWayPerm, hardcorePred, securityParameter)
    verifier = BBSBitCommitmentVerifier(oneWayPerm, hardcorePred)

    for _ in range(10):
        bit = random.choice([0, 1])
        commitment = scheme.commit(bit)
        secret = scheme.reveal()
        trueBit = verifier.decode(secret, commitment)
        valid = verifier.verify(secret, commitment)

        print('{} == {}? {}; {} {}'.format(bit, trueBit, valid, secret, commitment))

Example output:

1 == 1? True; 524 (5685, 0)
1 == 1? True; 149 (22201, 1)
1 == 1? True; 476 (34511, 1)
1 == 1? True; 927 (14243, 1)
1 == 1? True; 608 (23947, 0)
0 == 0? True; 964 (7384, 1)
0 == 0? True; 373 (23890, 0)
0 == 0? True; 620 (270, 1)
1 == 1? True; 926 (12390, 0)
0 == 0? True; 708 (1895, 0)

As an exercise, write a program to verify that no other input to the Blum-Blum-Shub one-way permutation gives a valid verification. Test it on a small security parameter like $n=10$. It's also important to point out that the verifier needs to do some additional validation that we left out. For example, how does the verifier know that the revealed secret actually is a quadratic residue? In fact, detecting quadratic residues is believed to be hard! To get around this, we could change the commitment scheme reveal step to reveal the random string that was used as input to the permutation to get the residue (cf. BBSBitCommitmentScheme.generateSecret for the random string that needs to be saved/revealed). Then the verifier could generate the residue in the same way. As an exercise, upgrade the bit commitment and verifier classes to reflect this.

In order to get a zero-knowledge proof for 3-coloring, we need to be able to commit to one of three colors, which requires two bits. So let's go overkill and write a generic integer commitment scheme. It's simple enough: specify a bound on the size of the integers, and then do an independent bit commitment for every bit.

class BBSIntCommitmentScheme(CommitmentScheme):
    def __init__(self, numBits, oneWayPermutation, hardcorePredicate, securityParameter=512):
        '''
            A commitment scheme for integers of a prespecified length numBits. Applies the
            Blum-Blum-Shub bit commitment scheme to each bit independently.
        '''
        self.schemes = [BBSBitCommitmentScheme(oneWayPermutation, hardcorePredicate, securityParameter)
                        for _ in range(numBits)]
        super().__init__(oneWayPermutation, hardcorePredicate, securityParameter)

    def generateSecret(self):
        self.secret = [x.secret for x in self.schemes]
        return self.secret

    def commit(self, integer):
        # first pad bits to desired length
        integer = bin(integer)[2:].zfill(len(self.schemes))
        bits = [int(bit) for bit in integer]
        return [scheme.commit(bit) for scheme, bit in zip(self.schemes, bits)]

And the corresponding verifier

class BBSIntCommitmentVerifier(object):
    def __init__(self, numBits, oneWayPermutation, hardcorePredicate):
        self.verifiers = [BBSBitCommitmentVerifier(oneWayPermutation, hardcorePredicate)
                          for _ in range(numBits)]

    def decodeBits(self, secrets, bitCommitments):
        return [v.decode(secret, commitment) for (v, secret, commitment) in
                zip(self.verifiers, secrets, bitCommitments)]

    def verify(self, secrets, bitCommitments):
        return all(
            bitVerifier.verify(secret, commitment)
            for (bitVerifier, secret, commitment) in
            zip(self.verifiers, secrets, bitCommitments)
        )

    def decode(self, secrets, bitCommitments):
        decodedBits = self.decodeBits(secrets, bitCommitments)
        return int(''.join(str(bit) for bit in decodedBits), 2)  # interpret the bits in base 2

A sample usage:

if __name__ == "__main__":
    import blum_blum_shub
    securityParameter = 10
    oneWayPerm = blum_blum_shub.blum_blum_shub(securityParameter)
    hardcorePred = blum_blum_shub.parity

    print('Int commitment')
    scheme = BBSIntCommitmentScheme(10, oneWayPerm, hardcorePred)
    verifier = BBSIntCommitmentVerifier(10, oneWayPerm, hardcorePred)
    choices = list(range(1024))

    for _ in range(10):
        theInt = random.choice(choices)
        commitments = scheme.commit(theInt)
        secrets = scheme.reveal()
        trueInt = verifier.decode(secrets, commitments)
        valid = verifier.verify(secrets, commitments)

        print('{} == {}? {}; {} {}'.format(theInt, trueInt, valid, secrets, commitments))

And a sample output:
527 == 527? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 1), (63975, 0), (5426, 0), (9124, 1), (23973, 0), (44832, 0), (33044, 0), (68501, 0)]
67 == 67? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 1), (54363, 1), (63975, 1), (5426, 0), (9124, 1), (23973, 1), (44832, 1), (33044, 0), (68501, 0)]
729 == 729? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 0), (63975, 1), (5426, 0), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 0)]
441 == 441? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 0), (54363, 0), (63975, 0), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 0)]
614 == 614? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 1), (63975, 1), (5426, 1), (9124, 1), (23973, 1), (44832, 0), (33044, 0), (68501, 1)]
696 == 696? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 0), (63975, 0), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]
974 == 974? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 0), (54363, 0), (63975, 1), (5426, 0), (9124, 1), (23973, 0), (44832, 0), (33044, 0), (68501, 1)]
184 == 184? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 1), (54363, 0), (63975, 0), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]
136 == 136? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 1), (54363, 0), (63975, 0), (5426, 0), (9124, 1), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]
632 == 632? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 1), (63975, 1), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]

Before we move on, we should note that this integer commitment scheme "blows up" the secret by quite a bit. If you have a security parameter $s$ and an integer with $n$ bits, then the commitment uses roughly $sn$ bits. A more efficient method would be to simply use a good public-key encryption scheme, and then reveal the secret key used to encrypt the message. While we implemented such schemes previously on this blog, I thought it would be more fun to do something new.

## A zero-knowledge proof for 3-coloring

First, a high-level description of the protocol. The setup: the prover has a graph $G$ with $n$ vertices $V$ and $m$ edges $E$, and also has a secret 3-coloring of the vertices $\varphi: V \to \{ 0, 1, 2 \}$. Recall, a 3-coloring is just an assignment of colors to vertices (in this case the colors are 0,1,2) so that no two adjacent vertices have the same color.

So the prover has a coloring $\varphi$ to be kept secret, but wants to prove that $G$ is 3-colorable. The idea is for the verifier to pick a random edge $(u,v)$, and have the prover reveal the colors of $u$ and $v$. However, if we run this protocol only once, there's nothing to stop the prover from just lying and picking two distinct colors. If we allow the verifier to run the protocol many times, and the prover actually reveals the colors from their secret coloring, then after roughly $|V|$ rounds the verifier will know the entire coloring. Each step reveals more knowledge. We can fix this with two modifications.
1. The prover first publicly commits to the coloring using a commitment scheme. Then when the verifier asks for the colors of the two vertices of a random edge, he can rest assured that the prover fixed a coloring that does not depend on the verifier's choice of edge.
2. The prover doesn't reveal colors from their secret coloring, but rather from a random permutation of the secret coloring. This way, when the verifier sees colors, they're equally likely to see any two colors, and all the verifier will know is that those two colors are different.

So the scheme is: prover commits to a random permutation of the true coloring and sends it to the verifier; the verifier asks for the true colors of a given edge; the prover provides those colors and the secrets to their commitment scheme so the verifier can check.

The key point is that now the prover has to commit to a coloring, and if the coloring isn't a proper 3-coloring the verifier has a reasonable chance of picking an improperly colored edge (a one-in-$|E|$ chance, which is at least $1/|V|^2$). On the other hand, if the coloring is proper, then the verifier will always query a properly colored edge, and it's zero-knowledge because the verifier is equally likely to see every pair of distinct colors. So the verifier will always accept, but won't know anything more than that the edge it chose is properly colored. Repeating this $|V|^2$-ish times, with high probability it'll have queried every edge and be certain the coloring is legitimate.

Let's implement this scheme. First the data types. As in the previous post, graphs are represented by edge lists, and a coloring is represented by a dictionary mapping a vertex to 0, 1, or 2 (the "colors").

# a graph is a list of edges, and for simplicity we'll say
# every vertex shows up in some edge
exampleGraph = [
    (1, 2),
    (1, 4),
    (1, 3),
    (2, 5),
    (2, 5),
    (3, 6),
    (5, 6),
]

exampleColoring = {
    1: 0,
    2: 1,
    3: 2,
    4: 1,
    5: 2,
    6: 0,
}

Next, the Prover class that implements that half of the protocol. We store a list of integer commitment schemes for each vertex whose color we need to commit to, and send out those commitments.

class Prover(object):
    def __init__(self, graph, coloring, oneWayPermutation=ONE_WAY_PERMUTATION, hardcorePredicate=HARDCORE_PREDICATE):
        self.graph = [tuple(sorted(e)) for e in graph]
        self.coloring = coloring
        self.vertices = list(range(1, numVertices(graph) + 1))
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate
        self.vertexToScheme = None

    def commitToColoring(self):
        self.vertexToScheme = {
            v: commitment.BBSIntCommitmentScheme(
                2, self.oneWayPermutation, self.hardcorePredicate
            ) for v in self.vertices
        }

        permutation = randomPermutation(3)
        permutedColoring = {
            v: permutation[self.coloring[v]] for v in self.vertices
        }

        return {v: s.commit(permutedColoring[v]) for (v, s) in self.vertexToScheme.items()}

    def revealColors(self, u, v):
        u, v = min(u, v), max(u, v)
        if not (u, v) in self.graph:
            raise Exception('Must query an edge!')

        return (
            self.vertexToScheme[u].reveal(),
            self.vertexToScheme[v].reveal(),
        )

In commitToColoring we randomly permute the underlying colors, and then compose that permutation with the secret coloring, committing to each resulting color independently. In revealColors we reveal only those colors for a queried edge. Note that we don't actually need to store the permuted coloring, because it's implicitly stored in the commitments. It's crucial that we reject any query that doesn't correspond to an edge.
If we don’t reject such queries then the verifier can break the protocol! In particular, by querying non-edges you can determine which pairs of nodes have the same color in the secret coloring. You can then chain these together to partition the nodes into color classes, and so color the graph. (After seeing the Verifier class below, implement this attack as an exercise). Here’s the corresponding Verifier: class Verifier(object): def __init__(self, graph, oneWayPermutation, hardcorePredicate): self.graph = [tuple(sorted(e)) for e in graph] self.oneWayPermutation = oneWayPermutation self.hardcorePredicate = hardcorePredicate self.committedColoring = None self.verifier = commitment.BBSIntCommitmentVerifier(2, oneWayPermutation, hardcorePredicate) def chooseEdge(self, committedColoring): self.committedColoring = committedColoring self.chosenEdge = random.choice(self.graph) return self.chosenEdge def accepts(self, revealed): revealedColors = [] for (w, bitSecrets) in zip(self.chosenEdge, revealed): trueColor = self.verifier.decode(bitSecrets, self.committedColoring[w]) revealedColors.append(trueColor) if not self.verifier.verify(bitSecrets, self.committedColoring[w]): return False return revealedColors[0] != revealedColors[1] As expected, in the acceptance step the verifier decodes the true color of the edge it queried, and accepts if and only if the commitment was valid and the edge is properly colored. Here’s the whole protocol, which is syntactically very similar to the one for graph isomorphism. def runProtocol(G, coloring, securityParameter=512): oneWayPermutation = blum_blum_shub.blum_blum_shub(securityParameter) hardcorePredicate = blum_blum_shub.parity prover = Prover(G, coloring, oneWayPermutation, hardcorePredicate) verifier = Verifier(G, oneWayPermutation, hardcorePredicate) committedColoring = prover.commitToColoring() chosenEdge = verifier.chooseEdge(committedColoring) revealed = prover.revealColors(*chosenEdge) revealedColors = ( verifier.verifier.decode(revealed[0], committedColoring[chosenEdge[0]]), verifier.verifier.decode(revealed[1], committedColoring[chosenEdge[1]]), ) isValid = verifier.accepts(revealed) print("{} != {} and commitment is valid? {}".format( revealedColors[0], revealedColors[1], isValid )) return isValid And an example of running it if __name__ == "__main__": for _ in range(30): runProtocol(exampleGraph, exampleColoring, securityParameter=10) Here’s the output 0 != 2 and commitment is valid? True 1 != 0 and commitment is valid? True 1 != 2 and commitment is valid? True 2 != 0 and commitment is valid? True 1 != 2 and commitment is valid? True 2 != 0 and commitment is valid? True 0 != 2 and commitment is valid? True 0 != 2 and commitment is valid? True 0 != 1 and commitment is valid? True 0 != 1 and commitment is valid? True 2 != 1 and commitment is valid? True 0 != 2 and commitment is valid? True 2 != 0 and commitment is valid? True 2 != 0 and commitment is valid? True 1 != 0 and commitment is valid? True 1 != 0 and commitment is valid? True 0 != 2 and commitment is valid? True 2 != 1 and commitment is valid? True 0 != 2 and commitment is valid? True 0 != 2 and commitment is valid? True 2 != 1 and commitment is valid? True 1 != 0 and commitment is valid? True 1 != 0 and commitment is valid? True 2 != 1 and commitment is valid? True 2 != 1 and commitment is valid? True 1 != 0 and commitment is valid? True 0 != 2 and commitment is valid? True 1 != 2 and commitment is valid? True 1 != 2 and commitment is valid? True 0 != 1 and commitment is valid? 
So while we haven’t proved it rigorously, we’ve seen the zero-knowledge proof for graph 3-coloring. This automatically gives us a zero-knowledge proof for all of NP, because given any NP problem you can just convert it to an equivalent 3-coloring instance and run this protocol on that. Of course, the blowup required to convert an arbitrary NP problem to 3-coloring can be polynomially large, which makes it unsuitable for practice. But the point is that this gives us a theoretical justification for which problems have zero-knowledge proofs in principle. Now that we’ve established that, you can go about trying to find the most efficient protocol for your favorite problem.

## Anticipatory notes

When we covered graph isomorphism last time, we said that a simulator could, without participating in the zero-knowledge protocol or knowing the secret isomorphism, produce a transcript that was drawn from the same distribution of messages as the protocol produced. That was all that it needed to be “zero-knowledge,” because anything the verifier could do with its protocol transcript, the simulator could do too.

We can do exactly the same thing for 3-coloring, exploiting the same “reverse order” trick where the simulator picks the random edge first, then chooses the color commitment post-hoc.

Unfortunately, both there and here I’m short-changing you, dear reader. The elephant in the room is that our naive simulator assumes the verifier is playing by the rules! If you want to define security, you have to define it against a verifier who breaks the protocol in an arbitrary way. For example, the simulator should be able to produce an equivalent transcript even if the verifier deterministically picks an edge, or tries to pick a non-edge, or tries to send gibberish. It takes a lot more work to prove security against an arbitrary verifier, but the basic setup is that the simulator can no longer make choices for the verifier, but rather has to invoke the verifier subroutine as a black box. (To compensate, the requirements on the simulator are relaxed quite a bit; more on that next time.)

Because an implementation of such a scheme would involve a lot of validation, we’re going to defer the discussion to next time. We also need to be more specific about the different kinds of zero-knowledge, since we won’t be able to achieve perfect zero-knowledge with the simulator drawing from an identical distribution, but rather a computationally indistinguishable distribution.

We’ll define all this rigorously next time, and discuss the known theoretical implications and limitations. Next time will be cuffs-off theory, baby! Until then!

# Zero Knowledge Proofs — A Primer

In this post we’ll get a strong taste for zero knowledge proofs by exploring the graph isomorphism problem in detail. In the next post, we’ll see how this relates to cryptography and the bigger picture. The goal of this post is to get a strong understanding of the terms “prover,” “verifier,” “simulator,” and “zero knowledge” in the context of a specific zero-knowledge proof. Then next time we’ll see how the same concepts (though not the same proof) generalize to a cryptographically interesting setting.

## Graph isomorphism

Let’s start with an extended example. We are given two graphs $G_1, G_2$, and we’d like to know whether they’re isomorphic, meaning they’re the same graph, but “drawn” different ways. The problem of telling if two graphs are isomorphic seems hard.
The pictures above, which are all different drawings of the same graph (or are they?), should give you pause if you thought it was easy.

To add a tiny bit of formalism, a graph $G$ is a list of edges, and each edge $(u,v)$ is a pair of integers between 1 and the total number of vertices of the graph, say $n$. Using this representation, an isomorphism between $G_1$ and $G_2$ is a permutation $\pi$ of the numbers $\{1, 2, \dots, n \}$ with the property that $(i,j)$ is an edge in $G_1$ if and only if $(\pi(i), \pi(j))$ is an edge of $G_2$. You swap around the labels on the vertices, and that’s how you get from one graph to another isomorphic one.

Given two arbitrary graphs as input on a large number of vertices $n$, nobody knows of an efficient—i.e., polynomial time in $n$—algorithm that can always decide whether the input graphs are isomorphic. Even if you promise me that the inputs are isomorphic, nobody knows of an algorithm that could construct an isomorphism. (If you think about it, such an algorithm could be used to solve the decision problem!)

## A game

Now let’s play a game. In this game, we’re given two enormous graphs on a billion nodes. I claim they’re isomorphic, and I want to prove it to you. However, my life’s fortune is locked behind these particular graphs (somehow), and if you actually had an isomorphism between these two graphs you could use it to steal all my money. But I still want to convince you that I do, in fact, own all of this money, because we’re about to start a business and you need to know I’m not broke.

Is there a way for me to convince you beyond a reasonable doubt that these two graphs are indeed isomorphic? And moreover, could I do so without you gaining access to my secret isomorphism? It would be even better if I could guarantee you learn nothing about my isomorphism or any isomorphism, because even the slightest chance that you can steal my money is out of the question.

Zero knowledge proofs have exactly those properties, and here’s a zero knowledge proof for graph isomorphism. For the record, $G_1$ and $G_2$ are public knowledge (common inputs to our protocol for the sake of tracking runtime), and the protocol itself is common knowledge. However, I have an isomorphism $f: G_1 \to G_2$ that you don’t know.

Step 1: I will start by picking one of my two graphs, say $G_1$, mixing up the vertices, and sending you the resulting graph. In other words, I send you a graph $H$ which is chosen uniformly at random from all isomorphic copies of $G_1$. I will save the permutation $\pi$ that I used to generate $H$ for later use.

Step 2: You receive a graph $H$ which you save for later, and then you randomly pick an integer $t$ which is either 1 or 2, with equal probability on each. The number $t$ corresponds to your challenge for me to prove $H$ is isomorphic to $G_1$ or $G_2$. You send me back $t$, with the expectation that I will provide you with an isomorphism between $H$ and $G_t$.

Step 3: Indeed, I faithfully provide you such an isomorphism. If you send me $t=1$, I’ll give you back $\pi^{-1} : H \to G_1$, and otherwise I’ll give you back $f \circ \pi^{-1}: H \to G_2$. Because composing a fixed permutation with a uniformly random permutation is again a uniformly random permutation, in either case I’m sending you a uniformly random permutation.

Step 4: You receive a permutation $g$, and you can use it to verify that $H$ is isomorphic to $G_t$. If the permutation I sent you doesn’t work, you’ll reject my claim, and if it does, you’ll accept my claim.
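Step 3 leans on the claim that composing a fixed permutation with a uniformly random permutation is again uniformly random. Here is a quick, throwaway empirical check of that claim on the permutations of three elements (this is my own illustration, not part of the protocol):

```
import itertools
import random
from collections import Counter

f = (2, 0, 1)  # an arbitrary fixed permutation of {0, 1, 2}
allPerms = list(itertools.permutations(range(3)))

counts = Counter()
for _ in range(60000):
    pi = random.choice(allPerms)                   # uniformly random permutation
    composed = tuple(f[pi[i]] for i in range(3))   # f composed with pi
    counts[composed] += 1

print(counts)  # each of the 6 permutations shows up roughly 10000 times
```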
Before we analyze, here’s some Python code that implements the above scheme. You can find the full, working example in a repository on this blog’s Github page.

First, a few helper functions for generating random permutations (and turning their list-of-zero-based-indices form into a function-of-positive-integers form)

```
import random

def randomPermutation(n):
    L = list(range(n))
    random.shuffle(L)
    return L

def makePermutationFunction(L):
    return lambda i: L[i - 1] + 1

def makeInversePermutationFunction(L):
    return lambda i: 1 + L.index(i - 1)

def applyIsomorphism(G, f):
    return [(f(i), f(j)) for (i, j) in G]
```

Here’s a class for the Prover, the one who knows the isomorphism and wants to prove it while keeping the isomorphism secret:

```
class Prover(object):
    def __init__(self, G1, G2, isomorphism):
        '''
            isomorphism is a list of integers representing
            an isomorphism from G1 to G2.
        '''
        self.G1 = G1
        self.G2 = G2
        self.n = numVertices(G1)
        assert self.n == numVertices(G2)

        self.isomorphism = isomorphism
        self.state = None

    def sendIsomorphicCopy(self):
        isomorphism = randomPermutation(self.n)
        pi = makePermutationFunction(isomorphism)

        H = applyIsomorphism(self.G1, pi)

        self.state = isomorphism
        return H

    def proveIsomorphicTo(self, graphChoice):
        randomIsomorphism = self.state
        piInverse = makeInversePermutationFunction(randomIsomorphism)

        if graphChoice == 1:
            return piInverse
        else:
            f = makePermutationFunction(self.isomorphism)
            return lambda i: f(piInverse(i))
```

The prover has two methods, one for each round of the protocol. The first creates an isomorphic copy of $G_1$, and the second receives the challenge and produces the requested isomorphism.

And here’s the corresponding class for the verifier

```
class Verifier(object):
    def __init__(self, G1, G2):
        self.G1 = G1
        self.G2 = G2
        self.n = numVertices(G1)
        assert self.n == numVertices(G2)

    def chooseGraph(self, H):
        choice = random.choice([1, 2])
        self.state = H, choice
        return choice

    def accepts(self, isomorphism):
        '''
            Return True if and only if the given isomorphism
            is a valid isomorphism between the randomly chosen
            graph in the first step, and the H presented by
            the Prover.
        '''
        H, choice = self.state
        graphToCheck = [self.G1, self.G2][choice - 1]
        f = isomorphism

        isValidIsomorphism = (graphToCheck == applyIsomorphism(H, f))
        return isValidIsomorphism
```

Then the protocol is as follows:

```
def runProtocol(G1, G2, isomorphism):
    p = Prover(G1, G2, isomorphism)
    v = Verifier(G1, G2)

    H = p.sendIsomorphicCopy()
    choice = v.chooseGraph(H)
    witnessIsomorphism = p.proveIsomorphicTo(choice)

    return v.accepts(witnessIsomorphism)
```

Analysis: Let’s suppose for a moment that everyone is honestly following the rules, and that $G_1, G_2$ are truly isomorphic. Then you’ll always accept my claim, because I can always provide you with an isomorphism. Now let’s suppose that I’m actually lying: the two graphs aren’t isomorphic, and I’m trying to fool you into thinking they are. What’s the probability that you’ll rightfully reject my claim?

Well, regardless of what I do, I’m sending you a graph $H$ and you get to make a random choice of $t = 1, 2$ that I can’t control. If $H$ is only actually isomorphic to either $G_1$ or $G_2$ but not both, then so long as you make your choice uniformly at random, half of the time I won’t be able to produce a valid isomorphism and you’ll reject. And unless you can actually tell which graph $H$ is isomorphic to—an open problem, but let’s say you can’t—then probability 1/2 is the best you can do.
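(As a quick aside, here is a hypothetical smoke test of the honest case, using inputs of my own and the numVertices helper sketched earlier. The pair is isomorphic by construction, so a single round always accepts.)

```
G1 = [(1, 2), (2, 3), (1, 3), (3, 4)]
L = randomPermutation(numVertices(G1))
f = makePermutationFunction(L)
G2 = applyIsomorphism(G1, f)

print(runProtocol(G1, G2, L))  # always True for an honest prover
```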
Maybe the probability 1/2 is a bit unsatisfying, but remember that we can amplify this probability by repeating the protocol over and over again. So if you want to be sure I didn’t cheat and get lucky to within a probability of one-in-one-trillion, you only need to repeat the protocol 30 times. To be surer than the chance of picking a specific atom at random from all atoms in the universe, only about 400 times.

If you want to feel small, think of the number of atoms in the universe. If you want to feel big, think of its logarithm.

Here’s the code that repeats the protocol for assurance.

```
def convinceBeyondDoubt(G1, G2, isomorphism, errorTolerance=1e-20):
    probabilityFooled = 1

    while probabilityFooled > errorTolerance:
        result = runProtocol(G1, G2, isomorphism)
        assert result
        probabilityFooled *= 0.5
        print(probabilityFooled)
```

Running it, we see it succeeds

```
$ python graph-isomorphism.py
0.5
0.25
0.125
0.0625
0.03125
...
<SNIP>
...
1.3552527156068805e-20
6.776263578034403e-21
```

So it’s clear that this protocol is convincing. But how can we be sure that there’s no leakage of knowledge in the protocol? What does “leakage” even mean? That’s where this topic is the most difficult to nail down rigorously, in part because there are at least three a priori different definitions! The idea we want to capture is that anything that you can efficiently compute after the protocol finishes (i.e., you have the content of the messages sent to you by the prover) you could have computed efficiently given only the two graphs $G_1, G_2$, and the claim that they are isomorphic.

Another way to say it is that you may go through the verification process and feel happy and confident that the two graphs are isomorphic. But because it’s a zero-knowledge proof, you can’t do anything with that information more than you could have done if you just took the assertion on blind faith. I’m confident there’s a joke about religion lurking here somewhere, but I’ll just trust it’s funny and move on.

In the next post we’ll expand on this “leakage” notion, but before we get there it should be clear that the graph isomorphism protocol will have the strongest possible “no-leakage” property we can come up with. Indeed, in the first round the prover sends a uniform random isomorphic copy of $G_1$ to the verifier, but the verifier can compute such an isomorphism already without the help of the prover. The verifier can’t necessarily find the isomorphism that the prover used in retrospect, because the verifier can’t solve graph isomorphism. Instead, the point is that the probability space of “$G_1$ paired with an $H$ made by the prover” and the probability space of “$G_1$ paired with $H$ as made by the verifier” are equal. No information was leaked by the prover.

For the second round, again the permutation $\pi$ used by the prover to generate $H$ is uniformly random. Since composing a fixed permutation with a uniform random permutation also results in a uniform random permutation, the second message sent by the prover is uniformly random, and so again the verifier could have constructed a similarly random permutation alone.

Let’s make this explicit with a small program. We have the honest protocol from before, but now I’m returning the set of messages sent by the prover, which the verifier can use for additional computation.
```
def messagesFromProtocol(G1, G2, isomorphism):
    p = Prover(G1, G2, isomorphism)
    v = Verifier(G1, G2)

    H = p.sendIsomorphicCopy()
    choice = v.chooseGraph(H)
    witnessIsomorphism = p.proveIsomorphicTo(choice)

    return [H, choice, witnessIsomorphism]
```

To say that the protocol is zero-knowledge (again, this is still colloquial) is to say that anything that the verifier could compute, given as input the return value of this function along with $G_1, G_2$ and the claim that they’re isomorphic, the verifier could also compute given only $G_1, G_2$ and the claim that $G_1, G_2$ are isomorphic.

It’s easy to prove this, and we’ll do so with a python function called simulateProtocol.

```
def simulateProtocol(G1, G2):
    # Construct data drawn from the same distribution as what is
    # returned by messagesFromProtocol
    choice = random.choice([1, 2])
    G = [G1, G2][choice - 1]
    n = numVertices(G)

    isomorphism = randomPermutation(n)
    pi = makePermutationFunction(isomorphism)
    H = applyIsomorphism(G, pi)

    return H, choice, pi
```

The claim is that the distributions of the outputs of messagesFromProtocol and simulateProtocol are equal. But simulateProtocol will work regardless of whether $G_1, G_2$ are isomorphic. Of course, it’s not convincing to the verifier because the simulating function made the choices in the wrong order, choosing the graph index before making $H$. But the distribution that results is the same either way. So if you were to use the actual Prover/Verifier protocol outputs as input to another algorithm (say, one which tries to compute an isomorphism of $G_1 \to G_2$), you might as well use the output of your simulator instead. You’d have no information beyond hard-coding the assumption that $G_1, G_2$ are isomorphic into your program. Which, as I mentioned earlier, is no help at all.

In this post we covered one detailed example of a zero-knowledge proof. Next time we’ll broaden our view and see the more general power of zero-knowledge (that it captures all of NP), and see some specific cryptographic applications. Keep in mind the preceding discussion, because we’re going to re-use the terms “prover,” “verifier,” and “simulator” to mean roughly the same things as the classes Prover, Verifier and the function simulateProtocol.

Until then!
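An appendix-style sketch of my own, not from the post: one way to make the “same distribution” claim tangible is to tabulate many transcripts from each source and compare the frequencies, here summarizing a transcript by the sorted edge list of $H$ and the challenge choice.

```
from collections import Counter

def transcriptStats(runs, sampler):
    # summarize a transcript source by (sorted edge list of H, choice)
    stats = Counter()
    for _ in range(runs):
        H, choice, _ = sampler()
        stats[(tuple(sorted(H)), choice)] += 1
    return stats

# with an isomorphic pair G1, G2 and a witness list L in hand:
# real = transcriptStats(10000, lambda: messagesFromProtocol(G1, G2, L))
# fake = transcriptStats(10000, lambda: simulateProtocol(G1, G2))
# the two tables should agree up to sampling noise
```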
Research

# Looking at the label and beyond: the effects of calorie labels, health consciousness, and demographics on caloric intake in restaurants

Brenna Ellison1*, Jayson L Lusk2 and David Davis3

Author Affiliations

1 University of Illinois at Urbana-Champaign, 321 Mumford Hall, 1301 W. Gregory Dr., 61801, Urbana, IL, USA

2 Oklahoma State University, 411 Ag Hall, 74078, Stillwater, OK, USA

3 Oklahoma State University, 210 Human Sciences West, OK, 74078, Stillwater, USA

International Journal of Behavioral Nutrition and Physical Activity 2013, 10:21 doi:10.1186/1479-5868-10-21

Received: 8 May 2012; Accepted: 6 February 2013; Published: 8 February 2013

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

#### Background

Recent legislation has required calorie labels on restaurant menus as a means of improving Americans’ health. Despite the growing research in this area, no consensus has been reached on the effectiveness of menu labels. This suggests the possibility of heterogeneity in responses to caloric labels across people with different attitudes and demographics. The purpose of this study was to explore the potential relationships between caloric intake and diners’ socio-economic characteristics and attitudes in a restaurant field experiment that systematically varied the caloric information printed on the menus.

#### Methods

We conducted a field experiment in a full service restaurant where patrons were randomly assigned to one of three menu treatments which varied the amount of caloric information printed on the menus (none, numeric, or symbolic calorie label). At the conclusion of their meals, diners were asked to complete a brief survey regarding their socio-economic characteristics, attitudes, and meal selections. Using regression analysis, we estimated the number of entrée and extra calories ordered by diners as a function of demographic and attitudinal variables. Additionally, irrespective of the menu treatment to which a subject was assigned, our study identified which types of people are likely to be low-, medium-, and high-calorie diners.

#### Results

Results showed that calorie labels have the greatest impact on those who are least health conscious. Additionally, using a symbolic calorie label can further reduce the caloric intake of even the most health conscious patrons. Finally, calorie labels were more likely to influence the selection of the main entrée as opposed to supplemental items such as drinks and desserts.

#### Conclusions

If numeric calorie labels are implemented (as currently proposed), they are most likely to influence consumers who are less health conscious – probably one of the key targets of this legislation. Unfortunately, numeric labels did little for those consumers who were already more knowledgeable about health and nutrition. To reach a broader group of diners, a symbolic calorie label may be preferred as it reduced caloric intake across all levels of health consciousness.

##### Keywords:

Numeric vs. symbolic calorie labeling; Health consciousness; Full service restaurant

### Background

In 1980, about 32% of food expenditures occurred outside the home. By 2010, the figure had increased to nearly 44% [1].
This increase has incited policymakers at the local, state, and national levels to push for legislation to encourage more healthful food choices away from home, with the most prominent piece being housed in the 2010 healthcare bill. This legislation mandates that chain restaurants provide calorie information on all menu forms [2]. While the intent of this type of labeling policy is quite clear, its effects are not. In a growing body of literature, a consensus on labels’ (in)effectiveness has yet to be reached – some studies found calorie labeling influenced food choice while others said it had no significant effect (see Harnack and French [3] and Swartz, Braxton, and Viera [4] for a comprehensive review).

The lack of consensus on the impacts of menu labeling suggests there may be more to the story. That previous studies have employed similar experimental designs yet reached different conclusions suggests the discrepancy may relate to differences in the types of people involved in the studies. People self-select into different types of restaurants, and it is possible menu labels are more influential for some groups of people than others. Consider health consciousness, for example. Highly health conscious individuals likely possess a large amount of health/nutrition awareness and knowledge; thus, the label will probably have minimal influence on their food choices because such individuals already know which foods are lower calorie. Low health conscious people, on the other hand, may find the label provides novel information which can be used to select a lower-calorie menu item.

However, individuals (even health conscious dietitians) often struggle to estimate (typically underestimate) the number of calories in restaurant meals [5-7]. Thus, when diners are confronted with accurate calorie information, their attitudes toward specific menu items may change, especially for items not closely aligned with expectations. Burton et al. [7] argue “surprising” menu items (e.g., a high-calorie salad) will experience the most dramatic shifts in attitudes and purchase intentions. Differences in conclusions across studies might partially be explained by the fact that “surprises” may differ across people and restaurants.

The impact of menu labels may also vary with demographic factors, such as gender, income, age, and education. Glanz et al. [8] found that nutrition is more important to women and older individuals; thus, these groups may be more responsive to menu labels as opposed to young males. Surprisingly, the menu labeling literature has largely neglected the impacts of demographics and attitudinal characteristics. There have been several studies on the types of people who eat at fast food restaurants (see Rydell et al. [9] for a review), but little work has examined what people eat once inside the restaurant, a gap the present study aims to fill.

In this paper, we also investigate the effect of the format in which calories are displayed on menu labels. The vast majority of labeling studies have provided the number of calories for each menu item. From the literature, it is clear this type of label has limited effectiveness, which leads us to ask: is there a better way to convey caloric information? Thorndike et al. [10] found using a traffic light symbol adjusted purchasing behavior among hospital cafeteria patrons; however, there was no comparison with other labeling formats.
Alternatively, Ellison, Lusk, and Davis [11] compared the effectiveness of symbolic (also in the form of a traffic light) versus numeric menu labeling and found that symbolic labeling led to lower caloric intake, on average, than numeric labeling. An open question this study aims to answer is whether symbolic information might be more influential on consumers with limited nutrition knowledge. The overall purpose of this study is to gain a better understanding of restaurant patrons’ choices in the face of differing nutrition labels. More specifically, we will determine which types of people are most responsive to nutrition labeling on restaurant menus by examining the relationship between caloric intake and (1) menu labeling format, (2) health consciousness, and (3) demographic factors. ### Methods #### Data and experimental design Survey data were collected for two weeks during the 2010 Fall semester at a restaurant on the Oklahoma State University campus. 1 The restaurant was split into three sections, with each assigned to a unique menu treatment. Upon arrival, diners were randomly assigned to a table in one of the three sections. All treatments listed the name, description, and price for each menu item, but the caloric information differed across treatments. Diners in the control menu treatment received no nutritional information, patrons in the calorie-only menu treatment were provided the number of calories in parentheses before each item’s price, and individuals in the calorie+traffic light menu treatment were presented with a green, yellow, or red traffic light symbol (indicating specific calorie ranges) in addition to the numeric caloric information preceding each item’s price. Green light options contained 400 calories or less, yellow light options had between 401 and 800 calories, and red light options consisted of more than 800 calories. Diners could choose from 51 menu options. Major menu categories included soups and salads, burgers and sandwiches, pasta, vegetarian items, and prime and choice steaks. Additionally, diners had the option of a daily special, usually a ‘surf-and-turf’ combination. Upon completion of their meal, patrons were asked to complete a survey. Prior to this point, diners were unaware their dining choices had been recorded as part of the research study. Using the restaurant’s record-keeping system, we matched up diners’ actual choices with their survey responses. In total, there were 138 observations (see Table  1 for summary statistics). Table 1. Characteristics of survey respondents and definition of variables (N=138) The one-page survey contained 15 questions and asked about diners’: (1) demographic characteristics, (2) levels of health consciousness, (3) frequency of and reasons for dining at the restaurant, (4) method of item selection (i.e., was selection based on taste, price, healthfulness, etc.), and (5) menu label preference. On the back of the survey, participants were presented a menu and asked which item(s) and beverage they ordered and if they ordered dessert (see Additional file 1). A key variable in this analysis was health consciousness. Following Kraft and Goodell [12] and Berning, Chouinard, and McCluskey [13], we measured this construct by asking participants to answer three five-point Likert scale questions regarding their daily caloric intake, fat intake, and use of nutrition labels. 
Summing the values across the three questions provided a person’s level of health consciousness; scores could range from three to fifteen, with fifteen representing the most health conscious consumer.

#### Model and data analysis

The first part of our analysis utilized ordinary least squares (OLS) regressions to determine factors affecting diners’ caloric intake. We disaggregated total caloric intake into (1) main entrée calories consumed, and (2) extra calories derived from additional items consumed over the course of the meal (drinks, desserts, side items like soup or salad served before the main course, etc.). Some extra items (namely, daily dessert specials and drinks) were not listed on the menu, and in concordance with the new federal labeling law, were thus not required to possess a menu label. 2 The model for calorie intake type m (m = entrée calories, extra calories) by individual i is specified as follows:

$Calories_{mi} = \beta_0 + \beta_1 TLS_i + \beta_2 CAL_i + \beta_3 HC_i + \beta_4 Female_i + \beta_5 Student_i + \beta_6 Bachelors_i + \beta_7 Party_i + \gamma_1 (TLS_i \times HC_i) + \gamma_2 (CAL_i \times HC_i) + \varepsilon_i$ (1)

where $\beta_0$ is the intercept; $\beta_1, \dots, \beta_7$ are the effects of the calorie+traffic light ($TLS_i$) and calorie-only ($CAL_i$) menu labeling formats, health consciousness ($HC_i$), gender ($Female_i$), status as a current student ($Student_i$), college education ($Bachelors_i$), and party size ($Party_i$) on caloric intake; $\gamma_1$ and $\gamma_2$ are interaction effects between each menu labeling format and health consciousness on caloric intake; and $\varepsilon_i \sim N(0, \sigma_\varepsilon^2)$ is a random error term.

Despite mixed results from previous studies, we hypothesized lower caloric intake among those individuals who received menus providing nutritional information (the calorie+traffic light and calorie-only menus) compared to those individuals who received no nutritional information (i.e., β1 < 0 and β2 < 0). Research has shown consumers tend to underestimate the caloric contents of meals [5-7,14], so the label corrects the misperception and may lead to lower-calorie choices. Additionally, we expected these negative relationships to hold more strongly in the entrée calorie specification as opposed to the extra calorie specification since some extra calorie items (drinks and desserts) were not included on the menu.

Secondly, we hypothesized a negative relationship between health consciousness and caloric intake. The more health conscious a person is (i.e., the more a person monitors his/her calorie and/or fat intake or spends time reading nutrition labels), the greater amount of nutrition knowledge/awareness the individual has, and thus, the fewer calories that individual is expected to order. However, we expected high levels of health consciousness would moderate the effect of menu labeling format such that highly health conscious individuals would derive little new information from calorie labels. Thus, we hypothesized that menu labeling format would lead to the greatest calorie reductions for individuals who were less health conscious.

In the second portion of our analysis, we focused on answering the “who orders what” question. Here, we determined which types of people (male vs. female, older vs. younger, etc.) were low-, medium-, and high-calorie diners. For this, we again considered both entrée and extra calories ordered; however, instead of examining them as continuous variables, we segregated people into low, medium, and high categories. For the entrée calories, we used the intuitive cutoff points corresponding to our traffic light specifications.
Thus, low-calorie diners ordered 400 entrée calories or less, medium-calorie diners ordered between 401 and 800 entrée calories, and high-calorie diners ordered more than 800 entrée calories. Defining the low, medium, and high levels of extra calories was more challenging. We opted to classify low-calorie diners as those people who ordered zero extra calories. These diners strictly adhered to their main entrée choice and did not supplement their meal. Medium-calorie diners were those who ordered between one and 250 extra calories (most likely diners who ordered one extra item), and high-calorie diners ordered more than 250 extra calories (most likely selected two or more extra items). Once the low-, medium-, and high-calorie categories were established for entrée and extra calories, we calculated the mean values for a host of variables under each category, including gender, age, income, and education. The average levels of health consciousness were also compared across the categories of diners as well as the proportion of people who responded that taste or health was the most important characteristic when making a menu selection. A dummy variable for the menu labeling treatment was also included to determine whether one format led to more low (or even high) calorie diners than another. Finally, we included variables relating to whether individuals were repeat diners and their reason for visiting the restaurant. Chi-squared and ANOVA tests were used to determine whether significant differences existed between low-, medium-, and high-calorie diners. ### Results We first compared the average number of entrée, extra, and total calories ordered across the three menu formats. Figure  1 reveals that, in terms of entrée calories, the calorie-only and calorie+traffic light labeling treatments resulted in lower caloric intake relative to the control menu with no information. The calorie+traffic light menu label led to significantly fewer entrée calories ordered (p = 0.033) compared to the other two labeling formats (114 and 129 entrée calories fewer, on average, than the calorie-only and control menus, respectively). However, there were no significant differences in extra calories ordered across treatments. Figure 1. Average number of entrée, extra, and total calories across three menu treatments. Combining the entrée and extra calorie measures gave us the average total calories ordered. Ultimately, neither label significantly changed total calories ordered relative to the control menu; 3 however, the calorie+traffic light label outperformed the calorie-only label as these diners ordered 121 calories fewer than those receiving the calorie-only menu (p = 0.063). #### Regression analysis First consider the regression results for entrée calories. Table  2 shows both the calorie+traffic light and calorie-only labels significantly reduced entrée calories ordered (by 496.34 and 610.69 calories, respectively), thus β1 < 0 and β2 < 0 as hypothesized. Based on Figure  1, one might have expected the calorie+traffic light label to have the greater reduction in entrée calories; however, the interactions between each menu treatment and health consciousness must also be considered when interpreting the mean effect of a menu treatment. Table  2 reveals both interactions between menu treatment and health consciousness were significantly positive, indicating the effects of the labels were less pronounced for more health conscious individuals. 
Comparing the two labels, we found that at low levels of health consciousness, the calorie-only label led to larger calorie reductions; however, as health consciousness increased, the calorie+traffic light was more effective at reducing entrée calories, all else held constant. Figure  2 illustrates this effect by plotting the predicted caloric intake as a function of HC score for the three menu treatments, while holding all other variables constant at the overall means. Figure 2. Relationship between health consciousness and entrée calories ordered in three menu treatments. Table 2. regression estimates for entrée calories ordered and extra calories ordered Table  2 also reveals that entrée calories were negatively related to health consciousness (p = 0.0002). Under the control menu, every one unit increase in health consciousness resulted in a 52.48 entrée calorie decrease, on average. However, under the calorie+traffic light and calorie-only label treatments, the effects of health consciousness were less pronounced. The marginal effect of health consciousness in the calorie+traffic light treatment was −52.48 + 38.16 = −14.32, so the negative relationship continued to hold but at a lower absolute magnitude. Alternatively, in the calorie-only treatment, the marginal effect was −52.48 + 55.79 = 3.31 – effectively zero. These results suggest the calorie-only label does not really tell the most health conscious individuals any new information; therefore, entrée calories were not further reduced. Figure  2 provides further evidence of this as the calorie-only line was relatively flat across all levels of health consciousness. The calorie+traffic light label, however, appeared to provide some new information as entrée calories were further reduced in this menu condition even among more health conscious individuals. In terms of demographics, women ordered significantly fewer (p = 0.026) entrée calories than men. This aligned with the finding by Glanz et al. [8] that nutrition was more important to women than men; thus, it is probable women will select more nutritious (lower calorie) entrées than men. A second explanation may be that women generally require fewer calories to maintain their body weight relative to men. Other demographic variables had no significant impact on entrée calories ordered. Turning to the extra calories regression estimates, Table  2 reveals the effects of the calorie+traffic light and calorie-only labels disappeared – neither was significantly different from zero. Education, however, marginally affected (p = 0.086) extra calories ordered, as people who held a bachelor’s degree ordered 91.91 extra calories fewer, on average, than those without a degree. Additionally, party size was negatively related (p = 0.003) to extra calories ordered. #### Characteristics of low-, medium-, and high-calorie diners Table  3 offers insight into the characteristics of low-, medium- and high-calorie diners in terms of entrée calories ordered. Table  3 shows that a significantly higher percentage (p=0.001) of females (75%) ordered low-calorie entrées compared to the percentages who ordered medium- or high-calorie entrées (56.5% and 33.3%, respectively). Additionally, current university students made up larger proportions of medium- and high-calorie diners (p = 0.100) whereas people who hold a bachelor’s degree made up a greater proportion of low-calorie diners (p = 0.099). 
Age also varied across categories as younger patrons (ages 18–34) were more likely to order medium- or high-calorie entrées; conversely, older patrons (ages 55 and older) were more likely to order low-calorie entrées. Table 3. Demographic characteristics of low-, medium-, and high-calories diners (based on entrée calories) Individuals who considered health to be the most important characteristic when making a menu selection were more likely to be low-calorie diners (p=0.001) as opposed to medium- or high-calorie diners. Health consciousness revealed a similar result. Low-calorie diners had a mean health consciousness score of 11.2, while the mean health consciousness scores for medium- and high-calorie diners declined to 10.29 and 9.389, respectively (p = 0.046). A final set of variables related to the reasons for eating at the restaurant. During our survey period, the top two reasons for visiting the restaurant were to have lunch with friends or some type of business/work-related meal. From the table, we see that people eating lunch with friends made up larger proportions of medium- and high-calorie diners. People visiting for business reasons were just the opposite, accounting for 30% of low-calorie diners but only 16.1% and 11.1% of medium- and high-calorie diners, respectively. Turning to Table  4, we also categorized people as low-, medium-, or high-calorie diners based on the number of extra calories ordered. Here, the effect of gender disappeared; however, there were still differences in terms of education variables. Current university students made up greater proportions of medium- and high-calorie diners. Additionally, 47% of low-calorie diners held a bachelor’s degree compared to 13.3% and 28.6% of medium- and high-calorie diners (p = 0.004). In terms of age, 90% of medium-calorie diners were 18–34 years old (p = 0.015). Table  4 also reveals low income diners (those with < $25,000 in annual household income) made up the greatest percentages of medium- and high-calorie diners (60% and 45.2%, respectively). Alternatively, higher income patrons (those with ≥$100,000 in annual household income) were more likely to be low-calorie diners (p = 0.024). Table 4. Demographic characteristics of low-, medium-, and high-calories diners (based on extra calories) Variables related to health had a much smaller role in classifying extra calorie diners. Health consciousness was only marginally significant (p = 0.090). Similar to the entrée calorie results, low-calorie diners had the highest health consciousness scores, on average, yet the difference in health consciousness scores across the three diner groups was much smaller. Finally, in terms of dining purpose, we again found that patrons visiting the restaurant for business or work-related purposes were more likely to be low-calorie diners as opposed to medium- or high-calorie diners (p = 0.038). ### Discussion The federal government passed a menu labeling law in the 2010 health care bill requiring chain restaurants to post caloric information for all menus. Increased attention to labeling laws has caused a surge in research related to the potential (and actual) effectiveness of calorie labels in restaurants. As these studies become more prevalent, one would expect the results to eventually converge on the impact of these labels; however, this has not been the case. Some studies found calorie labels significantly reduced intake while others concluded the labels had no effect. 
These inconclusive results led us to ask: are there factors beyond the label’s presence which influence caloric intake? Results of this study revealed menu labels have a greater effect on entrée calories than on extra calories. Both the calorie+traffic light and calorie-only labels significantly reduced entrée calories ordered but neither significantly reduced extra calories ordered. Though not statistically significant (p = 0.294), diners who received menus with nutritional information actually ordered more extra calories than those who received no nutritional information. This suggests diners who received calorie information may be experiencing a licensing effect such that ordering a lower-calorie entrée gave a diner license to order an extra side item or dessert [15,16]; however, we leave this issue to future research. Another possible explanation for the label’s lack of influence on extra calories ordered could be that some of the extra items like drinks and desserts were not presented on the menu, so diners were not exposed to their caloric contents.4 We also found a negative relationship between health consciousness and entrée calories ordered; however, the interactions between each calorie label and health consciousness were significantly positive. This means both labels were more effective among the least health conscious – precisely the people that menu labeling laws are often trying to influence. Moreover, our results suggest the calorie+traffic light menu was more effective than the calorie-only menu at reducing entrée calories ordered as health consciousness increased. Interestingly, despite the calorie+traffic light label’s effectiveness at reducing calories ordered, it was not the labeling format of choice. When asked which labeling format was preferred, only 27.5% of respondents wanted to see the calorie+traffic light label on their menus. Surprisingly, 42% preferred the calorie-only label which had virtually no influence on ordering behavior. These responses imply diners may want more information on their menus (the number of calories) but do not want to be told what they should or should not consume (i.e., green = good, red = bad). A key strength of this study was the experimental design. We compared two labeling treatments to a control group with no calorie labels in a real restaurant setting. Additionally, all treatments were examined simultaneously, meaning any differences in dining habits from day to day would be picked up across all treatment groups. Secondly, this paper examined restaurant patrons more closely by administering a survey in addition to collecting purchase data. One issue in the present study was the small sample size. While more observations are preferable, the authors have conducted a larger study comparing the same three menu labeling treatments (with purchase data only), and the effects were virtually the same [11]. In both studies, the calorie+traffic light label reduced total calories ordered by 69 calories, though the reduction was significant only with the larger sample. The calorie-only label, conversely, did not affect total calories ordered regardless of sample size. A second limitation was that not all items (particularly drinks and desserts) were listed on the menus, so diners were not provided their caloric contents. Unfortunately, this may be a limitation consumers face even when the legislation is enacted. 
As currently proposed by the Food and Drug Administration, restaurants will not be required to post caloric contents for daily special items which are not regularly offered. In this study, the desserts changed daily, making them exempt from calorie labels (drinks would require labels, but restaurant management was not open to adding them to the menu in this study). Thus, while lack of calorie posting on daily special items was a limitation, our design was consistent with the proposed legislation and mirrored the reality diners are likely to encounter.

### Conclusions

Together our results suggest that calorie labels in restaurants can be effective, but only among those restaurant patrons who have lower levels of health consciousness. For highly health conscious diners, calorie labels provide little new information. However, our findings suggest the addition of a symbol (here, a traffic light symbol) to the calorie information could further reduce calories ordered, even for the most health conscious individuals.

#### Endnotes

1. All data were collected during the lunch meal (11:00 a.m. to 2:00 p.m.).

2. Under the proposed legislation, only the daily dessert specials would be exempt from having a calorie label. Drinks would be required to be labeled; however, this restaurant did not list drinks on its menus (a feature not open to change at the time of this study), so consumers were not presented with calorie information for drink options.

3. In the present study, we found that neither the calorie-only nor the calorie+traffic light label significantly affected total calories ordered. However, one could argue the lack of significance may be due to the small sample size (and thus, limited power) and that the reduction caused by the calorie+traffic light label (69 calorie reduction, on average) could still be significant from a public health standpoint. Fortunately, we have a larger data set (N = 946) which confirms this (see Ellison, Lusk, and Davis [11]). In the larger data set, we utilized the same three menu treatments and experimental design; however, no diner demographic and attitudinal profiles were available. Results from the larger data set showed the calorie+traffic light label leads to a nearly identical 68.7 calorie reduction (on average), a result which is statistically different than the control menu. It should be noted, though, that the calorie-only label did not significantly impact calories ordered in either data set.

4. While drinks and beverages were not listed on the menu (and thus had no nutritional information present for diners), it should be pointed out that less than 25% of diners ordered either a dessert or a caloric beverage; thus, the majority of extra items ordered were listed on the menu with the corresponding nutritional information.

### Competing interests

Author disclosure: Brenna Ellison, Jayson L. Lusk, David Davis, no competing interests.

### Authors’ contributions

All of the authors were involved in designing the research. BE and JLL conducted the research and DD oversaw management of the restaurant. BE had primary responsibility for analyzing the data and writing the paper, with all of the authors contributing by reviewing and editing drafts of the manuscript. All authors read and approved the final manuscript.

### References

1. Economic Research Service (ERS): Food CPI and Expenditures. United States Department of Agriculture. 2011.
2. Food and Drug Administration (FDA): FDA Proposes Draft Menu and Vending Machine Labeling Requirements, Invites Public to Comment on Proposals. United States Department of Health and Human Services. 2011.

3. Harnack LJ, French SA: Effect of Point-of-Purchase Calorie Labeling on Restaurant and Cafeteria Food Choices: A Review of the Literature. Int J Behav Nutr Phys Act 2008, 5:1-6.

4. Swartz JJ, Braxton D, Viera AJ: Calorie Menu Labeling on Quick-service Restaurant Menus: An Updated Systematic Review of the Literature. Int J Behav Nutr Phys Act 2011, 8:135.

5. Backstrand JR, Wootan MG, Young LR, Hurley J: Fat Chance: A Survey of Dietitians’ Knowledge of the Calories and Fat in Restaurant Meals. Washington, DC: Center for Science in the Public Interest; 1997.

6. Burton S, Creyer EH: What Consumers Don’t Know Can Hurt Them: Consumer Evaluations and Disease Risk Perceptions of Restaurant Menu Items. J Consum Affairs 2004, 38:121-145.

7. Burton S, Creyer EH, Kees J, Huggins K: Attacking the Obesity Epidemic: The Potential Health Benefits of Providing Nutrition Information in Restaurants. Am J Pub Health 2006, 96:1669-1675.

8. Glanz K, Basil M, Maibach E, Goldberg J, Snyder D: Why Americans Eat What They Do: Taste, Nutrition, Cost, Convenience, and Weight Control Concerns as Influences on Food Consumption. J Am Diet Assoc 1998, 98:1118-1126.

9. Rydell SA, Harnack LJ, Oakes JM, Story M, Jeffrey RW, French SA: Why Eat at Fast-Food Restaurants: Reported Reasons among Frequent Consumers. J Am Diet Assoc 2008, 108:2066-2070.

10. Thorndike AN, Sonnenberg L, Riis J, Barraclough S, Levy DE: A 2-Phase Labeling and Choice Architecture Intervention to Improve Healthy Food and Beverage Choices. Am J Pub Health 2012, 102:527-533.

11. Ellison B, Lusk JL, Davis D: Effect of Menu Labeling on Caloric Intake and Restaurant Revenue in Full-Service Restaurants. Seattle: Selected paper for presentation at the AAEA Annual Meeting; 12-14 August 2012.

12. Kraft FB, Goodell PW: Identifying the Health Conscious Consumer. J Health Care Mktg 1993, 13:18-25.

13. Berning JP, Chouinard HH, McCluskey JS: Consumer Preferences for Detailed versus Summary Formats of Nutritional Information on Grocery Store Shelf Labels. J Ag Food Ind Org 2008, 6:1-19.

14. Chandon P, Wansink B: The Biasing Health Halos of Fast-Food Restaurant Health Claims: Lower Calorie Estimates and Higher Side-Dish Consumption Intentions. J Cons Research 2007, 34:301-314.

15. Wilcox K, Vallen B, Block L, Fitzsimons GJ: Vicarious Goal Fulfillment: When the Mere Presence of a Healthy Option Leads to an Ironically Indulgent Decision. J Cons Research 2009, 36:380-393.

16. Vermeer WM, Steenhuis IHM, Leeuwis FH, Heymans MW, Seidell JC: Small Portion Sizes in Worksite Cafeterias: Do They Help Consumers to Reduce Their Food Intake? Int J Obesity 2011, 35:1200-1207.
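For readers who want to see the shape of the estimation, here is a sketch of how a specification like equation (1) could be fit with off-the-shelf tools. The data below are simulated stand-ins, not the study’s data, and the variable names simply mirror the paper’s notation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 138  # same size as the study's sample; the content is fabricated

menu = rng.choice(["control", "calorie", "trafficlight"], n)
df = pd.DataFrame({
    "TLS": (menu == "trafficlight").astype(int),  # calorie+traffic light dummy
    "CAL": (menu == "calorie").astype(int),       # calorie-only dummy
    "HC": rng.integers(3, 16, n),                 # health consciousness, 3-15
    "Female": rng.integers(0, 2, n),
    "Student": rng.integers(0, 2, n),
    "Bachelors": rng.integers(0, 2, n),
    "Party": rng.integers(1, 7, n),
})
# fabricated outcome with label effects that weaken as HC rises
df["Calories"] = (900 - 50 * df["HC"] - 500 * df["TLS"] - 600 * df["CAL"]
                  + 38 * df["TLS"] * df["HC"] + 56 * df["CAL"] * df["HC"]
                  - 100 * df["Female"] + rng.normal(0, 150, n))

# OLS with menu-treatment-by-health-consciousness interactions, per eq. (1)
fit = smf.ols("Calories ~ TLS + CAL + HC + Female + Student + Bachelors"
              " + Party + TLS:HC + CAL:HC", data=df).fit()
print(fit.summary())
```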
# Static light scattering

Static light scattering is a technique in physical chemistry that uses the intensity traces at a number of angles to derive information about the radius of gyration $R_g$, the molecular mass $M_w$ of the polymer or polymer complexes, and the second virial coefficient $A_2$, for example, micellar formation (1-5). A number of analyses have been developed to treat the scattering of particles in solution and derive the above-named physical characteristics of the particles.

In a simple static light scattering experiment, the average intensity of the sample, corrected for the scattering of the solvent, yields the Rayleigh ratio $R$ as a function of the angle or the wave vector $q$ as follows:

$R(\theta_{sample}) = R(\theta_{solvent}) I_{sample}/I_{solvent}$

yielding the difference in the Rayleigh ratio, $\Delta R(\theta)$, between the sample and solvent:

$\Delta R(\theta) = R(\theta_{sample}) - R(\theta_{solvent})$

In addition, the setup of the laser light scattering is calibrated with a liquid of a known refractive index and Rayleigh ratio, e.g. toluene, benzene or decalin. This is applied at all angles to correct for the distance of the scattering volume to the detector. One must note that although data analysis can be performed without the so-called material constant $K$ defined below, the inclusion of this constant can lead to the calculation of other physical parameters of the system.

$K = 4\pi^2 n_0^2 (dn/dc)^2 / (N_A \lambda^4)$

where $(dn/dc)$ is the refractive index increment, $n_0$ is the refractive index of the solvent, $N_A$ is Avogadro's number ($6.022 \times 10^{23}$ mol$^{-1}$) and $\lambda$ is the wavelength of the laser light reaching the detector. This equation is for linearly polarized light like that from a He-Ne gas laser.

## Data Analyses

### Guinier plot

The scattered intensity can be plotted as a function of the angle to give information on the $R_g$, which can simply be calculated using the Guinier approximation as follows:

$\ln(\Delta R(\theta)) = \ln(\Delta R(0)) - (R_g^2/3) q^2$

where $\Delta R(\theta)$ is proportional to the form factor $P(\theta)$, and $q = 4\pi n_0 \sin(\theta/2)/\lambda$. Hence a plot of the corrected Rayleigh ratio, $\Delta R(\theta)$, versus $\sin^2(\theta/2)$ or $q^2$ will yield a slope $-R_g^2/3$. However, this approximation is only true for $qR_g < 1$. Note that for a Guinier plot, the value of dn/dc and the concentration are not needed.

### Kratky plot

The Kratky plot is typically used to analyze the conformation of proteins, but can also be used to analyze the random walk model of polymers. A Kratky plot can be made by plotting $\sin^2(\theta/2)\,\Delta R(\theta)$ versus $\sin(\theta/2)$, or $q^2\,\Delta R(\theta)$ versus $q$.

### Debye plot

This method is used to derive the molecular mass and second virial coefficient, $A_2$, of the polymer or polymer complex system. The difference from the Zimm plot is that the experiments are performed at a single angle. Since only one angle is used (typically 90°), the $R_g$ cannot be determined, as can be seen from the following equation:

$Kc/\Delta R(\theta) = 1/M_w + 2A_2 c$

### Zimm plot

For polymers and polymer complexes which are of a monodisperse nature ($PDI < 0.3$ as determined by dynamic light scattering), a Zimm plot is a conventional means of deriving parameters such as $R_g$, the molecular mass $M_w$, and the second virial coefficient $A_2$.
One must note that if the material constant $K$ defined above is not implemented, a Zimm plot will only yield $R_g$. Implementing $K$ hence yields the following equation:

$Kc/\Delta R(\theta) = 1/(M_w P(\theta)) + 2A_2 c = (1/M_w)(1 + q^2 R_g^2/3) + 2A_2 c$

Experiments are performed at several angles and at least 4 concentrations. Performing a Zimm analysis on a single concentration is known as a partial Zimm analysis and is only valid for dilute solutions of strong point scatterers. The partial Zimm, however, does not yield the second virial coefficient, due to the absence of the variable concentration of the sample.

## References

1. A. Einstein, Ann. Phys. 33 (1910), 1275
2. C.V. Raman, Indian J. Phys. 2 (1927), 1
3. P. Debye, J. Appl. Phys. 15 (1944), 338
4. B.H. Zimm, J. Chem. Phys. 13 (1945), 141
5. B.H. Zimm, J. Chem. Phys. 16 (1948), 1093
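To make the Guinier analysis above concrete, here is a small sketch (my own illustration with made-up numbers, not from the article) of extracting $R_g$ from the slope of $\ln(\Delta R)$ versus $q^2$:

```python
import numpy as np

# Hypothetical measured data: wave vectors q (1/nm) and solvent-corrected
# Rayleigh ratios delta_R, chosen to lie in the Guinier regime (q*Rg < 1)
q = np.array([0.010, 0.015, 0.020, 0.025, 0.030])
delta_R = np.array([9.8e-5, 9.3e-5, 8.7e-5, 8.0e-5, 7.2e-5])

# Guinier analysis: ln(delta_R) is linear in q^2 with slope -Rg^2/3
slope, intercept = np.polyfit(q**2, np.log(delta_R), 1)
Rg = np.sqrt(-3 * slope)
print("Radius of gyration: {:.1f} nm".format(Rg))
```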
http://mathematica.stackexchange.com/questions/20132/histogram3d-using-sums-instead-of-counts?answertab=active
# Histogram3D using sums instead of counts

I'm trying to create a 3D histogram/matrix plot pair. My data is in the form {{TTR1,TBF1},{TTR2,TBF2},...}. What I would like is to modify the Histogram3D so that instead of having the counts of each bin on the z axis, I would have the sum of TTR in that bin. I would also like to modify the matrix plot in the same way. I've hacked an example in the help to get this far:

```mathematica
hist := Histogram3D[Log[10, dateFilter[[All, 2]]], {-4, 4, 0.25},
   Function[{xbins, ybins, counts}, Sow[counts]],
   AxesLabel -> {Style["TTR (log(hours))", 16], Style["TBF (log(hours))", 16]},
   ImageSize -> Large,
   PlotLabel -> Style["Dryer 1 TBF and TTR Counts", 18],
   ChartStyle -> RGBColor[27/255, 121/255, 169/255],
   ViewPoint -> {Pi, Pi, 2}];
{g, {binCounts}} = Reap[hist];
mPlot := MatrixPlot[First@binCounts, ImageSize -> Large]
Row[{g, mPlot}]
```

In this case my data list is dateFilter[[All, 2]] = {{TTR1,TBF1},{TTR2,TBF2},...}. I would also like to fix the axes on the MatrixPlot so it has the same range labels as the histogram. Also, is there a better way to do the log axes on the histogram? I've just taken the Log of the data, but it would be better if I could just modify the axes to show 1, 10, 100, ....

Edit: I've re-plotted my data using kguler's code. The function I used was

```mathematica
Histogram3D[Log[10, dateFilter[[All, 2]]], {-4, 4, 0.25},
 heightF[dateFilter[[All, 2]]][Total, First], styles,
 AxesLabel -> {Style["TTR (log(hours))", 16], Style["TBF (log(hours))", 16],
   Style["Sum TTR (hours)", 16]},
 ViewPoint -> {Pi, Pi, 2}]
```

It looks like my data has stratified into 3 groups of TTR (which may be what is really going on). I was kind of expecting the same plot with different z values, but if that's what's going on then that's what's going on. Thanks kguler and belisarius. Give me a day to check that this is all OK and I'll tick this one off.

## Answer (kguler)

A custom height function for Histogram3D. Key ideas: (1) get the list of data points in each bin using BinLists, (2) Map func2 to each 2D data point and func1 to the results to define the heights for each bin:

```mathematica
ClearAll[binListF, heightF];
binListF[data_][bins_, counts_] := BinLists[data, {bins[[1]]}, {bins[[2]]}];
heightF[data_][func1_: Total, func2_: First, binning_: Automatic] :=
 Map[func1,
   Map[func2, (HistogramList[data, binning, binListF[data]][[2]] /. {} -> {0, 0}), {-2}],
   {-2}] &
```

Data and styles:

```mathematica
data = RandomVariate[NormalDistribution[0, 1], {100, 2}];
styles = Sequence @@ {BoxRatios -> 1, ChartStyle -> Opacity[.6],
    ChartElementFunction -> ChartElementDataFunction["SegmentScaleCube",
      "Segments" -> 12, "ColorScheme" -> 46]};
```

Usage examples:

```mathematica
Histogram3D[data, Automatic, heightF[data][Total, First], styles] (* OP's example *)
```

Update: further examples (the original post called this version heightF2; it has the same definition as heightF above, so that name is used throughout). Bin specifications:

```mathematica
Row[Column[{Row[{"binning: ", #}],
     Histogram3D[data, #, heightF[data][Total, First, #], styles]}, Center] & /@
  {{{-2, 2, 0.5}, {-3, 3, 1.5}}, "Knuth", "Sturges", "FreedmanDiaconis", "Scott", "Wand"}]
```

Various combinations of aggregation functions:

```mathematica
Row[Column[{Row[{"heightF[data][", #[[1]], ", ", #[[2]], "]"}],
     Histogram3D[data, Automatic, heightF[data][#[[1]], #[[2]]], styles]}, Center] & /@
  {{Total, Last}, {Total, Mean}, {Max, Mean}, {Min, Mean}, {Max, Min}, {Min, Max}}]
```

## Answer (belisarius)

For the histogram, you could do something like:

```mathematica
data = RandomReal[NormalDistribution[0, 1], {200, 2}];
Histogram3D[data, {.5},
 Function[{xbs, ybs, c},
  Table[Total[
    Select[data, x[[1]] <= #[[1]] < x[[2]] && y[[1]] <= #[[2]] < y[[2]] &][[All, 1]]],
   {x, xbs}, {y, ybs}]]]
```
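For readers working outside Mathematica, the same "sum instead of count" binning is available in NumPy via the weights argument of histogram2d. This is a minimal sketch with made-up TTR/TBF arrays; it is not part of the answers above, just the same idea in another language.

```python
import numpy as np

# Made-up data: (TTR, TBF) pairs, mirroring the question's {{TTR1,TBF1},...}.
rng = np.random.default_rng(1)
ttr = rng.exponential(2.0, 500)
tbf = rng.exponential(10.0, 500)

edges = np.linspace(-4, 4, 33)  # same spirit as the {-4, 4, 0.25} bin spec

# Counts per bin:
counts, xe, ye = np.histogram2d(np.log10(ttr), np.log10(tbf), bins=[edges, edges])

# Sum of TTR per bin: pass the quantity to be summed as `weights`.
sums, _, _ = np.histogram2d(np.log10(ttr), np.log10(tbf),
                            bins=[edges, edges], weights=ttr)
```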
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26258373260498047, "perplexity": 22630.288013940055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663718.7/warc/CC-MAIN-20140930004103-00083-ip-10-234-18-248.ec2.internal.warc.gz"}
https://mathblag.wordpress.com/2011/11/
Musings on mathematics and teaching.

## Month: November, 2011

### On the determination of permutable primes

Edit: I have to retract this proof. It was pointed out to me that $10^k$ covers only 16 of the 17 congruence classes modulo 17, so the last step fails. The other steps are valid, and they show that a permutable prime greater than 991 must be a near-repdigit (all but one of the digits are equal) or a repunit.

A permutable prime (also called an absolute prime) is a prime number such that all of its digit permutations (in base 10) are prime. For example, 113 is a permutable prime, since each of the numbers 113, 131, and 311 is prime. Note that all permutable primes greater than 5 are composed of the digits 1, 3, 7, and 9. The following is a complete list of the permutable primes less than 1000.

2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 199, 311, 337, 373, 733, 919, 991

A repunit prime is a prime number that contains only the digit 1. Every repunit prime is permutable, and it is conjectured (but not proved) that there exist infinitely many repunit primes. I will prove that the above list contains every permutable prime, with the exception of repunit primes greater than 11.

The proof is based on the following observation. If N and m are integers greater than 1 such that the digit permutations of N cover all congruence classes modulo m, then every number containing the digits of N (as a multiset) is composite. We can find a finite number of pairs (N, m) that will cover every case; however, the number of cases is too large to check by hand. The complete proof was generated by a Python script, and the output is available here.

The first step is to find all permutable primes less than $10^6$ by brute force search. We use the Miller-Rabin primality test (code is here). This is a probabilistic test, but we are not overly concerned with false positives, because we would run a separate deterministic test on any new candidate primes that were identified.

The second step is to verify that a permutable prime cannot contain all four of the digits 1, 3, 7, and 9, by showing that the permutations of 1379 cover all congruence classes modulo 7.

The third step is to verify that a permutable prime cannot contain exactly three of the four digits 1, 3, 7, and 9. There are two sub-cases: the number contains two digits that are repeated at least twice, or the number contains a digit that is repeated five or more times. These sub-cases are denoted aabbc and aaaaabc, respectively. As in the previous case, we show that the permutations cover all congruence classes modulo 7. These two sub-cases cover all of the remaining possibilities, since we checked all numbers having fewer than seven digits in step 1.

The fourth step is to verify that a permutable prime cannot contain exactly two of the four digits 1, 3, 7, and 9, unless it is less than $10^6$. There are two sub-cases: the number contains two digits that are repeated at least twice (such as 11133), or the number has all digits the same except for one (such as 11113). For the first sub-case, we consider permutations modulo 7 as before. Unfortunately, there is no proof of the second sub-case using modulo 7 arithmetic, so we must work harder.

Suppose that N is a positive integer with at least 17 digits, all of which are equal to a, except for the last digit which is equal to b. Then the following 17 numbers are digit permutations of N.
$N - (b-a) + (b-a) 10^k\quad (0 \le k \le 16)$

Since 10 is a primitive root modulo 17, these numbers cover all congruence classes. Therefore, N cannot be a permutable prime. It remains to check that there do not exist any permutable primes of the form aaa…ab that have between 7 and 16 digits. We do this by checking each candidate with a Miller-Rabin primality test. This exhausts all of the possible cases, so the proof is complete.

Edit: As noted above, this last step fails. Since 10 is a primitive root modulo 17, these numbers cover all congruence classes, except the one belonging to $N - (b-a)$. Therefore, N cannot be a permutable prime unless $N - (b-a) \equiv 0 \pmod{17}$, which happens when the number of digits of N is a multiple of 16.
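The blog links to its own Python script, but the script itself is not reproduced here. The following is a small independent sketch in the same spirit, brute-forcing the permutable primes below 1000; it uses sympy's isprime in place of the post's Miller-Rabin code, which is purely a convenience assumption.

```python
from itertools import permutations
from sympy import isprime

def is_permutable_prime(n: int) -> bool:
    """True if every digit permutation of n is prime.

    Permutations with a leading zero are checked as the corresponding
    shorter number, which only makes the test stricter; no permutable
    prime can contain the digit 0 anyway (putting 0 last gives a
    multiple of 10).
    """
    return all(isprime(int("".join(p))) for p in set(permutations(str(n))))

print([n for n in range(2, 1000) if is_permutable_prime(n)])
# Matches the list in the post:
# [2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97,
#  113, 131, 199, 311, 337, 373, 733, 919, 991]
```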
### Generalized binomial coefficients

The reader is probably familiar with factorials and binomial coefficients. The factorial of a number n is the product of all positive integers between 1 and n, and it is denoted by n!. For example, $6! = 1 \cdot 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 = 720$. We define 0! = 1.

Factorials are used to define the binomial coefficients. The symbol $\binom{n}{k}$ is defined by the equation

$\binom{n}{k} = \frac{n!}{k!\,(n-k)!},$

provided that $0 \le k \le n$. It is not obvious that all binomial coefficients are integers, but this fact can be proved by induction on n using Pascal's rule:

$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}.$

We can generalize both factorials and binomial coefficients by replacing the sequence of all positive integers {1, 2, 3, 4, 5, …} by an arbitrary sequence of positive integers. Let $u = \{u_1, u_2, u_3, \ldots\}$ be any sequence of positive integers whatsoever, and define the u-factorial by the formula

$n!_u = u_1 u_2 u_3 \cdots u_n, \quad 0!_u = 1.$

We use the u-factorial function to define the u-binomial coefficient:

$\binom{n}{k}_u = \frac{n!_u}{k!_u\,(n-k)!_u}$

These definitions are very general, and they do not guarantee that our binomial coefficients are integers. To remedy this deficiency, we will assume that u is a strong divisibility sequence, which means that $d = \gcd(m,n)$ implies $u_d = \gcd(u_m, u_n)$. Here are some examples of strong divisibility sequences.

1. The identity sequence $u_n = n$.
2. The Fibonacci numbers.
3. The q-bracket $u_n = [n]_q = \frac{q^n - 1}{q - 1}$, where $q\ge2$ is an integer.

I claim that if u is a strong divisibility sequence then the u-binomial coefficients are always integers, and I will explain how I discovered a proof of this fact. The key idea is to search for a generalization of Pascal's rule:

$\binom{n}{k}_u = r \binom{n-1}{k-1}_u + s \binom{n-1}{k}_u$

We write this equation out in full, and then simplify using the fact that $m!_u = u_m \cdot (m-1)!_u$; the equation reduces to $u_n = r\,u_k + s\,u_{n-k}$. It remains to find r and s. Since u is a strong divisibility sequence, $\gcd(u_k, u_{n-k}) = u_d$ where $d = \gcd(k, n-k)$. But $u_d$ divides $u_n$, since d divides n. Therefore, Bézout's identity implies that the integers r and s exist. This allows us to prove by induction on n that all u-binomial coefficients are integers.

If we start with the Fibonacci numbers, then the numbers defined by this process are called Fibonomial coefficients. We can also define the q-binomial coefficients by starting with the q-brackets.

The Catalan numbers are ubiquitous in combinatorics. The nth Catalan number is defined by

$\frac{1}{n+1} \binom{2n}{n} = \frac{(2n)!}{n!\,(n+1)!}.$

Alexander Bogomolny has given a delightfully simple proof that this quantity is always an integer. I will leave it to the reader to check that the proof remains valid if the factorials are replaced with u-factorials. This produces integer sequences that are analogous to the Catalan numbers, such as the Fibonomial Catalan numbers when $u_n$ is the nth Fibonacci number, and the q-Catalan numbers when $u_n = (q^n - 1)/(q - 1)$.
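A quick way to see the integrality claim in action is to compute u-binomial coefficients for the Fibonacci sequence. This sketch uses exact rational arithmetic, so any non-integer value would be caught by the assertion; the function names are my own, not the post's.

```python
from fractions import Fraction
from math import prod

def u_binomial(u, n, k):
    """u-binomial coefficient n!_u / (k!_u (n-k)!_u) for 1-indexed sequence u."""
    ufact = lambda m: prod(u[1:m + 1], start=1)
    return Fraction(ufact(n), ufact(k) * ufact(n - k))

# Fibonacci numbers as a strong divisibility sequence (u[0] unused).
fib = [0, 1, 1]
while len(fib) <= 20:
    fib.append(fib[-1] + fib[-2])

# Fibonomial coefficients: every entry should be an integer.
for n in range(13):
    row = [u_binomial(fib, n, k) for k in range(n + 1)]
    assert all(x.denominator == 1 for x in row)

print([int(u_binomial(fib, 12, k)) for k in range(13)])
```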
### Is mathematics useful in everyday life?

To be honest, I don't know the answer to this question. Teachers are supposed to teach students how they will use mathematics in daily life, but I find myself unconvinced by many of the supposed practical applications of mathematics.

Here is a typical example of a practical math lesson, from Yummy Math. The students are asked to adjust a recipe for mashed potatoes to accommodate various numbers of guests for Thanksgiving dinner. The lesson appears to be well-designed, and I have no objections to any of it, except that the questions seem rather dull. But is it really necessary to be able to convert a recipe by hand? If you use any of the major websites for recipes, then they will convert the recipes for you with a click of a button. There are also websites that will allow you to enter your own recipes and convert them to different quantities. The average person no longer needs to know how to convert a recipe.

The same is true for computing interest payments on a mortgage, or any kind of routine calculation. If it is a problem that people typically encounter in everyday life, then there's an app for that. Of course, the people who write the apps need to know the math behind them, but for most people this is unnecessary.

So, if we don't need math to solve routine problems, then what is left? Non-routine problems! We learn math (in part) because we hope to solve problems that are unique to our situations; because we have questions that nobody has ever thought to ask, let alone answer; because we want to find better ways to do things. Also, understanding mathematics allows us to formulate questions that could not be asked (or even thought) without mathematics.

The paradox is that we can't tell students how they will use math, even though it is tremendously useful. As soon as we name an application of math that our students are likely to encounter, we know that other people have encountered the same problem and solved it before us, and there is little need to solve it again. The goal of a math teacher should be to prepare students to answer questions that we cannot even conceive.

Or maybe not. I'm just rambling. Thanks for reading this far.

### Interesting integer sequence, or a crisis in rationalism?

What is the next number in this sequence? 1, 12, 144, …

You might think that these are powers of 12, and that the next number is 1728. But what if I told you that the sequence continues like this?

1, 12, 144, 1750, 23420, 303240, 3641100, 46113200, 575360400, 7346545000, …

I will explain this sequence after the jump.

### Fibonacci Pigeons

Here is a funny picture that has been circulating the Internet since September 2010. I think that analyzing this picture would be an interesting project for a high school math class. My own analysis is included below.

This picture is amusing, but I wondered if it was real. Some people claim that the picture was Photoshopped. I don't know how to tell if it was faked, but I do know how to count pixels. I copied the picture into Microsoft Paint, and I recorded the x-coordinates of the arrow tips. Magnifying the picture to 800% made this task much easier. I entered these coordinates into Microsoft Excel, and I also calculated the first differences.

If the pigeons are truly spaced according to the Fibonacci numbers, then the first differences should be roughly proportional to $1.618^x$. To test this hypothesis, I made a scatter plot, and I fitted an exponential trend line to the data. The fit is good, but not spectacular (R² = 0.9357). However, the base of exponents is not even close to 1.618. According to Excel, the exponential function of best fit is y = 5.9093 * exp(0.2453*x), which can also be written as y = 5.9093 * 1.278^x. If we omit the last two data points as outliers, then the correlation improves to R² = 0.986, but the base of exponents is even smaller (b = 1.216). The spacing between pigeons is simply not increasing as rapidly as the Fibonacci numbers.

Conclusion: the picture is funny, and perhaps one should not over-analyze a good joke, but it does not show any evidence that pigeons arrange themselves according to the Fibonacci numbers.
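The trend-line fit described above is easy to reproduce outside Excel. Here is a sketch with placeholder pixel differences; the post's actual measured coordinates are not given, so the numbers below are invented purely to show the method.

```python
import numpy as np

# Invented stand-ins for the first differences of the arrow-tip x-coordinates.
diffs = np.array([8, 10, 12, 16, 21, 26, 34, 44, 52, 60], dtype=float)
idx = np.arange(len(diffs))

# Fitting y = a * exp(b*x) is linear in log space: ln y = ln a + b*x.
b, ln_a = np.polyfit(idx, np.log(diffs), 1)
a, base = np.exp(ln_a), np.exp(b)
print(f"y = {a:.3f} * {base:.3f}^x")  # Fibonacci spacing would need base near 1.618
```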
### Sums of consecutive integers

Some numbers can be written as the sum of two or more consecutive positive integers, and some cannot. For example, 15 can be expressed in three different ways: 7+8, 4+5+6, and 1+2+3+4+5. But it is not possible to express 8 in this way. This raises two interesting questions:

1. Which numbers can be expressed as the sum of two or more consecutive positive integers?
2. In how many ways can a given number be expressed as the sum of two or more consecutive positive integers?

These are excellent questions for students to investigate, and there have been many articles written about them. The NRICH website has a guide for using this problem in the classroom. I would like to offer a fresh perspective on the problem.

Generalization is one of the most powerful tools that are available to a mathematician. We can generalize the problem by allowing the integers in the sum to be negative or zero, and also by allowing the sum to consist of a single term. With this generalization, there are now 8 ways to write 15 as a sum of consecutive integers.

| All terms positive | At least one non-positive term |
|---|---|
| 15 | −14 + … + 14 + 15 |
| 7 + 8 | −6 + … + 6 + 7 + 8 |
| 4 + 5 + 6 | −3 + … + 3 + 4 + 5 + 6 |
| 1 + 2 + 3 + 4 + 5 | 0 + 1 + 2 + 3 + 4 + 5 |

Let us examine this table closely. The first column lists the solutions whose terms are all positive, and the second column lists the solutions that have at least one non-positive term. These columns have the same length; in fact, there is a one-to-one correspondence between the two sets of solutions:

A + … + B   ↔   (−A+1) + … + B

This transformation has another important property: it changes an expression having an even number of terms into an expression having an odd number of terms, and vice versa. This implies that half of the solutions have an even number of terms, and half of the solutions have an odd number of terms. But if half of the solutions have an odd number of terms, and half of the solutions have only positive terms, then it follows that the number of solutions with positive terms is equal to the number of solutions with an odd number of terms. It remains to count the solutions with an odd number of terms.

If N is the sum of d consecutive integers, where d is odd, then the middle term must be N/d, hence N/d is an integer. Conversely, if d is odd and N/d is an integer, then the sum of d consecutive integers centered at N/d is equal to N. Therefore, the number of ways to write N as the sum of an odd number of consecutive integers is equal to the number of odd positive divisors of N.

Consider the prime factorization of N,

$N = p_1^{e_1} p_2^{e_2} \cdots p_k^{e_k},$

where $p_n$ denotes the nth prime (so $p_1 = 2$). Every odd divisor d of N can be written as

$d = p_2^{a_2} p_3^{a_3} \cdots p_k^{a_k},$

where $0 \le a_i \le e_i$ for all i between 2 and k. Therefore, the number of odd divisors of N is equal to

$(1+e_2)(1+e_3)\cdots(1+e_k),$

since there are $(1 + e_i)$ choices for each exponent $a_i$ in the prime factorization of d.

Example 1: In how many ways can 1024 be expressed as the sum of two or more consecutive positive integers? Solution: Since 1024 is a power of two, it has only one odd divisor (namely 1). Therefore, it is not possible to write 1024 as the sum of two or more consecutive positive integers.

Example 2: In how many ways can 600 be expressed as the sum of two or more consecutive positive integers? Solution: By the preceding discussion, this is equal to the number of odd divisors of 600, minus one (to eliminate the trivial solution with one term). The prime factorization of 600 is $600 = 2^3 \cdot 3 \cdot 5^2$, and the number of odd divisors of 600 is (1+1)(1+2) = 6. Therefore, there are 5 ways to write 600 as the sum of two or more consecutive positive integers.

Example 3: What is the smallest number that can be written as the sum of two or more consecutive integers in exactly 1000 ways? Solution: This is left as a challenge to the reader.

Note: Nick Hobson arrived at a similar solution independently.
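The odd-divisor count is simple to verify by brute force. This sketch (my own, not the post's) cross-checks the formula against direct enumeration of consecutive-sum representations.

```python
def ways_consecutive(n):
    """Count representations of n as a sum of >= 2 consecutive positive integers."""
    count = 0
    for length in range(2, n + 1):
        # n = length*a + length*(length-1)/2 with starting term a >= 1,
        # so 2*a = (2n - length*(length-1)) / length must be a positive even multiple.
        twice_a = 2 * n - length * (length - 1)
        if twice_a < 2 * length:          # a < 1: longer runs only get worse
            break
        if twice_a % (2 * length) == 0:
            count += 1
    return count

def odd_divisors(n):
    while n % 2 == 0:
        n //= 2
    return sum(1 for d in range(1, n + 1) if n % d == 0)

for n in (8, 15, 600, 1024):
    assert ways_consecutive(n) == odd_divisors(n) - 1
    print(n, ways_consecutive(n))   # 8 -> 0, 15 -> 3, 600 -> 5, 1024 -> 0
```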
### A Simple Mathematical Model of Economic Inequality

Most people acknowledge that economic inequality has been increasing in the United States. Statistics bear this out. In 2007, the top 1% of families received 24% of the nation's income, but their share during the 1960s and 1970s was about 9%.

Liberals and conservatives have vastly different explanations for economic inequality. Liberals generally believe that the rules of the game favor the wealthy, while the poor face discrimination and diminished opportunities. Conservatives generally believe that people are wealthy due to superior ability or effort, and that inequality is the natural result of giving people the opportunity to achieve their potential. Both of these explanations for inequality are deterministic; they discount the role of blind luck. But I claim that inequality can arise purely by chance, and I will explain how this occurs.

Let us begin with a simple thought experiment. Imagine a society in which all citizens begin with equal wealth of 100 units. Every year, half of the population enjoys a 20% increase in wealth, and the other half suffers a 20% decrease in wealth. We suppose that the process is completely random, and it does not favor the rich over the poor. In this imaginary society, the (expected) average wealth remains constant at 100 units, but the median wealth will gradually decline.

| Years | Median wealth |
|---|---|
| 0 | 100.0 |
| 10 | 81.5 |
| 20 | 66.5 |
| 30 | 54.2 |
| 40 | 44.2 |
| 50 | 36.0 |

The reason for this decline is that the median person's wealth will experience the same number of up years and down years; but an increase of 20% followed by a decline of 20% results in a decline of 4%, since 1.20 × 0.80 = 0.96. Nevertheless, the society's total wealth remains the same, because the increases and decreases in wealth cancel each other out. Consequently, the majority of the society's wealth is concentrated in fewer hands each year.

Let's generalize this model. We suppose that the distribution of wealth in a society is specified by a random variable X. (This does not mean that wealth is distributed randomly, only that it can vary between individuals.) We need a mathematical measure of the inequality of a distribution, and we choose to define it by the following formula:

$I(X) = E[X^2]/(E[X])^2$

What does this formula mean? E[X] denotes the expected value (or average) of X, so we are dividing the average value of $X^2$ by the square of the average wealth. The quantity I(X) is equal to 1 in the case of perfect equality, and it is greater than 1 otherwise. Larger values of I(X) indicate greater income inequality. An equivalent formula is $I(X) = 1 + (\sigma/\mu)^2$, where σ is the standard deviation and μ is the mean.

Now, we suppose that each person's wealth increases or decreases by a random percentage. This is equivalent to multiplying X by a positive random variable Y. We assume that the percentage of increase in wealth is independent of a person's current wealth; that is, X and Y are independent. But if X and Y are independent, then $X^2$ and $Y^2$ are also independent, which implies that

$I(XY) = E[X^2Y^2]/(E[XY])^2 = E[X^2]/(E[X])^2 \times E[Y^2]/(E[Y])^2 = I(X)\,I(Y).$

But I(Y) > 1, and so I(XY) > I(X), which means that social inequality has increased by a factor of I(Y).

There are many objections that could be raised to this analysis. The model is too simple to capture the complexity of a national economy, and I make no claims to the contrary. If the model were literally true, then we would expect the distribution of wealth to follow a log-normal distribution; but as Ben Goldacre observed, the true distribution more closely resembles a power law. But I do argue that this simple model demonstrates that, in the absence of other factors, random chance will inexorably lead to unequal distribution of wealth.

A more sophisticated treatment of this idea is discussed in the article Entrepreneurs, Chance, and the Deterministic Concentration of Wealth by Joseph E. Fargione, Clarence Lehman, and Stephen Polasky. See this article for a non-technical summary.
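Here is a minimal simulation of the thought experiment above. It assumes an independent fair coin flip per person rather than an exact half/half split each year, which only matters for small populations; any individual run will fluctuate around the tabulated values.

```python
import numpy as np

rng = np.random.default_rng(0)
wealth = np.full(100_000, 100.0)        # everyone starts with 100 units

for year in range(1, 51):
    factors = rng.choice([1.2, 0.8], size=wealth.size)  # +20% or -20%
    wealth *= factors
    if year % 10 == 0:
        print(year, round(np.mean(wealth), 1), round(np.median(wealth), 1))
# The mean stays near 100 while the median tracks 100 * 0.96**(year/2),
# matching the table above.
```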
### Octagon puzzle

Dr. Gordon Hamilton of MathPickle.Com posed a very interesting question about regular octagons. In this note, I will explain how I solved the problem, and I will generalize to other regular polygons. Here is the problem. Given a regular octagon in the plane, suppose that we join the vertices using line segments to form a continuous loop that visits each vertex exactly once. Is it necessarily the case that two or more of the line segments are parallel?

I claim that every such circuit contains a pair of parallel line segments. Suppose to the contrary that there exists a circuit which has no parallel segments. In order to analyze the problem mathematically, we number the vertices consecutively from 0 to 7. We also label each edge with the sum of its vertices; for example, the edge from vertex 3 to vertex 7 is labeled with 10. Now we observe that two line segments are parallel if and only if their labels are the same, or they differ by 8. For example, the line segment between vertex 0 and vertex 4 is parallel to the line segment between vertex 1 and vertex 3, and it is also parallel to the line segment between vertex 5 and vertex 7.

Since there are 8 possible sums modulo 8, there are also 8 possible directions for the line segments, and a circuit with no parallel segments must have exactly one segment in each direction. Therefore, the sum of the edge labels is 1+2+3+4+5+6+7+8+8k = 36+8k for some integer k. But this sum must also equal 2·(0 + 1 + 2 + 3 + 4 + 5 + 6 + 7) = 56, because the sum of the edge labels includes every vertex twice. Therefore, 36 + 8k = 56, or k = 2.5. This is a contradiction, since k is required to be an integer. Therefore, every circuit joining the vertices of a regular octagon must contain a pair of parallel edges.

Exercise: Show that a similar argument works for any regular polygon having an even number of sides. What happens if the polygon has an odd number of sides?

### What is a negative number?

Negative numbers can be a difficult concept to understand. In fact, the widespread acceptance of negative numbers is a relatively recent event in the history of mathematics. There are many ways to understand and explain negative numbers; some of these interpretations are listed below.

1. The opposite of a positive number
2. A number that is less than zero
3. A number that is to the left of zero on a number line
4. A value on a scale that extends beyond zero
5. The amount of a loss or absence
6. A directed quantity
7. A comparison between two quantities
8. The result of subtracting a larger number from a smaller number
9. An equivalence class of ordered pairs of natural numbers

I will discuss my thoughts about these interpretations below. My purpose is not to tell the reader how to think about negative numbers, but to encourage the reader to think deeply about the matter and reach his or her own conclusions. My remarks are intended for other math teachers, so they might be too advanced for beginning learners.

Interpretation 1: The opposite of a positive number. We know that the opposite of hot is cold, and the opposite of love is hate, but the opposite of a number might be an unfamiliar concept. Two numbers are opposites if their sum is zero. For example, the opposite of 6 is –6, because 6 + (–6) = 0. The same addition also shows that the opposite of –6 is 6. This interpretation can be modeled using black chips and red chips, where black chips are used to show positive numbers and red chips are used to show negative numbers. We adopt a rule that we can add or remove chips, as long as we add or remove equal numbers of black chips and red chips. For example, a pile containing 4 black chips and 7 red chips represents the number –3, because we may remove 4 black chips and 4 red chips from the pile, leaving 3 red chips. This interpretation is unsatisfying on its own, because it does not show how negative numbers are used in real life.

Interpretations 2, 3, and 4: A number that is less than zero. This is self-explanatory, although it is not obvious how a number could be less than zero. Indeed, the concept makes no sense for many kinds of quantities. You can't have less than zero enemies, eat less than zero servings of vegetables, or walk less than zero miles. A successful explanation of negative numbers has to explain why some quantities can be negative and some quantities cannot. We can show the natural numbers on a number line, and we interpret "less than" and "greater than" to mean "to the left of" and "to the right of", respectively. The number line continues to the right without end. Negative numbers arise when we extend the line to the left of zero. A thermometer is a real-life example of a number line. The zero point is 0° on the Fahrenheit or Celsius scale. On a cold winter day, the temperature can go below zero on either scale. Students should be able to come up with other examples of scales that extend below zero.
For example, elevations are defined in reference to sea level, and they can be negative. The lowest point in North America is the Badwater Basin in Death Valley, California. Its elevation is –86 meters, which means that it is 86 meters below sea level. What these scales have in common is that each has a reference point which is designated as zero, and the quantity can be greater or less than this reference point. In the Celsius scale, the reference point is the freezing temperature of water. When measuring elevation, the reference point is sea level. When the reference point represents an absolute minimum value (such as absolute zero temperature) then negative numbers have no meaning. Interpretation 5: Loss or absence. Negative numbers often indicate a loss or decrease in a quantity. A decrease in a quantity may be thought of as a negative change or negative increase. We often see negative numbers in financial news stories when a company loses money or a market index declines in value. An American football team may gain or lose yards on a play, and this can be represented by a positive or negative number. Negative numbers can also indicate the absence of something, such as a debt or deficit. In accounting, credits and debits are represented by positive and negative numbers respectively. Interpretation 6: A directed quantity. Negative numbers are useful for representing quantities that have two opposing directions. The directions can be literal (e.g. up or down, East or West) or metaphorical (e.g. profit or loss, winning or losing). In physics, one usually assumes that up is positive and down is negative. A falling object near Earth’s surface undergoes an acceleration of –9.8 meters per second per second; the acceleration is negative because the object is being pulled downwards by gravity. Latitude and longitude are measured with respect to the equator and the prime meridian. A location that is south of the equator has a negative latitude, and a location that is west of the prime meridian has a negative longitude. Interpretations 7 and 8:  A comparison between two quantities. Negative numbers usually arise when we are comparing two measurements of the same type. This is seen to be a generalization of the earlier interpretations. We may be comparing a quantity with an earlier value of the same quantity, which leads to an increase or decrease. Or we may be comparing the quantity to some reference point, such as the prime meridian or the freezing point of water. The operation of subtraction expresses the difference between two quantities. We may usually think of subtraction as “taking away”, but we also use subtraction to answer the question “how many more?”  The subtraction fact 100 – 86 = 14 tells us that if we take 86 away from 100 then 14 remain, but it also tells us that 100 is 14 more than 86. This allows us to make sense of subtracting a larger number from a smaller number.  We can’t take 5 from 2, but we can say that 2 is 3 less than 5, and this is expressed by the subtraction fact 2 – 5 = –3. Interpretation 9: An equivalence class of ordered pairs of natural numbers. In formal mathematics, we construct the integers by defining an equivalence relation on ordered pairs of natural numbers. This approach is much too abstract for beginning learners, but it is a valuable perspective for math teachers. The set of natural numbers is {0, 1, 2, 3, 4, …}. (Some people exclude 0.) Each integer is represented by an infinite number of different ordered pairs of natural numbers. 
Two ordered pairs (a,b) and (c,d) represent the same integer if and only if a+d = b+c. For example, the following ordered pairs all represent the same integer: (0,3), (1,4), (2,5), (3,6), (4,7), …

The idea can be seen more clearly if we change our notation. If we write the ordered pair (a,b) as (a – b) instead, then we have (0 – 3) = (1 – 4) = (2 – 5) = (3 – 6) = (4 – 7) = … The standard notation for this integer is –3. What you should take away from this is that it is possible to define an integer as the result of subtracting two natural numbers.

### Graph Transformations and Daylight Saving Time

Saturday was the last day of Daylight Saving Time in the United States. On this day, most of us set our clocks back an hour, and we enjoyed an extra hour of sleep. I say "most of us" because Arizona, Puerto Rico, Hawaii, the U.S. Virgin Islands and American Samoa do not observe Daylight Saving Time.

Some people find Daylight Saving Time to be confusing, or at least hard to remember. The phrase "spring forward, fall back" is meant to remind us to set our clocks ahead one hour in the spring, and set them one hour back in the fall. One possible reason for the confusion is that there are two different ways to think about Daylight Saving Time. The usual way to express it is that we set the clocks ahead in the spring, and set them back in the fall. We might call this the clock's point of view. The other point of view is that we must wake up an hour earlier in the spring, but we are permitted to sleep an hour later in the fall.

We encounter a similar concept when we transform the graph of a function by shifting it to the left or right. In algebra, we learn that replacing x with x−1 in an equation will shift the graph one unit to the right, and replacing x with x+1 will shift the graph one unit to the left. This is very perplexing. Surely, subtracting 1 should be a move to the left, and adding 1 should be a move to the right. Why is everything backwards when it comes to graph transformations?

But just as with Daylight Saving Time, there are two ways to think about graph transformations. When we replace x with x−1, we usually say that the graph moves one unit to the right. But an alternative interpretation is that the coordinate system moves one unit to the left, and the graph stays still! To put it another way, suppose you start with the graph of y = f(x), but then you add one to each of the number labels on the x-axis (the tick once labeled 2 now reads 3). After some thought, you will see that the same graph now describes the equation y = f(x−1).

These two ways of viewing a transformation are called "alibi" and "alias". An alibi transformation moves the points ("alibi" is Latin for "in another place"). An alias transformation does not move the points, but only renames them. The difference between an alias and an alibi is illustrated by the hilarious train scene gag in the movie Top Secret (1984).

Sleeping an hour later is an alibi transformation — we are shifting our "sleep curves" one hour to the right. Setting the clock an hour back is an alias transformation — the time coordinate is shifted to the left.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7697113752365112, "perplexity": 293.3409966462179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106358.80/warc/CC-MAIN-20170820073631-20170820093631-00279.warc.gz"}
https://math.stackexchange.com/questions/2558153/if-fx-to-y-is-continuous-x-is-compact-and-fx-y-then-y-is-compact
# If $f:X\to Y$ is continuous, $X$ is compact and $f(X)=Y,$ then $Y$ is compact.

Let $(X,\tau),(Y,\eta)$ be topological spaces. If $f:X\to Y$ is continuous, $f(X)=Y$ and $X$ is compact, then $Y$ is compact.

We want to show that for every open cover of $Y$ there exists a finite subcover. So let $\{U_\alpha:\alpha\in I\}$ be an open cover of $Y$. Thus $Y=\bigcup_{\alpha\in I} U_\alpha$. Now how can I use the hypothesis that $f$ is continuous? I was thinking about this property of continuity: $\forall U\in\eta,\ f^{-1}(U)\in\tau$, but I don't know how to relate it to the compactness of $X$. Compactness of $X$ says that for every open cover $\mathcal{U}\subset\tau$ of $X$ there exist finitely many $u_1,\ldots,u_n\in\mathcal{U}$ such that $X\subset\bigcup_i u_i$. Could anyone help me please?

• To use compactness of $X$, you need a collection of open sets which cover $X$. Hmm... I wonder where you're going to get one of those? All you have is a collection of open sets which cover $Y$. How can you turn open sets in $Y$ into open sets in $X$? Hmm... – Alex Kruckman Dec 9 '17 at 5:57

Write $X=\displaystyle\bigcup_{\alpha\in I}f^{-1}(U_{\alpha})$. Continuity is for $f^{-1}(U_{\alpha})$.

• Can you explain how the hypothesis of surjectivity is applied? I can see that $f^{-1}(U_{\alpha}),\forall \alpha\in I$ are all in $X$. When it says $f^{-1}(U_{\alpha}),\forall \alpha\in I$, does it mean that $f^{-1}$ is evaluated on every set of $Y$? – user486983 Dec 9 '17 at 20:12
• In other words, after the evaluation in $f^{-1}$, $Y$ will be 'empty' since all its elements have been evaluated? – user486983 Dec 9 '17 at 20:14
• In your below comment, you have written $f\left(\displaystyle\bigcup_{i}f^{-1}(U_{\alpha_{i}})\right)=\displaystyle\bigcup_{i}f(f^{-1}(U_{\alpha_{i}}))=\displaystyle\bigcup_{i}U_{\alpha_{i}}$. Surjective: $f(f^{-1}(A))=A$. – user284331 Dec 9 '17 at 20:16
• Sorry, I don't quite follow your second question. – user284331 Dec 9 '17 at 20:17

Consider the collection of sets $f^{-1}(U_{\alpha})$ for all $\alpha$. They are open as $f$ is continuous, and they cover $X$ since $f$ is surjective. Since $X$ is compact, take a finite subcover $f^{-1}(U_{\alpha_1}), \ldots, f^{-1}(U_{\alpha_n})$. What can you now deduce about $U_{\alpha_1}, \ldots, U_{\alpha_n}$?

• As $X$ is compact, $X=\bigcup_i f^{-1}(U_{\alpha_i})$. Therefore $Y=f(X)\subset f\left(\bigcup_{i} f^{-1}(U_{\alpha_i})\right)=\bigcup_i U_{\alpha_i}$? – user486983 Dec 9 '17 at 6:24
• Yes, almost correct. The only issue is that you need to write the index of the union clearly to indicate that it is a finite union; otherwise it is unclear whether you are taking an uncountable union. – user284331 Dec 9 '17 at 6:45
• @user284331 OK, thanks. – user486983 Dec 9 '17 at 7:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9042215347290039, "perplexity": 240.8598627673066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355944.41/warc/CC-MAIN-20210226001221-20210226031221-00252.warc.gz"}
http://www.conservapedia.com/Monty_Hall_problem
# Monty Hall problem

The Monty Hall Problem is a basic example problem in statistics and probability theory, based on the premise of the television show Let's Make a Deal, originally hosted by Monty Hall.

## Problem Statement

A contestant on a game show is presented with three doors. Behind one of the doors is a car, and behind the other two doors are goats. The contestant chooses door 1. The host must then open a door to reveal a goat; he opens door 3. The host then gives the contestant a chance to switch his choice to door 2. If the contestant is trying to win the car, is it to his advantage to switch his choice?

## Solution

It may be tempting to say that the contestant neither gains nor loses anything if he switches. Since there are two closed doors, and one of them is the winning door, it may appear that the probability of winning is 1/2 whether the contestant switches or not. Such reasoning is incorrect; the contestant always has a higher probability of winning if he switches.

### Illustration using scenario outcomes

There are three possible scenarios in the problem.

1. The contestant initially chooses the door hiding the car. The host reveals one goat, leaving the other goat behind the remaining door. In this case, switching loses.
2. The contestant initially chooses the door hiding goat 1. The host must reveal goat 2. Switching wins the car.
3. The contestant initially chooses the door hiding goat 2. The host must reveal goat 1. Switching wins the car.

If the contestant switches, two of the three equally likely scenarios lead to a win; the third loses. Hence, the contestant has a 2/3 probability of success if he switches, but only a 1/3 probability of winning if he does not.

### Solution using Bayes' theorem

The problem can also be solved by using Bayes' theorem to evaluate the posterior probability that the car is behind the initially chosen door, given that the host has opened another door. Let "Prize x" be the event that the prize is behind door x, and let "Open x" be the event that Monty Hall opens door x. Then before the doors are open, P(Prize 1) = P(Prize 2) = P(Prize 3) = 1/3. P(Open 2 | Prize 1) = 1/2: if the prize is behind door 1, Monty Hall has two doors he can open, since he must reveal a goat rather than the prize behind door 1. P(Open 2) = P(Open 3) = 1/2, as there are two doors Monty Hall can open, both equally likely. Thus, using Bayes' theorem, we get:

$P(\text{Prize 1}\mid \text{Open 2}) = \frac{P(\text{Open 2}\mid \text{Prize 1})\,P(\text{Prize 1})}{P(\text{Open 2})} = \frac{\frac{1}{2}\cdot\frac{1}{3}}{\frac{1}{2}} = \frac{1}{3}$

That is, the probability that the prize is behind door 1, given that Monty opens door 2, is 1/3, so the probability that it is behind door 3 is 2/3. Thus, the contestant should switch. The logic applies equally if Monty opens door 3.
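A short Monte Carlo check of the 2/3 figure. This simulation is an illustration, not part of the article, and it assumes the standard rules stated above: the host always opens a goat door other than the contestant's pick.

```python
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the pick nor the car.
    host = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != host)
    return pick == car

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # about 1/3
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # about 2/3
```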
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9076688289642334, "perplexity": 726.9365822352207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776428349.3/warc/CC-MAIN-20140707234028-00049-ip-10-180-212-248.ec2.internal.warc.gz"}
http://www.ma.utexas.edu/mediawiki/index.php?title=Starting_page&diff=cur&oldid=1022
## Latest revision as of 19:09, 23 September 2013

Welcome! This is the Nonlocal Equations Wiki (67 articles and counting)

In this wiki we collect several results about nonlocal elliptic and parabolic equations. If you want to know what a nonlocal equation refers to, a good starting point would be the Intro to nonlocal equations. If you want to find information on a specific topic, you may want to check the list of equations or use the search option on the left. We also keep a list of open problems and of upcoming events.

The wiki has an assumed bias towards regularity results and consequently to equations for which some regularization occurs. But we also include some topics which are tangentially related, or even completely unrelated, to regularity.

* The denoising algorithms in nonlocal image processing are able to detect patterns in a better way than the PDE based models. A simple model for denoising is the nonlocal mean curvature flow.
* The Boltzmann equation models the evolution of dilute gases and it is intrinsically an integral equation. In fact, simplified kinetic models can be used to derive the fractional heat equation without resorting to stochastic processes.
* In conformal geometry, the conformally invariant operators encode information about the manifold. They include fractional powers of the Laplacian.
* In oceanography, the temperature on the surface may diffuse through the atmosphere, giving rise to the surface quasi-geostrophic equation.
* Models for dislocation dynamics in crystals.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.835800290107727, "perplexity": 1473.9041384721843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676599291.24/warc/CC-MAIN-20180723164955-20180723184955-00364.warc.gz"}
https://www.physicsoverflow.org/3521/when-citing-anything-provide-arxiv-or-doi-link?show=4127
# When citing anything, provide arXiv (or doi) link

It's more about the culture of the site than anything else, but I think we should encourage people to give links to the papers they cite. Either arXiv, or by DOI. The first one (i.e. the arXiv abs page link) should be recommended, as:

• it is open access,
• it contains the DOI as well.

Comments:

• Would linking to the abstract from a journal be acceptable in place of a link to the arXiv? Sometimes a pre-print is not available, especially for older papers. I know it is not necessarily open access, but most journals that I deal with provide DOIs.
• @rcollyer: My opinion is such: if both an arXiv ID and a DOI are available, then there should be a link to arXiv. If there is only one, there is no choice anymore.
• I'm all for encouraging, but strongly against insisting. I don't think we really want to do anything to increase the barrier for posting answers or interesting questions.

## Answer

Is this really a problem? If I see a paper mentioned, SE provides reputation incentives to edit the post to provide links. This way a new user doesn't have to think about it, and will naturally notice the practice after some time participating.

I prefer DOI and arXiv of course. DOI because it's robust against change and arXiv because it's open. Perhaps some standard way of including both should be used when possible. I suggest including both because the DOI seen in arXiv abstracts is supplied by the authors. Some arXiv documents may be published, yet still lack a DOI on the abstract page. Perhaps something along the lines of [one combined format], which is kinda fiddly, or [an alternative format].

If we're serious about formalising this, perhaps it would be worth chatting with the SE people to see if they can add a tool. I noticed that there has been a discussion on trackbacks to the arXiv, and if we ask SE to make a tool then perhaps we should consider integrating the two. Bibliography management sites like CiteULike use a scraper which is rather complex, but I reckon if we just ask for the abstract page or the arXiv paper ID then a little javascript magic can do the rest of the work, and even pluck a DOI if it is available.

answered Nov 19, 2011 (180 points)

• I think we should not be very serious about formalizing it. First, it raises the initial barrier. Second, it is not a journal and (I guess) most people won't care about the form of the link, as long as the link is there.
• @PiotrMigdal: I mean having a markup for it. Otherwise my first point stands, and citations can be put in by editors.

## Answer

Since this is a theoretical physics resource, it might be useful to consider links to Spires/Inspire records. For example: here. This forms the standard database for publications in high energy physics and will provide links to copies of the article on arXiv and a publication copy, if any. We can insist on Inspire, since it will replace Spires soon, and it has a better interface.

answered Oct 9, 2011 (720 points)

• To be honest, I don't see why it's beneficial to use as a link _anything_ but arXivID/DOI.
• The Inspire record for the document gives the arXiv link, DOI, link to the actual publication, citations for the document, references cited by the document, all in one place. The database is also searchable by author, title, etc. It is the de facto standard in the high energy physics community. It contains all the information about an article, more than arXiv. So it's nothing less than what you suggested, but in some cases, might be more helpful.
• OK, but are you sure that it is robust enough? (E.g. in 10-20 years is it going to work in the same form?) And what is the percentage of all papers it covers? (DOI covers virtually all papers published in respectable journals.)
• @PiotrMigdal: Your point is well taken. Spires has served the HEP community very well for almost 40 years, so I think that they have a very good track record in future-proofing the database. Having said that, I wonder whether links to Spires will automatically redirect to INSPIRE once the transition takes place.
• @Jose: Inspire is supposed to formally replace Spires sometime in October. So we might as well have people using Inspire right away. And Inspire is more useful than Spires due to the way they present all the data in their new interface.
• @Siva: Yes, but my question was about the many links to Spires already in existence in other places. For instance, the arXiv links to Spires right now. I'm pretty sure that the transition has been well thought out and that the issue of dead links has been addressed, but I have not read anything about it.
• @Piotr Migdal: "OK, but are you sure that it is robust enough? (E.g. in 10-20 years is it going to work in the same form?)" Are you sure that arXiv is going to work in the same form in 10-20 years?
• @MarcinKotowski: I'm not (anyway, life has proven it is hard to predict the lifespan of a given web service). However, I can bet a beer that in 15 years arXiv links (or IDs) will be backward compatible. When it comes to DOI, it _seems_ to be as stable as ISBN (so give DOI at least 30 years).
• @siva: To be clear, I don't have anything _against_ using an INSPIRE link. I just believe that arXiv/DOI is a safer solution. In any case, you can provide both, e.g. _arXiv:9999.9999 (or INSPIRE)_ (if you believe that INSPIRE links are more useful and worth promoting, it may be a good solution).
• The plans are to expand INSPIRE and make it more comprehensive and useful. There seems to be at least a 25-year vision plan to make it a full text service like arXiv, etc. Check [this](http://poynder.blogspot.com/2008/09/open-access-interviews-annette-holtkamp.html). They might probably even make documents searchable. Personally, I imagine this to be a huge opportunity for open science and it would be great if we can encourage its use. The initiative has the support of the HEP community.
• @Piotr: Either way. In essence, we should encourage people to link articles which they refer to (preferably open access).
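As a toy version of the "include both" convention discussed in this thread, here is a small formatter sketch. The function and output format are my own invention for illustration, not an existing PhysicsOverflow or Stack Exchange tool; only the arxiv.org/abs and doi.org URL schemes are real. The IDs below are placeholders, echoing the thread's own arXiv:9999.9999 example.

```python
def cite(arxiv_id: str, doi: str = "") -> str:
    """Render a citation link in the 'arXiv first, DOI if available' style."""
    link = f"[arXiv:{arxiv_id}](https://arxiv.org/abs/{arxiv_id})"
    if doi:
        link += f" ([doi:{doi}](https://doi.org/{doi}))"
    return link

print(cite("9999.9999"))
print(cite("9999.9999", doi="10.0000/example-doi"))
```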
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48839032649993896, "perplexity": 1158.982788889801}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039596883.98/warc/CC-MAIN-20210423161713-20210423191713-00291.warc.gz"}
https://ora.ox.ac.uk/objects/uuid:1308effc-6fca-4daa-94f8-9a5c4fa44235
Journal article

A HYPERBOLIC SYSTEM OF CONSERVATION LAWS FOR FLUID FLOWS THROUGH COMPLIANT AXISYMMETRIC VESSELS

Abstract: We are concerned with the derivation and analysis of one-dimensional hyperbolic systems of conservation laws modelling fluid flows such as the blood flow through compliant axisymmetric vessels. Early models derived are nonconservative and/or nonhomogeneous with measure source terms, which are endowed with infinitely many Riemann solutions for some Riemann data. In this paper, we derive a one-dimensional hyperbolic system that is conservative and homogeneous. Moreover, there exists a unique global R...

Publication status: Published
Authors: Chen, G-QG
Journal: ACTA MATHEMATICA SCIENTIA
Volume: 30
Issue: 2
Pages: 391-427
Publication date: 2010-03-05
ISSN: 0252-9602
URN: uuid:1308effc-6fca-4daa-94f8-9a5c4fa44235
Source identifiers: 203601
Local pid: pubs:203601
Language: English
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8111405372619629, "perplexity": 2279.8906163652728}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107904039.84/warc/CC-MAIN-20201029095029-20201029125029-00666.warc.gz"}
http://scikit-bio.org/docs/0.5.6/generated/skbio.sequence.Protein.__deepcopy__.html
skbio.sequence.Protein.__deepcopy__

Protein.__deepcopy__(memo)

Return a deep copy of this sequence.

State: Stable as of 0.4.0.

See also: copy()

Notes: This method is equivalent to seq.copy(deep=True).
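A minimal usage sketch (my addition, not part of the scikit-bio docs), assuming a Protein built from a short peptide string; it shows that copy.deepcopy routes through this method:

```
import copy
from skbio import Protein

seq = Protein("PAW")
clone = copy.deepcopy(seq)        # invokes Protein.__deepcopy__ under the hood
same = seq.copy(deep=True)        # equivalent call, per the note above
print(clone == seq, same == seq)  # both copies should compare equal to the original
```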
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040085077285767, "perplexity": 21962.397182838136}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369553.75/warc/CC-MAIN-20210304235759-20210305025759-00306.warc.gz"}
https://de.maplesoft.com/support/help/Maple/view.aspx?path=LinearAlgebra/Modular/IntegerDeterminant
IntegerDeterminant - Maple Help

LinearAlgebra[Modular]

IntegerDeterminant - determinant of an integer matrix using modular methods

Calling Sequence

IntegerDeterminant(M)

Parameters

M - square Matrix with integer entries

Description

• The IntegerDeterminant function computes the determinant of the integer matrix M. This is a programmer-level function: it does not perform argument checking, so argument checking must be handled external to this function.
• Note: The IntegerDeterminant command uses a probabilistic approach that achieves great gains for structured systems. Information on controlling the probabilistic behavior can be found in EnvProbabilistic.
• This function is used by the Determinant function in the LinearAlgebra package when a Matrix is determined to contain only integer entries.
• This command is part of the LinearAlgebra[Modular] package, so it can be used in the form IntegerDeterminant(..) only after executing the command with(LinearAlgebra[Modular]). However, it can always be used in the form LinearAlgebra[Modular][IntegerDeterminant](..).

Examples

A 3x3 matrix:

```
> with(LinearAlgebra[Modular]):
> M := Matrix([[2, 1, 3], [4, 3, 1], [-2, 1, -3]]);
                          [ 2  1   3 ]
                     M := [ 4  3   1 ]
                          [-2  1  -3 ]
> IntegerDeterminant(M);
                               20
```

A 100x100 matrix:

```
> M := LinearAlgebra[RandomMatrix](100):
> tt := time():
> IntegerDeterminant(M);
38562295347802366242417909657285032281105091485000162871067163275296273582728190925949289361981964881806516849833008824879568403928373759144147382030798909099402726531205056808283212790472544339698767179236612577117605985054960334148934541347201762137455
> time() - tt;
                              0.042
```
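To illustrate the "modular methods" idea the page refers to, here is a sketch in Python (my own illustration, not Maple's actual algorithm, which is probabilistic and considerably more sophisticated): compute the determinant modulo several primes, then reconstruct the integer value with the Chinese Remainder Theorem.

```
def det_mod(M, p):
    # determinant of a square integer matrix modulo a prime p,
    # via Gaussian elimination over GF(p)
    A = [[x % p for x in row] for row in M]
    n, det = len(A), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = (-det) % p                 # a row swap flips the sign
        det = det * A[col][col] % p
        inv = pow(A[col][col], -1, p)        # modular inverse (Python 3.8+)
        for r in range(col + 1, n):
            f = A[r][col] * inv % p
            for c in range(col, n):
                A[r][c] = (A[r][c] - f * A[col][c]) % p
    return det

def crt(residues, primes):
    # combine residues modulo pairwise-coprime primes
    x, m = 0, 1
    for r, p in zip(residues, primes):
        x += m * ((r - x) * pow(m, -1, p) % p)
        m *= p
    return x, m

M = [[2, 1, 3], [4, 3, 1], [-2, 1, -3]]
primes = [10007, 10009, 10037]               # product comfortably exceeds 2*|det|
x, m = crt([det_mod(M, p) for p in primes], primes)
det = x if x <= m // 2 else x - m            # map back to a signed integer
print(det)                                   # 20, matching the Maple example
```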
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9336393475532532, "perplexity": 1249.8378019275713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104240553.67/warc/CC-MAIN-20220703104037-20220703134037-00086.warc.gz"}
http://tex.stackexchange.com/questions/122751/change-color-of-extra-ticks-grid
Change Color of extra ticks grid

I got a nice and tidy plot with the help of pgfplots using groupplot. Now I want to insert some extra ticks with a different color than the major grid... everything works pretty well except the color of the grid line. I know extra ticks are treated as major ticks, and for those, major grid style = {red} would change the color. But I can't find something like extra grid style = {red} in the manual. All I can actually find is extra tick line style={red}, which only changes the color of the label entry of the extra tick. Does anybody know a tiny and beautiful workaround? Thanks in advance, CN

All the options that apply to ticks and grids can be specified separately for extra ticks by putting the options in extra tick style={...}. So if you want the major grid lines for the extra ticks to be red, you would set extra tick style={major grid style=red}:

```
\documentclass[border=5mm]{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
    grid=both,
    extra x ticks={1,3},
    extra tick style={
        major grid style=red,
        tick align=outside,
        tick style=red
    }
]
\addplot [black, only marks] {rnd};
\end{axis}
\end{tikzpicture}
\end{document}
```

Comments:
• Thanks Jake, for this pretty fast and correct answer... So I was pretty close but still miles away.. damn ;) – CN19 Jul 6 '13 at 12:13
• @user33304: it is better to accept the answer followed by upvoting. – stalking isn't tolerated Jul 6 '13 at 12:26
• Thanks for the advice... but upvoting requires a registration.. and that's what I did right now ;) – CN19 Jul 6 '13 at 13:02
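The same mechanism applies to extra y ticks. A minimal variant (my addition, an untested sketch building on the answer above) that also dashes the extra grid lines:

```
\documentclass[border=5mm]{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
    grid=both,
    extra y ticks={0.5},
    extra tick style={major grid style={red, dashed}}
]
\addplot [black, only marks] {rnd};
\end{axis}
\end{tikzpicture}
\end{document}
```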
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4680316150188446, "perplexity": 4119.376532751744}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936469016.33/warc/CC-MAIN-20150226074109-00245-ip-10-28-5-156.ec2.internal.warc.gz"}
http://mathoverflow.net/users/7738/user7738?tab=activity
user7738 - activity

• Oct 6: asked "A singular value inequality"
• Dec 12: accepted an answer to "Is there always a parallelogram cross-section of parallelepiped contained in the smallest box"
• Nov 28, comment on that question: "I plotted $A(Q)$ for A={{34, 33, 33}, {33, 34, 33}, {33, 33, 34}} by using Mathematica and got an object with more than 6 faces, which is strange to me. Did you plot $A(Q)$ of your example?"
• Nov 28, comment: "For 'parallelepiped', you can go to en.wikipedia.org/wiki/Parallelepiped"
• Nov 28, comment: "Dear Sergei, your example is very interesting. If you plot the image of $Q$ under $A$, $A(Q)$ is not a parallelepiped, as it has more than 6 faces. The question should be: If $A(Q)$ is a parallelepiped, is there always one of the planes $P_0$ such that the plane does not intersect with the interior of any two adjacent edges?"
• Nov 28, comment: "Dear Joe, thanks. I have revised it."
• Nov 28: numerous revisions to the question (title and body edits).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5350707173347473, "perplexity": 732.2664091963042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461862134822.89/warc/CC-MAIN-20160428164854-00188-ip-10-239-7-51.ec2.internal.warc.gz"}
https://love2d.org/forums/viewtopic.php?f=4&t=13262&p=75854
## Knowing if userdata is really an image

Questions about the LÖVE API, installing LÖVE and other support related questions go here.

Ubermann: How? So I can predict and avoid app crashes? Because doing type(myImage) only tells me that it is userdata. I could run through all elements in the userdata and check whether all of them are {r, g, b, a}, but since I'm implementing support for CMYK and HSV colors, it would be too much time invested for just that.

Santos: Assuming we're talking about Image objects, does this work?

```
if image:typeOf("Image") then
```

Ubermann: No. It is in fact an ImageData type of object. And when doing type() it just returns "userdata". But I need to know if it really contains data that is image data or not.

ejmr: If you want there to be zero doubt about the nature of the data, I don't see any other option besides reading the entire file and verifying it, byte-by-byte if you have to. Popular image formats have header metadata you could look at, but if you base the test just on that, you're trusting the rest of the data to be uncorrupted.
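For the ImageData case specifically, LÖVE objects expose typeOf just like Image does, so a guarded check along these lines should work (a hedged sketch; isImageData is my own helper name, not a LÖVE API):

```
-- true only for LÖVE ImageData objects; the pcall guards against
-- userdata from other libraries that errors when indexed
local function isImageData(x)
    if type(x) ~= "userdata" then return false end
    local ok, result = pcall(function() return x:typeOf("ImageData") end)
    return ok and result == true
end
```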
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29878196120262146, "perplexity": 5996.321753540761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347428990.62/warc/CC-MAIN-20200603015534-20200603045534-00173.warc.gz"}
http://www.owltestprep.com/unofficial-guide-2016-ps-101/
# Unofficial Guide 2016 PS 101

I'm going to do a number of these posts. You can think of them as commentary on the 2016 Official Guide for GMAT Review. I'm going to use the abbreviations PS, DS, RC, CR, and SC for Problem Solving, Data Sufficiency, Reading Comprehension, Critical Reasoning, and Sentence Correction. Sometimes I'll present alternative solutions, more detailed solutions, and, occasionally, an example problem. I can't reproduce the complete problem here, so if you don't have your official guide handy, now would be the time to break it out – page 167:

Working simultaneously at their respective constant rates, Machines A and B produce 800 nails…

This is a classic combined rate problem with a VIC (variables in choices) twist. The algebraic solution isn't crazy hard, so I'll start with that as an appetizer, and of course I'll show EVERY SINGLE STEP. For the main course I'm going to use back-solving to send the problem home crying to its mama "Mommy, that guy at Owl Test Prep hurt me." Let's get started.

First it's important to know that the combined rate is the sum of the rates. In other words:

Rate of A and B together = Rate of A + Rate of B

Based on the time – $$x$$ hours – we can find the combined rate of A and B. By the same reasoning we can find the rate of machine A. I'm a big fan of setting up rate problems with $$Work = Rate\times{Time}$$ or $$Distance = Rate\times{Time}$$ tables:

$$800 = Rate_{ab}\times{x}$$
$$800 = Rate_{a}\times{y}$$
$$800 = Rate_{b}\times{T}$$

A little bit of algebra gives us:

$$800 = \frac{800}{x}\times{x}$$
$$800 = \frac{800}{y}\times{y}$$
$$800 = Rate_{b}\times{T}$$

Because the combined rate is the sum of the rates, we know:

$$\frac{800}{x} = \frac{800}{y} + Rate_{b}$$

$$\frac{800}{x} - \frac{800}{y} = Rate_{b}$$

The common denominator is $$xy$$:

$$\frac{800y}{xy} - \frac{800x}{xy} = Rate_{b}$$

$$\frac{800y - 800x}{xy} = Rate_{b}$$

Let's factor out that 800:

$$\frac{800(y - x)}{xy} = Rate_{b}$$

Now we can stick this back into the table to find the time it takes machine B to produce 800 nails:

$$800 = \frac{800(y - x)}{xy}\times{T}$$

To solve for $$T$$ we multiply by the reciprocal of $$Rate_{b}$$:

$$\frac{800xy}{800(y - x)} = T$$

Cancel the $$800$$ and we get:

$$T = \frac{xy}{y - x}$$

BUT WE CAN DO BETTER!!! I think Pei Mai would agree with me when I tell you that the GMAT should fear you, not the other way around. Because this is a VIC, it might be a good idea to try back-solving. Let's pick some numbers and see what happens. I'm going to go with $$x = 2$$ and $$y = 4$$ (be careful – $$y$$ must be greater than $$x$$, because $$x$$ is the time they take together, which will be less than the time one machine takes alone).
Now the table looks like this:

$$800 = 400\times{2}$$
$$800 = 200\times{4}$$
$$800 = Rate_{b}\times{T}$$

We calculate $$Rate_{b}$$ the same way we did before, but it's a little bit easier this time:

$$Rate_{b} = 400 - 200 = 200$$

Now let's solve for $$T$$:

$$800 = 200\times{T}$$

Divide by $$200$$ and we get:

$$T = 4$$

Now we plug our picks for $$x$$ and $$y$$ into the answer choices. The target number (the result we're looking for) is 4. Remember that you have to check all of the answers, because you might have picked some unfortunate numbers that produce two matches.

$$(A) = \frac{2}{2 + 4}$$ Nope.
$$(B) = \frac{4}{2 + 4}$$ Nope.
$$(C) = \frac{2\times{4}}{2 + 4}$$ Nope.
$$(D) = \frac{2\times{4}}{2 - 4}$$ Nope.
$$(E) = \frac{2\times{4}}{4 - 2}$$ Yep!
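A quick numeric sanity check of the back-solving step (my addition, just arithmetic, nothing GMAT-official):

```
x, y = 2, 4                       # x = time together, y = time for A alone (y > x)
rate_b = 800 / x - 800 / y        # 400 - 200 = 200 nails per hour
T = 800 / rate_b                  # 4 hours: the target number
choices = {
    "A": x / (x + y),
    "B": y / (x + y),
    "C": x * y / (x + y),
    "D": x * y / (x - y),
    "E": x * y / (y - x),
}
print(T)                                           # 4.0
print([k for k, v in choices.items() if v == T])   # ['E']
```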
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7388725876808167, "perplexity": 595.2367672940674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863626.14/warc/CC-MAIN-20180520151124-20180520171124-00470.warc.gz"}
http://openstudy.com/updates/558245e9e4b07028ea6120ac
## Babynini: Solve the given equation 2cos^2(theta) + sin(theta) = 1

Babynini: I've gotten up to 2sin^2(theta) + sin(theta) + 1 = 0 but can't figure out how to factor :/

Babynini: (2sin(theta) ? ___)(sin(theta) ? ____)

freckles: Testing what you said for my own self:
$2(1-\sin^2(\theta))+\sin(\theta)-1=0 \\ \text{ by the Pythagorean identity; also subtract 1 on both sides } \\ 2-2\sin^2(\theta)+\sin(\theta)-1=0 \\ \text{ by the distributive property } \\ -2\sin^2(\theta)+\sin(\theta)+1=0 \text{ combining like terms }$
I think you left a sign off on the first term here. I'm also going to multiply both sides by -1:
$2 \sin^2(\theta)-\sin(\theta)-1=0$

Babynini: Ah yeah, I didn't notice the 2sin^2(theta) was negative.

freckles: Anyway, if you are doing trial factors you don't have a lot of trials here, since -1 is 1(-1) or -1(1).

Babynini: Wait, so now what do I do for the factoring?

freckles: If you don't like factoring much, you could use the quadratic formula.

Babynini: (2sin(theta) - 1)(sin(theta) + 1) = 0?

Babynini: I think factoring is usually nicer xD

Babynini: Dangit, that wouldn't give us the right answer.

freckles: Well, if you want to do factoring: if you multiply that out, the middle term will be 2sin(theta) - sin(theta), which will not be -sin(theta). So switch where the -1 and 1 are:
$2 \sin^2(\theta)-\sin(\theta)-1=0 \\ (2 \sin(\theta)+1)(\sin(\theta)-1)=0$

Babynini: Which would be 2sin^2(theta) - 2sin(theta) + sin(theta) - 1 = 0, which is what we want, yeah?

freckles: Yep.

Babynini: I was confused about -2sin(theta) - sin(theta) = -sin(theta), but now I see, hah.

freckles: No no, -2sin(theta) + sin(theta) = -sin(theta).

Babynini: Shiz, yeah, sorry. Finals week, man. Killing me.

Babynini: OK, so now we have sin(theta) = 1/2 and sin(theta) = 1.

freckles: Almost.

Babynini: -1/2?

freckles:
$2 \sin^2(\theta)-\sin(\theta)-1=0 \\ (2 \sin(\theta)+1)(\sin(\theta)-1)=0 \\ \sin(\theta)=\frac{-1}{2} \text{ or } \sin(\theta)=1$
The last equation was right; just a sign off on the first.

Babynini: OK, and sin = -1/2 at 5pi/6 and 7pi/6... right?

freckles: Can I ask, if you want me to check your answers, can you give me the interval on which you want to solve?

Babynini: Hm? It just says solve the given equation. No interval.

freckles: Probably looking for all solutions, then.

Babynini: So perhaps 5pi/6, 7pi/6, pi/2.

freckles: OK, but sin(5pi/6) = 1/2, not -1/2.

Babynini: Crap. 11pi/6.

freckles: sin(7pi/6) = -1/2, so 7pi/6 is a solution. sin(11pi/6) = -1/2, so 11pi/6 is a solution. Yep yep. And yes, sin(pi/2) = 1, so pi/2 is a solution to the equation sin(u) = 1. But if they want all the solutions: since we are working with sin and cos, just add 2*pi*n, where n is an integer:
$\theta=\frac{7\pi}{6}+2 \pi n \\ \theta=\frac{11\pi}{6}+2 \pi n \\ \theta=\frac{\pi}{2}+2 \pi n \\ \text{ where } n \text{ is an integer }$

Babynini: Ah OK. I remember this :)
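A quick numeric check of the three base solutions (my addition, not from the thread):

```
import math

for theta in (7 * math.pi / 6, 11 * math.pi / 6, math.pi / 2):
    lhs = 2 * math.cos(theta) ** 2 + math.sin(theta)
    print(round(lhs, 12))   # prints 1.0 for each solution
```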
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999730587005615, "perplexity": 14639.817630987609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00511-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physics.uoguelph.ca/events/2018/11/uncovering-dynamics-spacetime
# Uncovering the Dynamics of Spacetime

MacN 415

## Speaker

William East, Perimeter Institute

## Abstract

With the ground-breaking gravitational wave detections from LIGO/Virgo, we have entered a new era where we can actually observe the action of strongly curved spacetime originally predicted by Einstein. Going hand in hand with this, there has been a renaissance in the theoretical and computational tools we use to understand and interpret the dynamics of gravity and matter in this regime. I will describe some of the rich behavior exhibited by sources of gravitational waves such as the mergers of black holes and neutron stars. I will also discuss some of the open questions, and what these events could teach us, not only about the extremes of gravity, but about the behavior of matter at extreme densities, the solution of astrophysical mysteries, and even the existence of new types of particles.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8495768904685974, "perplexity": 597.6038596037769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662570051.62/warc/CC-MAIN-20220524075341-20220524105341-00032.warc.gz"}
https://www.lessonplanet.com/teachers/ratio-and-proportion
Ratio and Proportion

Middle and high schoolers analyze the formation of ratios as they develop comparisons of two known quantities. These comparisons are used to formulate proportions and solve problems. Learner worksheets and teacher exemplar resources are included, with keys.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9303954243659973, "perplexity": 3438.168809897138}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865468.19/warc/CC-MAIN-20180523082914-20180523102914-00171.warc.gz"}
http://mathhelpforum.com/calculus/194271-rules-thumb-divergence-convergence-comparisons.html
# Math Help - Rules of thumb for divergence/convergence comparisons?

1. ## Rules of thumb for divergence/convergence comparisons?

Hey, I was wondering a little about determining convergence/divergence of improper integrals. I have no problems doing the calculations and such, but I still feel a little bit like I'm floundering in the dark when it comes to deciding exactly what function to choose for comparison integrals. For example, if we have the integral from 2 to infinity of x*sqrt(x)/(x^2 - 1)... I just happen to know that 1/sqrt(x) is suitable for comparison, but I'm not sure I could just guess or conclude that from something specific. So basically I was just wondering if there are any good rules of thumb for this when it comes to somewhat more complicated functions, like the one above and worse? Or is it really just hitting your head against it until you work up plenty of experience at it?

2. ## Re: Rules of thumb for divergence/convergence comparisons?

Originally Posted by Scurmicurv
[...]
Note the degree of the numerator is 3/2 and the degree of the denominator is 2... equivalently, you're looking at an integral of the same degree as $\frac{1}{x^{1/2}}$

3. ## Re: Rules of thumb for divergence/convergence comparisons?

Heeey, that... ha, wow, I can't believe I haven't seen that myself before. But yeah, I went back and looked over a few old assignments and had a fair amount of stuff clicking into place. Thanks a lot for pointing that out!
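To make the rule of thumb concrete, here is the comparison argument for the integral above spelled out (my addition, not from the thread):

```
% For x >= 2 we have x^2 - 1 < x^2, so the integrand dominates 1/sqrt(x):
\[
\frac{x\sqrt{x}}{x^2-1} \;>\; \frac{x^{3/2}}{x^2} \;=\; \frac{1}{\sqrt{x}},
\qquad
\int_2^\infty \frac{dx}{\sqrt{x}}
  \;=\; \lim_{b\to\infty}\bigl(2\sqrt{b}-2\sqrt{2}\bigr)
  \;=\; \infty,
\]
% hence the original integral diverges by direct comparison.
```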
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7337808012962341, "perplexity": 248.5224770160919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936465599.34/warc/CC-MAIN-20150226074105-00228-ip-10-28-5-156.ec2.internal.warc.gz"}
http://codeforces.com/problemset/problem/639/A
A. Bear and Displayed Friends

time limit per test: 2 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output

Limak is a little polar bear. He loves connecting with other bears via social networks. He has n friends and his relation with the i-th of them is described by a unique integer ti. The bigger this value is, the better the friendship is. No two friends have the same value ti.

Spring is starting and the Winter sleep is over for bears. Limak has just woken up and logged in. All his friends still sleep and thus none of them is online. Some (maybe all) of them will appear online in the next hours, one at a time.

The system displays friends who are online. On the screen there is space to display at most k friends. If there are more than k friends online then the system displays only the k best of them — those with the biggest ti. There will be q queries of two types:

• "1 id" — Friend id becomes online. It's guaranteed that he wasn't online before.
• "2 id" — Check whether friend id is displayed by the system. Print "YES" or "NO" in a separate line.

Are you able to help Limak and answer all queries of the second type?

Input

The first line contains three integers n, k and q (1 ≤ n, q ≤ 150 000, 1 ≤ k ≤ min(6, n)) — the number of friends, the maximum number of displayed online friends and the number of queries, respectively.

The second line contains n integers t1, t2, ..., tn (1 ≤ ti ≤ 10^9) where ti describes how good Limak's relation with the i-th friend is.

The i-th of the following q lines contains two integers typei and idi (1 ≤ typei ≤ 2, 1 ≤ idi ≤ n) — the i-th query. If typei = 1 then friend idi becomes online. If typei = 2 then you should check whether friend idi is displayed.

It's guaranteed that no two queries of the first type will have the same idi, because one friend can't become online twice. Also, it's guaranteed that at least one query will be of the second type (typei = 2), so the output won't be empty.

Output

For each query of the second type print one line with the answer — "YES" (without quotes) if the given friend is displayed and "NO" (without quotes) otherwise.

Examples

Input
4 2 8
300 950 500 200
1 3
2 4
2 3
1 1
1 2
2 1
2 2
2 3

Output
NO
YES
NO
YES
YES

Input
6 3 9
50 20 51 17 99 24
1 3
1 4
1 5
1 2
2 4
2 2
1 1
2 4
2 3

Output
NO
YES
NO
YES

Note

In the first sample, Limak has 4 friends who all sleep initially. At first, the system displays nobody because nobody is online. There are the following 8 queries:

1. "1 3" — Friend 3 becomes online.
2. "2 4" — We should check if friend 4 is displayed. He isn't even online, and thus we print "NO".
3. "2 3" — We should check if friend 3 is displayed. Right now he is the only friend online and the system displays him. We should print "YES".
4. "1 1" — Friend 1 becomes online. The system now displays both friend 1 and friend 3.
5. "1 2" — Friend 2 becomes online. There are 3 friends online now, but we were given k = 2, so only two friends can be displayed. Limak has a worse relation with friend 1 than with the other two online friends (t1 < t2, t3), so friend 1 won't be displayed.
6. "2 1" — Print "NO".
7. "2 2" — Print "YES".
8. "2 3" — Print "YES".
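A straightforward solution sketch in Python (my own, not an official editorial): friends only ever come online, so the displayed set is exactly the k best online friends, and a small dict is enough (k ≤ 6 keeps the eviction scan trivial).

```
import sys

def main():
    data = sys.stdin.buffer.read().split()
    idx = 0
    n, k, q = int(data[idx]), int(data[idx + 1]), int(data[idx + 2])
    idx += 3
    t = [int(x) for x in data[idx:idx + n]]
    idx += n
    displayed = {}            # friend id -> t value; at most k entries
    out = []
    for _ in range(q):
        typ, fid = int(data[idx]), int(data[idx + 1])
        idx += 2
        if typ == 1:
            displayed[fid] = t[fid - 1]
            if len(displayed) > k:        # evict the weakest displayed friend
                worst = min(displayed, key=displayed.get)
                del displayed[worst]
        else:
            out.append("YES" if fid in displayed else "NO")
    print("\n".join(out))

main()
```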
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25677213072776794, "perplexity": 1656.7775966343913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867217.1/warc/CC-MAIN-20180525200131-20180525220131-00104.warc.gz"}
https://www.mathworks.com/help/signal/ref/bartlett.html
# bartlett

Bartlett window

## Syntax

w = bartlett(L)

## Description

w = bartlett(L) returns an L-point symmetric Bartlett window.

## Examples

Create a 64-point Bartlett window. Display the result using wvtool.

```
L = 64;
bw = bartlett(L);
wvtool(bw)
```

## Input Arguments

L — Window length, specified as a positive integer.
Data Types: single | double

## Output Arguments

w — Bartlett window, returned as a column vector.

## Algorithms

The following equation generates the coefficients of a Bartlett window:

$w(n)=\begin{cases}\dfrac{2n}{N}, & 0\le n\le \dfrac{N}{2},\\[4pt] 2-\dfrac{2n}{N}, & \dfrac{N}{2}\le n\le N.\end{cases}$

The window length $L = N + 1$.

The Bartlett window is very similar to a triangular window as returned by the triang function. However, the Bartlett window always has zeros at the first and last samples, while the triangular window is nonzero at those points. For odd values of L, the center L-2 points of bartlett(L) are equivalent to triang(L-2).

Note: If you specify a one-point window (L = 1), the value 1 is returned.

## References

[1] Oppenheim, Alan V., Ronald W. Schafer, and John R. Buck. Discrete-Time Signal Processing. Upper Saddle River, NJ: Prentice Hall, 1999.

## Extended Capabilities

C/C++ Code Generation: Generate C and C++ code using MATLAB® Coder™.

Introduced before R2006a
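For readers outside MATLAB, the piecewise formula above is easy to reproduce; a small NumPy sketch (my addition; numpy's np.bartlett uses an equivalent definition):

```
import numpy as np

L = 8
N = L - 1                                 # the formula above uses N = L - 1
n = np.arange(L)
w = np.where(n <= N / 2, 2 * n / N, 2 - 2 * n / N)
print(np.allclose(w, np.bartlett(L)))     # True: zero endpoints, peak in the middle
```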
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.87920743227005, "perplexity": 10414.21716726299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056856.4/warc/CC-MAIN-20210919095911-20210919125911-00561.warc.gz"}
http://www.clear-lines.com/blog/category/Probability.aspx
Mathias Brandewinder on .NET, F#, VSTO and Excel development, and quantitative analysis / machine learning.

8. July 2012 11:34

I recently spent some time thinking about code performance optimization problems, which led me to dust off my old class notes on Queuing Theory. In general, Queuing studies the behavior of networks of Queues, where jobs arrive at various time intervals and wait in line until they can be processed / served by a Node / Server, potentially propagating new Jobs to other connected queues upon completion.

The simplest case in Queues is known as the Single-Server Queue – a queue where all jobs wait in a single line, and are processed by a single server. Single queues are interesting in their own right, and can be found everywhere. They are also important, because they are the building blocks of Networks of Queues, and understanding what happens at a single Queue level helps understand how they work when connected together.

The question we'll be looking at today is the following: given the characteristics of a Queue, what can we say about its performance over time? For instance, can we estimate how often the Queue will be busy? How many jobs will be backed up waiting for service on average? How large can the queue get? How long will it take, on average, to process an incoming job?

<Side note: the picture above is not completely random. It depicts the Marsupilami, a fictional creature who is an expert in managing long tails, known in French as "Queues">

We'll use a simple Monte-Carlo simulation model to approach these questions, and see whether the results we observe match the results predicted by theory.

What we are interested in is observing the state of the Queue over time. Two events drive the behavior of the queue:

• a new Job arrives in the queue to be processed, and either gets processed immediately by the server, or is placed at the end of the line,
• the server completes a Job, picks the next one in line if available, and works on it until it's done.

From this description, we can identify a few elements that are important in modeling the queue:

• whether the Server is Idle or Busy,
• whether Jobs are waiting in the Queue to be processed,
• how long it takes the Server to process a Job,
• how new Jobs arrive at the Queue over time.

## Modeling the Queue

Let's get coding! We create a small F# script (Note: the complete code sample is available at the bottom of the post, as well as on FsSnip), and begin by defining a Discriminated Union type Status, which represents the state of the Server at time T:

```
type Status = Idle | Busy of DateTime * int
```

Our server can be in two states: Idle, or Busy, in which case we also need to know when it will be done with its current Job, and how many Jobs are waiting to be processed. In order to model what is happening to the Queue, we'll create a State record type:

```
type State = { Start: DateTime; Status: Status; NextIn: DateTime }
```

Start and Status should be self-explanatory – they represent the time when the Queue started being in that new State, and the Server Status. The reason for NextIn may not be as obvious – it represents the arrival time of the next Job. First, there is an underlying assumption here that there is ALWAYS a next job: we are modeling the Queue as a perpetual process, so that we can simulate it for as long as we want.
Then, the reason for this approach is that it simplifies the determination of the transition of the Queue between states:

```
let next arrival processing state =
    match state.Status with
    | Idle ->
        { Start = state.NextIn;
          NextIn = state.NextIn + arrival();
          Status = Busy(state.NextIn + processing(), 0) }
    | Busy(until, waiting) ->
        match (state.NextIn <= until) with
        | true ->
            { Start = state.NextIn;
              NextIn = state.NextIn + arrival();
              Status = Busy(until, waiting + 1) }
        | false ->
            match (waiting > 0) with
            | true ->
                { Start = until;
                  Status = Busy(until + processing(), waiting - 1);
                  NextIn = state.NextIn }
            | false ->
                { Start = until;
                  Status = Idle;
                  NextIn = state.NextIn }
```

This somewhat gnarly function is worth commenting a bit. Its purpose is to determine the next state the Queue will enter, given its current state and two functions, arrival and processing, which have the same signature:

```
val f : (unit -> TimeSpan)
```

Both these functions take no arguments and return a TimeSpan, which represents, respectively, how much time will elapse between the latest arrival in the Queue and the next one (the inter-arrival time), and how much time the Server will take completing its next Job. This information is sufficient to derive the next state of the system:

• If the Server is Idle, it will switch to Busy when the next Job arrives, and the arrival of the next Job is scheduled based on the job inter-arrival time,
• If the Server is Busy, two things can happen: either the next Job arrives before the Server completes its work, or not. If a new Job arrives first, it increases the number of Jobs waiting to be processed, and we schedule the next arrival. If the Server finishes first, it becomes Idle if no Job is waiting to be processed, and otherwise it begins processing the Job in front of the line.

We are now ready to run a Simulation.

```
let simulate startTime arr proc =
    let nextIn = startTime + arr()
    let state = { Start = startTime; Status = Idle; NextIn = nextIn }
    Seq.unfold (fun st -> Some(st, next arr proc st)) state
```

We initialize the Queue to begin at the specified start time, with a cold start (Idle), and unfold an infinite sequence of States, which can go on for as long as we please.

## Running the Simulation

Let's start with a sanity check, and validate the behavior of a simple case, where Jobs arrive to the Queue every 10 seconds, and the Queue takes 5 seconds to process each Job.
First, let's write a simple function to pretty-display the state of the Queue over time:

```
let pretty state =
    let count =
        match state.Status with
        | Idle -> 0
        | Busy(_, waiting) -> 1 + waiting
    let nextOut =
        match state.Status with
        | Idle -> "Idle"
        | Busy(until, _) -> until.ToLongTimeString()
    let start = state.Start.ToLongTimeString()
    let nextIn = state.NextIn.ToLongTimeString()
    printfn "Start: %s, Count: %i, Next in: %s, Next out: %s" start count nextIn nextOut
```

Now we can define our model:

```
let constantTime (interval: TimeSpan) =
    fun () -> interval

let arrivalTime = new TimeSpan(0,0,10)
let processTime = new TimeSpan(0,0,5)

let simpleArr = constantTime arrivalTime
let simpleProc = constantTime processTime

let startTime = new DateTime(2010, 1, 1)
let constantCase = simulate startTime simpleArr simpleProc
```

Let's simulate 10 transitions in fsi:

```
> Seq.take 10 constantCase |> Seq.iter pretty;;
Start: 12:00:00 AM, Count: 0, Next in: 12:00:10 AM, Next out: Idle
Start: 12:00:10 AM, Count: 1, Next in: 12:00:20 AM, Next out: 12:00:15 AM
Start: 12:00:15 AM, Count: 0, Next in: 12:00:20 AM, Next out: Idle
Start: 12:00:20 AM, Count: 1, Next in: 12:00:30 AM, Next out: 12:00:25 AM
Start: 12:00:25 AM, Count: 0, Next in: 12:00:30 AM, Next out: Idle
Start: 12:00:30 AM, Count: 1, Next in: 12:00:40 AM, Next out: 12:00:35 AM
Start: 12:00:35 AM, Count: 0, Next in: 12:00:40 AM, Next out: Idle
Start: 12:00:40 AM, Count: 1, Next in: 12:00:50 AM, Next out: 12:00:45 AM
Start: 12:00:45 AM, Count: 0, Next in: 12:00:50 AM, Next out: Idle
Start: 12:00:50 AM, Count: 1, Next in: 12:01:00 AM, Next out: 12:00:55 AM
val it : unit = ()
```

Looks like we are doing something right – the simulation displays an arrival every 10 seconds, followed by 5 seconds of activity until the job is processed, and 5 seconds of idleness until the next arrival.

Let's do something a bit more complicated – arrivals with random, uniformly distributed inter-arrival times:

```
let uniformTime (seconds: int) =
    let rng = new Random()
    fun () ->
        let t = rng.Next(seconds + 1)
        new TimeSpan(0, 0, t)

let uniformArr = uniformTime 10
let uniformCase = simulate startTime uniformArr simpleProc
```

Here, arrival times will take any value (in seconds) between 0 and 10, included – with an average of 5 seconds between arrivals. A quick run in fsi produces the following sample:

```
> Seq.take 10 uniformCase |> Seq.iter pretty;;
Start: 12:00:00 AM, Count: 0, Next in: 12:00:02 AM, Next out: Idle
Start: 12:00:02 AM, Count: 1, Next in: 12:00:03 AM, Next out: 12:00:07 AM
Start: 12:00:03 AM, Count: 2, Next in: 12:00:11 AM, Next out: 12:00:07 AM
Start: 12:00:07 AM, Count: 1, Next in: 12:00:11 AM, Next out: 12:00:12 AM
Start: 12:00:11 AM, Count: 2, Next in: 12:00:11 AM, Next out: 12:00:12 AM
Start: 12:00:11 AM, Count: 3, Next in: 12:00:16 AM, Next out: 12:00:12 AM
Start: 12:00:12 AM, Count: 2, Next in: 12:00:16 AM, Next out: 12:00:17 AM
Start: 12:00:16 AM, Count: 3, Next in: 12:00:24 AM, Next out: 12:00:17 AM
Start: 12:00:17 AM, Count: 2, Next in: 12:00:24 AM, Next out: 12:00:22 AM
Start: 12:00:22 AM, Count: 1, Next in: 12:00:24 AM, Next out: 12:00:27 AM
val it : unit = ()
```

Not surprisingly, given the faster arrivals, we see the Queue getting slightly backed up, with Jobs waiting to be processed. What would the Queue look like after, say, 1,000,000 transitions?
Easy enough to check:

```
> Seq.nth 1000000 uniformCase |> pretty;;
Start: 10:36:23 PM, Count: 230, Next in: 10:36:25 PM, Next out: 10:36:28 PM
val it : unit = ()
```

Interesting – it looks like the Queue is getting backed up quite a bit as time goes by. This is a classic result with Queues: the utilization rate, defined as the arrival rate divided by the departure rate, is saturated. When the utilization rate is strictly less than 100%, the Queue is stable; otherwise it will build up over time, accumulating a backlog of Jobs.

Let's create a third type of model, with exponential rates:

```
let exponentialTime (seconds: float) =
    let lambda = 1.0 / seconds
    let rng = new Random()
    fun () ->
        let t = - Math.Log(rng.NextDouble()) / lambda
        let ticks = t * (float)TimeSpan.TicksPerSecond
        new TimeSpan((int64)ticks)

let expArr = exponentialTime 10.0
let expProc = exponentialTime 7.0
let exponentialCase = simulate startTime expArr expProc
```

The arrivals and processing times are exponentially distributed, with an average time expressed in seconds. In our system, we expect new Jobs to arrive on average every 10 seconds, varying between 0 and +infinity, and Jobs take 7 seconds on average to process. The queue is not saturated, and should therefore not build up, which we can verify:

```
> Seq.nth 1000000 exponentialCase |> pretty;;
Start: 8:55:36 PM, Count: 4, Next in: 8:55:40 PM, Next out: 8:55:36 PM
val it : unit = ()
```

A queue where both arrivals and processing times follow that distribution is a classic in Queuing Theory, known as an M/M/1 queue. It is of particular interest because some of its characteristics can be derived analytically – we'll revisit that later.

## Measuring performance

We already saw a simple useful measurement for Queues, the utilization rate, which determines whether our Queue will explode or stabilize over time. This is important but crude – what we would really be interested in is measuring how much of a bottleneck the Queue creates. Two measures come to mind in this frame: how long is the Queue on average, and how much time does a Job take to go through the entire system (queue + processing)?

Let's begin with the Queue length. On average, how many Jobs should we expect to see in the Queue (including Jobs being currently processed)? The question is less trivial than it looks. We could naively simulate a sequence of States and average out the number of Jobs in each State, but this would be incorrect. To understand the issue, let's consider a Queue with constant arrivals and processing times, where Jobs arrive every 10 seconds and take 1 second to process. The result will be alternating 0s and 1s – which would give a naive average of 0.5 Jobs in queue. However, the system will be Busy for 1 second and Idle for 9 seconds, with an average number of Jobs of 0.1 over time.

To correctly compute the average, we need to compute a weighted average, counting the number of jobs present in a state, weighted by the time the System spent in that particular state.

Let's consider for illustration the example above, where we observe a Queue for 10 seconds, with 3 Jobs A, B, C arriving and departing. The average number of Jobs in the System is 3 seconds with 0, 5 seconds with 1 and 2 seconds with 2, which gives us (3 x 0 + 5 x 1 + 2 x 2)/10, i.e. 9/10 or 0.9 Jobs on average.

We could achieve the same result by accumulating the computation over time, starting at each transition point: 2s x 0 + 2s x 1 + 1s x 2 + 2s x 1 + 1s x 2 + 1s x 1 + 1s x 0 = 9 "Jobs-seconds", which over 10 seconds gives us the same result as before.
Let's implement this. We will compute the average using an accumulator, with Seq.scan: for each State of the System, we will measure how much time was spent (in ticks), as well as how many Jobs were in the System during that period, and accumulate the total number of ticks since the Simulation started, as well as the total number of "Jobs-Ticks", so that the average up to that point is simply:

Average Queue length = sum of Job-Ticks / sum of Ticks

```
let averageCountIn (transitions: State seq) =
    // time spent in current state, in ticks
    let ticks current next =
        next.Start.Ticks - current.Start.Ticks
    // jobs in system in state
    let count state =
        match state.Status with
        | Idle -> (int64)0
        | Busy(until, c) -> (int64)c + (int64)1
    // update state = total time and total jobs x time
    // between current and next queue state
    let update state pair =
        let current, next = pair
        let c = count current
        let t = ticks current next
        (fst state) + t, (snd state) + (c * t)
    // accumulate updates from initial state
    let initial = (int64)0, (int64)0
    transitions
    |> Seq.pairwise
    |> Seq.scan (fun state pair -> update state pair) initial
    |> Seq.map (fun state -> (float)(snd state) / (float)(fst state))
```

Let's try this on our M/M/1 queue, the exponential case described above:

```
> averageCountIn exponentialCase |> Seq.nth 1000000 ;;
val it : float = 2.288179686
```

According to theory, for an M/M/1 Queue, that number should be rho / (1 - rho), i.e. (7/10) / (1 - (7/10)), which gives 2.333. Close enough, I say.
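As a side note (my addition, not part of the original post), the theoretical value quoted above is a one-liner to compute, which makes a handy cross-check when experimenting with other rates:

```
// M/M/1 average number in system, L = rho / (1 - rho),
// where rho = mean service time / mean inter-arrival time
let mm1AverageCount (arrivalSecs: float) (serviceSecs: float) =
    let rho = serviceSecs / arrivalSecs
    if rho >= 1.0 then infinity          // saturated: the queue builds up forever
    else rho / (1.0 - rho)

mm1AverageCount 10.0 7.0                 // 2.333..., vs. 2.288 simulated above
```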
We can use that idea to implement the average time spent in the system in the same fashion we did the average jobs in the system, by accumulating the measure as the sequence of transitions unfolds:

```
let averageTimeIn (transitions: State seq) =
    // time spent in current state, in ticks
    let ticks current next = next.Start.Ticks - current.Start.Ticks
    // jobs in system in state
    let count state =
        match state.Status with
        | Idle -> (int64)0
        | Busy(until, c) -> (int64)c + (int64)1
    // count arrivals
    let arrival current next =
        if count next > count current then (int64)1 else (int64)0
    // update state = total time and total arrivals
    // between current and next queue state
    let update state pair =
        let current, next = pair
        let c = count current
        let t = ticks current next
        let a = arrival current next
        (fst state) + a, (snd state) + (c * t)
    // accumulate updates from initial state
    let initial = (int64)0, (int64)0
    transitions
    |> Seq.pairwise
    |> Seq.scan (fun state pair -> update state pair) initial
    |> Seq.map (fun state ->
        let time = (float)(snd state) / (float)(fst state)
        new TimeSpan((int64)time))
```

Trying this out on our M/M/1 queue, we theoretically expect an average of 23.333 seconds, and get 22.7 seconds:

```
> averageTimeIn exponentialCase |> Seq.nth 1000000 ;;
val it : TimeSpan = 00:00:22.7223798 {Days = 0; Hours = 0; Milliseconds = 722; Minutes = 0; Seconds = 22; Ticks = 227223798L; TotalDays = 0.0002629905069; TotalHours = 0.006311772167; TotalMilliseconds = 22722.3798; TotalMinutes = 0.37870633; TotalSeconds = 22.7223798;}
```

Given the somewhat sordid conversions between Int64, floats and TimeSpan, this seems plausible enough.

## A practical example

Now that we have some tools at our disposal, let’s look at a semi-realistic example. Imagine a subway station, with 2 turnstiles (apparently also known as “Baffle Gates”), one letting people in, one letting people out. On average, it takes 4 seconds to get a person through the Turnstile (some people are more baffled than others) – we’ll model the processing time as an Exponential. Now imagine that, on average, passengers arrive to the station every 5 seconds. We’ll model that process as an exponential too, even though it’s fairly unrealistic to assume that the rate of arrival remains constant throughout the day.

```
// turnstiles admit 1 person / 4 seconds
let turnstileProc = exponentialTime 4.0
// passengers arrive randomly every 5s
let passengerArr = exponentialTime 5.0
```

Assuming the Law of conservation applies to subway station passengers too, we would expect the average rate of exit from the station to also be one every 5 seconds. However, unlike passengers coming in the station, passengers exiting arrived there by subway, and are therefore likely to arrive in batches. We’ll make the totally realistic assumption here that trains are never late, and arrive like clockwork at constant intervals, bringing in the same number of passengers. If trains arrive every 30 seconds, to maintain our average rate of 1 passenger every 5 seconds, each train will carry 6 passengers:

```
let batchedTime seconds batches =
    let counter = ref 0
    fun () ->
        counter := counter.Value + 1
        if counter.Value < batches
        then new TimeSpan(0, 0, 0)
        else
            counter := 0
            new TimeSpan(0, 0, seconds)

// trains arrive every 30s with 6 passengers
let trainArr = batchedTime 30 6
```

How would our 2 Turnstiles behave?
Let’s check:

```
// passengers arriving in station
let queueIn = simulate startTime passengerArr turnstileProc
// passengers leaving station
let queueOut = simulate startTime trainArr turnstileProc

let prettyWait (t:TimeSpan) = t.TotalSeconds

printfn "Turnstile to get in the Station"
averageCountIn queueIn |> Seq.nth 1000000 |> printfn "In line: %f"
averageTimeIn queueIn |> Seq.nth 1000000 |> prettyWait |> printfn "Wait in secs: %f"
printfn "Turnstile to get out of the Station"
averageCountIn queueOut |> Seq.nth 1000000 |> printfn "In line: %f"
averageTimeIn queueOut |> Seq.nth 1000000 |> prettyWait |> printfn "Wait in secs: %f";;
```

```
Turnstile to get in the Station
In line: 1.917345
Wait in secs: 9.623852
Turnstile to get out of the Station
In line: 3.702664
Wait in secs: 18.390027
```

The results fit my personal experience: the Queue at the exit gets backed up quite a bit, and passengers have to wait an average 18.4 seconds to exit the Station, while it takes them only 9.6 seconds to get in. It also may seem paradoxical. People are entering and exiting the Station at the same rate, and turnstiles process passengers at the same speed, so how can we have such different behaviors at the two turnstiles? The first point here is that Queuing processes can be counter-intuitive, and require thinking carefully about what is being measured, as we saw earlier with the performance metrics computations. The only thing which differs between the 2 turnstiles is the way arrivals are distributed over time – and that makes a lot of difference. At the entrance, arrivals are fairly evenly spaced, and there is a good chance that a Passenger who arrives at the Station finds no one in the Queue; in that case, he will wait only 4 seconds on average. By contrast, when passengers exit, they arrive in bunches, and only the first one will find no-one in the Queue – all the others will have to wait for that first person to pass through before getting their chance, and therefore have by default a much larger “guaranteed wait time”.

That’s it for today! There is much more to Queuing than single-queues (if you are into probability and Markov chains, networks of Queues are another fascinating area), but we will leave that for another time. I hope you’ll have found this excursion in Queuing interesting and maybe even useful. I also thought this was an interesting topic illustrating F# Sequences – and I am always looking forward to feedback!
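As an aside, Little’s Law ties our two metrics together: in a stable queue, the average number of jobs L equals the arrival rate λ times the average time in system W. A quick back-of-the-envelope check against the M/M/1 numbers measured above:

```
// Little's Law: L = lambda x W
let lambda = 1.0 / 10.0             // one arrival every 10 seconds on average
let measuredW = 22.7223798          // measured average time in system, in seconds
let predictedL = lambda * measuredW // 2.27, vs. the measured 2.29 - consistent
```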
Complete code (F# script) is also available on FsSnip.net

```
open System

// Queue / Server is either Idle,
// or Busy until a certain time,
// with items queued for processing
type Status = Idle | Busy of DateTime * int
type State = { Start: DateTime; Status: Status; NextIn: DateTime }

let next arrival processing state =
    match state.Status with
    | Idle ->
        { Start = state.NextIn;
          NextIn = state.NextIn + arrival();
          Status = Busy(state.NextIn + processing(), 0) }
    | Busy(until, waiting) ->
        match (state.NextIn <= until) with
        | true ->
            { Start = state.NextIn;
              NextIn = state.NextIn + arrival();
              Status = Busy(until, waiting + 1) }
        | false ->
            match (waiting > 0) with
            | true ->
                { Start = until;
                  Status = Busy(until + processing(), waiting - 1);
                  NextIn = state.NextIn }
            | false ->
                { Start = until; Status = Idle; NextIn = state.NextIn }

let simulate startTime arr proc =
    let nextIn = startTime + arr()
    let state = { Start = startTime; Status = Idle; NextIn = nextIn }
    Seq.unfold (fun st -> Some(st, next arr proc st)) state

let pretty state =
    let count =
        match state.Status with
        | Idle -> 0
        | Busy(_, waiting) -> 1 + waiting
    let nextOut =
        match state.Status with
        | Idle -> "Idle"
        | Busy(until, _) -> until.ToLongTimeString()
    let start = state.Start.ToLongTimeString()
    let nextIn = state.NextIn.ToLongTimeString()
    printfn "Start: %s, Count: %i, Next in: %s, Next out: %s" start count nextIn nextOut

let constantTime (interval: TimeSpan) =
    let ticks = interval.Ticks
    fun () -> interval

let arrivalTime = new TimeSpan(0,0,10)
let processTime = new TimeSpan(0,0,5)
let simpleArr = constantTime arrivalTime
let simpleProc = constantTime processTime
let startTime = new DateTime(2010, 1, 1)
let constantCase = simulate startTime simpleArr simpleProc

printfn "Constant arrivals, Constant processing"
Seq.take 10 constantCase |> Seq.iter pretty;;

let uniformTime (seconds: int) =
    let rng = new Random()
    fun () ->
        let t = rng.Next(seconds + 1)
        new TimeSpan(0, 0, t)

let uniformArr = uniformTime 10
let uniformCase = simulate startTime uniformArr simpleProc

printfn "Uniform arrivals, Constant processing"
Seq.take 10 uniformCase |> Seq.iter pretty;;

let exponentialTime (seconds: float) =
    let lambda = 1.0 / seconds
    let rng = new Random()
    fun () ->
        let t = - Math.Log(rng.NextDouble()) / lambda
        let ticks = t * (float)TimeSpan.TicksPerSecond
        new TimeSpan((int64)ticks)

let expArr = exponentialTime 10.0
let expProc = exponentialTime 7.0
let exponentialCase = simulate startTime expArr expProc

printfn "Exponential arrivals, Exponential processing"
Seq.take 10 exponentialCase |> Seq.iter pretty;;

let averageCountIn (transitions: State seq) =
    // time spent in current state, in ticks
    let ticks current next = next.Start.Ticks - current.Start.Ticks
    // jobs in system in state
    let count state =
        match state.Status with
        | Idle -> (int64)0
        | Busy(until, c) -> (int64)c + (int64)1
    // update state = total time and total jobs x time
    // between current and next queue state
    let update state pair =
        let current, next = pair
        let c = count current
        let t = ticks current next
        (fst state) + t, (snd state) + (c * t)
    // accumulate updates from initial state
    let initial = (int64)0, (int64)0
    transitions
    |> Seq.pairwise
    |> Seq.scan (fun state pair -> update state pair) initial
    |> Seq.map (fun state -> (float)(snd state) / (float)(fst state))

let averageTimeIn (transitions: State seq) =
    // time spent in current state, in ticks
    let ticks current next = next.Start.Ticks - current.Start.Ticks
    // jobs in system in state
    let count state =
        match state.Status with
        | Idle -> (int64)0
        | Busy(until, c) -> (int64)c + (int64)1
    // count arrivals
    let arrival current next =
        if count next > count current then (int64)1 else (int64)0
    // update state = total time and total arrivals
    // between current and next queue state
    let update state pair =
        let current, next = pair
        let c = count current
        let t = ticks current next
        let a = arrival current next
        (fst state) + a, (snd state) + (c * t)
    // accumulate updates from initial state
    let initial = (int64)0, (int64)0
    transitions
    |> Seq.pairwise
    |> Seq.scan (fun state pair -> update state pair) initial
    |> Seq.map (fun state ->
        let time = (float)(snd state) / (float)(fst state)
        new TimeSpan((int64)time))

// turnstiles admit 1 person / 4 seconds
let turnstileProc = exponentialTime 4.0
// passengers arrive randomly every 5s
let passengerArr = exponentialTime 5.0

let batchedTime seconds batches =
    let counter = ref 0
    fun () ->
        counter := counter.Value + 1
        if counter.Value < batches
        then new TimeSpan(0, 0, 0)
        else
            counter := 0
            new TimeSpan(0, 0, seconds)

// trains arrive every 30s with 6 passengers
let trainArr = batchedTime 30 6

// passengers arriving in station
let queueIn = simulate startTime passengerArr turnstileProc
// passengers leaving station
let queueOut = simulate startTime trainArr turnstileProc

let prettyWait (t:TimeSpan) = t.TotalSeconds

printfn "Turnstile to get in the Station"
averageCountIn queueIn |> Seq.nth 1000000 |> printfn "In line: %f"
averageTimeIn queueIn |> Seq.nth 1000000 |> prettyWait |> printfn "Wait in secs: %f"
printfn "Turnstile to get out of the Station"
averageCountIn queueOut |> Seq.nth 1000000 |> printfn "In line: %f"
averageTimeIn queueOut |> Seq.nth 1000000 |> prettyWait |> printfn "Wait in secs: %f"
```

27. May 2012 10:20

Markov chains are a classic probability model. They represent systems that evolve between states over time, following a random but stable process which is memoryless. The memoryless-ness is the defining characteristic of Markov processes, and is known as the Markov property. Roughly speaking, the idea is that if you know the state of the process at time T, you know all there is to know about it – knowing where it was at time T-1 would not give you additional information on where it may be at time T+1.

While Markov models come in multiple flavors, Markov chains with finite discrete states in discrete time are particularly interesting. They describe a system which changes between discrete states at fixed time intervals, following a transition pattern described by a transition matrix.

Let’s illustrate with a simplistic example. Imagine that you are running an Airline, AcmeAir, operating one plane. The plane goes from city to city, refueling and doing some maintenance (or whatever planes need) every time. Each time the plane lands somewhere, it can be in three states: early, on-time, or delayed. It’s not unreasonable to think that if our plane landed late somewhere, it may be difficult to catch up with the required operations, and as a result, the likelihood of the plane landing late at its next stop is higher. We could represent this in the following transition matrix (numbers totally made up):

| Current \ Next | Early | On-time | Delayed |
| --- | --- | --- | --- |
| Early | 10% | 85% | 5% |
| On-Time | 10% | 75% | 15% |
| Delayed | 5% | 60% | 35% |

Each row of the matrix represents the current state, and each column the next state. The first row tells us that if the plane landed Early, there is a 10% chance we’ll land early at our next stop, an 85% chance we’ll be on-time, and a 5% chance we’ll arrive late.
Note that each row sums up to 100%: given the current state, we have to end up in one of the next states.

How could we simulate this system? Given the state at time T, we simply need to “roll” a random number generator for a percentage between 0% and 100%, and depending on the result, pick our next state – and repeat. Using F#, we could model the transition matrix as an array (one element per state) of arrays (the probabilities to land in each state), which is pretty easy to define using Array comprehensions:

```
let P =
    [| [| 0.10; 0.85; 0.05 |];
       [| 0.10; 0.75; 0.15 |];
       [| 0.05; 0.60; 0.35 |] |]
```

(Note: the entire code sample is also posted on fsSnip.net/ch)

To simulate the behavior of the system, we need a function that given a state and a transition matrix, produces the next state according to the transition probabilities:

```
// given a roll between 0 and 1
// and a distribution D of
// probabilities to end up in each state
// returns the index of the state
let state (D: float[]) roll =
    let rec index cumul current =
        let cumul = cumul + D.[current]
        match (roll <= cumul) with
        | true -> current
        | false -> index cumul (current + 1)
    index 0.0 0

// given the transition matrix P
// the index of the current state
// and a random generator,
// simulates what the next state is
let nextState (P: float[][]) current (rng: Random) =
    let dist = P.[current]
    let roll = rng.NextDouble()
    state dist roll

// given a transition matrix P
// the index i of the initial state
// and a random generator
// produces a sequence of states visited
let simulate (P: float[][]) i (rng: Random) =
    Seq.unfold (fun s -> Some(s, nextState P s rng)) i
```

The state function is a simple helper; given an array D which is assumed to contain probabilities to transition to each of the states, and a “roll” between 0.0 and 1.0, it returns the corresponding state. nextState uses that function, by first retrieving the transition probabilities for the current state i, “rolling” the dice, and using state to compute the simulated next state. simulate uses nextState to create an infinite sequence of states, starting from an initial state i. We need to open System to use the System.Random class – and we can now use this in the F# interactive window:

```
> let flights = simulate P 1 (new Random());;
val flights : seq<int>
> Seq.take 50 flights |> Seq.toList;;
val it : int list =
  [1; 0; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 0; 1; 1; 2; 2; 2; 1; 1; 1; 1; 1; 1;
   1; 1; 2; 1; 1; 1; 1; 1; 1; 1; 1; 2; 1; 1; 1; 1; 1; 2; 1; 1; 1; 1; 1; 1; 1]
>
```

Our small sample shows us what we expect: mostly on-time (Fly AcmeAir!), with occasional delayed or early flights. How many delays would we observe on a 1,000-flight simulation? Let’s try:

```
> Seq.take 1000 flights |> Seq.filter (fun i -> i = 2) |> Seq.length;;
val it : int = 174
>
```

We observe about 17% delayed flights. This is relevant information, but a single simulation is just that – an isolated case. Fortunately, Markov chains have an interesting property: if it is possible to go from any state to any state, then the system will have a stationary distribution, which corresponds to its long term equilibrium. Essentially, regardless of the starting point, over long enough periods, each state will be observed with a stable frequency.

One way to understand better what is going on is to expand our frame. Instead of considering the exact state of the system, we can look at it in terms of probability: at any point in time, the system has a certain probability to be in each of its states.
For instance, imagine that given current information, we know that our plane will land at its next stop either early or on time, with a 50% chance of each. In that case, we can determine the probability that its next stop will be delayed by combining the transition probabilities:

p(delayed in T+1) = p(delayed in T) x P(delayed in T+1 | delayed in T) + p(on-time in T) x P(delayed in T+1 | on-time in T) + p(early in T) x P(delayed in T+1 | early in T)

p(delayed in T+1) = 0.0 x 0.35 + 0.5 x 0.15 + 0.5 x 0.05 = 0.1

This can be expressed much more concisely using Vector notation. We can represent the state as a vector S, where each component of the vector is the probability to be in each state; in our case,

S(T) = [ 0.50; 0.50; 0.0 ]

In that case, the state at time T+1 will be: S(T+1) = S(T) x P

Let’s make that work with some F#. The product of a vector by a matrix is the dot-product of the vector with each column vector of the matrix:

```
// Vector dot product
let dot (V1: float[]) (V2: float[]) =
    Array.zip V1 V2
    |> Array.map (fun (v1, v2) -> v1 * v2)
    |> Array.sum

// Extracts the jth column vector of matrix M
let column (M: float[][]) (j: int) =
    M |> Array.map (fun v -> v.[j])

// Given a row-vector S describing the probability
// of each state and a transition matrix P, compute
// the next state distribution
let nextDist S P =
    P
    |> Array.mapi (fun j v -> column P j)
    |> Array.map (fun v -> dot v S)
```

We can now handle our previous example, creating a state s with a 50/50 chance of being in state 0 or 1:

```
> let s = [| 0.5; 0.5; 0.0 |];;
val s : float [] = [|0.5; 0.5; 0.0|]
> let s' = nextDist s P;;
val s' : float [] = [|0.1; 0.8; 0.1|]
>
```

We can also easily check what the state of the system should be after, say, 100 flights:

```
> let s100 = Seq.unfold (fun s -> Some(s, nextDist s P)) s |> Seq.nth 100;;
val s100 : float [] = [|0.09119496855; 0.7327044025; 0.1761006289|]
```

After 100 flights, starting from either early or on-time, we have about a 17% chance of being delayed. Note that this is consistent with what we observed in our initial simulation. Given that our Markov chain has a stationary distribution, this is to be expected: unless our simulation was pathologically unlikely, we should observe the same frequency of delayed flights in the long run, no matter what the initial starting state is.

Can we compute that stationary distribution? The typical way to achieve this is to bust out some algebra and solve V = V x P, where V is the stationary distribution (row) vector and P the transition matrix. Here we’ll go for a numeric approximation approach instead. Rather than solving the system of equations, we will start from a uniform distribution over the states, and apply the transition matrix until the distance between two consecutive states is under a threshold Epsilon:

```
// Squared Euclidean distance between 2 vectors
let dist (V1: float[]) V2 =
    Array.zip V1 V2
    |> Array.map (fun (v1, v2) -> (v1 - v2) * (v1 - v2))
    |> Array.sum

// Evaluate stationary distribution
// by searching for a fixed point
// under tolerance epsilon
let stationary (P: float[][]) epsilon =
    let states = P.[0] |> Array.length
    [| for s in 1 .. states -> 1.0 / (float)states |] // initial
    |> Seq.unfold (fun s -> Some((s, (nextDist s P)), (nextDist s P)))
    |> Seq.map (fun (s, s') -> (s', dist s s'))
    |> Seq.find (fun (s, d) -> d < epsilon)
```

Running this on our example results in the following stationary distribution estimation:

```
> stationary P 0.0000001;;
val it : float [] * float = ([|0.09118958333; 0.7326858333; 0.1761245833|], 1.1590625e-08)
```

In short, in the long run, we should expect our plane to be early 9.1% of the time, on-time 73.2%, and delayed 17.6%.

Note: the fixed point approach above should work if a unique stationary distribution exists. If this is not the case, the function may never converge, or may converge to a fixed point that depends on the initial conditions. Use with caution!

Armed with this model, we could now ask interesting questions. Suppose for instance that we could improve the operations of AcmeAir, and reduce the chance that our next arrival is delayed given our current state. What should we focus on – should we reduce the probability to remain delayed after a delay (strategy 1), or should we prevent the risk of being delayed after an on-time landing (strategy 2)?

One way to look at this is to consider the impact of each strategy on the long-term distribution. Let’s compare the impact of a 1-point reduction of delays in each case, which we’ll assume gets transferred to on-time. We can then create the matrices for each strategy, and compare their respective stationary distributions:

```
> let strat1 = [|[|0.1; 0.85; 0.05|]; [|0.1; 0.75; 0.15|]; [|0.05; 0.61; 0.34|]|]
let strat2 = [|[|0.1; 0.85; 0.05|]; [|0.1; 0.76; 0.14|]; [|0.05; 0.60; 0.35|]|];;
val strat1 : float [] [] = [|[|0.1; 0.85; 0.05|]; [|0.1; 0.75; 0.15|]; [|0.05; 0.61; 0.34|]|]
val strat2 : float [] [] = [|[|0.1; 0.85; 0.05|]; [|0.1; 0.76; 0.14|]; [|0.05; 0.6; 0.35|]|]
> stationary strat1 0.0001;;
val it : float [] * float = ([|0.091; 0.7331333333; 0.1758666667|], 8.834666667e-05)
> stationary strat2 0.0001;;
val it : float [] * float = ([|0.091485; 0.740942; 0.167573|], 1.2698318e-05)
>
```

The numbers tell the following story: strategy 2 (reduce delays after on-time arrivals) is better: it results in 16.8% delays, instead of 17.6% for strategy 1. Intuitively, this makes sense, because most of our flights are on-time, so an improvement in this area will have a much larger impact on the overall results than a comparable improvement on delayed flights.

There is (much) more to Markov chains than this, and there are many ways the code presented could be improved upon – but I’ll leave it at that for today; hopefully you will have picked up something of interest along the path of this small exploration! I also posted the complete code sample on fsSnip.net/ch.

20. May 2012 15:10

During a recent Internet excursion, I ended up on the Infinite Monkey Theorem wiki page. The infinite monkey is a somewhat famous figure in probability; his fame comes from the following question: suppose you gave a monkey a typewriter, what’s the likelihood that, given enough time randomly typing, he would produce some noteworthy literary output, say, the complete works of Shakespeare?

Somewhat unrelatedly, this made me wonder about the following question: imagine that I had a noteworthy literary output and such a monkey – could I get my computer to distinguish these? For the sake of experimentation, let’s say that our “tolerable page” is the following paragraph by Jorge Luis Borges:

Everything would be in its blind volumes.
Everything: the detailed history of the future, Aeschylus' The Egyptians, the exact number of times that the waters of the Ganges have reflected the flight of a falcon, the secret and true nature of Rome, the encyclopedia Novalis would have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof of Pierre Fermat's theorem, the unwritten chapters of Edwin Drood, those same chapters translated into the language spoken by the Garamantes, the paradoxes Berkeley invented concerning Time but didn't publish, Urizen's books of iron, the premature epiphanies of Stephen Dedalus, which would be meaningless before a cycle of a thousand years, the Gnostic Gospel of Basilides, the song the sirens sang, the complete catalog of the Library, the proof of the inaccuracy of that catalog. Everything: but for every sensible line or accurate fact there would be millions of meaningless cacophonies, verbal farragoes, and babblings. Everything: but all the generations of mankind could pass before the dizzying shelves—shelves that obliterate the day and on which chaos lies—ever reward them with a tolerable page.

Assuming my imaginary typewriter-pounding monkey is typing each letter with equal likelihood, my first thought was that by comparison, a text written in English would have more structure and predictability – and we could use Entropy to measure that difference in structure. Entropy is the expected information of a message; the general idea behind it is that a signal where every outcome is equally likely is unpredictable, and has a high entropy, whereas a message where certain outcomes are more frequent than others has more structure and lower entropy. The formula for Entropy, lifted from Wikipedia, is given below; it corresponds to the average quantity of information of a message X, where X can take different values x:

H(X) = - ∑ p(x) log2 p(x), where the sum runs over all values x that X can take.

For instance, a series of coin tosses with the proverbial fair coin would produce about as many heads and tails, and the entropy would come out as –0.5 x log2(0.5) – 0.5 x log2(0.5) = 1.0, whereas a totally unfair coin producing only heads would have an entropy of –1.0 x log2(1.0) – 0.0 = 0.0, a perfectly predictable signal.

How could I apply this to my problem? First, we need a mechanical monkey. Given a sample text (our benchmark), we’ll extract its alphabet (all characters used), and create a virtual typewriter where each key corresponds to one of these characters. The monkey will then produce monkey literature, by producing a string as long as the original text, “typing” on the virtual keyboard randomly:

```
let monkey (text: string) =
    let rng = new System.Random()
    let alphabet = Seq.distinct text |> Seq.toArray
    let alphabetLength = Array.length alphabet
    let textLength = text.Length
    [| for i in 1 .. textLength ->
        alphabet.[rng.Next(0, alphabetLength)] |]
```

We store the Borges paragraph as:

```
let borges = "Everything would be in its blind volumes. (etc...)
```
… and we can now run the Monkey on the Borges paragraph,

```
> new string(monkey borges);;
```

which produces a wonderful output (results may vary – you could, after all, get a paragraph of great literary value):

ovfDG4,xUfETo4Sv1dbxkknthzB19Dgkphz3Tsa1L——w—w iEx-Nra mDs--k3Deoi—hFifskGGBBStU11-iiA3iU'S R9DnhzLForbkhbF:PbAUwP—ir-U4sF u w-tPf4LLuRGsDEP-ssTvvLk3NyaE f:krRUR-Gbx'zShb3wNteEfGwnuFbtsuS9Fw9lgthE1vL,tE4:Uk3UnfS FfxDbcLdpihBT,e'LvuaN4royz ,Aepbu'f1AlRgeRUSBDD.PwzhnA'y.s9:d,F'T3xvEbvNmy.vDtmwyPNotan3Esli' BTFbmywP.fgw9ytAouLAbAP':txFvGvBti Fg,4uEu.grk-rN,tEnvBs3uUo,:39altpBud3'-Aetp,T.chccE1yuDeUT,Pp,R994tnmidffcFonPbkSuw :pvde .grUUTfF1Flb4s cw'apkt GDdwadn-Phn4h.TGoPsyc'pcBEBb,kasl—aepdv,ctA TxrhRUgPAv-ro3s:aD z-FahLcvS3k':emSoz9NTNRDuus3PSpe-Upc9nSnhBovRfoEBDtANiGwvLikp4w—nPDAfknr—p'-:GnPEsULDrm'ub,3EyTmRoDkG9cERvcyxzPmPbD Fuit:lwtsmeUEieiPdnoFUlP'uSscy—Nob'st1:dA.RoLGyakGpfnT.zLb'hsBTo.mRRxNerBD9.wvFzg,,UAc,NSx.9ilLGFmkp—:FnpcpdP—-ScGSkmN9BUL1—uuUpBhpDnwS9NddLSiBLzawcbywiG—-E1DBlx—aN.D9u-ggrs3S4y4eFemo3Ba g'zeF:EsS-gTy-LFiUn3DvSzL3eww4NPLxT13isGb:—vBnLhy'yk1Rsip—res9t vmxftwvEcc::ezvPPziNGPylw:tPrluTl3E,T,vDcydn SyNSooaxaT llwNtwzwoDtoUcwlBdi',UrldaDFeFLk 3goos4unyzmFD9.vSTuuv4.wzbN.ynakoetb—ecTksm—-f,N:PtoNTne3EdErBrzfATPRreBv1:Rb.cfkELlengNkr'L1cA—lfAgU-vs9  Lic-m,kheU9kldUzTAriAg:bBUb'n—x'FL Adsn,kmar'p BE9akNr194gP,hdLrlgvbymp dplh9sPlNf'.'9

Does the entropy of these 2 strings differ? Let’s check.

```
let I p =
    match p with
    | 0.0 -> 0.0
    | _ -> - System.Math.Log(p, 2.0)

let freq text letter =
    let count =
        Seq.fold (fun (total, count) l ->
            if l = letter
            then (total + 1.0, count + 1.0)
            else (total + 1.0, count)) (0.0, 0.0) text
    (letter, snd count / fst count)

let H text =
    let alphabet = Seq.distinct text
    Seq.map (fun l -> snd (freq text l)) alphabet
    |> Seq.sumBy (fun p -> p * I(p))
```

I computes the self-information of a message of probability p, freq computes the frequency of a particular character within a string, and H, the entropy, proceeds by first extracting all the distinct characters present in the text into an “alphabet”, and then maps each character of the alphabet to its frequency and computes the expected self-information. We have now all we need – let’s see the results:

```
> H borges;;
val it : float = 4.42083025
> H monkeyLit;;
val it : float = 5.565782825
```

Monkey lit has a higher entropy / disorder than Jorge Luis Borges’ output. This is reassuring. How good of a test is this, though? In the end, what we measured with Entropy is that some letters were more likely to come up than others, which we would expect from a text written in English, where the letter “e” has a 12% probability to show up. However, if we gave our Monkey a Dvorak keyboard, he may fool our test; we could also create an uber Mechanical Monkey which generates a string based on the original text frequency:

```
let uberMonkey (text: string) =
    let rng = new System.Random()
    let alphabet = Seq.distinct text |> Seq.toArray
    let textLength = text.Length
    let freq = Array.map (fun l -> freq text l) alphabet
    let rec index i p cumul =
        let cumul = cumul + snd freq.[i]
        if cumul >= p then i else index (i+1) p cumul
    [| for i in 1 .. textLength ->
        let p = rng.NextDouble()
        alphabet.[index 0 p 0.0] |]
```

This somewhat ugly snippet computes the frequency of every letter in the original text, and returns random chars based on the frequency.
The ugly part is the index function; given a probability p, it returns the index of the first char in the frequency array such that the cumulative probability of all chars up to that index is greater than p, so that each index is returned with a probability matching its frequency. Running the uberMonkey produces another milestone of worldwide literature:

lk  aeew omauG dga rlhhfatywrrg   earErhsgeotnrtd utntcnd  o,  ntnnrcl gltnhtlu eAal yeh uro  it-lnfELiect eaurefe Busfehe h f1efeh hs eo.dhoc , rbnenotsno, e et tdiobronnaeonioumnroe  escr l hlvrds anoslpstr'thstrao lttamxeda iotoaeAB ai sfoF,hfiemnbe ieoobGrta dogdirisee nt.eba   t oisfgcrn  eehhfrE' oufepas Eroshhteep snodlahe sau  eoalymeonrt.ss.ehstwtee,ttudtmr ehtlni,rnre  ae h  e chp c crng Rdd  eucaetee gire dieeyGhr a4ELd  sr era tndtfe rsecltfu  t1tefetiweoroetasfl bnecdt'eetoruvmtl ii fi4fprBla Fpemaatnlerhig  oteriwnEaerebepnrsorotcigeotioR g  bolrnoexsbtuorsr si,nibbtcrlte uh ts ot  trotnee   se rgfTf  ibdr ne,UlA sirrr a,ersus simf bset  guecr s tir9tb e ntcenkwseerysorlddaaRcwer ton redts— nel ye oi leh v t go,amsPn 'e  areilynmfe ae  evr  lino t, s   a,a,ytinms   elt i :wpa s s hAEgocetduasmrlfaar  de cl,aeird fefsef E  s se hplcihf f  cerrn rnfvmrdpo ettvtu oeutnrk —toc anrhhne  apxbmaio hh  edhst, mfeveulekd. vrtltoietndnuphhgp rt1ocfplrthom b gmeerfmh tdnennletlie hshcy,,bff,na nfitgdtbyowsaooomg , hmtdfidha l aira chh olnnehehe acBeee  n  nrfhGh dn toclraimeovbca atesfgc:rt  eevuwdeoienpttdifgotebeegc ehms ontdec e,ttmae llwcdoh

… and, as expected, if we run our Entropy function on uberMonkeyLit, we get

```
> H uberMonkeyLit;;
val it : float = 4.385303632
```

This is pretty much what we got with the Borges original. The uberMonkey produced a paragraph just as organized as Borges, or so it seems. Obviously, the raw Entropy calculation is not cutting it here. So what are we missing? The problem is that we are simply looking at the frequency of characters, which measures a certain form of order / predictability; however, there is “more” order than that in English. If I were to tell you that the 2 first characters of a text are “Th”, chances are, you would bet on “e” for the third position – because some sequences are much more likely than others, and “The” much more probable than “Thw”. The “raw” Entropy would consider the two following sequences “ABAABABB” and “ABABABAB” equally ordered (each contains 4 As and 4 Bs), whereas a human eye would consider that the second one, with its neat succession of As and Bs, may follow a pattern, where knowing the previous observations of the sequence conveys some information about what’s likely to show up next.

We’ll leave it at that for today, with an encouraging thought for those of you who may now worry that world literature could be threatened by offshore monkey typists. According to Wikipedia again,

In 2003, lecturers and students from the University of Plymouth MediaLab Arts course used a £2,000 grant from the Arts Council to study the literary output of real monkeys. They left a computer keyboard in the enclosure of six Celebes Crested Macaques in Paignton Zoo in Devon in England for a month, with a radio link to broadcast the results on a website. Not only did the monkeys produce nothing but five pages consisting largely of the letter S, the lead male began by bashing the keyboard with a stone, and the monkeys continued by urinating and defecating on it. Phillips said that the artist-funded project was primarily performance art, and they had learned "an awful lot" from it.
£2,000 may seem a bit steep to watch monkeys defecating on typewriters; on the other hand, it seems that human writers can sleep quietly, without worrying about their jobs.

10. April 2011 12:09

One of my initial goals for 2011 was to get my feet wet with Python, but after the last (and excellent) San Francisco F# user group meetup, dedicated to F# for Python developers, I got all excited about F# again, and dug out my copy of Programming F#. The book contains a Sequence example which I found inspiring:

```
open System

let RandomSequence =
    let random = new Random()
    seq {
        while true do
            yield random.NextDouble() }
```

What’s nice about this is that it is a lazy sequence; each element of the Sequence will be pulled in memory “on demand”, which makes it possible to work with Sequences of arbitrary length without running into memory limitation issues. This formulation looks a lot like a simulation, so I thought I would explore that direction. What about modeling the weather, in a fictional country where 60% of the days are Sunny, and the others Rainy? Keeping our weather model super-simple, we could do something along these lines: we define a Weather type, which can be either Sunny or Rainy, and a function WeatherToday, which given a probability, returns the adequate Weather.

```
type Weather = Sunny | Rainy

let WeatherToday probability =
    if probability < 0.6
    then Sunny
    else Rainy
```

More...

13. August 2010 12:37

Today is Friday the 13th, the day when more accidents happen because Paraskevidekatriaphobics are concerned about accidents. Or is it the day when fewer accidents take place, because people stay home to avoid accidents? Not altogether clear, it seems. Whether safe or dangerous, how often does Friday the 13th occur, exactly? Are there years without it, or with more than one? That’s a question which should have a clearer answer. Let’s try to figure out the probability to observe N such days in a year picked at random.

First, note that if you knew what the first day of that year was, you could easily verify if the 13th day of each month was indeed a Friday. Would that be sufficient? Not quite – you would also need to know whether the year was a leap year, these years which happen every 4 years and have an extra day, February the 29th.

Imagine that this year started on a Monday. What would next year start with? If we are in a regular year, 365 days = 52 x 7 + 1; in other words, 52 weeks will elapse, the last day of the year will also be a Monday, and next year will start on a Tuesday. If this is a leap year, next year will start on a Wednesday. Why do I care? Because now we can show that every 28 years, the same cycle of Fridays the 13th will take place again (setting aside the Gregorian century exceptions, like 1900 or 2100, which are not leap years). Every four consecutive years, the start day shifts by 5 positions (3 “regular” years and one leap year), and because 5 and 7 have no common factor, after 7 four-year periods, we will be back to starting an identical 28-year cycle, where each day of the week will appear 4 times as the first day of the year.

More...
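The post is truncated above, but as a rough sketch of the counting exercise it sets up, something along these lines would work (my sketch, not the original author’s code; 2001–2028 is just one convenient 28-year window with the regular leap-year pattern):

```
open System

// number of Friday the 13ths in a given year
let friday13s year =
    [ for month in 1 .. 12 -> DateTime(year, month, 13) ]
    |> List.filter (fun day -> day.DayOfWeek = DayOfWeek.Friday)
    |> List.length

// distribution of counts over one full 28-year cycle
let distribution =
    [ for year in 2001 .. 2028 -> friday13s year ]
    |> List.countBy id
```

Running this should confirm the known result: every year has at least one Friday the 13th, and never more than three.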
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.572554349899292, "perplexity": 4487.65771519492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146033.50/warc/CC-MAIN-20200225045438-20200225075438-00556.warc.gz"}
https://www.physicsoverflow.org/40933/worldline-diffeomorphism-invariance-signature-spacetime
# Worldline Diffeomorphism invariance as the origin of the signature of the spacetime metric

For a particle moving in a spacetime with $n_{-}$ timelike dimensions and $n_{+}$ spacelike dimensions, one can formulate actions that are invariant under worldline diffeomorphism symmetry. The system is of course a constrained system. The constraint $0=P^{2}$ arises for the simplest kind of such actions.

Now, this has only the trivial solution $P=0$ if all signs are positive or all negative. It has nontrivial solutions if some of the signs are positive and some are negative. Ghosts will eliminate all but the 1T (one-time) dimension case.

So the question is: is gauge symmetry really the origin of special relativity on the target space manifold? That is, mathematical consistency chooses just $O(1,n_{+})$ as the only possible target space symmetry, at least locally. If so, does this suggest that it is possible to find consistency conditions that determine the number of spacetime dimensions? What about symplectic and the other classical groups? Also, what about the exceptional groups?
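To spell out the elementary fact the question leans on (an illustration added here, not part of the original post): with a flat target-space metric of signature $(n_-, n_+)$, the mass-shell constraint reads

```latex
% The constraint P^2 = 0 in signature (n_-, n_+):
% Euclidean case (n_- = 0): a sum of squares vanishes only when every
%   component vanishes, so P = 0 is the unique, trivial solution.
% Mixed signature (n_- >= 1): nontrivial null momenta exist; e.g. in
%   signature (1, n_+), P = (E, E, 0, ..., 0) solves P^2 = 0 for any E.
\[
  P^2 \;=\; \eta^{\mu\nu} P_\mu P_\nu
      \;=\; -\sum_{a=1}^{n_-} P_a^2 \;+\; \sum_{i=1}^{n_+} P_i^2 \;=\; 0
\]
```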
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6816851496696472, "perplexity": 847.9330967686572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540523790.58/warc/CC-MAIN-20191209201914-20191209225914-00026.warc.gz"}
http://mathhelpforum.com/calculus/80699-angle-between-vectors.html
# Math Help - angle between the vectors 1. ## angle between the vectors Please check my work. Thank you. Find the dot product of the following vectors. Find the angle between the vectors. v=3i+2j, w=-4i-5j Dot product v x w=-12-10=-22 cos of angle = -22/sq. root of 13 x sq. root of 41=-0.9529 the angle = 17.64 degrees. Please tell me where I made a mistake, because when I graph the vectors, the angle looks like it is about 100 degrees. Thank you very much. 2. Originally Posted by oceanmd Please check my work. Thank you. Find the dot product of the following vectors. Find the angle between the vectors. v=3i+2j, w=-4i-5j Dot product v x w=-12-10=-22 cos of angle = -22/sq. root of 13 x sq. root of 41=-0.9529 the angle = 17.64 degrees. Please tell me where I made a mistake, because when I graph the vectors, the angle looks like it is about 100 degrees. Thank you very much. a bit larger than 100 degrees, but it is obtuse ... $\theta = \arccos(-0.9529) = 162.3^{\circ}$ inverse cosine of a negative value yields a quad II angle. 3. Thank you very much
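For completeness, here is the arithmetic behind the corrected answer, written out in full:

```latex
% v = 3i + 2j, w = -4i - 5j
% v . w = (3)(-4) + (2)(-5) = -22
% |v| = sqrt(3^2 + 2^2) = sqrt(13), |w| = sqrt((-4)^2 + (-5)^2) = sqrt(41)
\[
  \cos\theta
    = \frac{\vec v \cdot \vec w}{\lVert \vec v \rVert\,\lVert \vec w \rVert}
    = \frac{-22}{\sqrt{13}\,\sqrt{41}}
    \approx -0.9529,
  \qquad
  \theta = \arccos(-0.9529) \approx 162.3^{\circ}
\]
```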
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392210245132446, "perplexity": 1168.5778530514187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657139314.4/warc/CC-MAIN-20140914011219-00301-ip-10-234-18-248.ec2.internal.warc.gz"}
http://hardrockfm.com/music-catalog/either-way/
Either Way 392 Either Way – Snakehips, Anne-Marie ft. Joey Bada\$\$ Lyrics If you want it like that you should show me Uh, bring it back like you owe me If you want it like that you should show me Uh, hold up, just like you owe me some [Verse 1: Anne-Marie] What’s going on? Whatcha doing tonight? ‘Cause I wanna see what it’s like You decide where we go if ya wanna My place or yours My place or yours What’s going on? If you made up the plans then you gotta [Pre-Chorus: Anne-Marie] Help me now, let me know Help me now, let me know C’mon let me know, let me know Help me now, let me know I’m good either way, either way Good either way, either way Um whatcha say? If you want it like that you should show me Um whatcha say? Whatcha say? Gonna bring it back just like she owe me Either way If you want it like that you should show me Either way Uh, hold up [Verse 2: Anne-Marie] Barely a friend I can play one-two if you wanna ‘Cause I got a few things in mind but you gotta [Pre-Chorus: Anne-Marie] Help me now, let me know Help me now, let me know C’mon let me know, let me know Help me now, let me know I’m good either way If you want it like that you should show me Either way Gonna bring it back just like she owe me Good either way If you want it like that you should show me Either way Uh, hold up Um whatcha say? If you want it like that you should show me Um whatcha say? Whatcha say? Gonna bring it back just like she owe me Either way If you want it like that you should show me Either way Uh, hold up Now she come over, don’t wanna leave In my crew, she must have thought she had a steady gee! Let it fist me up a blade and even roll my weed We could call it load dick, girl it’s fine with me (it’s cool, it’s cool) If you wanna play it safe, rock you right to sleep Go out on a date, take you out to eat Twenty-five bands on the shopping spree She ain’t never got no plans, she just follow me, just swallow me [Bridge: Anne-Marie] Baby, just tell me what you wanna do I’m easy, I’m easy, I’m easy I’m easy, I’m easy, believe me Baby, just tell me I won’t disapprove I’m easy, I’m easy, I’m easy I’m easy, I’m easy, believe me [Pre-Chorus: Anne-Marie] C’mon let me know, let me know Help me now, let me know C’mon let me know, let me know Help me now, let me know I’m good either way If you want it like that you should show me Either way Gonna bring it back just like she owe me Good either way If you want it like that you should show me Either way Girl, just bring it back like you owe me Um whatcha say? (Aye, whatcha say?) Um whatcha say? Whatcha say? Good either way (Good either way) I’m good (Either way, either way homie) Either way Either way, either way, either way shawty
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9408501982688904, "perplexity": 12410.414213056833}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806066.5/warc/CC-MAIN-20171120130647-20171120150647-00086.warc.gz"}
http://slideplayer.com/slide/3191529/
# Computational Genomics Lecture #3a

## Presentation transcript

Computational Genomics Lecture #3a: Multiple sequence alignment. Background readings: Chapters 2.5, 2.7 in the textbook, Biological Sequence Analysis, Durbin et al., 2001; chapters in Introduction to Computational Molecular Biology, Setubal and Meidanis, 1997; Chapter 15 in Gusfield’s book; p. 81 in Kanehisa’s book. Much of this class has been edited from Nir Friedman’s lecture. Changes made by Dan Geiger, then Shlomo Moran, and finally Benny Chor.

Ladies and Gentlemen, Boys and Girls: the holy grail - Multiple Sequence Alignment.

Multiple Sequence Alignment: given S1=AGGTC, S2=GTTCG, S3=TGAAC, several alignments are possible (the two example alignment grids from this slide are garbled in the transcript).

Multiple Sequence Alignment: aligning more than two sequences. Definition: Given strings S1, S2, …, Sk, a multiple (global) alignment maps them to strings S’1, S’2, …, S’k that may contain blanks, where |S’1| = |S’2| = … = |S’k|, and the removal of spaces from S’i leaves Si.

Multiple alignments: We use a matrix to represent the alignment of k sequences, K=(x1,...,xk). We assume no column consists solely of blanks. The common scoring functions give a score to each column, and set: score(K) = ∑i score(column(i)). (The slide’s example 4-row alignment matrix for x1..x4 is garbled in the transcript.) For k=10, a scoring function has 2^k − 1 = 1023 > 1000 entries to specify. The scoring function is symmetric – the order of arguments need not matter: score(I,_,I,V) = score(_,I,I,V).

SUM OF PAIRS: A common scoring function is SP – the sum of scores of the projected pairwise alignments: SP-score(K) = ∑i<j score(xi,xj). Note that we need to specify score(-,-) because a column may have several blanks (as long as not all entries are blanks). In order for this score to be written as ∑i score(column(i)), we set score(-,-) = 0. Why? Because these entries appear in the sum of columns but not in the sum of projected pairwise alignments (lines).

SUM OF PAIRS: Definition: The sum-of-pairs (SP) value for a multiple global alignment A of k strings is the sum of the values of all projected pairwise alignments induced by A, where the pairwise alignment function score(xi,xj) is additive.

Example: (the three-string example alignment is garbled in the transcript; using the edit distance, its three projected pairwise scores sum to an SP value of 12.)

Multiple Sequence Alignment: Given k strings of length n, there is a natural generalization of the dynamic programming algorithm that finds an alignment that maximizes SP-score(K) = ∑i<j score(xi,xj). Instead of a 2-dimensional table, we now have a k-dimensional table to fill. For each vector i = (i1,..,ik), compute an optimal multiple alignment for the k prefix sequences x1(1,..,i1),...,xk(1,..,ik). The adjacent entries are those that differ in their index by one or zero. Each entry depends on 2^k − 1 adjacent entries.

The idea via k=2: the recurrence for V computes V[i+1,j+1] from the adjacent cells V[i,j], V[i+1,j] and V[i,j+1] (the recurrence itself is reconstructed after this deck). Note that the new cell index (i+1,j+1) differs from the previous indices by one of the 2^2 − 1 = 3 non-zero binary vectors (1,1), (1,0), (0,1).

Multiple Sequence Alignment: Given k strings of length n, there is a generalization of the dynamic programming algorithm that finds an optimal SP alignment. Computational cost: instead of a 2-dimensional table, we now have a k-dimensional table to fill. Each dimension’s size is n+1.
Each entry depends on 2^k − 1 adjacent entries. Number of evaluations of the scoring function: O(2^k n^k).

Complexity of the DP approach: number of cells: n^k; number of adjacent cells: O(2^k); computing the SP score for each column is O(k^2). Total run time is O(k^2 2^k n^k), which is totally unacceptable! Maybe one can do better?

But MSA is intractable: not much hope for a polynomial algorithm, because the problem has been shown to be NP-complete (the proof is quite tricky and recent; some previous proofs were bogus). Look at Isaac Elias’ presentation of the NP-completeness proof. We need heuristics or approximations to reduce time.

Multiple Sequence Alignment – Approximation Algorithm: Now we will see an O(k^2 n^2) multiple alignment algorithm for the SP-score that approximates the optimal solution’s score by a factor of at most 2(1 − 1/k) < 2.

Star Alignments: Rather than summing up all pairwise alignments, select a fixed sequence S1 as a center, and set Star-score(K) = ∑j>1 score(S1,Sj). The algorithm to find the optimal alignment: at each step, add another sequence aligned with S1, keeping old gaps and possibly adding new ones (i.e. keeping the old alignment intact).

Multiple Sequence Alignment – Approximation Algorithm: Polynomial time algorithm. Assumption: the function δ is a distance function; in particular, it satisfies the triangle inequality, δ(x,z) ≤ δ(x,y) + δ(y,z). Let D(S,T) be the value of the minimum global alignment between S and T.

Multiple Sequence Alignment – Approximation Algorithm (cont.): Polynomial time algorithm. The input is a set Γ of k strings Si.
1. Find a “center string” S1 that minimizes ∑j D(S1,Sj).
2. Call the remaining strings S2, …, Sk.
3. Add a string to the multiple alignment that initially contains only S1 as follows: suppose S1, …, Si-1 are already aligned as S’1, …, S’i-1. Add Si by running the dynamic programming algorithm on S’1 and Si to produce S’’1 and S’i. Adjust S’2, …, S’i-1 by adding gaps to those columns where gaps were added to get S’’1 from S’1. Replace S’1 by S’’1.

Multiple Sequence Alignment – Approximation Algorithm (cont.): Time analysis: choosing S1 requires running the dynamic programming algorithm O(k^2) times – O(k^2 n^2). When Si is added to the multiple alignment, the length of S’1 is at most i·n, so the time to add all k strings is O(k^2 n^2).

Multiple Sequence Alignment – Approximation Algorithm (cont.): Performance analysis: M – the alignment produced by this algorithm; d(i,j) – the distance M induces on the pair Si,Sj; M* – the optimal alignment. For all i, d(1,i) = D(S1,Si) (we performed an optimal alignment between S’1 and Si, and since score(-,-) = 0, the added columns of paired gaps cost nothing).

Multiple Sequence Alignment – Approximation Algorithm (cont.): Performance analysis: (the chain of inequalities on this slide is garbled in the transcript; it combines the triangle inequality with the definition of S1 to bound the SP value of M by 2(1 − 1/k) times that of M*.)

Multiple Sequence Alignment – Approximation Algorithm: The algorithm relies heavily on the scoring function being a distance. It produces an alignment whose SP score is at most twice the minimum. What if the scoring function were a similarity? Can we get an efficient algorithm whose score is half the maximum? A third of the maximum? … We dunno!

Tree Alignments: Assume that there is a tree T=(V,E) whose leaves are the input sequences. We want to associate a sequence with each internal node. Tree-score(K) = ∑(i,j)∈E score(xi,xj). Finding the optimal assignment of sequences to the internal nodes is NP-hard. We will meet this problem again in the study of phylogenetic trees (it is related to the parsimony problem).

Multiple Sequence Alignment Heuristics: Example – 4 sequences A, B, C, D. (A) Perform all 6 pairwise alignments, find scores, and build a “similarity tree” (the tree diagram, with leaves B, D, A, C on a distant-to-similar scale, is garbled in the transcript). (B) Multiple alignment following the tree from (A).
Align the most similar pair (B, D), allowing gaps to optimize the alignment; then align the next most similar pair (A, C). Now, “align the alignments”, introducing gaps if necessary to optimize the alignment of (BD) with (AC). (modified from Speed’s ppt presentation, see p. 81 in Kanehisa’s book)

The tree-based progressive method for multiple sequence alignment, used in practice (Clustal): (a) a tree (dendrogram) obtained by “cluster analysis”; (b) pairwise alignment of the sequences’ alignments. The figure’s alignment rows, with their labels in the order they appear:

DEHUG3: L W R D G R G A L Q
DEPGG3: L W R G G R G A A Q
DEBYG3: D W R - G R T A S G
DEZYG3: L R R - A R T A S A
DEBSGF: L - R G A R A A A E

(modified from Speed’s ppt presentation, see p. 81 in Kanehisa’s book)

Visualization of Alignment
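For reference, here is a standard reconstruction of the pairwise (k = 2) recurrence that the “idea via k=2” slide alludes to; σ denotes the column score, and “-” a gap:

```latex
% Pairwise (Needleman-Wunsch style) recurrence for strings x, y:
% V[i+1, j+1] is reached from its three adjacent cells, matching the
% 2^2 - 1 = 3 non-zero binary step vectors (1,1), (1,0), (0,1).
\[
V[i+1,j+1] \;=\; \max
\begin{cases}
V[i,j]   + \sigma(x_{i+1},\, y_{j+1}) & \text{step } (1,1)\\
V[i,j+1] + \sigma(x_{i+1},\, -)       & \text{step } (1,0)\\
V[i+1,j] + \sigma(-,\, y_{j+1})       & \text{step } (0,1)
\end{cases}
\]
```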
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8498950004577637, "perplexity": 3309.0196551731824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645069.15/warc/CC-MAIN-20180317120247-20180317140247-00578.warc.gz"}
https://www.physicsforums.com/threads/op-amp-voltage-follower-question.231541/
# Op-Amp: voltage-follower question

1. Apr 27, 2008

### WalkingInMud

Op-Amp: "voltage-follower" question

Hi all, ... If a simple op-amp circuit is described as:

-> being in a "voltage-follower with-gain" configuration, and
-> having a resistor ratio of 9:1

...Where are the two resistors positioned with respect to the op-amp circuit element, and
...How, if at all, do they come into play when calculating the configuration's "closed-loop gain"?

Thanks heaps!

2. Apr 27, 2008
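The reply itself is missing from this capture, but the textbook reading of a "voltage follower with gain" is the non-inverting amplifier: the feedback resistor $R_f$ runs from the output to the inverting input, and $R_g$ from the inverting input to ground. With $R_f : R_g = 9:1$, the closed-loop gain works out as:

```latex
% Non-inverting ("follower with gain") closed-loop gain:
\[
  A_{CL} \;=\; 1 + \frac{R_f}{R_g} \;=\; 1 + \frac{9}{1} \;=\; 10
\]
```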
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8090895414352417, "perplexity": 14602.05328751118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/warc/CC-MAIN-20161020183837-00346-ip-10-171-6-4.ec2.internal.warc.gz"}
https://blender.stackexchange.com/questions/173405/speaker-sound-does-not-play-when-set-with-python
Speaker sound does not play when set with Python

I am trying to import and set the sound of a freshly created speaker using Python. Strangely the sound plays normally when I import it manually, but when I use Python to do the same thing it does not. Even when imported via script, the sound still shows up normally in the speaker menu. This is my code:

```python
import os
import bpy

bpy.ops.object.speaker_add()
bpy.ops.sound.open_mono(filepath=my_file_path)
bpy.data.speakers["Speaker"].sound = bpy.data.sounds[os.path.basename(my_file_path)]
```

I have made sure my file path is correct.

• Good question! When adding the speaker using the add menu, Blender adds an NLA Track as well. However, I'm not sure why there is no option to set that up automatically. Apr 8 '20 at 14:53
• Actually it adds an NLA track in - and I've also tried to delete that and add one in manually but that doesn't work either. Apr 9 '20 at 8:41

If my_file_path is a relative path, use the absolute file path instead. Be aware that bpy.ops.sound.open_mono gives no warning or error message even if the audio file is not found (or does not exist). The sound still shows up normally in the speaker menu but you can't hear anything.

I also had to call update_tag() and relink the speaker to the collection. I have no clue why this is necessary, but the following code works in Blender 2.83:

```python
my_file_path = "/absolute/path/to/test.mp3"
```
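The answer's code block is cut off after its first line. A minimal reconstruction of the steps the answer describes (load the sound from an absolute path, assign it, call update_tag(), then relink the speaker object to the collection) might look like the sketch below; the exact call sequence is my assumption based on the answer's prose, not the original snippet.

```python
import os
import bpy

my_file_path = "/absolute/path/to/test.mp3"  # must be an absolute path

bpy.ops.object.speaker_add()
speaker_obj = bpy.context.object

bpy.ops.sound.open_mono(filepath=my_file_path)
speaker_obj.data.sound = bpy.data.sounds[os.path.basename(my_file_path)]

# Force Blender to notice the datablock change ...
speaker_obj.data.update_tag()

# ... and relink the object to the active collection, as the answer suggests.
coll = bpy.context.collection
coll.objects.unlink(speaker_obj)
coll.objects.link(speaker_obj)
```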
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39093852043151855, "perplexity": 1613.9621158842715}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306335.77/warc/CC-MAIN-20220128182552-20220128212552-00060.warc.gz"}
https://danielfilan.com/2021/07/05/simple_example_conditional_orthogonality_ffs.html
# Daniel Filan

## A simple example of conditional orthogonality in finite factored sets

Reader's note: It looks like the math on my website is all messed up. To read it better, I suggest checking it out on the Alignment Forum.

Recently, MIRI researcher Scott Garrabrant has publicized his work on finite factored sets. It allegedly offers a way to understand agency and causality in a set-up like the causal graphs championed by Judea Pearl. Unfortunately, the definition of conditional orthogonality is very confusing. I'm not aware of any public examples of people demonstrating that they understand it, but I didn't really understand it until an hour ago, and I've heard others say that it went above their heads. So, I'd like to give an example of it here.

In a finite factored set, you have your base set $S$, and a set $B$ of 'factors' of your set. In my case, the base set $S$ will be four-dimensional space - I'm sorry, I know that's one more dimension than the number that well-adjusted people can visualize, but it really would be a much worse example if I were restricted to three dimensions. We'll think of the points in this space as tuples $(x_1, x_2, x_3, x_4)$ where each $x_i$ is a real number between, say, -2 and 2[^1]. We'll say that $X_1$ is the 'factor', aka partition, that groups points together based on what their value of $x_1$ is, and similarly for $X_2$, $X_3$, and $X_4$, and set $B = \{X_1, X_2, X_3, X_4\}$. I leave it as an exercise for the reader to check whether this is in fact a finite factored set. Also, I'll talk about the 'value' of partitions and factors - technically, I suppose you could say that the 'value' of some partition at a point is the set in the partition that contains the point, but I'll use it to mean that, for example, the 'value' of $X_1$ at point $(x_1, x_2, x_3, x_4)$ is $x_1$. If you think of partitions as questions where different points in $S$ give different answers, the 'value' of a partition at a point is the answer to the question.

[EDIT: for the rest of the post, you might want to imagine $S$ as points in space-time, where $x_4$ represents the time, and $(x_1, x_2, x_3)$ represent spatial coordinates - for example, inside a room, where you're measuring from the north-east corner of the floor. In this analogy, we'll imagine that there's a flat piece of sheet metal leaning on the floor against two walls, over that corner. We'll try conditioning on that - so, looking only at points in space-time that are spatially located on that sheet - and see that distance left is no longer orthogonal to distance up, but that both are still orthogonal to time.]

Now, we'll want to condition on the set $E = \{(x_1, x_2, x_3, x_4) \mid x_1 + x_2 + x_3 = 1\}$. The thing with $E$ is that once you know you're in $E$, $x_1$ is no longer independent of $x_2$, like it was before, since they're linked together by the condition that $x_1 + x_2 + x_3 = 1$. However, $x_4$ has nothing to do with that condition. So, what's going to happen is that conditioned on being in $E$, $X_1$ is orthogonal to $X_4$ but not to $X_2$.

In order to show this, we'll check the definition of conditional orthogonality, which actually refers to this thing called conditional history. I'll write out the definition of conditional history formally, and then try to explain it informally: the conditional history of $X$ given $E$, which we'll write as $h(X \mid E)$, is the smallest set of factors $H \subseteq B$ satisfying the following two conditions:
1. For all $s, t \in E$, if $s \sim_b t$ for all $b \in H$, then $s \sim_X t$.
2. For all $s, t \in E$ and $r \in S$, if $r \sim_b s$ for all $b \in H$ and $r \sim_{b'} t$ for all $b' \in B \setminus H$, then $r \in E$.

Condition 1 means that, if you think of the partitions as carving up the set $S$, then the partition $X$ doesn't carve $E$ up more finely than if you carved according to everything in $h(X \mid E)$. Another way to say that is that if you know you're in $E$, knowing everything in the conditional history of $X$ in $E$ tells you what the 'value' of $X$ is, which hopefully makes sense. Condition 2 says that if you want to know if a point is in $E$, you can separately consider the 'values' of the partitions in the conditional history, as well as the other partitions that are in $B$ but not in the conditional history. So it's saying that there's no 'entanglement' between the partitions in and out of the conditional history regarding $E$. This is still probably confusing, but it will make more sense with examples.

Now, what's conditional orthogonality? That's pretty simple once you get conditional histories: $X$ and $Y$ are conditionally orthogonal given $E$ if the conditional history of $X$ given $E$ doesn't intersect the conditional history of $Y$ given $E$. So it's saying that once you're in $E$, the things determining $X$ are different to the things determining $Y$, in the finite factored sets way of looking at things.

Let's look at some conditional histories in our concrete example: what's the history of $X_1$ given $E$? Well, it's got to contain $X_1$, because otherwise that would violate condition 1: you can't know the value of $X_1$ without being told the value of $X_1$, even once you know you're in $E$. But that can't be the whole thing. Consider the point $s = (0.5, 0.4, 0.4, 0.7)$. If you just knew the value of $X_1$ at $s$, that would be compatible with $s$ actually being $(0.5, 0.25, 0.25, 1)$, which is in $E$. And if you just knew the values of $X_2$, $X_3$, and $X_4$, you could imagine that $s$ was actually equal to $(0.2, 0.4, 0.4, 0.7)$, which is also in $E$. So, if you considered the factors in $\{X_1\}$ separately to the other factors, you'd conclude that $s$ could be in $E$ - but it's actually not! This is exactly the thing that condition 2 is telling us can't happen. In fact, the conditional history of $X_1$ given $E$ is $\{X_1, X_2, X_3\}$, which I'll leave for you to check. I'll also let you check that the conditional history of $X_2$ given $E$ is $\{X_1, X_2, X_3\}$.

Now, what's the conditional history of $X_4$ given $E$? It has to include $X_4$, because if someone doesn't tell you $X_4$ you can't figure it out. In fact, it's exactly $\{X_4\}$. Let's check condition 2: it says that if all the factors outside the conditional history are compatible with some point being in $E$, and all the factors inside the conditional history are compatible with some point being in $E$, then it must be in $E$. That checks out here: you need to know the values of all three of $X_1$, $X_2$, and $X_3$ at once to know if something's in $E$, but you get those together if you jointly consider those factors outside your conditional history, which is $\{X_1, X_2, X_3\}$. So looking at $(0.5, 0.4, 0.4, 0.7)$, if you only look at the values that aren't told to you by the conditional history, which is to say the first three numbers, you can tell it's not in $E$ and aren't tricked.
And if you look at $(0.5, 0.25, 0.25, 0.7)$, you look at the factors in $\{X_4\}$ (namely $X_4$), and it checks out, you look at the factors outside $\{X_4\}$ and that also checks out, and the point is really in $E$. Hopefully this gives you some insight into condition 2 of the definition of conditional history. It's saying that when we divide factors up to get a history, we can't put factors that are entangled by the set we're conditioning on on 'different sides' - all the entangled factors have to be in the history, or they all have to be out of the history.

In summary: $h(X_1 \mid E) = h(X_2 \mid E) = \{X_1, X_2, X_3\}$, and $h(X_4 \mid E) = \{X_4\}$. So, is $X_1$ orthogonal to $X_2$ given $E$? No, their conditional histories overlap - in fact, they're identical! Is $X_1$ orthogonal to $X_4$ given $E$? Yes, they have disjoint conditional histories.

Some notes:

• In this case, $X_1$ was already orthogonal to $X_4$ before conditioning. It would be nice to come up with an example where two things that weren't already orthogonal become so after conditioning. [EDIT: see my next post]
• We didn't really need the underlying set to be finite for this example to work, suggesting that factored sets don't really need to be finite for all the machinery Scott discusses.
• We did need the range of each variable to be bounded for this to work nicely. Because all the numbers need to be between -2 and 2, once you're in $E$, if $x_1 = 2$ then $x_2$ can't be bigger than 1, otherwise $x_3$ can't go negative enough to get the numbers to add up to 1. But if they could all be arbitrary real numbers, then even once you were in $E$, knowing $x_1$ wouldn't tell you anything about $x_2$, but we'd still have that $X_1$ wasn't orthogonal to $X_2$ given $E$, which would be weird.

[^1]: I know what you're saying - "That's not a finite set! Finite factored sets have to be finite!" Well, if you insist, you can think of them as only the numbers between -2 and 2 with two decimal places. That makes the set finite and doesn't really change anything. (Which suggests that a more expansive concept could be used instead of finite factored sets.)
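As a quick numerical sanity check of the probabilistic intuition behind the example (my addition, not part of the original post), the sketch below samples points uniformly from $[-2, 2]^4$, keeps those near the conditioning surface $x_1 + x_2 + x_3 = 1$, and compares sample correlations: on $E$, $x_1$ and $x_2$ come out clearly correlated, while $x_1$ and $x_4$ do not.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(2_000_000, 4))

# Thicken E slightly so that uniform samples can land on it.
on_E = np.abs(pts[:, 0] + pts[:, 1] + pts[:, 2] - 1) < 0.01
x1, x2, x4 = pts[on_E, 0], pts[on_E, 1], pts[on_E, 3]

print("corr(x1, x2 | E) ~", np.corrcoef(x1, x2)[0, 1])  # clearly nonzero (negative)
print("corr(x1, x4 | E) ~", np.corrcoef(x1, x4)[0, 1])  # approximately 0
```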
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8085917830467224, "perplexity": 207.4681699031321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046149929.88/warc/CC-MAIN-20210723143921-20210723173921-00588.warc.gz"}
https://networks.skewed.de/net/reality_mining
# Netzschleuder: network catalogue, repository and centrifuge

The network in this dataset can be loaded directly from graph-tool with:

```python
import graph_tool.all as gt
g = gt.collection.ns["reality_mining"]
```

# reality_mining — Reality mining proximity network (2004)

Description: A network of human proximities among students at Massachusetts Institute of Technology (MIT), as measured by personal mobile phones. Nodes represent people (students from the Media Lab and the Sloan Business School) and an edge connects a pair if the two devices made a Bluetooth handshake at the time. Edges are timestamped.[1]

[1] Description obtained from the ICON project.

Tags: Social, Offline, Unweighted, Timestamps

Upstream URL: http://konect.cc/networks/mit

| Name | Nodes | Edges | $\left<k\right>$ | $\sigma_k$ | $\lambda_h$ | $\tau$ | $r$ | $c$ | $\oslash$ | $S$ | Kind | Mode | NPs | EPs |
|------|-------|-------|------|------|------|------|-----|-----|------|-----|------|------|-----|-----|
| reality_mining | 96 | 1,086,404 | 22633.42 | 21814.46 | 58.67 | 19.34 | 0.47 | 0.84 | 3 | 1.00 | Undirected | Unipartite | — | weight, time |

Download formats: gt (339 KiB), GraphML (2.4 MiB), GML (2.0 MiB), csv (866 KiB).
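A small usage sketch (my addition, not from the catalogue page): after loading, the basic statistics can be checked against the table above; the average degree of an undirected graph is just $2E/N$.

```python
import graph_tool.all as gt

g = gt.collection.ns["reality_mining"]

n, m = g.num_vertices(), g.num_edges()
print(n, "nodes,", m, "edges")
print("average degree:", 2 * m / n)  # ~ 22633 for this network

# Edge properties listed in the catalogue (weight, time):
print(list(g.edge_properties.keys()))
```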
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16665887832641602, "perplexity": 6397.708950824301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064520.8/warc/CC-MAIN-20210411144457-20210411174457-00019.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-6-section-6-6-logarithmic-and-exponential-equations-6-6-assess-your-understanding-page-465/2
## College Algebra (10th Edition)

$\color{blue}{x = \left\{-2, 0\right\}}$

Let $u = x+3$. Replacing $x+3$ with $u$ gives:

$(x+3)^2-4(x+3)+3=0 \\u^2-4u+3=0$

Factor the trinomial to obtain:

$(u-3)(u-1)=0$

Use the Zero-Product Property (which states that if $xy=0$, then either $x=0$ or $y=0$ or both are zero) by equating each factor to zero to obtain:

$u-3=0 \text{ or } u-1=0$

Solve each equation to obtain:

$u=3$ or $u=1$

Replace $u$ with $x+3$ to obtain:

\begin{array}{ccc} &u=3 &\text{or} &u=1 \\&x+3=3 &\text{or} &x+3=1 \\&x=3-3 &\text{or} &x=1-3 \\&x=0 &\text{or} &x=-2 \end{array}

Therefore, the solutions to the given equation are: $\color{blue}{x = \left\{-2, 0\right\}}$
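As an optional cross-check of the factoring (my addition, not part of the textbook solution), a computer algebra system finds the same roots:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.solve((x + 3)**2 - 4*(x + 3) + 3, x))  # [-2, 0]
```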
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9082357287406921, "perplexity": 1489.1328319558772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158766.65/warc/CC-MAIN-20180923000827-20180923021227-00154.warc.gz"}
https://bibbase.org/network/publication/koch-kauppi-chen-candidatesfordrugrepurposingtoaddressthecognitivesymptomsinschizophrenia-2022
Candidates for Drug Repurposing to Address the Cognitive Symptoms in Schizophrenia. Koch, E., Kauppi, K., & Chen, C. Technical Report, Genetics, March 2022.

In the protein-protein interactome, we have previously identified a significant overlap between schizophrenia risk genes and genes associated with cognitive performance. Here, we further studied this overlap to identify potential candidate drugs for repurposing to treat the cognitive symptoms in schizophrenia. We first defined a cognition-related schizophrenia interactome from network propagation analyses, and identified drugs known to target more than one protein within this network. Thereafter, we used gene expression data to further select drugs that could counteract schizophrenia-associated gene expression perturbations. Additionally, we stratified these analyses by sex to identify sex-specific pharmacological treatment options for the cognitive symptoms in schizophrenia. After excluding drugs contraindicated in schizophrenia, we identified eight drug candidates, most of which have anti-inflammatory and neuroprotective effects. Due to gene expression differences in male and female patients, four of those drugs were also selected in our male-specific analyses, and the other four in the female-specific analyses. Based on our bioinformatics analyses of disease genetics, we suggest eight candidate drugs that warrant further examination for repurposing to treat the cognitive symptoms in schizophrenia, and suggest that these symptoms could be addressed by sex-specific pharmacological treatment options.

```bibtex
@techreport{koch_candidates_2022,
  type = {preprint},
  title = {Candidates for {Drug} {Repurposing} to {Address} the {Cognitive} {Symptoms} in {Schizophrenia}},
  url = {http://biorxiv.org/lookup/doi/10.1101/2022.03.07.483231},
  abstract = {In the protein-protein interactome, we have previously identified a significant overlap between schizophrenia risk genes and genes associated with cognitive performance. Here, we further studied this overlap to identify potential candidate drugs for repurposing to treat the cognitive symptoms in schizophrenia. We first defined a cognition-related schizophrenia interactome from network propagation analyses, and identified drugs known to target more than one protein within this network. Thereafter, we used gene expression data to further select drugs that could counteract schizophrenia-associated gene expression perturbations. Additionally, we stratified these analyses by sex to identify sex-specific pharmacological treatment options for the cognitive symptoms in schizophrenia. After excluding drugs contraindicated in schizophrenia, we identified eight drug candidates, most of which have anti-inflammatory and neuroprotective effects. Due to gene expression differences in male and female patients, four of those drugs were also selected in our male-specific analyses, and the other four in the female-specific analyses. Based on our bioinformatics analyses of disease genetics, we suggest eight candidate drugs that warrant further examination for repurposing to treat the cognitive symptoms in schizophrenia, and suggest that these symptoms could be addressed by sex-specific pharmacological treatment options.},
  language = {en},
  urldate = {2022-05-31},
  institution = {Genetics},
  author = {Koch, Elise and Kauppi, Karolina and Chen, Chi-Hua},
  month = mar,
  year = {2022},
  doi = {10.1101/2022.03.07.483231},
}
```
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.307740718126297, "perplexity": 11049.887757771132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710691.77/warc/CC-MAIN-20221129100233-20221129130233-00281.warc.gz"}
https://stats.stackexchange.com/questions/89926/what-do-you-do-when-a-centroid-doesnt-attract-any-points-k-means-empty-cluste?noredirect=1
# What do you do when a centroid doesn't attract any points? (K-means empty cluster problem)

I am solving a clustering problem on a trivial dataset with the k-means algorithm. I am running a couple of tests and, from time to time, one centroid doesn't attract any points, i.e. I've got an empty cluster (see the purple "x" in the picture). What should I do? Shall I delete it or just stop updating its value? Why?

I am aware that built-in functions (e.g., kmeans() in R) have automatic ways of dealing with this situation, but I am trying to write the standard algorithm from scratch. As soon as I fix it I'll be able to compare my results to built-in functions. At this moment I'm looking for some theoretical reasons why I should prefer one solution or another.

In the picture each colour represents a cluster according to the current iteration and each "X" is its centroid (old ones have been kept and marked with the number of the iteration in which they were computed).

• What are the points that are plotted? Does each x represent a cluster centroid? What are the os? What do the 0, 1, & 2 represent? Mar 13, 2014 at 18:50
• First post edited. Thank you for pointing that out. Mar 13, 2014 at 18:59
• Thanks, are you trying to implement the standard k-means algorithm by hand, or code your own implementation from scratch? I believe any standard k-means function (e.g., kmeans() in R) has methods for dealing with this situation automatically. Mar 13, 2014 at 19:05
• I am aware of the built-in kmeans() function, but I am trying to write the standard algorithm from scratch. As soon as I fix it I'll be able to compare the results. At this moment I'm looking for some theoretical reasons why I should prefer one solution or another. Mar 13, 2014 at 19:18
• Flippant answer: Play Lonely won't leave me alone in the background, have a sip of your favorite beverage, and appreciate the plot, take it all in. Mar 14, 2014 at 1:20

This can naturally happen in Lloyd's algorithm; don't try to prevent it. Instead, implement one of the workarounds (e.g. choosing the most-distant point as an additional cluster centroid, or simply allowing empty clusters). You may want to put some safeguards in place - for example, when k is chosen larger than the data set size, there just is no solution that doesn't involve empty clusters.

Note that this mostly happens when you have really bad starting centroids, e.g. when you initialized them by randomly placing centroids in your data space. Choosing existing data points as initial centroids is much more robust (but this may still happen). MacQueen's k-means should be safe from this effect, too.

• +1. May I ask you to unwrap, in some detail, what is the difference between the Lloyd and MacQueen versions? Mar 14, 2014 at 5:29
• MacQueen uses an iterative single-pass approach. The first k objects are used as initial centroids, then new objects are assigned to their nearest center (which is immediately updated). As no reassignments happen, a cluster cannot lose all its points. Mar 14, 2014 at 7:12
• +1, I just implemented one by choosing existing points as initial centroids, and yes that's more robust. Kmeans++ might also be good. May 4, 2014 at 4:39
• choosing the most-distant point as additional cluster centroid Most distant from what? From that (empty) cluster's last valid centroid? From the centroid of the currently biggest (most populated) cluster? Else? Jan 20, 2017 at 12:32
• Whatever you prefer. I don't see why one of these variations would be wrong.
Benchmark it on a number of data sets to find out what works for you. Jan 20, 2017 at 12:36
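To make the workaround concrete, here is a minimal NumPy sketch of Lloyd's algorithm (my own illustration, not from the thread) that initializes centroids from existing data points and, whenever a cluster comes up empty, re-seeds its centroid at the point currently farthest from its assigned centroid:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize from existing data points (more robust than random positions).
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                # Empty cluster: re-seed at the point farthest from its own centroid.
                farthest = d[np.arange(len(X)), labels].argmax()
                centers[j] = X[farthest]
            else:
                centers[j] = members.mean(axis=0)
    return centers, labels

X = np.random.default_rng(1).normal(size=(500, 2))
centers, labels = kmeans(X, k=5)
print(centers)
```

The "most distant from what?" question from the comments is settled here by one arbitrary choice (farthest from its assigned centroid); as the answer says, other variants are equally defensible.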
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2739042043685913, "perplexity": 1225.4086283453967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103347800.25/warc/CC-MAIN-20220628020322-20220628050322-00708.warc.gz"}
https://www.neetprep.com/question/60439-ideal-monoatomic-gas-taken-round-cycle-shown-following-PV-diagram-work-done-during-cycle--PV--PV--PV-Zero/55-Physics--Thermodynamics/687-Thermodynamics
# NEET Physics Thermodynamics Questions Solved

An ideal monoatomic gas is taken round the cycle as shown in the following P-V diagram. The work done during the cycle is:

(1) PV
(2) 2PV
(3) 4PV
(4) Zero

Answer: (3). Work done = area enclosed by the curve $= 2V \times 2P = 4PV$.
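Spelling out the area argument (my addition; the diagram itself is not reproduced here, so the rectangle's side lengths $2P$ and $2V$ are taken on trust from the quoted solution): for a clockwise rectangular cycle, the net work equals the enclosed area,

$$W = \oint p \, dV = \Delta p \, \Delta V = (2P)(2V) = 4PV.$$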
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8037797808647156, "perplexity": 15827.00545284572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330962.67/warc/CC-MAIN-20190826022215-20190826044215-00082.warc.gz"}
https://nus.kattis.com/sessions/feca55/problems/shetalkstoangel
## CS3233 Final Team Contest Mirror

# Problem E: She Talks to Angel

Fluttershy's relationship with her pet bunny Angel has hit the skids. Zecora has brewed a solution—literally—that will help them solve their problem and understand each other better. Zecora realized that the problem is each one's lack of appreciation for the difficulties faced by the other, and so she concocted a body-swapping potion with a spell that will let them live a day in each other's horseshoes. Fluttershy and Angel, in each other's bodies, must complete all the chores in the Sweet Feather Sanctuary. Only after completing all the chores, and experiencing the burdens carried by each other, will the lesson be learned and the spell be lifted.

Sweet Feather Sanctuary is a network of $N$ junctions, labeled $1$ to $N$, connected by $N-1$ paths, such that each path connects two distinct junctions and from any junction it is possible to reach any other junction by treading a series of one or more paths. The $i^\text{th}$ of these paths connects junctions $A_i$ and $B_i$, is one kilometer long, and can be traveled in either direction.

There are $C$ chores, labeled $1$ to $C$, that need to be accomplished. The chore labeled $i$ is located at the junction labeled $P_i$. Note that there can be multiple chores at the same junction. Sweet Feather Sanctuary is very large, and compared to the amount of time it takes to get around, the time it takes to actually do the chores is negligible. Hence, assume for simplicity that it takes zero time to do a chore. On the other hoof, or paw, Fluttershy travels at the speed of $K$ kilometers per hour, while Angel travels at the speed of $L$ kilometers per hour.

Fluttershy and Angel begin at the central square at the junction labeled $1$. They will first split the chores between themselves, such that both of them get at least one chore and every chore is done by exactly one of them. Then, they will independently go around the sanctuary to do their chores before returning to the central square to express the difficulty of the task and their gratitude for the other. Fluttershy and Angel are eager to finish the chores as fast as possible and return to their original bodies. How should they split the chores between themselves such that if they both took the optimal route, they would finish the chores and return to the central square as fast as possible? Note that if one of them has finished and arrived at the central square before the other, she or he will have to wait.

## Input

The first line of input contains four integers, $N$ ($1 \leq N \leq 4\,000$), $C$ ($2 \leq C \leq 8\,000$), $K$ and $L$ ($1 \leq K, L \leq 10^9$), the number of junctions, the number of chores, Fluttershy's speed in kilometers per hour and Angel's speed in kilometers per hour, respectively.

The next line of input contains $C$ integers $P_1, P_2, \dots, P_C$ ($1 \leq P_i \leq N$), the junctions where the chores are located.

The next $N-1$ lines of input contain the descriptions of the paths. In particular, the $i^\text{th}$ of these lines contains two integers $A_i$ and $B_i$ ($1 \leq A_i, B_i \leq N$; $A_i \neq B_i$), denoting that the $i^\text{th}$ path connects junctions $A_i$ and $B_i$.
It is guaranteed that from any junction it is possible to reach any other junction by treading a series of one or more paths.

## Output

On the first line, output two integers $c_f$ and $c_a$ ($1 \leq c_f, c_a \leq C-1$; $c_f + c_a = C$), the number of chores Fluttershy should do and the number of chores Angel should do, respectively.

On the second line, output $c_f$ integers $p_1, p_2, \dots, p_{c_f}$ ($1 \leq p_i \leq C$), denoting the labels of the chores that Fluttershy should do.

On the third line, output $c_a$ integers $q_1, q_2, \dots, q_{c_a}$ ($1 \leq q_i \leq C$), denoting the labels of the chores that Angel should do.

All $p_i$ and $q_i$ should be distinct and partitioning the chores in this way should, among all such partitions, result in the minimum possible time taken, assuming both Fluttershy and Angel take the optimal routes. You can output the chores in any order. If there are multiple correct answers, you can output any of them.

Sample Input 1:

```
7 4 7 2
3 4 6 7
1 2
1 3
1 4
1 5
5 6
5 7
```

Sample Output 1:

```
3 1
1 3 4
2
```

Sample Input 2:

```
10 9 7 2
2 3 4 5 6 7 8 9 10
1 2
1 4
2 3
4 5
5 6
6 7
7 8
8 9
9 10
```

Sample Output 2:

```
7 2
3 4 5 6 7 8 9
1 2
```

Sample Input 3:

```
4 4 1 1
2 2 3 4
1 2
2 3
1 4
```

Sample Output 3:

```
2 2
1 3
2 4
```
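A key subroutine is scoring a given chore split: in a tree with unit-length edges, the shortest closed walk that starts and ends at junction 1 and visits a given set of junctions traverses exactly twice each edge of the minimal subtree spanning junction 1 and those junctions. Below is a rough Python sketch (my own illustration, not an official solution) that evaluates a split this way; the search over all splits is omitted.

```python
from collections import deque

def parents_from_root(n, adj, root=1):
    """BFS parent pointers, so any junction can be walked up to the root."""
    par = [0] * (n + 1)
    par[root] = root
    seen = [False] * (n + 1)
    seen[root] = True
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                par[v] = u
                q.append(v)
    return par

def round_trip_km(junctions, par):
    """Closed-walk length from the root visiting all given junctions:
    twice the number of edges in the subtree spanning root + junctions."""
    used = set()
    for v in junctions:
        while par[v] != v and (par[v], v) not in used:
            used.add((par[v], v))
            v = par[v]
    return 2 * len(used)

def finish_time(split_f, split_a, chores, par, K, L):
    """Hours until both are back at junction 1 for a given chore split
    (splits are lists of 1-based chore labels)."""
    tf = round_trip_km([chores[i - 1] for i in split_f], par) / K
    ta = round_trip_km([chores[i - 1] for i in split_a], par) / L
    return max(tf, ta)

# Sample 1 from above: N=7, C=4, K=7, L=2
adj = {1: [2, 3, 4, 5], 2: [1], 3: [1], 4: [1], 5: [1, 6, 7], 6: [5], 7: [5]}
par = parents_from_root(7, adj)
print(finish_time([1, 3, 4], [2], [3, 4, 6, 7], par, 7, 2))
```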
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5706606507301331, "perplexity": 777.4682013754025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141183514.25/warc/CC-MAIN-20201125154647-20201125184647-00126.warc.gz"}
https://physics.stackexchange.com/questions/496498/solid-in-liquid-heat-transfer-temperatures-entropy-changes
# Solid in Liquid Heat Transfer (Temperatures/Entropy Changes)

Suppose we have a solid of temperature $$T_s$$ and heat capacity $$C_p$$ submerged into a pool of water that has temperature $$T_w$$. If $$T_s \gt T_w$$ and the pressure of the isolated pool-solid system is constant, how much heat will the solid lose, how much heat will the pool gain, and what will the entropy change be for every part?

Edit: I do understand that the format of the question resembles that of a plain exercise. However, aid with a question such as this will mostly help me understand what kind of a process this heat transmission is and how it could be described mathematically. That said, I have indeed worked on the question and reached a certain point of progress but would appreciate some help.

• Please show us what you got so far. Also, please understand that the final state and the changes in temperature and entropy of the solid and the water for this irreversible process are independent of the details of the process. – Chet Miller Aug 12 at 23:55
• Here is a link to a cookbook recipe for determining the change in entropy for an irreversible process such as this: physicsforums.com/insights/grandpa-chets-entropy-recipe – Chet Miller Aug 12 at 23:57
• For a start, thank you for your answer. Well, concerning my progress, from the formula δq = Cp dT (for the solid), I got via integration that Δq = Cp(Ts' − Ts) < 0, where Ts' is the resulting temperature of the solid after its immersion in the lake. Moreover, I know that for the solid dS = δq/T, which, via integration, gives the formula ΔS = Cp ln(Ts'/Ts). – WannaBeScientist Aug 13 at 0:37
• I assume that you are looking for the steady state answer? If not, are you looking for a function of heat transfer vs. time? – David White Aug 13 at 0:45
• Well, @ChetMiller, I did grasp the idea, so thank you for your analysis. It is evident that by use of these formulas and the energy balance equations, I shall be able to determine the final temperatures, thus solving the problem. – WannaBeScientist Aug 17 at 12:59

First the general case: when matter is heated or cooled under constant pressure, the amount of heat is $$Q = \Delta H$$ and the entropy change is $$\Delta S = \int \frac{dH}{T}$$

In the special case that there is no phase change and the heat capacity is constant, then $$Q = m\, C_P (T_f - T_i)$$ and $$\Delta S = m\, C_P \ln\frac{T_f}{T_i}$$ where $$T_i$$ and $$T_f$$ are the initial and final temperatures.

The last two equations will solve your problem. First write the energy balance to find out what is the final temperature. Once you know the final temperature, calculate the heat and entropy.
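To make the recipe concrete, here is a small numeric sketch (my own illustration; all masses, heat capacities, and temperatures below are made-up example values, since the question leaves them symbolic). It solves the energy balance $m_s C_s (T_f - T_s) + m_w C_w (T_f - T_w) = 0$ for $T_f$, then evaluates the heats and entropy changes.

```python
import math

# Assumed illustrative values (not from the question):
m_s, c_s, T_s = 1.0, 500.0, 350.0     # solid: kg, J/(kg K), K
m_w, c_w, T_w = 10.0, 4186.0, 290.0   # water: kg, J/(kg K), K

# Energy balance at constant pressure: heat lost by the solid = heat gained by the water.
T_f = (m_s * c_s * T_s + m_w * c_w * T_w) / (m_s * c_s + m_w * c_w)

Q_solid = m_s * c_s * (T_f - T_s)   # negative: the solid loses heat
Q_water = m_w * c_w * (T_f - T_w)   # positive: the water gains heat

dS_solid = m_s * c_s * math.log(T_f / T_s)  # negative
dS_water = m_w * c_w * math.log(T_f / T_w)  # positive, larger in magnitude
print(T_f, Q_solid, Q_water, dS_solid + dS_water)  # total entropy change > 0
```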
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8692294359207153, "perplexity": 378.8631531207314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540534443.68/warc/CC-MAIN-20191212000437-20191212024437-00348.warc.gz"}
http://www.pearltrees.com/cleverchris/subfields/id2067529
# Subfields

Thermodynamics. (Figure caption: Annotated color version of the original 1824 Carnot heat engine showing the hot body (boiler), working body (system, steam), and cold body (water), the letters labeled according to the stopping points in the Carnot cycle.) Thermodynamics applies to a wide variety of topics in science and engineering. Historically, thermodynamics developed out of a desire to increase the efficiency and power output of early steam engines, particularly through the work of French physicist Nicolas Léonard Sadi Carnot (1824), who believed that the efficiency of heat engines was the key that could help France win the Napoleonic Wars.[1] Irish-born British physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics in 1854:[2]

Statics. Statics is the branch of mechanics that is concerned with the analysis of loads (force and torque, or "moment") on physical systems in static equilibrium, that is, in a state where the relative positions of subsystems do not vary over time, or where components and structures are at a constant velocity. When in static equilibrium, the system is either at rest, or its center of mass moves at constant velocity. (Figure caption: Example of a beam in static equilibrium; the sum of force and moment is zero.) A scalar is a quantity, such as mass or temperature, which only has a magnitude. A vector may be written as a bold-faced character V, an underlined character V, or a character with an arrow over it. Vectors can be added using the parallelogram law or the triangle law.

Theory of relativity. The theory of relativity, or simply relativity in physics, usually encompasses two theories by Albert Einstein: special relativity and general relativity.[1] Concepts introduced by the theories of relativity include:

Quantum mechanics. In advanced topics of quantum mechanics, some of these behaviors are macroscopic (see macroscopic quantum phenomena) and emerge at only extreme (i.e., very low or very high) energies or temperatures (such as in the use of superconducting magnets). For example, the angular momentum of an electron bound to an atom or molecule is quantized.

Plasma (physics). Plasma (from Greek πλάσμα, "anything formed"[1]) is one of the four fundamental states of matter (the others being solid, liquid, and gas). When air or gas is ionized, plasma forms with conductive properties similar to those of metals. Plasma is the most abundant form of matter in the Universe, because most stars are in a plasma state.[2][3] (Figure caption: Artist's rendition of the Earth's plasma fountain, showing oxygen, helium, and hydrogen ions that gush into space from regions near the Earth's poles; the faint yellow area shown above the north pole represents gas lost from Earth into space; the green area is the aurora borealis, where plasma energy pours back into the atmosphere.[6]) Plasma is loosely described as an electrically neutral medium of positive and negative particles (i.e. the overall charge of a plasma is roughly zero).

Optics. Optics is the branch of physics which involves the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it.[1] Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.[1] Some phenomena depend on the fact that light has both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics.
When considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, photography, and medicine (particularly ophthalmology and optometry).

Mechanics. Classical versus quantum: the major division of the mechanics discipline separates classical mechanics from quantum mechanics. Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's laws of motion in Principia Mathematica; quantum mechanics was discovered in 1925. Both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Quantum mechanics is of a wider scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances.

Mathematical physics. Mathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as: "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories".[1] There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods.

Kinematics.

Fluid dynamics. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems.

Electromagnetism. Electromagnetism, or the electromagnetic force, is one of the four fundamental interactions in nature, the other three being the strong interaction, the weak interaction, and gravitation. This force is described by electromagnetic fields, and has innumerable physical instances including the interaction of electrically charged particles and the interaction of uncharged magnetic force fields with electrical conductors. The word electromagnetism is a compound form of two Greek terms, ἢλεκτρον, ēlektron, "amber", and μαγνήτης, magnetic, from "magnítis líthos" (μαγνήτης λίθος), which means "magnesian stone", a type of iron ore.

Dynamics (mechanics). Generally speaking, researchers involved in dynamics study how a physical system might develop or alter over time and study the causes of those changes. In addition, Newton established the fundamental physical laws which govern dynamics in physics. By studying his system of mechanics, dynamics can be understood. In particular, dynamics is mostly related to Newton's second law of motion.

Physical cosmology. Physical cosmology is the study of the largest-scale structures and dynamics of the Universe and is concerned with fundamental questions about its formation, evolution, and ultimate fate.[1] For most of human history, it was a branch of metaphysics and religion. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed us to understand those physical laws.

Condensed matter physics.
The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists identify themselves as condensed matter physicists,[2] and the Division of Condensed Matter Physics (DCMP) is the largest division of the American Physical Society.[3] The field overlaps with chemistry, materials science, and nanotechnology, and relates closely to atomic physics and biophysics. Theoretical condensed matter physics shares important concepts and techniques with theoretical particle and nuclear physics.[4] References to the "condensed" state can be traced to earlier sources.

Classical mechanics.

Aerodynamics. (Figure caption: A vortex is created by the passage of an aircraft wing, revealed by smoke.)

Acoustics.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729956746101379, "perplexity": 982.5029302010075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989897.84/warc/CC-MAIN-20150728002309-00001-ip-10-236-191-2.ec2.internal.warc.gz"}
https://brilliant.org/problems/dont-use-a-calculator-2-3/
# I Can Do This Division Without A Calculator

What is the value of $$\frac{41}{29}$$ to 5 decimal places?

Hint: Read the Mental Math Tricks wiki.
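(A quick check of the long division, added here and not part of the problem page; skip it if you want to try the mental-math approach first.)

```python
print(f"{41/29:.5f}")  # 1.41379
```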
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7913661003112793, "perplexity": 1361.577675165373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578586680.51/warc/CC-MAIN-20190423035013-20190423060151-00045.warc.gz"}
https://rd.springer.com/chapter/10.1007/978-1-4614-0391-3_4
# Continuous Random Variables and Probability Distributions

• Jay L. Devore
• Kenneth N. Berk

Chapter. Part of the Springer Texts in Statistics book series (STS).

## Abstract

As mentioned at the beginning of Chapter 3, the two important types of random variables are discrete and continuous. In this chapter, we study the second general type of random variable that arises in many applied problems. Sections 4.1 and 4.2 present the basic definitions and properties of continuous random variables, their probability distributions, and their moment generating functions. In Section 4.3, we study in detail the normal random variable and distribution, unquestionably the most important and useful in probability and statistics. Sections 4.4 and 4.5 discuss some other continuous distributions that are often used in applied work. In Section 4.6, we introduce a method for assessing whether given sample data is consistent with a specified distribution. Section 4.7 discusses methods for finding the distribution of a transformed random variable.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9110976457595825, "perplexity": 1001.3223485924938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156622.36/warc/CC-MAIN-20180920214659-20180920235059-00477.warc.gz"}