http://math.stackexchange.com/questions/294214/exchanging-integrals-with-series-when-the-monotone-convergence-theorem-does-not
Exchanging integrals with series when the monotone convergence theorem does not apply

Let $g$ and $h$ be measurable functions on $X$, and assume $g\in L^{1}(X)$ and $h\geq 0$. Show that if $$F(t)=\int g(x)e^{-th(x)}\,d\mu(x)$$ then for $t >0$, $$F(t)=\sum\limits_{n=0}^{\infty} \left[\int g(x)h^{n}(x)\,d\mu(x)\right] \frac{(-t)^{n}}{n!}$$ The question is, which convergence theorem can be used, since the monotone convergence theorem cannot be applied?

So let $F_n(t)$ be the latter representation summed from $0$ to $n$, i.e. $$F_n(t) = \sum_{i=0}^n\left[\int g(x)h^{i}(x)\,d\mu(x)\right]\frac{(-t)^{i}}{i!}$$ When $n$ is finite we are able to exchange the summation and integration to give the equivalent form $$F_n(t) = \int g(x) \sum_{i=0}^{n}\frac{(-th(x))^{i}}{i!}\,d\mu(x)$$ So to use Lebesgue dominated convergence we need to find some integrable function $l(x)$ such that $$\left|g(x)\right|\left| \sum_{i=0}^{n}\frac{(-th(x))^{i}}{i!}\right| \le l(x)$$ for all $n$. But we also have that $\exp(-x) \le 1$ for all $x \in \Bbb{R}^+$, and that for every $\epsilon > 0$ there exists an $N$ such that $$\left|\sum_{i>N}\frac{(-th(x))^{i}}{i!}\right|<\epsilon$$ for fixed $t$ and all $x$. We have $$\sum_{i=0}^{\infty}\frac{(-th(x))^{i}}{i!} = e^{-th(x)} \le 1$$ So for $n>N$ we have $$\sum_{i=0}^{n}\frac{(-th(x))^{i}}{i!} \le 1+\epsilon$$ And this gives us $l(x) = |g(x)|(1+\epsilon)$ as a bound if we reindex $\{F_n\}$ to $\{F_{n'}\}$ for $n > N$. The integrand converges pointwise to $$g(x)\exp(-th(x))$$ and the LDCT shows that the integral converges to our first definition of $F$ as well, giving the desired result.

Sorry, my mistake; you're not trying to find a function that bounds $F_n$, you're trying to find a function that bounds the integrand. Make sure to use absolute values where appropriate and think about convergence of the power series and what that implies. I'll add more to my answer. –  Sam DeHority Feb 4 '13 at 12:24
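As a numerical aside (not part of the original post), the identity can be sanity-checked directly for a hypothetical finitely supported measure $\mu$ and arbitrary choices of $g$ and $h$, so that both sides reduce to finite sums:

```python
# A small numerical sanity check of the identity (illustrative choices only):
# mu has finitely many atoms, so the integral is a finite weighted sum and
# both sides of the claimed identity can be evaluated directly.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10)                 # atoms of the measure
w = rng.random(10)                # mu({x}) -- the atom weights
g = rng.normal(size=10)           # g is integrable (finite sum of |g| * w)
h = rng.random(10) * 3.0          # h >= 0
t = 0.7

# Left-hand side: F(t) = integral of g(x) e^{-t h(x)} d mu(x)
lhs = np.sum(g * np.exp(-t * h) * w)

# Right-hand side: sum_n [ integral g h^n d mu ] (-t)^n / n!
rhs = 0.0
term_factor = 1.0                 # (-t)^n / n!
for n in range(80):               # truncate the series; it converges quickly here
    rhs += np.sum(g * h**n * w) * term_factor
    term_factor *= -t / (n + 1)

print(lhs, rhs)                   # the two values agree to machine precision
```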
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9939654469490051, "perplexity": 79.21841243184151}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929205.63/warc/CC-MAIN-20150521113209-00189-ip-10-180-206-219.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/282541/packing-number-of-ell-1-ball-in-ell-infty-metric/282561#282561
# Packing number of $\ell_1$ ball in $\ell_{\infty}$ metric

Consider the $d$-dimensional $\ell_1$ ball $\mathbb B_d=\{x\in\mathbb R^d: \|x\|_1\leq 1\}$, where $\|x\|_1=\sum_{i=1}^d{|x_i|}$. I'm interested in the maximum size of a (finite) subset $S\subseteq\mathbb B_d$ such that for any two distinct elements $x,y\in S$, $\|x-y\|_{\infty}\geq \delta$, where $\|z\|_{\infty}=\max_{1\leq i\leq d}|z_i|$. Here $\delta$ is, of course, strictly between 0 and 1. My main point of reference is Lemma 5.2 of the following note: https://www.stat.berkeley.edu/~wainwrig/nachdiplom/Chap5_Sep10_2015.pdf If I understand it correctly, the (lower bound) side of Lemma 5.2 implies that

1. If $\delta\lesssim 1/d$, then $\log |S|\asymp d\log(1/\delta)$ as $d\to\infty$, and this bound is almost tight;
2. However, if $d\delta\to\infty$, it seems to me that the lower bound on $\log |S|$ becomes $\Omega(1)$ as $d\to\infty$, which clearly sounds loose to me.

My question is hence the following: What is the asymptotic scaling of $\log |S|$ as $d\to\infty$ in the regime $\delta\to 0$ and $d\delta\to\infty$? More concretely, if $\delta\asymp d^{-\alpha}$ for some $\alpha\in(0,1)$, how should $\log |S|$ scale with $d$ as $d\to\infty$? And furthermore, what if we consider the general $\ell_p$ ball $\{x\in\mathbb R^d: \|x\|_p\leq 1\}$ for $1\leq p<\infty$?

I address only the $\ell_1$ question. Let $k=\lfloor 1/\delta\rfloor$. Then you can take all points in $\mathbb B_d$ with coordinates in $\frac1k\mathbb Z$ (let their number be $f_k(d)$). On the other hand, the $\ell_\infty$-balls centered at the points of $\mathbb B_d$ with coordinates in $\frac1{2k}\mathbb Z$ cover $\mathbb B_d$, so an example cannot contain more than $f_{2k}(d)$ points. Now we are interested in the asymptotics of $f_k(d)$ (luckily, it is close enough to that of $f_{2k}(d)$). We have $$f_k(d)=1+\sum_{i=1}^k2^i{k\choose i}{d\choose i}$$ (here we count separately the points with $i$ nonzero coordinates). If, say, $1/\delta\sim k=o(d^{1/2})$, then the main term is the one with $i=k$, so $f_k(d)=\Theta((2d)^k/k!)$ and hence $\log|S|=\Theta( k\log d)$. If $k=o(d)$ but $k\neq o(d^{1/2})$, then the index $i_0=k-o(k)$ of the maximal term satisfies $(k-i_0)d\approx k^2$, so $i_0\approx k-k^2/d$. If $k\sim d^\alpha$, then $$f_k(d)\approx 2^{d^\alpha-d^{2\alpha-1}}{d^\alpha\choose d^{2\alpha-1}}{d\choose d^{2\alpha-1}},$$ and for $1>\alpha>1/2$ we have $$\log|S|=\Theta(d^\alpha)=\Theta(k).$$ The other cases can be treated similarly: we only need to work out the asymptotics of the main term in $f_k(d)$.
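As a rough numerical companion to the answer (not part of the original post), the count $f_k(d)=1+\sum_{i=1}^k 2^i\binom{k}{i}\binom{d}{i}$ can be evaluated exactly; the sketch below uses the illustrative regime $\delta\asymp d^{-\alpha}$ with $\alpha=0.7$:

```python
# Evaluate the lattice-point count f_k(d) used in the answer and watch how
# log f_k(d) grows when k ~ d^alpha (illustrative parameters only).
from math import comb, log

def f(k: int, d: int) -> int:
    # points of the l1 ball with coordinates in (1/k)Z, counted by the
    # number i of nonzero coordinates
    return 1 + sum(2**i * comb(k, i) * comb(d, i) for i in range(1, k + 1))

alpha = 0.7
for d in [100, 1000, 10000]:
    k = int(d**alpha)              # k = floor(1/delta) with delta ~ d^(-alpha)
    print(d, k, log(f(k, d)))      # compare log f(k, d) with k as d grows
```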
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.990779459476471, "perplexity": 62.59244224603388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00008.warc.gz"}
https://math.stackexchange.com/questions/922056/curiosity-about-a-simple-identity
# Curiosity about a simple identity

Consider the well-known formula $$1^3 + 2^3 +\cdots+ n^3 = (1+\cdots+n)^2 , \quad n \in \mathbb N$$ Now suppose that the identity $$1^k + 2^k +\cdots+ n^k = (1+\cdots+n)^{k-1}$$ holds for all $n \in \mathbb N$, with $k \in \mathbb N$ fixed. What are the possible values of $k$? Could someone point me to a reference? Sorry for my English, it is not good.

• $1 + 2^k = 3^k$ so $k=1$ – Will Jagy Sep 7 '14 at 2:57

The dominant term of the left-hand side is $$\frac{1}{k+1} n^{k+1}.$$ The right-hand side is $$\left( \frac{n^2 + n}{2} \right)^{k-1},$$ with dominant term $$\frac{2}{2^k} n^{2k - 2}.$$ So $$k+1 = 2k - 2$$ and $$k = 3.$$

Consider the case $n = 2$. Then we have $1 + 2^k = 3^{k-1}$. By Catalan's Conjecture (Mihăilescu's theorem), we have that $k = 3$ (the theorem basically states that the only two consecutive perfect powers are $2^3$ and $3^2$).
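A quick brute-force check (not from the original thread) confirms that $k=3$ is the only small exponent for which the identity survives the first few values of $n$; the script below is a minimal sketch of that check:

```python
# For which k does 1^k + ... + n^k = (1 + ... + n)^(k-1) hold for n = 1..20?
def holds_for(k: int, n_max: int = 20) -> bool:
    return all(
        sum(i**k for i in range(1, n + 1)) == sum(range(1, n + 1)) ** (k - 1)
        for n in range(1, n_max + 1)
    )

print([k for k in range(1, 10) if holds_for(k)])   # prints [3]
```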
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649096727371216, "perplexity": 159.19462388052725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986682998.59/warc/CC-MAIN-20191018131050-20191018154550-00115.warc.gz"}
https://mathhelpboards.com/threads/linear-transformation-with-no-eigenvector.6946/
# [SOLVED] Linear Transformation with No Eigenvector

#### Sudharaka ##### Well-known member MHB Math Helper Feb 5, 2012 1,621

Hi everyone, This is one of those questions I encountered when trying to do a problem. I know that an eigenvector of a linear transformation should be non-zero by definition. So does that mean every linear transformation has eigenvectors? What if there's some linear transformation where no eigenvectors exist? Here's a question which I came across which motivated me to think about this. Find the eigenvectors and eigenvalues of the following linear transformation. $T:\,f(x)\rightarrow\int_{0}^{x}f(t)\,dt$ in the linear span over $$\Re$$. So we have to find functions such that, $T(f(x))=\lambda\,f(x)$ $\Rightarrow \int_{0}^{x}f(t)\,dt=\lambda \,f(x)$ Now differentiating both sides we get, $f(x)=\lambda\,f'(x)$ Solving for $$f$$ we finally obtain, $f(x)=Ae^{x/\lambda}$ where $$A$$ is a constant. To find the value of $$A$$ we plug this into our original equation. And then something out of the ordinary happens.... $\int_{0}^{x}Ae^{t/\lambda}\,dt=\lambda\,Ae^{x/\lambda}$ $\Rightarrow A=0$ So we get $$f(x)=0$$, which is not an eigenvector. And hence I came to the conclusion that this linear transformation doesn't have any eigenvectors. Am I correct?

#### Ackbach ##### Indicium Physicus Staff member Jan 26, 2012 4,197

$\int_{0}^{x}Ae^{t/\lambda}\,dt=\lambda\,Ae^{x/\lambda}$ $\Rightarrow A=0$ Why does $\lambda \, A e^{x/ \lambda}=\lambda \, A e^{x/ \lambda}$ require $A=0$?

#### Sudharaka ##### Well-known member MHB Math Helper Feb 5, 2012 1,621

Why does $\lambda \, A e^{x/ \lambda}=\lambda \, A e^{x/ \lambda}$ require $A=0$? Thanks for the reply. Well, if I am not wrong here, I think you have calculated the integral incorrectly. $\int_{0}^{x}Ae^{t/\lambda}\,dt=\lambda\,Ae^{x/\lambda}$ $\Rightarrow \lambda \, A e^{x/ \lambda}-\lambda\,A=\lambda\,Ae^{x/\lambda}$ $\therefore A=0$

#### Ackbach ##### Indicium Physicus Staff member Jan 26, 2012 4,197

Thanks for the reply. Well, if I am not wrong here, I think you have calculated the integral incorrectly. $\int_{0}^{x}Ae^{t/\lambda}\,dt=\lambda\,Ae^{x/\lambda}$ $\Rightarrow \lambda \, A e^{x/ \lambda}-\lambda\,A=\lambda\,Ae^{x/\lambda}$ $\therefore A=0$ Oh, you're right. I think I misinterpreted your equation as saying that's what the integral was; I didn't bother to "check" it. Yeah, if the exponential function doesn't work, nothing will. Thinking of it in terms of series expansions, the integration operator will always raise the power by one. So it seems unlikely that the resulting series would be a multiple of the original.

#### Opalg ##### MHB Oldtimer Staff member Feb 7, 2012 2,725

I came to the conclusion that this linear transformation doesn't have any eigenvectors. Am I correct? Yes, you are correct. See here.

#### Sudharaka ##### Well-known member MHB Math Helper Feb 5, 2012 1,621

Yes, you are correct. See here. Thanks so much. I didn't know that this linear transformation is called the Volterra operator and was confused by the fact that it doesn't have any eigenvalues. I have never encountered such a thing before. Thanks again for your valuable input.

#### HallsofIvy ##### Well-known member MHB Math Helper Jan 29, 2012 1,151

If V is a vector space over the complex numbers then any linear transformation has eigenvalues and so non-zero eigenvectors. If V is a vector space over the real numbers it may not have eigenvalues (the solutions to the characteristic equation are all complex) and so no eigenvectors.
Yes, the standard definition of "eigenvector" in most texts requires that it be non-zero. I personally don't like that and prefer, as you will see in some texts: "$$\lambda$$ is an eigenvalue for the linear transformation A if and only if there exists a non-zero vector, v, such that $$Av= \lambda v$$." "An eigenvector, corresponding to eigenvalue $$\lambda$$, is any vector, v, such that $$Av= \lambda v$$." That allows the 0 vector as an eigenvector (for any eigenvalue) so that we can say "The set of all eigenvectors corresponding to eigenvalue $$\lambda$$ forms a vector space" rather than having to say "The set of all eigenvectors corresponding to eigenvalue $$\lambda$$, together with the 0 vector, forms a vector space". A very minor detail either way!

#### Klaas van Aarsen ##### MHB Seeker Staff member Mar 5, 2012 8,874

If V is a vector space over the complex numbers then any linear transformation has eigenvalues and so non-zero eigenvectors. If V is a vector space over the real numbers it may not have eigenvalues (the solutions to the characteristic equation are all complex) and so no eigenvectors. I can't find any complex eigenvalues for this particular problem...

#### Opalg ##### MHB Oldtimer Staff member Feb 7, 2012 2,725

If V is a vector space over the complex numbers then any linear transformation has eigenvalues and so non-zero eigenvectors. If V is a vector space over the real numbers it may not have eigenvalues (the solutions to the characteristic equation are all complex) and so no eigenvectors. I can't find any complex eigenvalues for this particular problem... HallsofIvy's comments refer to linear transformations on finite-dimensional vector spaces. The integral operator in this thread acts on a space of functions that is infinite-dimensional, and the theory is quite different in that situation. A continuous linear operator on a normed vector space has associated with it a set of complex numbers called its spectrum. If the space is finite-dimensional then the spectrum consists exactly of the eigenvalues. In the infinite-dimensional case, the spectrum is always a nonempty set, but it does not necessarily consist of eigenvalues. For the Volterra integral operator, the spectrum consists of the single point $\{0\}$ (which is not an eigenvalue).

#### HallsofIvy ##### Well-known member MHB Math Helper Jan 29, 2012 1,151

Yes, thank you. My mind tends to go foggy when I think about infinite dimensional vector spaces so I just ignore them!
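As a numerical aside (not part of the thread), Opalg's statement that the spectrum of the Volterra operator is $\{0\}$ can be illustrated by discretising $T$ on $[0,1]$ and appealing to Gelfand's formula $\rho(T)=\lim_n\|T^n\|^{1/n}$; the sketch below uses a simple quadrature matrix and shows the ratios shrinking towards zero:

```python
# Discretise (Tf)(x) = int_0^x f(t) dt on [0, 1] with a simple Riemann-sum
# quadrature and look at ||T^n||^(1/n).  By Gelfand's formula this tends to
# the spectral radius, which is 0 for the Volterra operator, so no nonzero
# eigenvalue can exist.
import numpy as np

N = 400
h = 1.0 / N
V = h * np.tril(np.ones((N, N)))           # (Vf)_i = h * sum_{j <= i} f_j

P = np.eye(N)
for n in range(1, 9):
    P = P @ V                              # P = V^n
    print(n, np.linalg.norm(P, 2) ** (1.0 / n))   # steadily decreases towards 0
```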
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9764578938484192, "perplexity": 412.3580160832611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195929.39/warc/CC-MAIN-20201128214643-20201129004643-00574.warc.gz"}
http://www.cfd-online.com/W/index.php?title=Combustion&oldid=9001
# Combustion

## What is combustion -- physics versus modelling

Combustion phenomena consist of many physical and chemical processes which exhibit a broad range of time and length scales. A mathematical description of combustion is not always trivial, although some analytical solutions exist for simple situations of laminar flame. Such analytical models are usually restricted to problems in zero or one-dimensional space.

# Fundamental Aspects

## Main Specificities of Combustion Chemistry

Combustion can be split into two processes interacting with each other: thermal, and chemical. The chemistry is highly exothermal (this is the reason for its use) but also highly temperature dependent, thus highly self-accelerating. In a simplified form, combustion can be represented by a single irreversible reaction involving 'a' fuel and 'an' oxidizer: $\frac{\nu_F}{\bar M_F}Y_F + \frac{\nu_O}{\bar M_O} Y_O \rightarrow Product + Heat$ Although very simplified compared to real chemistry involving hundreds of species (and their individual transport properties) and elementary reactions, this rudimentary chemistry has been the cornerstone of combustion analysis and modelling. The most widely used form for the rate of the above reaction is the Arrhenius law: $\dot\omega = \rho A \left ( \frac{Y_F}{\bar M_F} \right )^{n_F} \left (\frac{Y_O}{\bar M_O}\right )^{n_O} \exp\left(-T_a/T\right)$ $T_a$ is the activation temperature, high in combustion, consistently with the temperature dependence. This is where the high non-linearity in temperature is modelled. A is the pre-exponential constant. One interpretation of the Arrhenius law comes from gas kinetic theory: the number of molecules whose kinetic energy is larger than the minimum value allowing a collision to be energetic enough to trigger a reaction is proportional to the exponential term introduced above divided by the square root of the temperature. This interpretation suggests that the temperature dependence of A is very weak compared to the exponential term, and A is eventually considered constant. The reaction rate is also naturally proportional to the molecular density of each of the reactants. Nonetheless, the orders of reaction $n_i$ are different from the stoichiometric coefficients, as the single-step reaction is global and not governed by molecular collisions, for it represents hundreds of elementary reactions. If one goes into the details, combustion chemistry is based on chain reactions, decomposed into three main steps: (i) generation (where radicals are created from the fresh mixture), (ii) branching (where products and new radicals appear from the interaction of radicals with reactants), and (iii) termination (where radicals collide and turn into products). The branching step tends to accelerate the production of active radicals (it is autocatalytic). Its impact is nevertheless small compared to the high non-linearity in temperature. This explains why single-step chemistry has been sufficient for most of the combustion modelling work up to now. The fact that a flame is a very thin reaction zone separating, and making the transition between, a frozen mixture and an equilibrium is explained by the high temperature dependence of the reaction term, modelled by a large activation temperature, and a large heat release (the ratio of the burned and fresh gas temperatures is about 7 for typical hydrocarbon flames), leading to a sharp self-acceleration in a very narrow area.
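For readers who want to play with the orders of magnitude, the following short sketch evaluates the single-step Arrhenius rate defined above; all numerical values (pre-exponential constant, activation temperature, molar masses, reaction orders) are hypothetical placeholders, not data from the article:

```python
# Minimal sketch of the single-step Arrhenius reaction rate (illustrative values).
import numpy as np

A   = 2.0e9      # pre-exponential constant (hypothetical)
Ta  = 15000.0    # activation temperature [K] (hypothetical)
W_F, W_O = 0.016, 0.032   # molar masses of fuel and oxidizer [kg/mol] (hypothetical)
n_F, n_O = 1.0, 1.0       # global reaction orders (hypothetical)

def omega_dot(rho, Y_F, Y_O, T):
    """rho * A * (Y_F/W_F)^nF * (Y_O/W_O)^nO * exp(-Ta/T)."""
    return rho * A * (Y_F / W_F) ** n_F * (Y_O / W_O) ** n_O * np.exp(-Ta / T)

# The strong temperature sensitivity: the same mixture, at 300 K versus 2000 K.
print(omega_dot(1.0, 0.05, 0.2, 300.0))    # essentially frozen
print(omega_dot(1.0, 0.05, 0.2, 2000.0))   # many orders of magnitude faster
```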
To evaluate the order of magnitude of the quantities, the terms in the exponential argument are normalized: $\beta=\alpha\frac{T_a}{T_s} \qquad \alpha=\frac{T_s-T_f}{T_s}$ $\beta$ is named the Zeldovich number and $\alpha$ the heat release factor. Here, $T_s$ has been used instead of $T_b$, the conventional notation for the burned gas temperature (at final equilibrium). $T_s$ is actually $T_b$ for a mixture at stoichiometry and when the flame is adiabatic, i.e. it is the reference highest temperature that can be obtained in the system. That said, typical values for $\beta$ and $\alpha$ are 10 and 0.9, giving a good taste of the level of non-linearity of the combustion process with respect to temperature. Actually, the reaction rate is rewritten as: $\dot\omega = \rho A \left ( \frac{Y_F}{\bar M_F} \right )^{n_F} \left (\frac{Y_O}{\bar M_O}\right )^{n_O} \exp\left(-\frac{\beta}{\alpha}\right)\exp\left(-\beta\frac{1-\theta}{1-\alpha(1-\theta)}\right)$ where the non-dimensionalized temperature is: $\theta=\frac{T-T_f}{T_s-T_f}$ The non-linearity of the reaction rate is seen from the exponential term:

• ${\mathcal O}(\exp(-\beta))$ for $\theta$ far from unity (in the fresh gas)
• ${\mathcal O}(1)$ for $\theta$ close to unity (in the reaction zone close to the burned gas, whose temperature must be close to the adiabatic one $T_s$), more exactly $1-\theta \sim {\mathcal O}(\beta^{-1})$

Figure: Temperature Non-Linearity of the Source Term: the Temperature-Dependent Factor of the Reaction Term for Some Values of the Zeldovich and Heat Release Parameters

Note that for an infinitely high activation energy, the reaction rate is piloted by a $\delta(\theta)$ function. The figure alongside illustrates how common values of $\beta$ around 10 tend to make the reaction rate singular around $\theta$ of unity. Two sets of values are presented: $\beta = 10$ and $\beta = 8$. The first magnitude is the representative value while the second one is a smoother one usually used to ease numerical simulations. In the same way, two values of the heat release factor $\alpha$, 0.9 and 0.75, are explored. The heat release is seen to have a minor impact on the temperature non-linearity.

## Transport Equations

In addition to the Navier-Stokes equations, at least with variable density, the transport equations for a reacting flow are the energy and species transport equations. In usual notations, the transport equation for species $i$ is written as: $\frac{D \rho Y_i}{D t} = \nabla\cdot \rho D_i\vec\nabla Y_i - \nu_i\bar M_i\dot\omega$ and the temperature transport equation: $\frac{D\rho C_p T}{Dt} = \nabla\cdot \lambda\vec\nabla T + Q\nu_F \bar M_F \dot\omega$ The diffusion is modelled thanks to Fick's law, which is a (usually good) approximation to the rigorous diffusion velocity calculation. Regarding the temperature transport equation, it is derived from the energy transport equation under the assumption of a low-Mach number flow (compressibility and viscous heating neglected). The low-Mach number approximation is suitable for the deflagration regime (as will be demonstrated below), which is the main focus of combustion modelling. Hence, the transport equation for temperature, as a simplified version of the energy transport equation, is usually retained for the study of combustion and its modelling.
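The non-linearity discussed above is easy to reproduce numerically; the sketch below evaluates the temperature-dependent factor $\exp(-\beta(1-\theta)/(1-\alpha(1-\theta)))$ for the parameter pairs quoted in the text ($\beta=10$ or $8$, $\alpha=0.9$ or $0.75$):

```python
# Evaluate the temperature-dependent factor of the reaction rate on a grid of
# reduced temperatures theta, for the Zeldovich / heat-release pairs discussed
# in the text.
import numpy as np

def factor(theta, beta, alpha):
    return np.exp(-beta * (1.0 - theta) / (1.0 - alpha * (1.0 - theta)))

theta = np.linspace(0.0, 1.0, 11)
for beta, alpha in [(10, 0.9), (8, 0.9), (10, 0.75), (8, 0.75)]:
    # The factor stays vanishingly small until theta approaches 1,
    # illustrating why the reaction is confined to a thin zone.
    print(beta, alpha, np.round(factor(theta, beta, alpha), 6))
```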
### Low-Mach Number Equations

In compressible flows, when the motion of the fluid is not negligible compared to the speed of sound (which is the speed at which the molecules can reorganize themselves), the accumulation of molecules results in a local increase of pressure and temperature moving as an acoustic wave. It means that, in such a system, a proper reference velocity is the speed of sound and a proper pressure reference is the kinetic pressure. By contrast, in low-Mach number flows, the reference speed is the natural representative speed of the flow and the reference pressure is the thermodynamic pressure. Hence, the set of reference quantities used to characterize a low-Mach number flow is given in the table below:

| Quantity | Reference | Comment |
|---|---|---|
| Density | $\rho_o$ | A reference density (upstream, average, etc.) |
| Velocity | $U_o$ | A reference velocity (inlet average, etc.) |
| Temperature | $T_o$ | A reference temperature (upstream, average, etc.) |
| Pressure (static) | $P_o=\rho_o \bar r T_o$ | From Boyle-Mariotte |
| Length | $L_o$ | A reference length (representative of the domain) |
| Time | $L_o/U_o$ | |
| Energy | $C_p T_o$ | Internal energy at constant reference pressure |

The equations of fluid mechanics, properly non-dimensionalized, can be written:

Mass conservation: $\frac{D\rho}{Dt} =0$

Momentum: $\frac{D\rho\vec U}{Dt}=-\frac{1}{\gamma M^2}\vec\nabla P+\nabla\cdot\frac{1}{Re}\bar\bar\Sigma$

Total energy: $\frac{D\rho e_T}{Dt}-\frac{1}{RePr}\nabla\cdot\lambda\vec\nabla T=-\frac{\gamma-1}{\gamma}\nabla\cdot P\vec U +\frac{1}{Re}M^2(\gamma-1)\nabla\cdot \vec U\bar\bar\Sigma + \frac{\rho}{C_pT_o(U_o/L_o)}Q\nu_F \bar M_F\dot\omega$

Species: $\frac{D\rho Y}{Dt}-\frac{1}{ScRe}\nabla\cdot\rho D\vec\nabla Y=-\frac{\rho}{U_o/L_o}\nu\bar M\dot\omega$

State law: $P=\rho T$

The low-Mach number equations are obtained considering that $M^2$ is small. A value of 0.1 is usually taken as the limit, which recovers the value of a Mach number of about 0.3 used to characterize the incompressible regime. Considering the energy equation, in addition to the terms with $M^2$ as a factor, the total energy reduces to internal energy, as $e_T = T/\gamma+M^2(\gamma-1)\vec U^2/2$. Moreover, the work of pressure is considered negligible because the gradient of pressure is negligible (the low-Mach number approximation is indeed also named the isobaric approximation) and the flow is assumed close to a divergence-free state. For the same reason, volumetric energy and enthalpy variations are assumed equal, as they only differ through the addition of pressure. Hence, redimensionalized, the low-Mach number energy equation leads to the temperature equation as used in combustion analysis: $\frac{D\rho C_p T}{Dt} = \nabla\cdot \lambda\vec\nabla T + Q\nu_F \bar M_F \dot\omega$

Note: The species and temperature equations are not closed, as the fields of velocity and density also need to be computed. Through intense heat release in a very small area (the jump in temperature across typical hydrocarbon flames is about a factor of seven, and so is the drop in density in this isobaric process, and the thickness of a flame is of the order of a millimetre), combustion influences the flow field. Nevertheless, the vast majority of combustion modelling has been developed based on the species and temperature equations, assuming simple flow fields.

### The Damköhler Number

A flame is a reaction zone.
From this simple point of view, two aspects have to be considered: (i) the rate at which it is fed by reactants, with characteristic time $\tau_d$, and (ii) the strength of the chemistry in consuming them, with characteristic chemical time $\tau_c$. In combustion, the Damköhler number, Da, compares these two time scales and, for that reason, it is one of the most important non-dimensional groups: $Da=\frac{\tau_d}{\tau_c}$. If Da is large, it means that the chemistry always has the time to fully consume the fresh mixture and turn it into equilibrium. Real flames are usually close to this state. The characteristic reaction time, $(Ae^{-T_a/T_s})^{-1}$, is estimated to be of the order of a tenth of a millisecond. When Da is low, the chemistry is too weak to convert the fresh mixture. The flow remains frozen. This situation happens with ignition or misfire, for instance. The picture of a deflagration lends itself to a description based on the Damköhler number. A reacting wave progresses towards the fresh mixture through preheating of the closest upstream layer. The elevation of the temperature strengthens the chemistry and reduces its characteristic time, such that the mixture changes from a low-Da region (far upstream, frozen) to a high-Da region in the flame (intense reaction towards equilibrium).

## Conservation Laws

The process of combustion transforms the chemical enthalpy into sensible enthalpy (i.e. raises the temperature of the gases thanks to the heat released). Simple relations can be drawn between species and temperature by studying the source terms appearing in the above equations: $\frac{Y_F}{\nu_F\bar M_F} - \frac{Y_O}{\nu_O\bar M_O} = \frac{Y_{F,u}}{\nu_F\bar M_F} - \frac{Y_{O,u}}{\nu_O\bar M_O}$ $Y_F + \frac{Cp T}{Q} = Y_{F,u} + \frac{CpT_u}{Q}$ Hence $T_b = T_u + \frac{Q Y_{F,u}}{Cp}$, $Y_{O,b} = Y_{O,u} - \frac{\nu_O \bar M_O}{\nu_F \bar M_F} Y_{F,u}$ and $Y_{F,b} = 0$. Here, the example has been taken for a lean case. As mentioned in Sec. Main Specificities, the stoichiometric state is used to non-dimensionalize the conservation equations: $Y_i^* = Y_{i,u}^* - \theta \qquad ; \qquad Y_i^*=Y_i/Y_{i,s}$. A comprehensive form of the reaction rate can be reconstituted to understand the difficulty of numerically resolving the reaction zone: $\dot\omega = B \prod_{i=O,F}(Y_{i,u}^*-\theta)^{n_i} \exp{\left ( -\beta\frac{1-\theta}{1-\alpha(1-\theta)}\right)}$ where $B$ stands for all the constant terms present in this reaction rate, plus density.

Figure: Source Term versus Temperature

For the stoichiometric case and a global order of two, the reaction rate is graphed versus the reduced temperature for different values of the heat release and Zeldovich parameters. A high value of $\beta$ makes the reaction rate very sharp versus temperature. It means that reaction is significant only beyond a temperature level (sometimes called the ignition temperature) that is close to one (the exponential term above is non-negligible for $1-\theta \sim \beta^{-1}$). The heat release has qualitatively the same impact, but not as strong. Transposed to the case of a flame sheet, it effectively shows that the reaction exists only in a fraction of the thermal thickness of the flame (the region close to the flame that the latter preheats, hence where the reduced temperature rises from 0 to 1 here), where the temperature deviates little from the maximal one (density can be assumed constant and equal to its burned-gas value).
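A small worked example (with made-up numbers, not taken from the article) shows how the conservation relations above fix the burned-gas state of a lean mixture:

```python
# Apply the conservation relations to obtain the burned-gas state of a lean
# mixture.  All values below are illustrative placeholders.
Q    = 5.0e7      # heat of reaction per kg of fuel [J/kg]
Cp   = 1300.0     # mixture heat capacity [J/(kg K)]
T_u  = 300.0      # fresh-gas temperature [K]
Y_Fu = 0.03       # fuel mass fraction in the fresh gas (lean)
Y_Ou = 0.22       # oxidizer mass fraction in the fresh gas
s    = 4.0        # mass stoichiometric ratio  nu_O*W_O / (nu_F*W_F)

T_b  = T_u + Q * Y_Fu / Cp          # adiabatic burned-gas temperature
Y_Ob = Y_Ou - s * Y_Fu              # leftover oxidizer (lean case)
Y_Fb = 0.0                          # all fuel consumed

print(T_b, Y_Ob, Y_Fb)              # ~1454 K, 0.10, 0.0
```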
Numerically capturing such a sharp reaction zone can be costly, and the lower values of $\beta$ and $\alpha$ as presented here are usually preferred whenever possible. Most problems in combustion involve turbulent flows, gas and liquid fuels, and pollution transport issues (products of combustion as well as, for example, noise pollution). These problems require not only extensive experimental work, but also numerical modelling. All combustion models must be validated against experiments, as each one has its own drawbacks and limits. In this article, we will address the modelling fundamentals only. In addition to the flow parameters used in fluid mechanics, new dimensionless parameters are introduced, the most important of which are the Karlovitz number and the Damköhler number, which represent ratios of chemical and flow time scales, and the Lewis number, which compares the diffusion speeds of species. The combustion models are often classified according to their capability to deal with the different combustion regimes.

# Three Combustion Regimes

Depending on how fuel and oxidizer are brought into contact in the combustion system, different combustion modes or regimes are identified. Traditionally, two regimes have been recognized: the premixed regime and the non-premixed regime. Over the last two decades, a third regime, sometimes considered as a hybrid of the two former ones to a certain extent, has arisen. It has been named the partially-premixed regime.

## The Non-Premixed Regime

Figure: Sketch of a diffusion flame

This regime is certainly the easiest to understand. Everybody has already seen a lighter, a candle or a gas-powered stove. Basically, the fuel issues from a nozzle or a simple duct into the atmosphere. The combustion reaction is the oxidization of the fuel. Because fuel and oxidizer are in contact only in a limited region but are separated elsewhere (especially in the feeding system), this configuration is the safest. The non-premixed flame has some other advantages. By controlling the flows of both reactants, it is (theoretically) possible to locate the stoichiometric interface, and thus the location of the flame sheet. Moreover, the strength of the flame can also be controlled through the same process. Depending on the width of the transition region from the oxidizer to the fuel side, the species (fuel and oxidizer) feed the flame at different rates. This is because the diffusion of the species is directly dependent on the unbalance (gradient) of their distribution. A sharp transition from fuel to oxidizer creates intense diffusion of those species towards the flame, increasing its burning rate. This burning rate control through the diffusion process is certainly one of the reasons for the alternative name of such a flame and combustion mode: diffusion flame and diffusion regime. Because a diffusion flame is fully determined by the inter-penetration of the fuel and oxidizer streams, it has been convenient to introduce a tracer of the state of the mixture. This is the role of the mixture fraction, usually called Z or f. Z is usually taken as unity in the fuel stream and zero in the oxidizer stream. It varies linearly between these two bounds such that at any point of a frozen flow the fuel mass fraction is given by $Y_F=ZY_{F,o}$ and the oxidizer mass fraction by $Y_O = (1-Z)Y_{O,o}$. $Y_{F,o}$ and $Y_{O,o}$ are the fuel and oxidizer mass fractions in the fuel and oxidizer streams, respectively.
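The frozen-mixing picture described above translates directly into a couple of lines of code; the values of $Y_{F,o}$ and $Y_{O,o}$ in this sketch are illustrative only:

```python
# The mixture fraction as a frozen (non-reacting) mixing tracer.
import numpy as np

Y_Fo, Y_Oo = 1.0, 0.233        # pure fuel stream versus air-like oxidizer stream

Z = np.linspace(0.0, 1.0, 6)   # from pure oxidizer (Z = 0) to pure fuel (Z = 1)
Y_F = Z * Y_Fo                 # frozen fuel mass fraction
Y_O = (1.0 - Z) * Y_Oo         # frozen oxidizer mass fraction

for z, yf, yo in zip(Z, Y_F, Y_O):
    print(f"Z = {z:.1f}   Y_F = {yf:.3f}   Y_O = {yo:.3f}")
```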
The mixture fraction possesses a transport equation that is expected to have no source term, as a tracer of the mixture must be a conserved scalar. First, the fuel and oxidizer mass fraction transport equations are written in usual notations: $\frac{D \rho Y_F}{D t} = \nabla\cdot \rho D\vec\nabla Y_F - \nu_F\bar M_F\dot\omega$ $\frac{D \rho Y_O}{D t} = \nabla\cdot \rho D\vec\nabla Y_O - \nu_O\bar M_O\dot\omega$ The two above equations are linearly combined into a single one in such a manner that the source term disappears: $\frac{D \rho (\nu_O\bar M_O Y_F-\nu_F\bar M_F Y_O)}{D t} = \nabla\cdot \rho D\vec\nabla (\nu_O\bar M_O Y_F-\nu_F\bar M_F Y_O)$ The quantity $(\nu_O\bar M_O Y_F-\nu_F\bar M_F Y_O)$ is thus a conserved scalar. The last step is to normalize it such that it equals unity in the pure fuel stream ($Y_F=Y_{F,o}$ and $Y_O=0$) and is null in the pure oxidizer stream ($Y_F=0$ and $Y_O=Y_{O,o}$). The resulting normalized passive scalar is the mixture fraction: $Z=\frac{\Phi(Y_F/Y_{F,o})-(Y_O/Y_{O,o})+1}{\Phi+1}\qquad \Phi=\frac{\nu_O\bar M_O Y_{F,o}}{\nu_F\bar M_F Y_{O,o}}$ governed by the transport equation $\frac{D \rho Z}{D t} = \nabla\cdot \rho D\vec\nabla Z$ The stoichiometric interface location (and thus the approximate location of the flame if the flow is reacting) is where $\Phi(Y_F/Y_{F,o})-(Y_O/Y_{O,o})$ vanishes (or where $Y_F$ and $Y_O$ are both null in the reacting case). This leads to the definition of the stoichiometric mixture fraction: $Z_s=\frac{1}{1+\Phi}$ As the mixture fraction qualifies the degree of inter-penetration of fuel and oxidizer, the elements originally present in these molecules are conserved and can be directly traced back to the mixture fraction. This has led to an alternative definition of the mixture fraction, based on element conservation. First, the elemental mass fraction $X_{j}$ of element j is linked to the species mass fractions $Y_i$: $X_j = \sum_{i=1}^{n} \frac{a_{i,j} \bar M_j}{\bar M_i} Y_i$ where $a_{i,j}$ is a matrix counting the number of atoms of element j in the molecule of species i, and n is the number of species in the mixture. The group pictured by the summation above is a linear combination of the $Y_i$. Because the transport equations of the species mass fractions, a few lines earlier, are also linear, a transport equation for the elemental mass fraction can be written: $\frac{D \rho X_j}{D t} = \nabla\cdot \sum_{i=1}^n\frac{a_{i,j}\bar M_j}{\bar M_i}\rho D_i\vec\nabla Y_i$ Because the mass of each element is conserved, the linear combination of the source terms vanishes. Furthermore, by taking the same diffusion coefficient $D_i$ for all the species, the elemental mass fraction transport equation has exactly the same form as the species transport equation (except for the source term). Notice that the assumption of equal diffusion coefficients was also made in the previous definition of the mixture fraction and is justified in turbulent combustion modelling by the turbulent diffusivity flattening the diffusion process in high Reynolds number flows. Hence, the elemental mass fraction transport equation has the same structure as the mixture fraction transport equation seen above. Properly renormalized to reach unity in the fuel stream and zero in the oxidizer stream, the elemental mass fraction is a convenient way of determining the mixture fraction field in a flow. Indeed, it is widely used in practice for this purpose.

#### Dissipation Rate

A very important quantity, derived from the mixture fraction concept, is the scalar dissipation rate, usually noted $\chi$.
In the above introduction to non-premixed combustion, it has been said that a diffusion flame is fully controlled through: (i) the position of the stoichiometric line, dictating where the flame sheet lies; (ii) the gradients of fuel on one side and oxidizer on the other side, dictating the feeding rate of the reaction zone through diffusion and thus the strength of combustion. According to the mixture fraction definition, the location of the stoichiometric line is naturally tracked through the $Z_s$ iso-line, and it is seen here how the mixture fraction is a convenient tracer to locate the flame. In the same manner, the mixture fraction field should also be able to give information on the strength of the chemistry, as the gradients of reactants are directly linked to the mixture fraction distribution. The feeding rate of the reaction zone is characterized through the inverse of a time. Because it is done through diffusion, it must be obtained through a combination of mixture fraction gradient and diffusion coefficient (dimensional analysis): $\chi_s = \frac{(\rho D)_s}{\rho_s} ||\vec \nabla Z||^2_s \qquad (\rho D)_s = \left ( \frac{\lambda}{C_p} \right )_s$ where the subscript s refers to quantities taken effectively where the reacting sheet is supposed to be, close to stoichiometry. This deduction of the scalar dissipation rate, scaling the feeding rate of the flame, is obtained here through physical arguments and is derived from the equations below in a more mathematical manner. Note that the transport coefficient for the mixture fraction is identified with the one for temperature. This notation is usually used in the literature to emphasize that the rate of temperature diffusion (which is commensurate with the rate of species diffusion) is the retained parameter (as introduced in the following approach highlighting the role of the scalar dissipation rate). Because combustion is highly temperature-dependent, T is certainly the scalar to which attention must be paid. The temperature equation in the low-Mach number regime (Sec. Low-Mach Number Equations) is written below in steady state: $\rho\vec U \cdot \vec\nabla T = \nabla\cdot \frac{\lambda}{C_p}\vec\nabla T + \frac{Q}{C_p}\nu_F \bar M_F \dot\omega$ In order to make this equation easily tractable, the Howarth-Dorodnitzyn transform and the Chapman approximation are applied. In the Chapman approximation, the thermal dependence of $\lambda/C_p$ is approximated as $\rho^{-1}$. The Howarth-Dorodnitzyn transform introduces $\rho$ into the space coordinate system: $\vec\nabla \rightarrow \rho\vec\nabla \quad ; \quad \nabla\cdot\rightarrow \rho\nabla\cdot$. The effect of both of these mathematical operations is to 'digest' the thermal variation of quantities such as density or transport coefficients. Hence, the temperature equation comes in a simpler mathematical shape: $\frac{\rho}{\rho_s}\vec U \cdot \vec\nabla T = \left ( \frac{\lambda}{C_p}\right )_s \nabla\cdot \vec\nabla T + \frac{Q}{\rho\rho_s C_p}\nu_F \bar M_F \dot\omega$ Here the references are taken in the flame, i.e. close to the stoichiometric line (s subscript). In a non-premixed system, strictly speaking, $\vec U$ is not really relevant, as the flame is fully controlled by the diffusion process. Notwithstanding, in practice, non-premixed flames must be stabilized by creating a strain in the direction of diffusion. This is the reason why the velocity is left in the equation.
Because a diffusion flame is fully described by the mixture fraction field, a change of coordinates can be applied: $\left ( \begin{array}{c} x \\ y \end{array}\right ) \longrightarrow \left ( \begin{array}{c} x \\ Z \end{array}\right )$ where x is the coordinate tangential to the iso-$Z_s$ line (hence to the flame, in a first approximation) and y is perpendicular to it. The Jacobian of the transform is given as: $\left [ \begin{array}{cc} \frac{\partial x}{\partial x} & \frac{\partial Z}{\partial x} \\ \frac{\partial x}{\partial y} & \frac{\partial Z}{\partial y} \end{array}\right ] = \left [ \begin{array}{cc} 1 & 0 \\ 0 & l_d^{-1} \end{array}\right ]$ Note that the diffusive layer of thickness $l_d$ is defined as the region of transition between fuel and oxidizer and is thus given by the gradient of Z along the y direction. This transform is applied to the vectorial operators: $\nabla\cdot = \nabla_x\cdot+\nabla_y\cdot=\nabla_x\cdot+\nabla_y\cdot Z\nabla_Z\cdot$ $\vec\nabla = \vec\nabla_x+\vec\nabla_y=\vec\nabla_x+\vec\nabla_y Z\nabla_Z\cdot$ With this transform, the above temperature equation becomes: $\frac{\rho}{\rho_s}\vec U \cdot (\vec\nabla_x T + \vec\nabla Z \nabla_Z\cdot T) = \left (\frac{\lambda}{C_p}\right )_s \left ( \nabla_x\cdot \vec\nabla_x T + \nabla_y\cdot Z \nabla_Z\cdot \vec\nabla_y Z \nabla_Z\cdot T + \nabla_y\cdot Z \nabla_Z\cdot \vec\nabla_x T + \nabla_x\cdot \vec\nabla_y Z \nabla_Z\cdot T \right ) + \frac{Q}{\rho\rho_s C_p}\nu_F \bar M_F \dot\omega$ As mentioned above, the velocity and the variation along the direction tangential to the main flame structure, x, are not supposed to play a major role. By emphasizing the role of the gradient of Z along y as the key parameter defining the configuration, the following equation is obtained: $0 = \left (\frac{\lambda}{C_p}\right )_s ||\vec\nabla_y Z||^2 \Delta_Z T + \frac{Q}{\rho\rho_s C_p}\nu_F \bar M_F \dot\omega$ This equation (sometimes named the flamelet equation) serves as the basic framework to study the structure of diffusion flames. It highlights the role of the dissipation rate with respect to the strength of the source term and shows that the dissipation rate calibrates the combustion intensity. To describe the structure of the diffusion flame, the reduced mixture fraction is defined: $\xi = \frac{Z-Z_s}{Z_s(1-Z_s)\varepsilon}$ The utility of the reduced mixture fraction is to focus on the reaction zone. This reaction zone is supposed to be located on the stoichiometric line (this is why the reduced mixture fraction is centred on $Z_s$) and to be very thin (hence the introduction of the magnifying factor $\varepsilon$).

## The Premixed Regime

Figure: Sketch of a premixed flame

In contrast to the non-premixed regime above, the reactants are here well mixed before entering the combustion chamber. Chemical reaction can occur everywhere, and this flame can propagate upstream into the feeding system as a subsonic (deflagration regime) chemical wave. This presents lots of safety issues. Some measures prevent them: (i) the mixture is made too rich (a lot of fuel compared to oxidizer) or too lean (too much oxidizer) such that the flame is close to its flammability limits (it cannot easily propagate); (ii) the feeding system and the regions where the flame is not wanted are designed such that they impose strong heat losses on the flame in order to quench it.
For a given thermodynamical state of the mixture (composition, temperature, pressure), the flame has its own dynamics (speed, heat release, etc.) over which there is little control: the wave exchanges mass and energy through the diffusion process in the fresh gases. On the other hand, these well-defined quantities are convenient to describe the flame characteristics. The mechanism of spontaneous propagation towards the fresh gas, through the thermal transfer from the combustion zone to the immediately adjacent slice of fresh gas such that the ignition temperature is eventually reached there, was highlighted as early as the end of the 19th century by Mallard and Le Chatelier. The reason the chemical wave is contained in a narrow region of reaction propagating upstream is a consequence of the discussion on the non-linearity of combustion with temperature in Sec. Fundamental Aspects. It is of interest to compare the orders of magnitude of the temperature-dependent term $\exp{(-\beta(1-\theta)/(1-\alpha(1-\theta)))}$ of the reaction source upstream in the fresh gas ($\theta\rightarrow 0$) and in the reaction zone close to the equilibrium temperature ($\theta\rightarrow 1$) for the set of representative values $\beta = 10$ and $\alpha=0.9$. It is found that the reaction is about $10^{43}$ times slower in the fresh gas than close to the burned gas. It is known that the chemical time scale is about 0.1 ms in the reaction zone of a typical flame; the typical reaction time in the fresh gas in normal conditions is then about $10^{39}$ s. This is to be compared with the order of magnitude of the estimated age of the Universe: $10^{17}$ s. Non-negligible chemistry is only confined in a thin reaction zone stuck to the hot burned gas at equilibrium temperature. In this zone, the Damköhler number is high, in contrast to the fresh mixture. It is natural and convenient to consider that the reaction rate is strictly zero everywhere except in this small reaction zone (one recovers the Dirac-like shape of the reaction profile, provided that one sees the upstream flow as a region of increasing temperature towards the combustion zone and the downstream flow as fully at equilibrium). As the premixed flame is a reaction wave propagating from burned to fresh gases, the basic parameter is known to be the progress variable. In the fresh gas, the progress variable is conventionally set to zero. In the burned gas, it equals unity. Across the flame, the intermediate values describe the progress of the reaction in turning the fresh gas penetrating the flame sheet into burned gas. A progress variable can be defined with the help of any quantity, like temperature or reactant mass fraction, provided it is bounded by a single value in the burned gas and another one in the fresh gas. The progress variable is usually named c; in usual notations: $c=\frac{T-T_f}{T_b-T_f}$ It is seen that c is a normalization of a scalar quantity. As mentioned above, the scalar transport equations are assumed linear such that the transport equation for c can be obtained directly. Actually, the transport equation for T (Sec.
Transport Equations) is linear if a constant heat capacity is further assumed (combustion of hydrocarbons in air implies a large excess of nitrogen, whose heat capacity varies only slightly), and the progress variable equation is directly obtained (here for a deficit of fuel, i.e. lean combustion): $\frac{D\rho c}{Dt}=\nabla\cdot\rho D\vec\nabla c + \frac{\nu_F\bar M_F}{Y_{F,u}}\dot\omega \qquad \rho D = \frac{\lambda}{C_p}$ The fact that a deficit or excess of fuel has been discussed above leads to the introduction of another quantity: the equivalence ratio. The equivalence ratio, usually noted $\Phi$, is the ratio of two ratios. The first one is the ratio of the mass of fuel to the mass of oxidizer in the mixture. The second one is the same ratio for a mixture at stoichiometry. Hence, when the equivalence ratio equals unity, the mixture is at stoichiometry. If it is greater than unity, the mixture is named rich, as there is an excess of fuel. In contrast, when it is smaller than unity, the mixture is named lean. The equivalence ratio presented here for premixed flames should not be confused with, although it is related to, the equivalence ratio introduced earlier regarding the non-premixed regime. Basically, the equivalence ratio as defined for non-premixed flames gives the equivalence ratio of a premixed mixture with the same mass of fuel and oxidizer. Moreover, the equivalence ratio as defined for a premixed mixture can be obtained from the mixture fraction (it is thus the local equivalence ratio at a point in the non-homogeneous mixture described by the mixture fraction). From the definitions given above: $\Phi=\frac{Z}{1-Z}\frac{1-Z_s}{Z_s}$

#### Premixed Flame Péclet Number

Earlier in this section, it has been said that a premixed flame possesses its own dynamics, as a freely propagating surface, and thus has characteristic quantities. For this reason, a Péclet number may be defined, based on these quantities. The Péclet number has the same structure as the Reynolds number, but the dynamical viscosity is replaced by the ratio of the thermal conductivity and the heat capacity of the mixture. The thickness $\delta_L$ of a premixed flame is essentially thermal. It means that it corresponds to the distance of the temperature rise between fresh and burned gases. This thickness is below a millimetre for conventional flames. The width of the reaction zone inside this flame is even smaller, by about one order of magnitude. This reaction zone is stuck to the hot side of the flame due to the high thermal dependency of the combustion reactions, as seen above. Hence, the flame region is essentially governed by a convection-diffusion process, the source term being negligible in most of it. It is convenient to write the progress variable transport equation in a steady-state framework. The quantities at flame temperature ($_f$) are used to non-dimensionalize the equation: $\overbrace{(\rho S_L)}^{\dot M}\frac{\vec S_L}{S_L}\vec\nabla c = (\rho D)_f \nabla\cdot (\rho D)^* \vec\nabla c$ Note that the source term is neglected, consistently with what has been said above. This convection-diffusion equation brings out a first approximation of a flame Péclet number: $Pe_f = \frac{\dot M \delta_L}{(\rho D)_f} \approx 1$ From the Péclet number, it is possible to obtain an expression for the flame velocity (remembering that $\delta_L/S_{L,f} \approx \tau_c$, vid. inf. Sec.
Three Turbulent-Flame Interaction Regimes): $S_{L,f}^2\approx \frac{(\rho D)_f}{\rho_f \tau_c}$ For typical hydrocarbon flames, the speed is some tens of centimetres per second and the diffusivity is some $10^{-5}$ square metres per second. The chemical time in the reaction zone, of about one tenth of a millisecond, is recovered.

#### Details of the Premixed Unstrained Planar Flame

A plane combustion wave propagating into a homogeneous fresh mixture is the reference case to describe the premixed regime. At constant speed, it is convenient to see the flame at rest with the upstream mixture flowing towards it. This is actually the way propagating flames are usually stabilized. In this frame of description, the physics is 1-D and steady, with a uniform mass flow across the system (the flame is said to be unstrained). Two types of equations are thus sufficient to describe the problem, the temperature transport equation and the species transport equations, as in Sec. Transport Equations. The transport coefficients are chosen as equal: $\rho D_i = \lambda / C_p$ (unity Lewis numbers). Suppose the 1-D domain is described by a conventional (Ox) axis with a flame propagating towards negative x (this is the conventional usage); the boundary conditions are:

• in the frozen mixture:
• $Y_i \rightarrow Y_{i,u} \qquad ; \qquad x\rightarrow-\infty$
• $T \rightarrow T_u \qquad ; \qquad x\rightarrow-\infty$
• in the burned gas region, supposed at equilibrium:
• $Y_i \rightarrow Y_{i,b} \qquad ; \qquad x\rightarrow+\infty$
• $T \rightarrow T_b \qquad ; \qquad x\rightarrow+\infty$

$Y_{i,b}$ and $T_b$ are obtained from Sec. Conservation Laws. The quantities that have been mentioned just above (scalar and temperature profiles, mass flow rate through the system) are the solution to be sought. According to the discussions above, the temperature transport equation in its fully normalized form may be written as (lean/stoichiometric case): $\dot M \frac{\partial \theta}{\partial x} = \frac{\partial \ }{\partial x}\frac{\lambda}{Cp} \frac{\partial \theta}{\partial x} + \frac{\nu_F \bar M_F B}{Y_{F,s}} \prod_{i=O,F}(Y_{i,u}^*-\theta)^{n_i} \exp\left(-\beta\frac{1-\theta}{1-\alpha(1-\theta)}\right)$ This equation is further simplified by the variable change $d\xi=\dot M/(\lambda/Cp)\,dx$: $\frac{\partial \theta}{\partial \xi}=\frac{\partial^2 \theta}{\partial \xi^2} + \overbrace{\frac{\lambda/Cp}{\dot M^2}\frac{\nu_F \bar M_F B}{Y_{F,s}}}^{\Lambda} \prod_{i=O,F}(Y_{i,u}^*-\theta)^{n_i} \exp\left(-\beta\frac{1-\theta}{1-\alpha(1-\theta)}\right)$

## The Partially-Premixed Regime

Figure: Ideal sketch of a partially-premixed flame

This regime is somewhat less academic and has been recognized over the last two decades. It is acknowledged as a hybrid of the premixed and the non-premixed regimes, but the degree of interaction of these two modes of combustion needed to accurately describe a partially-premixed flame is still not well understood. It can be simply pictured by a lifted diffusion flame. Let us consider fuel issuing from a nozzle into the air. If the exit velocity is large enough, for some fuels, the flame lifts off the rim of the nozzle. It means that below the flame base, fuel and oxidizer have room to premix. Hence, the flame base propagates into a premixed mixture. However, it cannot be reduced to a premixed flame (although it is often simplified as such): (i) the mixing is not perfect, and the different parts of the flame front constituting the flame base burn in mixtures of different thermodynamical states.
This provides those parts with different deflagration capabilities, such that the flame base has a complex shape. Indeed, it is convex, naturally led by the part burning at stoichiometry, unless 'exotic' feeding temperatures are used. (ii) Because the mixture is not homogeneous, transfer of species and temperature driven by diffusion occurs in a direction perpendicular to the propagation of the flame base. Because the flame front is not flat, those transfers act as a connection vehicle across the different parts of the leading front. (iii) The unburned reactants left downstream by the sections of the leading front not burning at stoichiometry diffuse towards each other to form a diffusion flame as described above. The connection of the leading front with the trailing diffusion flame has been shown to be complicated and to be the seat of transfers of species and temperature. These last two items are the state-of-the-art difficulties in understanding those flames and do not appear in the models, although it has been demonstrated that they have a major impact and are certainly a fundamental characteristic of partially-premixed flames. The partially-premixed flame is usually described using c and Z as introduced earlier. Because the framework is essentially non-premixed, the mixture fraction is primarily used to describe the flame. Regarding the head of the flame, where partial premixing has an impact, each part of the front is described with a local progress variable. The need for defining a local progress variable comes from the fact that each section of the partially-premixed front has a different equivalence ratio, leading to a different definition of c: $c=\frac{T-T_u}{T_b(Z)-T_u}$

# Three Turbulent-Flame Interaction Regimes

It may appear odd to try to describe here what is the very reason for combustion modelling research: the interaction of turbulence with chemistry. However, one of the first steps in building knowledge in turbulent combustion was the qualitative exploration of what might be the dynamics of a flame in a turbulent environment. This led to what is now known as combustion diagrams. As explained above, the premixed regime lends itself the most easily to such an approach, as it exhibits natural intrinsic quantities which are not as objectively identifiable in the other combustion regimes. Note that these quantities may depend on the geometry of the flame: for instance, turbulence can bend a flame sheet, leading to a change in its dynamics compared to the flat flame propagating in a medium at rest. In this section, the turbulence-flame interaction modes will be described for a premixed flame. Only remarks will be added regarding the non-premixed and partially-premixed regimes. A key quantity to assess the interaction between a premixed flame sheet and the turbulence is the Karlovitz number Ka. It compares the characteristic time of flame displacement with the characteristic time of the smallest structures (which are also the fastest) of the turbulence. $Ka= \frac{\tau_c}{\tau_k}$ $\tau_c$ is the chemical time of the flame. To estimate it, it is necessary to come back to the above progress variable transport equation in a steady-state framework. $\overbrace{(\rho S_L)}^{\dot M}\frac{\vec S_L}{S_L}\vec\nabla c = (\rho D)_f\nabla\cdot (\rho D)^* \vec\nabla c + \dot\omega_c$ The premixed wave propagates at a speed $S_L$ because it is fed by reactants diffusing inside the combustion zone, which are preheated because temperature diffuses in the reverse direction.
The speed at which the flame progresses is thus related to the rate of species diffusion into the reaction zone which are then consumed. As the premixed flame is a free propagating wave whose speed of propagation is only limited by the chemical strength, the characteristic chemical time is based only on the diffusion and the mass flow rate experienced by the flame: $\tau_c = \frac{\rho (\rho D)_f}{\dot M^2}$ The smallest eddies are the ones being dissipated by the viscous forces. Their characteristic time is estimated thanks to a combination of the viscosity and the flux of turbulent energy to be dissipated (also called turbulent dissipation $\varepsilon=u'^3/l_t$): $\tau_k = \sqrt{\frac{l_t\nu}{u'^3}}$ Thanks to those definitions of chemical and small structure times, it is possible to give another definition of the Karlovitz number: $Ka=\left (\frac{\delta_L}{l_k} \right)^2$ which is the square of the ratio between the premixed flame thickness and the small structure scale: Ka actually compares scales. To arrive to this latter result, the three following assumptions must be used: (i) the flame thickness is obtained thanks to the premixed flame Péclet number (vid. sup.); (ii) the turbulence small structure (Kolmogorov eddies) scale is given by: $l_k=(\nu^3/\varepsilon)^{1/4}$ following the same dimensional argument as for the estimation of its time; and (iii) scalar diffusion scales with viscosity. #### Remark Regarding the Diffusion Flame From what has been presented above, a diffusion flame does not have characteristic scales. Setting a turbulence combustion regime classification for non-premixed flames has still not been answered by research. Some laws of behaviour will only be drawn around the scalar dissipation rate which is the parameter of integral importance for a diffusion flame. Indeed, the dynamics of a diffusion flame is determined by the strain rate imposed by the turbulence. As for the premixed flames, the shortest eddies (Kolmogorov) are the ones having the largest impact. The diffusive layer is thus given by the size of the Kolmogrov eddies: $l_d\approx l_k$ and the typical diffusion time scale (feeding rate of the reaction zone) is given by the characteristic time of the Kolmogorov eddies: $\tau_k^{-1}\approx \chi_s$ as the Reynolds number of the Kolmogorov structures is unity. Here, $\chi_s$ is the sample-averaging of $\chi$ based on (conditioned) stoichiometric conditions, where the flame is expected to be. ## The Wrinkled Regime Wrinkled flamelet regime This regime is also called the flamelet regime. Basically, it assumes that the flame structure is not affected by turbulence. The flame sheet is convoluted and wrinkled by eddies but none of them is small enough to enter it. Locally magnifying, the laminar flame structure is maintained. This regime exists for a Karlovitz number below unity (vid. sup.), i.e. chemical time smaller than the small structure time or flame thickness smaller than small structure scale. Notwithstanding, the laminar flame dynamics can be disrupted for $u'>S_L$. In that case, although the flame structure is not altered by the small structures, it can be convected by large structures such that areas of different locations in the front interact. It shows that, even with a small Karlovitz number, the turbulence effect is not always weak. ## The Corrugated Regime Corrugated flamelet regime The formal definition of a flame is the region of temperature rise. 
However, the volume where the reaction takes place is about one order of magnitude smaller, embedded inside the temperature rise region and close to its high temperature end. Hence, there exist some levels of turbulence creating eddies able to enter the flame zone but still large enough not to affect the internal reaction sheet. In other words, the flame thermal region is thickened by turbulence but the reaction zone is still in the wrinkled regime. This situation is called the Corrugated Regime. Due to the structure of the Karlovitz number, once written in terms of length scales (vid. sup.), this situation arises for an increase of the Karlovitz number by two orders of magnitude compared to the value for the wrinkled regime. Hence, in the range $1 < Ka < 100$, the laminar structure of the reaction zone is still preserved but not the one of the preheat zone. ## The Thickened Regime Thickened regime In this last case, turbulence is intense enough to generate eddies able to affect the structure of the reaction zone as well. In practice, it is expected that those eddies are in the tail of the energy spectrum such that their lifetime is very short. Their impact on the reaction zone is thus limited. Obviously, Ka > 100. A topological description is of little relevance here and a well-stirred reactor model fits better. ## Reaction mechanisms Combustion is mainly a chemical process. Although we can, to some extent, describe a flame without any chemistry information, modelling of the flame propagation requires the knowledge of speeds of reactions, product concentrations, temperature, and other parameters. Therefore fundamental information about reaction kinetics is essential for any combustion model. A fuel-oxidizer mixture will generally combust if the reaction is fast enough to prevail until all of the mixture is burned into products. If the reaction is too slow, the flame will extinguish. If too fast, explosion or even detonation will occur. The reaction rate of a typical combustion reaction is influenced mainly by the concentration of the reactants, temperature, and pressure. A stoichiometric equation for an arbitrary reaction can be written as: $\sum_{j=1}^{n}\nu'_j (M_j) = \sum_{j=1}^{n}\nu''_j (M_j),$ where $\nu_j$ denotes the stoichiometric coefficient and $M_j$ stands for an arbitrary species. A single prime refers to the reactants while a double prime refers to the products of the reaction. The reaction rate, expressing the rate of disappearance of reactant i, is defined as: $RR_i = k \, \prod_{j=1}^{n}(M_j)^{\nu'_j},$ in which k is the specific reaction rate constant. Arrhenius found that this constant is a function of temperature only and is defined as: $k= A T^{\beta} \, \exp \left( \frac{-E}{RT}\right)$ where A is the pre-exponential factor, E is the activation energy, and $\beta$ is a temperature exponent. The constants vary from one reaction to another and can be found in the literature. Reaction mechanisms can be deduced from experiments (for every resolved reaction); they can also be constructed numerically by the automatic generation method (see [Griffiths (1994)] for a review on reaction mechanisms). For simple hydrocarbons, tens to hundreds of reactions are involved. By analysis and systematic reduction of reaction mechanisms, global reactions (from one to five step reactions) can be found (see [Westbrook (1984)]).
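As a rough illustration of how strongly the specific rate constant depends on temperature, the short sketch below evaluates $k = A T^{\beta} \exp(-E/RT)$ for a few temperatures. The numerical values of A, $\beta$ and E are placeholders chosen only for illustration; they are not taken from any particular reaction in the literature.

```python
import math

# Illustrative (not from any specific mechanism) Arrhenius parameters:
#   k = A * T**beta * exp(-E / (R * T))
A = 1.0e11      # pre-exponential factor, units depend on reaction order (assumed)
beta = 0.0      # temperature exponent (assumed)
E = 1.5e5       # activation energy, J/mol (assumed)
R = 8.314       # universal gas constant, J/(mol K)

def arrhenius(T):
    """Specific reaction rate constant at temperature T (K)."""
    return A * T**beta * math.exp(-E / (R * T))

for T in (1000.0, 1500.0, 2000.0):
    print(f"T = {T:6.0f} K   k = {arrhenius(T):.3e}")
```

Even with a modest activation energy, the rate constant changes by orders of magnitude over this temperature range, which is why the reaction zone is confined to the hottest part of the flame.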
## Governing equations for chemically reacting flows Together with the usual Navier-Stokes equations for compressible flow (See Governing equations), additional equations are needed in reacting flows. The transport equation for the mass fraction $Y_k$ of the k-th species is $\frac{\partial}{\partial t} \left( \rho Y_k \right) + \frac{\partial}{\partial x_j} \left( \rho u_j Y_k\right) = \frac{\partial}{\partial x_j} \left( \rho D_k \frac{\partial Y_k}{\partial x_j}\right)+ \dot \omega_k$ where Fick's law is assumed for scalar diffusion with $D_k$ denoting the species diffusion coefficient, and $\dot \omega_k$ denoting the species reaction rate. A non-reactive (passive) scalar (like the mixture fraction $Z$) is governed by the following transport equation $\frac{\partial}{\partial t} \left( \rho Z \right) + \frac{\partial}{\partial x_j} \left( \rho u_j Z \right) = \frac{\partial}{\partial x_j} \left( \rho D \frac{\partial Z}{\partial x_j}\right)$ where $D$ is the diffusion coefficient of the passive scalar. ### RANS equations In turbulent flows, Favre averaging is often used to reduce the scales (see Reynolds averaging) and the mass fraction transport equation is transformed to $\frac{\partial \overline{\rho} \widetilde{Y}_k }{\partial t} + \frac{\partial \overline{\rho} \widetilde{u}_j \widetilde{Y}_k}{\partial x_j}= \frac{\partial} {\partial x_j} \left( \overline{\rho D_k \frac{\partial Y_k} {\partial x_j} } - \overline{\rho} \widetilde{u''_j Y''_k } \right) + \overline{\dot \omega_k}$ where the turbulent fluxes $\widetilde{u''_j Y''_k}$ and reaction terms $\overline{\dot \omega_k}$ need to be closed. The passive scalar turbulent transport equation is $\frac{\partial \overline{\rho} \widetilde{Z} }{\partial t} + \frac{\partial \overline{\rho} \widetilde{u}_j \widetilde{Z} }{\partial x_j}= \frac{\partial} {\partial x_j} \left( \overline{\rho D \frac{\partial Z} {\partial x_j} } - \overline{\rho} \widetilde{u''_j Z'' } \right)$ where $\widetilde{u''_j Z''}$ needs modelling. A common practice is to model the turbulent fluxes using the gradient diffusion hypothesis. For example, in the equation above the flux $\widetilde{u''_j Z''}$ is modelled as $\widetilde{u''_j Z''} = -D_t \frac{\partial \tilde Z}{\partial x_j}$ where $D_t$ is the turbulent diffusivity. Since $D_t \gg D$, the first term inside the parentheses on the right-hand side of the mixture fraction transport equation is often neglected (Peters (2000)). This assumption is also used below. In addition to the mean passive scalar equation, an equation for the Favre variance $\widetilde{Z''^2}$ is often employed $\frac{\partial \overline{\rho} \widetilde{Z''^2} }{\partial t} + \frac{\partial \overline{\rho} \widetilde{u}_j \widetilde{Z''^2} }{\partial x_j}= -\frac{\partial}{\partial x_j} \left( \overline{\rho} \widetilde{u''_j Z''^2} \right) - 2 \overline{\rho} \widetilde{u''_j Z'' } \frac{\partial \widetilde{Z}}{\partial x_j} - \overline{\rho} \widetilde{\chi}$ where $\widetilde{\chi}$ is the mean Scalar dissipation rate defined as $\widetilde{\chi} = 2 D \widetilde{\left| \frac{\partial Z''}{\partial x_j} \right|^2 }$ This term and the variance diffusion fluxes need to be modelled. ### LES equations The Large eddy simulation (LES) approach for reactive flows introduces equations for the filtered species mass fractions within the compressible flow field.
Similar to the #RANS equations, but using Favre filtering instead of Favre averaging, the filtered mass fraction transport equation is $\frac{\partial \overline{\rho} \widetilde{Y}_k }{\partial t} + \frac{\partial \overline{\rho} \widetilde{u}_j \widetilde{Y}_k}{\partial x_j}= \frac{\partial} {\partial x_j} \left( \overline{\rho D_k \frac{\partial Y_k} {\partial x_j} } - J_j \right) + \overline{\dot \omega_k}$ where $J_j$ is the transport of subgrid fluctuations of mass fraction $J_j = \widetilde{u_jY_k} - \widetilde{u}_j \widetilde{Y}_k$ and has to be modelled. Fluctuations of the diffusion coefficients are often ignored and their contributions are assumed to be much smaller than the apparent turbulent diffusion due to transport of subgrid fluctuations. The first term on the right hand side is then $\frac{\partial} {\partial x_j} \left( \overline{ \rho D_k \frac{\partial Y_k} {\partial x_j} } \right) \approx \frac{\partial} {\partial x_j} \left( \overline{\rho} D_k \frac{\partial \widetilde{Y}_k} {\partial x_j} \right)$ ## Infinitely fast chemistry All combustion models can be divided into two main groups according to the assumptions on the reaction kinetics. We can either assume the reactions to be infinitely fast - compared to e.g. the mixing of the species - or comparable to the time scale of the mixing process. The simple approach is to assume infinitely fast chemistry. Historically, it is the older approach, and it is still in wide use today. It is simpler to solve than the #Finite rate chemistry models, at the expense of introducing errors into the solution. ### Premixed combustion Premixed flames occur when the fuel and oxidiser are homogeneously mixed prior to ignition. These flames are not limited only to gaseous fuels, but also apply to pre-vaporised fuels. A typical example of a premixed laminar flame is the Bunsen burner, where the air enters the fuel stream and the mixture burns in the wake of the riser tube walls, forming a stable flame. Another example of a premixed system is the solid rocket motor, where oxidizer and fuel are properly mixed in a gum-like matrix that is uniformly distributed on the periphery of the chamber. Premixed flames have many advantages in terms of control of temperature, products and pollutant concentrations. However, they introduce some dangers, like autoignition (in the supply system). #### Eddy Break-Up model The Eddy Break-Up model is the typical example of a mixed-is-burnt combustion model. It is based on the work of Magnussen, Hjertager, and Spalding and can be found in all commercial CFD packages. The model assumes that the reactions are completed at the moment of mixing, so that the reaction rate is completely controlled by turbulent mixing. Combustion is then described by a single step global chemical reaction $F + \nu_s O \rightarrow (1+\nu_s) P$ in which F stands for fuel, O for oxidiser, and P for products of the reaction. Alternatively we can have a multistep scheme, where each reaction has its own mean reaction rate. The mean reaction rate is given by $\bar{\dot\omega}_F=A_{EB} \frac{\epsilon}{k} \min\left[\bar{C}_F,\frac{\bar{C}_O}{\nu_s}, B_{EB}\frac{\bar{C}_P}{(1+\nu_s)}\right]$ where $\bar{C}_F$, $\bar{C}_O$ and $\bar{C}_P$ denote the mean concentrations of fuel, oxidiser, and products respectively. $A_{EB}$ and $B_{EB}$ are model constants with typical values of 0.5 and 4.0 respectively. The values of these constants are fitted according to experimental results and they are suitable for most cases of general interest.
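As a rough illustration of how the Eddy Break-Up rate is evaluated in a single cell, the sketch below implements the rate expression above. The turbulence quantities and mean concentrations are hypothetical values chosen only to show the mechanics of the formula; only the constants 0.5 and 4.0 come from the text.

```python
def ebu_rate(c_fuel, c_ox, c_prod, k, eps, nu_s, A_EB=0.5, B_EB=4.0):
    """Eddy Break-Up mean fuel reaction rate (mixing-controlled).

    c_*   : mean concentrations of fuel, oxidiser and products
    k,eps : turbulent kinetic energy and its dissipation rate
    nu_s  : stoichiometric oxidiser-to-fuel ratio
    """
    # the rate is limited by whichever of fuel, oxidiser or (scaled) products is scarcest
    limiting = min(c_fuel, c_ox / nu_s, B_EB * c_prod / (1.0 + nu_s))
    return A_EB * (eps / k) * limiting

# Hypothetical local cell values, purely for illustration
print(ebu_rate(c_fuel=0.05, c_ox=0.20, c_prod=0.10, k=1.5, eps=30.0, nu_s=4.0))
```

Note how the rate scales directly with the turbulent mixing frequency $\epsilon/k$: the chemistry itself never enters, which is both the model's strength and its main limitation.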
It is important to note that these constants are only based on experimental fitting and they need not be suitable for all situations. Care must be taken especially in highly strained regions, where the ratio of $\epsilon$ to $k$ is large (flame-holder wakes, walls ...). In these regions, a spurious positive reaction rate occurs and an artificial flame can be observed. CFD codes usually have some remedies to overcome this problem. This model largely over-predicts temperatures and the concentrations of intermediate species like CO. However, the Eddy Break-Up model enjoys popularity for its simplicity, steady convergence, and ease of implementation. ### Non-premixed combustion Non-premixed combustion is a special class of combustion where fuel and oxidizer enter separately into the combustion chamber. The diffusion and mixing of the two streams must bring the reactants together for the reaction to occur. Mixing becomes the key characteristic for diffusion flames. Diffusion burners are easier and safer to operate than premixed burners. However their efficiency is reduced compared to premixed burners. One of the major theoretical tools in non-premixed combustion is the passive scalar mixture fraction $Z$, which is the backbone of most of the numerical methods in non-premixed combustion. #### Conserved scalar equilibrium models The reactive problem is split into two parts. First, the problem of mixing, which consists of finding the location of the flame surface; this is a non-reactive problem concerning the propagation of a passive scalar. Second, the flame structure problem, which deals with the distribution of the reactive species inside the flamelet. To obtain the distribution inside the flame front we assume it is locally one-dimensional and depends only on time and the scalar coordinate. We first make use of the following chain rules $\frac{\partial Y_k}{\partial t} = \frac{\partial Z}{\partial t}\frac{\partial Y_k}{\partial Z}$ $\frac{\partial Y_k}{\partial x_j} = \frac{\partial Z}{\partial x_j}\frac{\partial Y_k}{\partial Z}$ and the transformation $\left.\frac{\partial }{\partial t}\right|_{x}= \left.\frac{\partial }{\partial t}\right|_{Z} + \frac{\partial Z}{\partial t} \frac{\partial }{\partial Z}$ upon substitution into the species transport equation (see #Governing Equations for Reacting Flows), we obtain $\rho \frac{\partial Y_k}{\partial t} + Y_k \left[ \frac{\partial \rho}{\partial t} + \frac{\partial \rho u_j}{\partial x_j} \right] + \frac{\partial Y_k}{\partial Z} \left[ \rho \frac{\partial Z}{\partial t} + \rho u_j \frac{\partial Z}{\partial x_j} - \frac{\partial}{\partial x_j}\left( \rho D \frac{\partial Z}{\partial x_j} \right) \right] = \rho D \left( \frac{\partial Z}{\partial x_j} \frac{\partial Z}{\partial x_j} \right) \frac{\partial^2 Y_k}{\partial Z^2} + \dot \omega_k$ The second and third terms on the LHS vanish due to continuity and the mixture fraction transport equation. The equation thus reduces to $\frac{\partial Y_k}{\partial t} = \frac{\chi}{2} \frac{\partial ^2 Y_k}{\partial Z^2} + \dot \omega_k$ where $\chi = 2 D \left( \frac{\partial Z}{\partial x_j} \frac{\partial Z}{\partial x_j} \right)$ is called the scalar dissipation, which controls the mixing and provides the interaction between the flow and the chemistry. If the time dependence of the flame structure is dropped (even though the field $Z$ still depends on it), the balance becomes $\dot \omega_k= -\frac{\chi}{2} \frac{\partial ^2 Y_k}{\partial Z^2}$ If the reaction is assumed to be infinitely fast, the resulting flame distribution is in equilibrium and $\dot \omega_k= 0$.
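A minimal numerical sketch of the two quantities just introduced is given below: the scalar dissipation $\chi = 2D(\partial Z/\partial x_j)^2$ evaluated on a one-dimensional mixture fraction field, and the source-term balance $\dot\omega_k = -\tfrac{\chi}{2}\,\partial^2 Y_k/\partial Z^2$ evaluated for a flamelet profile. Both the $Z(x)$ field and the $Y_k(Z)$ profile are made up purely for illustration; no real chemistry or flow solution is implied.

```python
import numpy as np

# Hypothetical 1-D mixture fraction field across a mixing layer (illustration only)
x = np.linspace(0.0, 0.01, 101)                 # m
Z = 0.5 * (1.0 + np.tanh((x - 0.005) / 0.001))  # smooth transition from 0 to 1
D = 2.0e-5                                      # molecular diffusivity, m^2/s (assumed)

# Scalar dissipation chi = 2 D (dZ/dx)^2
dZdx = np.gradient(Z, x)
chi = 2.0 * D * dZdx**2
print("peak scalar dissipation:", chi.max(), "1/s")

# Balance omega_k = -(chi/2) d2Y_k/dZ2 for a made-up flamelet profile Y_k(Z)
Zs = np.linspace(0.0, 1.0, 101)
Yk = Zs * (1.0 - Zs)                            # placeholder profile, not real chemistry
d2Y = np.gradient(np.gradient(Yk, Zs), Zs)
omega = -0.5 * chi.max() * d2Y                  # using the peak dissipation as a sample value
print("sample source term at Z = 0.5:", omega[50])
```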
When the flame is in equilibrium, the flame configuration $Y_k(Z)$ is independent of strain. ##### Burke-Schumann flame structure The Burke-Schumann solution is valid for irreversible infinitely fast chemistry, with a reaction of the form $F + \nu_s O \rightarrow (1+\nu_s) P$ If the flame is in equilibrium, the reaction term is 0 and two possible solutions exist. One corresponds to pure mixing (no reaction), with a linear dependence of the species mass fractions on $Z$. Fuel mass fraction $Y_F=Y_F^0 Z$ Oxidizer mass fraction $Y_O=Y_O^0(1-Z)$ where $Y_F^0$ and $Y_O^0$ are the fuel and oxidizer mass fractions in the pure fuel and oxidizer streams respectively. The other solution is given by a discontinuous slope at the stoichiometric mixture fraction $Z_{st}$ and two linear profiles on either side of it (one on the rich side and one on the lean side). Both reactant concentrations must be 0 at stoichiometric conditions: the reactants become products infinitely fast. Fuel mass fraction $Y_F=Y_F^0 \frac{Z-Z_{st}}{1-Z_{st}}$ (rich side) and oxidizer mass fraction $Y_O=Y_O^0 \frac{Z_{st}-Z}{Z_{st}}$ (lean side). ## Finite rate chemistry ### Non-premixed combustion #### Flamelets based on conserved scalar Peters (2000) defines flamelets as "thin diffusion layers embedded in a turbulent non-reactive flow field". If the chemistry is fast enough, it is active only within a thin region where the conditions are at (or close to) stoichiometric: the "flame" surface. This thin region is assumed to be smaller than the Kolmogorov length scale and therefore the region is locally laminar. The flame surface is defined as an iso-surface of a certain scalar $Z$, the mixture fraction in #Non premixed combustion. The same equation used in #Conserved scalar equilibrium models is used here, but with a chemical source term different from 0 $\frac{\partial Y_k}{\partial t} = \frac{\chi}{2} \frac{\partial ^2 Y_k}{\partial Z^2} + \dot \omega_k.$ This approach is called the Stationary Laminar Flamelet Model (SLFM) and has the advantage that flamelet profiles $Y_k=f(Z,\chi)$ can be pre-computed and stored in a dataset or file which is called a "flamelet library", with all the required complex chemistry. For the generation of such libraries ready-to-use software is available, such as Softpredict's Combustion Simulation Laboratory COSILAB [1] with its relevant solver RUN1DL, which can be used for a variety of relevant geometries; see various publications that are available for download. Other software tools are available such as CHEMKIN [2] and CANTERA [3]. ##### Flamelets in turbulent combustion In turbulent flames the quantity of interest is $\widetilde{Y}_k$. In flamelets, the flame thickness is assumed to be much smaller than the Kolmogorov scale and obviously much smaller than the grid size. A distribution of the passive scalar within the cell is therefore needed. $\widetilde{Y}_k$ cannot be obtained directly from the flamelet library, $\widetilde{Y}_k \neq Y_F(\widetilde{Z},\widetilde{\chi})$, where $Y_F(Z,\chi)$ corresponds to the value obtained from the flamelet library. A generic solution can be expressed as $\widetilde{Y}_k= \int Y_F(Z,\chi) P(Z,\chi) dZ d\chi$ where $P(Z,\chi)$ is the joint Probability Density Function (PDF) of the mixture fraction and scalar dissipation, which accounts for the scalar distribution inside the cell and "a priori" depends on time and space.
The most simple assumption is to use a constant distribution of the scalar dissipation within the cell, and the above equation reduces to $\widetilde{Y}_k= \int Y_F(Z,\widetilde{\chi}) P(Z) dZ$ $P(Z)$ is the PDF of the mixture fraction scalar, and simple models (such as a Gaussian or a beta PDF) can be built depending only on two moments of the scalar, the mean and the variance, $\widetilde{Z},\widetilde{Z''^2}$. If the mixture fraction and scalar dissipation are considered independent variables, $P(Z,\chi)$ can be written as $P(Z) P(\chi)$. The PDF of the scalar dissipation is assumed to be log-normal with variance unity. $\widetilde{Y}_k= \int Y_F(Z,\chi) P(Z) P(\chi) dZ d\chi$ In the Large eddy simulation (LES) context (see #LES equations for reacting flow), the probability density function is replaced by a subgrid PDF $\widetilde{P}$. The same equations hold, replacing averaged values with filtered values. $\widetilde{Y}_k= \int Y_F(Z,\chi) \widetilde{P}(Z) \widetilde{P}(\chi) dZ d\chi$ The assumptions made regarding the shapes of the PDFs are still justified. In LES combustion the subgrid variance is smaller than its RANS counterpart (part of the large-scale fluctuations is resolved) and therefore the modelled PDFs are thinner. #### Intrinsic Low Dimensional Manifolds (ILDM) Detailed mechanisms describing ignition, flame propagation and pollutant formation typically involve several hundred species and elementary reactions, prohibiting their use in practical three-dimensional engine simulations. Conventionally reduced mechanisms often fail to predict minor radicals and pollutant precursors. The ILDM-method is an automatic reduction of a detailed mechanism, which assumes local equilibrium with respect to the fastest time scales identified by a local eigenvector analysis. In the reactive flow calculation, the species compositions are constrained to these manifolds. Conservation equations are solved for only a small number of reaction progress variables, thereby retaining the accuracy of detailed chemical mechanisms. This gives an effective way of coupling the details of complex chemistry with the time variations due to turbulence. The intrinsic low-dimensional manifold (ILDM) method (Maas and Pope 1992; Maas 1993) is a method for in-situ reduction of a detailed chemical mechanism based on a local time scale analysis. This method is based on the fact that different trajectories in state space start from a high-dimensional point and quickly relax to lower-dimensional manifolds due to the fast reactions. The movement along these lower-dimensional manifolds, however, is governed by the slow reactions. It exploits the variety of time scales to systematically reduce the detailed mechanism. For a detailed chemical mechanism with N species, N different time scales govern the process. An assumption that all the time scales are relaxed results in assuming complete equilibrium, where the only variables required to describe the system are the mixture fraction, the temperature and the pressure. This results in a zero-dimensional manifold. An assumption that all but the slowest 'n' time scales are relaxed results in an 'n'-dimensional manifold, which requires the additional specification of 'n' parameters (called progress variables). In the ILDM method, the fast chemical reactions do not need to be identified a priori. An eigenvalue analysis of the detailed chemical mechanism is carried out which identifies the fast processes in dynamic equilibrium with the slow processes.
The computation of ILDM points can be expensive, and hence an in-situ tabulation procedure is used, which enables the calculation of only those points that are needed during the CFD calculation. U. Maas, S.B. Pope. Simplifying chemical kinetics: Intrinsic low-dimensional manifolds in composition space. Comb. Flame 88, 239, 1992. Ulrich Maas. Automatische Reduktion von Reaktionsmechanismen zur Simulation reaktiver Strömungen. Habilitationsschrift, Universität Stuttgart, 1993. #### Conditional Moment Closure (CMC) In Conditional Moment Closure (CMC) methods we assume that the species mass fractions are all correlated with the mixture fraction (in non premixed combustion). From the Probability density function we have $\overline{Y_k}= \int \langle Y_k | \eta \rangle P(\eta) d\eta$ where $\eta$ is the sample space for $Z$. CMC consists of providing a set of transport equations for the conditional moments which define the flame structure. Experimentally, it has been observed that temperature and chemical radicals are strong non-linear functions of mixture fraction. For a given species mass fraction we can decompose it into a mean and a fluctuation: $Y_k= \overline{Y_k} + Y'_k$ The fluctuations $Y_k'$ are usually very strong in time and space, which makes the closure of $\overline{\dot \omega_k}$ very difficult. However, the alternative decomposition $Y_k= \langle Y_k | \eta \rangle + y'_k$ can be used, where $y'_k$ is the fluctuation around the conditional mean, or the "conditional fluctuation". Experimentally, it is observed that $y'_k \ll Y'_k$, which forms the basic assumption of the CMC method. Due to this property, better closure methods can be used, reducing the non-linearity of the mass fraction equations. The derivation of the CMC equations produces the following CMC transport equation, where $Q \equiv \langle Y_k | \eta \rangle$ for simplicity. $\frac{ \partial Q}{\partial t} + \langle u_j | \eta \rangle \frac{\partial Q}{\partial x_j} = \frac{\langle \chi | \eta \rangle }{2} \frac{\partial ^2 Q}{\partial \eta^2} + \frac{ \langle \dot \omega_k | \eta \rangle }{ \langle \rho | \eta \rangle}$ In this equation, high order terms in Reynolds number have been neglected. (See Derivation of the CMC equations for the complete series of terms). It is well known that closure of the unconditional source term $\overline {\dot \omega_k}$ as a function of the mean temperature and species ($\overline{Y}, \overline{T}$) will give rise to large errors. However, in CMC the conditionally averaged mass fractions contain more information and fluctuations around the conditional mean are much smaller. The first order closure $\langle \dot \omega_k | \eta \rangle \approx \dot \omega_k \left( Q, Q_T \right)$ is a good approximation in zones which are not close to extinction. ##### Second order closure A second order closure can be obtained if conditional fluctuations are taken into account. For a chemical source term in the form $\dot \omega_k = k Y_A Y_B$ with the rate constant in Arrhenius form $k=A_0 T^\beta \exp [-T_a/T]$ the second order closure is (Klimenko and Bilger 1999) $\langle \dot \omega_k | \eta \rangle \approx \langle \dot \omega_k | \eta \rangle^{FO} \left[1+ \frac{\langle Y''_A Y''_B | \eta \rangle}{Q_A Q_B}+ \left( \beta + T_a/Q_T \right) \left( \frac{\langle Y''_A T'' | \eta \rangle}{Q_A Q_T} + \frac{\langle Y''_B T'' | \eta \rangle}{Q_B Q_T} \right) + ... \right]$ where $\langle \dot \omega_k | \eta \rangle^{FO}$ is the first order CMC closure and $Q_T \equiv \langle T | \eta \rangle$. When the temperature exponent $\beta$ or $T_a/Q_T$ are large, the error of the first order approximation increases. Improved predictions of minor pollutant species like CO and NO can be obtained using the above reaction rate.
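A small sketch of the bracketed correction factor in the second-order closure above is given below, truncated after the terms quoted in the text. All the conditional means and covariances passed in are invented numbers used only to show how the correction behaves; they do not come from any flame calculation.

```python
def cmc_second_order_factor(covYAYB, covYAT, covYBT, QA, QB, QT, beta, Ta):
    """Bracketed correction factor of the second-order CMC closure
    (truncated after the quoted terms). Inputs are conditional means and
    conditional covariances at a given eta."""
    return (1.0
            + covYAYB / (QA * QB)
            + (beta + Ta / QT) * (covYAT / (QA * QT) + covYBT / (QB * QT)))

# Purely illustrative values of the conditional moments
factor = cmc_second_order_factor(covYAYB=1e-4, covYAT=0.5, covYBT=0.3,
                                 QA=0.05, QB=0.1, QT=1500.0, beta=0.0, Ta=15000.0)
print("second-order / first-order rate ratio ~", factor)
```

With a large activation temperature $T_a$ relative to the conditional temperature $Q_T$, even modest temperature-species covariances shift the rate noticeably away from its first-order value, which is the point made in the text.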
##### Double conditioning Close to extinction and reignition, the conditional fluctuations can be very large and the primary CMC closure assumption of "small" fluctuations is no longer valid. A second variable $h$ can be chosen to define a doubly conditioned mass fraction $Q(x,t;\eta,\psi) \equiv \langle Y_k | Z=\eta, h=\psi \rangle$ Due to the strong dependence of chemical reactions on temperature, $h$ is advised to be a temperature-related variable (Kronenburg 2004). Scalar dissipation is not a good choice, due to its log-normal behaviour (smaller scales give the highest dissipation). A much better choice is the sensible enthalpy or a progress variable. Doubly conditioned variables have much smaller conditional fluctuations and allow the existence of points with the same chemical composition which can be fully burning (high temperature) or just mixing (low temperature). The range of applicability is greatly increased and allows non-premixed and premixed problems to be treated without ad-hoc distinctions. The main problem is the closure of the new terms involving cross scalar transport. The double conditional CMC equation is obtained in a similar manner to the conventional CMC equations. ##### LES modelling In an LES context a conditional filtering operator can be defined and $Q$ therefore represents a conditionally filtered reactive scalar. ### Linear Eddy Model The Linear Eddy Model (LEM) was first developed by Kerstein (1988). It is a one-dimensional model for representing the flame structure in turbulent flows. In every computational cell a molecular diffusion and chemistry model is defined as $\frac{\partial}{\partial t} \left( \rho Y_k \right) = \frac{\partial}{\partial \eta} \left( \rho D_k \frac{\partial Y_k}{\partial \eta }\right)+ \dot \omega_k$ where $\eta$ is a spatial coordinate. The scalar distribution obtained can be seen as a one-dimensional reference field between the Kolmogorov scale and the grid scale. In a second stage a series of re-arranging stochastic events takes place. These events represent the effects of a certain turbulent structure of size $l$, smaller than the grid size, at a location $\eta_0$ within the one-dimensional domain. This vortex distorts the scalar field obtained from the one-dimensional equation, creating new maxima and minima in the interval $(\eta_0, \eta_0 + l)$. The vortex size $l$ is chosen randomly based on the inertial scale range while $\eta_0$ is obtained from a uniform distribution in $\eta$. The number of events is chosen to match the turbulent diffusivity of the flow. ### PDF transport models Probability Density Function (PDF) methods are not exclusive to combustion, although they are particularly attractive for it. They provide more information than moment closures and they are used to compute inhomogeneous turbulent flows; see reviews in Dopazo (1993) and Pope (1994). PDF methods are based on the transport equation of the joint-PDF of the scalars, denoted $P \equiv P(\underline{\psi}; x, t)$ where $\underline{\psi} = ( \psi_1,\psi_2 ... \psi_N)$ is the phase space for the reactive scalars $\underline{Y} = ( Y_1,Y_2 ... Y_N)$. The transport equation of the joint PDF is: $\frac{\partial \langle \rho | \underline{Y}=\underline{\psi} \rangle P }{\partial t} + \frac{ \partial \langle \rho u_j | \underline{Y}=\underline{\psi} \rangle P }{\partial x_j} = \sum^N_\alpha \frac{\partial}{\partial \psi_\alpha}\left[ \rho \dot{\omega}_\alpha P \right] - \sum^N_\alpha \sum^N_\beta \frac{\partial^2}{\partial \psi_\alpha \partial \psi_\beta} \left[ \left\langle \rho D \frac{\partial Y_\alpha}{\partial x_j} \frac{\partial Y_\beta}{\partial x_j} \Big| \underline{Y}=\underline{\psi} \right\rangle P \right]$ where the chemical source term is closed.
The last term on the right-hand side, which accounts for the effects of molecular mixing on the PDF, is the so-called "micro-mixing" term. Equal diffusivities are used for simplicity, $D_k = D$. A more general approach is the velocity-composition joint-PDF with $P \equiv P(\underline{V},\underline{\psi}; x, t)$, where $\underline{V}$ is the sample space of the velocity field $u,v,w$. This approach has the advantage of avoiding gradient-diffusion modelling. A similar equation to the above is obtained by combining the momentum and scalar transport equations. The PDF transport equation can be solved in two ways: through a Lagrangian approach using stochastic methods, or in an Eulerian way using stochastic fields. #### Lagrangian The main idea of Lagrangian methods is that the flow can be represented by an ensemble of fluid particles. Central to this approach are stochastic differential equations, in particular the Langevin equation. #### Eulerian Instead of stochastic particles, smooth stochastic fields can be used to represent the probability density function (PDF) of a scalar (or the joint PDF) involved in transport (convection), diffusion and chemical reaction (Valino 1998). A similar formulation was proposed by Sabel'nikov and Soulard (2005), which removes part of the a priori assumption of "smoothness" of the stochastic fields. This approach is purely Eulerian and offers implementation advantages compared to Lagrangian or semi-Eulerian methods. Transport equations for scalars are often easy to programme and normal CFD algorithms can be used (see Discretisation of convective term). Although discretization errors are introduced by solving transport equations, this is partially compensated by the error introduced in Lagrangian approaches due to the numerical evaluation of means. A new set of $N_s$ scalar variables (the stochastic fields $\xi$) is used to represent the PDF $P (\underline{\psi}; x,t) = \frac{1}{N_s} \sum^{N_s}_{j=1} \prod^{N}_{k=1} \delta \left[\psi_k -\xi_k^j(x,t) \right]$ ### Other combustion models #### MMC The Multiple Mapping Conditioning (MMC) (Klimenko and Pope 2003) is an extension of the #Conditional Moment Closure (CMC) approach combined with probability density function methods. MMC looks for the minimum set of variables that describes the particular turbulent combustion system. #### Fractals Derived from the #Eddy Dissipation Concept (EDC). ## References • Dopazo, C. (1993), "Recent development in PDF methods", Turbulent Reacting Flows, ed. P. A. Libby and F. A. Williams. • Fox, R.O. (2003), Computational Models for Turbulent Reacting Flows, ISBN 0-521-65049-6, Cambridge University Press. • Kerstein, A. R. (1988), "A linear eddy model of turbulent scalar transport and mixing", Comb. Science and Technology, Vol. 60, pp. 391. • Klimenko, A. Y., Bilger, R. W. (1999), "Conditional moment closure for turbulent combustion", Progress in Energy and Combustion Science, Vol. 25, pp. 595-687. • Klimenko, A. Y., Pope, S. B. (2003), "The modeling of turbulent reactive flows based on multiple mapping conditioning", Physics of Fluids, Vol. 15, Num. 7, pp. 1907-1925. • Kronenburg, A., (2004), "Double conditioning of reactive scalar transport equations in turbulent non-premixed flames", Physics of Fluids, Vol. 16, Num. 7, pp. 2640-2648. • Griffiths, J. F. (1994), "Reduced Kinetic Models and Their Application to Practical Combustion Systems", Prog. in Energy and Combustion Science, Vol. 21, pp. 25-107. • Peters, N.
(2000), Turbulent Combustion, ISBN 0-521-66082-3, Cambridge University Press. • Poinsot, T., Veynante, D. (2001), Theoretical and Numerical Combustion, ISBN 1-930217-05-6, R. T. Edwards. • Pope, S. B. (1994), "Lagrangian PDF methods for turbulent flows", Annu. Rev. Fluid Mech., Vol. 26, pp. 23-63. • Sabel'nikov, V., Soulard, O. (2005), "Rapidly decorrelating velocity-field model as a tool for solving one-point Fokker-Planck equations for probability density functions of turbulent reactive scalars", Physical Review E, Vol. 72, pp. 016301-1-22. • Valino, L., (1998), "A field Monte Carlo formulation for calculating the probability density function of a single scalar in a turbulent flow", Flow, Turbulence and Combustion, Vol. 60, pp. 157-172. • Westbrook, Ch. K., Dryer, F. L., (1984), "Chemical Kinetic Modeling of Hydrocarbon Combustion", Prog. in Energy and Combustion Science, Vol. 10, pp. 1-57.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 269, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8886047601699829, "perplexity": 997.5716205683921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999642134/warc/CC-MAIN-20140305060722-00033-ip-10-183-142-35.ec2.internal.warc.gz"}
https://shelah.logic.at/papers/301/
Sh:301 • Hodges, W., & Shelah, S. (2019). Naturality and Definability II. CUBO A Mathematical Journal, 21(3), 9–27. • Abstract: In two papers we noted that in common practice many algebraic constructions are defined only ‘up to isomorphism’ rather than explicitly. We mentioned some questions raised by this fact, and we gave some partial answers. The present paper provides much fuller answers, though some questions remain open. Our main result says that there is a transitive model of Zermelo-Fraenkel set theory with choice (ZFC) in which every fully definable construction is ‘weakly natural’ (a weakening of the notion of a natural transformation). A corollary is that there are models of ZFC in which some well-known constructions, such as algebraic closure of fields, are not explicitly definable. We also show that there is no model of ZFC in which the explicitly definable constructions are precisely the natural ones. • Current version: 2020-07-14 (20p) published version (19p) Bib entry @article{Sh:301, author = {Hodges, Wilfrid and Shelah, Saharon}, title = {{Naturality and Definability II}}, journal = {CUBO A Mathematical Journal}, volume = {21}, number = {3}, year = {2019}, pages = {9-27}, doi = {10.4067/S0719-06462019000300009}, note = {\href{https://arxiv.org/abs/math/0102060}{arXiv: math/0102060}}, arxiv_number = {math/0102060} }
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9086394906044006, "perplexity": 1771.1785028080944}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735812.88/warc/CC-MAIN-20200803140840-20200803170840-00122.warc.gz"}
http://tantalum.academickids.com/encyclopedia/index.php/Internal_energy
# Internal energy The internal energy of a system (abbreviated E or U) is the total kinetic energy due to the motion of molecules (translational, rotational, vibrational) and the total potential energy associated with the vibrational and electric energy of atoms within molecules or crystals. Internal energy is a quantifiable state function of a system. The SI unit of internal energy is the same as for energy, and is the joule. For systems consisting of molecules, the internal energy is partitioned among all of these types of motion. In systems consisting of monatomic particles, such as helium gas and other noble gases, the internal energy consists only of the translational kinetic energy of the individual atoms. Monatomic particles, of course, do not rotate or vibrate, and are not excited to higher electrical energies, except at very high temperatures. From the standpoint of statistical mechanics, the internal energy is shown to be equivalent to the ensemble average of the total energy of the system. ## Measurement Internal energy cannot be measured directly; it is only measured as a change (ΔU). The equation for the change in internal energy is $\Delta U = Q \pm W$ where Q is the heat input to or output of the system, measured in joules, and W is the work done on or by the system, measured in joules. The sign in front of W depends on convention: chemists usually take W to be the work done on the system (so $\Delta U = Q + W$), while physicists usually take W to be the work done by the system (so $\Delta U = Q - W$). A positive value of Q represents heat flow into the system, a negative value heat flow out of the system. ## Duhem-Gibbs Relation For quasistatic processes, the following relation holds: $dU = TdS - pdV + \mu dN$ where T is the temperature, measured in kelvins; S the entropy, measured in joules per kelvin; p the pressure, measured in pascals; V the volume, measured in cubic metres; μ the chemical potential, measured in joules (or joules per mole); and N the number of particles in the system, unitless (or measured in moles).
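As a small numerical illustration of the two relations above, the sketch below applies the first law with the "work done on the system" convention and then evaluates the differential form for small quasistatic changes. The numbers are arbitrary example values, not taken from any particular process.

```python
# First-law bookkeeping, Delta U = Q + W_on (work-done-on-the-system convention);
# the numbers are arbitrary examples.
Q = 250.0      # J, heat added to the system
W_on = -100.0  # J, negative: the system does 100 J of work on the surroundings
dU = Q + W_on
print("Delta U =", dU, "J")

# Differential form dU = T dS - p dV + mu dN for a small quasistatic change
T, dS = 300.0, 0.5        # K, J/K
p, dV = 1.0e5, -1.0e-4    # Pa, m^3 (slight compression)
mu, dN = 0.0, 0.0         # no particle exchange in this example
print("dU =", T * dS - p * dV + mu * dN, "J")
```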
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8542876839637756, "perplexity": 1082.4111660918754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400210616.36/warc/CC-MAIN-20200923081833-20200923111833-00617.warc.gz"}
https://brilliant.org/discussions/thread/number-theory-thailand-math-posn-2nd-round/
# Number Theory (Thailand Math POSN 2nd round) Write a full solution Theorems allowed to use: - Divisibility, gcd, lcm, Prime numbers - Modular arithmetic - Chinese Remainder Theorem - Complete Residue System modulo n - Reduced Residue System modulo n - Euler's Theorem - Fermat's Little Theorem - Wilson's Theorem 1.) Prove that there are no positive integers $$n$$ such that $2558^{n} \equiv 1 \pmod{2000^{n} - 1}$ 2.) Find the largest positive integer $$n$$ such that for all positive integers$$a$$, $a^{9} \equiv a \pmod{n}$ 3.) Find all solutions to linear congruence $2557x \equiv 2015 \pmod{1200}$ 4.) Let $$p$$ be a prime number and $$N = \displaystyle \frac{10^{p} - 1}{9}$$, prove that $N\times 10^{8p} + 2N\times 10^{7p} + 3N \times 10^{6p} + \dots + 9N \equiv 123456789 \pmod{p}$ 5.) Let $$p$$ be a prime number, prove that the congruence $x^{2} \equiv -1 \pmod{p}$ has a solutions if and only if $$p = 2$$ or $$p \equiv 1 \pmod{4}$$. This note is a part of Thailand Math POSN 2nd round 2015 Note by Samuraiwarm Tsunayoshi 3 years, 2 months ago Solution to Q-4: From Fermat's Little Theorem, we know that, for any integer $$a$$ and a prime $$p$$, we have $$a^p\equiv a\pmod{p}$$. Then, $10^p\equiv 10\pmod{p}\implies 10^p-1\equiv 9\pmod{p}\implies N=\frac{10^p-1}{9}\equiv 1\pmod{p}~,p\neq 3$ We have $$p\neq 3$$ in that case since $$\gcd(9,3)=3\neq 1$$ and $$3$$ is the only prime that is not coprime to $$9$$. For $$p=3$$, using divisibility rule of $$3$$, we have $$N=\dfrac{10^3-1}{9}=111\equiv 0\pmod{3}$$. Hence, we combine these two results as (with prime $$p$$), $N=\frac{10^p-1}{9}\equiv \begin{cases}1\pmod{p}~,~p\neq 3\\ 0\pmod{p}~,~p=3\end{cases}~\ldots~(i)$ Trivial Observation 1: We note beforehand that $$123456789\equiv 0\pmod{3}$$ using the divisibility rule of $$3$$. As such, if we have $$x\equiv 0\pmod{3}$$, we can also write that as $$x\equiv 123456789\pmod{3}$$ for any integer $$x$$. Now, for positive integral $$n$$, $$10^n\in\mathbb{Z}$$ and hence, by FLT, we have $$10^{np}\equiv 10^n\pmod{p}$$ for all primes $$p$$ and positive integral $$n$$.
We use this result to form a modular congruence equation with the LHS of the statement to be proved modulo prime $$p$$ as, $\textrm{LHS}\equiv N(10^8+2\times 10^7+3\times 10^6+\ldots +9)\equiv 123456789N\pmod{p}$ Using $$(i)$$, this becomes, $\textrm{LHS}\equiv\begin{cases} 123456789\pmod{p}~,~p\neq 3\\ 0\pmod{p}~,~p=3\end{cases}$ Using Trivial Observation 1, this becomes, $\textrm{LHS}\equiv 123456789\pmod{p}=\textrm{RHS}$ $\therefore\quad N\times 10^{8p} + 2N\times 10^{7p} + 3N \times 10^{6p} + \dots + 9N \equiv 123456789 \pmod{p}$ - 3 years, 2 months ago Solution to Q no 3 : 2557x=2015(mod 1200) Since gcd (2557,1200)=1 there is an unique solution (mod 1200) By using the Euclidean Algorithm to write 1 as the linear combination of 2557 and 1200, such as 1200×228-2557(-107)=1 here 2557(-107)=1(mod 1200) so, 2557(-107×2015)=2015 (mod 1200) Since , -107×2015 = 395 (mod 1200) We conclude that x=395 ( modulo 1200) is an unique solution . so x =395 - 2 years, 8 months ago Nice solution. Also this question can be done by the CRT - 2 years, 6 months ago prob 1 : first assume there is a positive number n such that 2558^n =1 (mod 2000^n -1) or, 2558^n-1=0( mod 2000^n-1) that means 2000^n-1 divides 2558^n-1 now , let p be the maximum prime divisor of 2000^n -1 so. 2000^n=1(mod p) let this p be also a maximum prime divisor of 2558^n-1 so. 2558^n=1 (mod p) then , 2000^n =2558^n or 2000= 2558 and it is not possible . So there are no integer n such that 2558^n=1(mod 2000^n-1) - 2 years, 8 months ago In your solution, it seems to me that you assume that $$a^n\equiv b^n\pmod{p}\implies a=b$$ where $$a,b,n$$ are positive integers and $$p$$ is a prime. But that doesn't necessarily hold true. Consider $$a=3,b=4,p=11,n=10$$. We have $$a\neq b$$ but $$a^n\equiv b^n\pmod{p}$$. - 2 years, 8 months ago
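A quick numerical check of two claims made in this thread is sketched below (plain Python, version 3.8+ for modular inverses via `pow`): the unique solution x = 395 of the Q3 congruence, and the counterexample from the last comment showing that $a^n \equiv b^n \pmod{p}$ does not force $a = b$.

```python
# Q3: 2557 x = 2015 (mod 1200) has a unique solution since gcd(2557, 1200) = 1
x = (pow(2557, -1, 1200) * 2015) % 1200
print(x, (2557 * x - 2015) % 1200)        # expect 395 and 0

# Counterexample from the reply: 3^10 = 4^10 (mod 11), yet 3 != 4
print(pow(3, 10, 11) == pow(4, 10, 11))   # True
```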
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970670938491821, "perplexity": 1293.1261003451934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860168.62/warc/CC-MAIN-20180618090026-20180618110026-00391.warc.gz"}
http://math.stackexchange.com/questions/78718/markov-chains-as-autoregressive-processes
# Markov Chains as Autoregressive Processes Is there a simple way to approximate a Markov Chain as an Autoregressive Process, for instance, an AR(1) process? I am aware that it is easy to approximate an AR(1) process with a Markov Chain, but I am not sure about the converse. Thanks. - AR(1) processes ARE Markov chains. Most Markov chains are not AR(1). –  Did Nov 3 '11 at 20:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934201002120972, "perplexity": 350.2528021612859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835119.25/warc/CC-MAIN-20140820021355-00233-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/31863-related-rates-questions.html
# Math Help - Related Rates Questions 1. ## Related Rates Questions A spherical ballon is being inflated at a rate of 3 pi m^3/min. Twelve minutes after inflation first begins, what is the rate of increase of the diameter? i solved for dr/dt and then adjuster for diameter, but i dont understand why we were given t = 12 min... thx for any help u can giv 2. Originally Posted by a.a A spherical ballon is being inflated at a rate of 3 pi m^3/min. Twelve minutes after inflation first begins, what is the rate of increase of the diameter? i solved for dr/dt and then adjuster for diameter, but i dont understand why we were given t = 12 min... thx for any help u can giv Let diameter be D. Then $\frac{dD}{dt} = \frac{dD}{dV} \times \frac{dV}{dt}$. Given: $\frac{dV}{dt} = 3 \pi \, m^3/min$. Common knowledge: $V = \frac{4}{3} \, \pi r^3 = \frac{\pi}{6} \, D^3$. Therefore $\frac{dV}{dD} = \frac{\pi}{2} \, D^2$. Therefore: $\frac{dD}{dt} = \frac{2}{\pi D^2} \times 3 \pi = \frac{6}{D^2}$. You were given t = 12 minutes because that's the question!!! Find the value of $\frac{dD}{dt}$ when t = 12 minutes! So you have to substitute the value of D at t = 12. Clearly $V = (12)(3 \pi) = 36 \pi \, m^3$ and you know that $V = \frac{\pi}{6} \, D^3$. Therefore D = ....... at t = 12 minutes. Therefore $\frac{dD}{dt}$ when t = 12 minutes is equal to ...... 3. Thx a million. 4. ## final ans... umm.. for the final answer i got 1/6 m/min for dD/dt, is tha right? 5. Originally Posted by a.a umm.. for the final answer i got 1/6 m/min for dD/dt, is tha right? Yes.
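A symbolic check of the final answer in this thread can be done in a few lines; the sketch below follows the same steps (volume from the inflation rate, diameter from $V = \frac{\pi}{6}D^3$, then differentiate) and confirms the value 1/6 m/min at t = 12 minutes.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
V = 3 * sp.pi * t                         # volume after t minutes (inflation rate 3*pi m^3/min)
D = (6 * V / sp.pi) ** sp.Rational(1, 3)  # diameter from V = (pi/6) D^3
dDdt = sp.diff(D, t)
print(sp.nsimplify(dDdt.subs(t, 12)))     # 1/6 (m/min), matching the answer above
```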
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9531345367431641, "perplexity": 1734.2370708467631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299515.96/warc/CC-MAIN-20150323172139-00126-ip-10-168-14-71.ec2.internal.warc.gz"}
https://cstheory.stackexchange.com/questions/39424/properties-of-toroidal-graph
# Properties of toroidal graph I am interested in work pertaining to graphs that have genus 1, i.e. toroidal graphs. Specifically, I am trying to find answers to the questions below. 1. Since toroidal graphs can be recognized in $P$, what are some different known characterizations of toroidal graphs? 2. There are more than a thousand forbidden minors for the toroidal graph class, and only four of them do not contain $K_{3,3}$ as a subdivision (This paper). Where can I find a bigger list of forbidden structures of toroidal graphs? 3. Two disjoint copies of $K_5$ are not toroidal. Is it true that if a graph $G$ has two vertex disjoint non-planar induced subgraphs, then $G$ is not toroidal? If not, then what is special about disjoint copies of $K_5$? Regarding (3), yes, if a graph $M$ has two vertex disjoint non-planar induced subgraphs $G$ and $H$, then $G\cup H$ (and hence $M$) is not toroidal. Thinking of the torus as $T=S^1\times S^1$, if a non-planar $G$ is embedded without crossing on $T$ then its edges must meet every $S^1\times\{a\}$ and $\{a\}\times S^1$, or else we cut such a circle out and realize $G$ as planar. By tracing where they so meet we obtain a subgraph of $G$ realized as homotopic to some $$\left(\{a\}\times S^1\right)\cup \left(S^1\times\{b\}\right)$$ (so two circles meet in $(a,b)$). Similarly for the second graph $H$ and say $$\left(\{c\}\times S^1\right)\cup \left(S^1\times\{d\}\right).$$ But then $G$ and $H$ are not disjoint (they share a vertex or have crossing edges) as the embedded graphs meet in $(a,d)$ and $(c,b)$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8971173763275146, "perplexity": 276.9325394424488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669868.3/warc/CC-MAIN-20191118232526-20191119020526-00486.warc.gz"}
http://solidmechanics.org/Text/Chapter3_8/Chapter3_8.php
Chapter 3 Constitutive Models $–$ Relations between Stress and Strain 3.8 Small strain viscoplasticity: creep and high strain rate deformation of crystalline solids Viscoplastic constitutive equations are used to model the behavior of polycrystalline materials (metals and ceramics) that are subjected to stress at high temperatures (greater than half the melting point of the solid), and also to model the behavior of metals that are deformed at high rates of strain (greater than 100 per second). Viscoplasticity theory is a relatively simple extension of the rate independent plasticity model discussed in Section 3.6.  You may find it helpful to review this material before attempting to read this section. 3.8.1 Features of creep behavior 1.      If a tensile specimen of a crystalline solid is subjected to a time independent stress, it will progressively increase in length.  A typical series of length-v-time curves is illustrated in the figure. 2.      The length-v-time plot has three stages: a transient period of primary creep, where the creep rate is high; a longer period of secondary creep, where the extension rate is constant; and finally a period of tertiary creep, where the creep rate again increases.   Most creep laws focus on modeling primary and secondary creep.  In fact, it is often sufficient to model only secondary creep. 3.      The rate of extension increases with stress.  A typical plot of secondary creep rate as a function of stress is shown in the figure.   There are usually three regimes of behavior: each regime can be fit (over a range of stress) by a power-law with the form $\stackrel{˙}{\epsilon }\approx A{\sigma }^{m}$.  At low stresses, $m\approx 1-2.5$; at intermediate stresses $m\approx 2.5-7$, and at high stress m increases rapidly and can exceed 10-20. 4.      The rate of extension increases with temperature.   At a fixed stress, the temperature dependence of strain rate can be fit by an equation of the form $\stackrel{˙}{\epsilon }={\stackrel{˙}{\epsilon }}_{0}\mathrm{exp}\left(-Q/kT\right)$, where $Q$ is an activation energy; $k$ is the Boltzmann constant, and T is temperature.  Like the stress exponent m,   the activation energy $Q$  can transition from one value to another as the temperature and stress level is varied. 5.      The various regimes for m and $Q$ are associated with different mechanisms of creep.  At low stress, creep occurs mostly by grain boundary sliding and diffusion.  At higher stresses it occurs as a result of thermally activated dislocation motion.   H.J. Frost and M.F. Ashby. “Deformation Mechanism Maps,” Pergamon Press, Elmsford, NY (1982) plot charts that are helpful to get a rough idea of which mechanism is likely to be active in a particular application.  A schematic of a deformation mechanism map is illustrated in the picture: the figure shows various regimes of plastic flow as a function of temperature (normalized by melting temperature), and stress (normalized by shear modulus) 6.      The creep behavior of a material is strongly sensitive to its microstructure (especially grain size and the size and distribution of precipitates) and composition. Under proportional multi-axial loading, creep shows all the same characteristics as rate independent plasticity: (i) plastic strains are volume preserving; (ii) creep rates are insensitive to hydrostatic pressure; (iii) the principal strain rates are parallel to the principal stresses; (iv) plastic flow obeys the Levy-Mises flow rule.   
These features of behavior are discussed in more detail in Sect 3.6.1.

3.8.2 Features of high-strain rate behavior

Stress-strain curves for metals have been measured for strain rates as high as $10^7$ /sec. The general form of the stress-strain curve is essentially identical to that measured at quasi-static strain rates (see Sect 3.6.1 for an example), but the flow stress increases with strain rate. As an example, the experimental data of Klopp, Clifton and Shawki, Mechanics of Materials, 4, p. 375 (1985) for the high strain rate behavior of iron is reproduced in the figure. The flow stress rises slowly with strain rate up to a strain rate of about $10^6$, and then begins to rise rapidly.

3.8.3 Small-strain, viscoplastic constitutive equations

Viscoplastic constitutive equations are almost identical to the rate independent plastic equations in Section 3.6. The main concepts in viscoplasticity are:
1. Strain rate decomposition into elastic and plastic components;
2. The elastic stress-strain law, which specifies the elastic part of the strain rate in terms of the stress rate;
3. The plastic flow potential, which determines the magnitude of the plastic strain rate, given the stresses and the resistance of the material to flow;
4. State variables, which characterize the resistance of the material to flow (analogous to the yield stress);
5. The plastic flow rule, which specifies the components of plastic strain rate under multiaxial loading. Recall that in rate independent plasticity the flow rule was expressed as the derivative of the yield surface with respect to stress; in viscoplasticity, the flow rule involves the derivative of the plastic flow potential;
6. Hardening laws, which specify the evolution of the state variables with plastic strain.
These are discussed in more detail below.

Strain rate decomposition

We assume infinitesimal deformation, so shape changes are characterized by $\epsilon_{ij} = \left(\partial u_i/\partial x_j + \partial u_j/\partial x_i\right)/2$. The strain rate is decomposed into elastic and plastic parts as
$$\frac{d\epsilon_{ij}}{dt} = \frac{d\epsilon_{ij}^{e}}{dt} + \frac{d\epsilon_{ij}^{p}}{dt}$$

Elastic constitutive equations

The elastic strains are related to the stresses using the standard linear elastic stress-strain law. The elastic strain rate follows as
$$\frac{d\epsilon_{ij}^{e}}{dt} = S_{ijkl}\frac{d\sigma_{kl}}{dt}$$
where $S_{ijkl}$ are the components of the elastic compliance tensor. For the special case of an isotropic material with Young's modulus $E$, Poisson's ratio $\nu$ and thermal expansion coefficient $\alpha$,
$$\frac{d\epsilon_{ij}^{e}}{dt} = \frac{1+\nu}{E}\frac{d\sigma_{ij}}{dt} - \frac{\nu}{E}\frac{d\sigma_{kk}}{dt}\delta_{ij} + \alpha\frac{d\Delta T}{dt}\delta_{ij}$$

Plastic flow potential

The plastic flow potential specifies the magnitude of the plastic strain rate, as a function of stress and the resistance of the material to plastic flow. It is very similar to the yield surface for a rate independent material. The plastic flow potential is constructed as follows.
1. Define the plastic strain rate magnitude as $\dot{\epsilon}_e = \sqrt{2\dot{\epsilon}_{ij}^{p}\dot{\epsilon}_{ij}^{p}/3}$
2. Let $\sigma_{ij}$ denote the stresses acting on the material, and let $\sigma_1, \sigma_2, \sigma_3$ denote the principal stresses;
3. Experiments show that the plastic strain rate is independent of hydrostatic pressure.
The strain rate must therefore be a function only of the deviatoric stress components, defined as $S_{ij} = \sigma_{ij} - \sigma_{kk}\delta_{ij}/3$
4. Assume that the material is isotropic. The strain rate can therefore only depend on the invariants of the deviatoric stress tensor. The deviatoric stress has only two nonzero invariants. It is convenient to choose
$$\sigma_e = \sqrt{\frac{3}{2}S_{ij}S_{ij}} \qquad\qquad \sigma_{III} = \det\left(S\right)$$
In practice, only the first of these (the von Mises effective stress) is used in most flow potentials.
5. The plastic flow potential can be represented graphically by plotting it as a function of the three principal stresses, exactly as the yield surface is shown graphically for a rate independent material. An example is shown in the picture. The lines show contours of constant plastic strain rate.
6. For Drucker stability, the contours of constant strain rate must be convex, and the plastic strain rate must increase with stress in the direction shown in the figure.
7. Just as the yield stress of a rate independent material can increase with plastic strain, the resistance of a viscoplastic material to plastic straining can also increase with strain. The resistance to flow is characterized by one or more material state variables, which may evolve with plastic straining.

The most general form for the flow potential of an isotropic material is thus a function $\dot{\epsilon}_e = g\left(\sigma_e, \sigma_{III}, \{\text{state variables}\}\right)$, where $g$ must not decrease when the stress is scaled by any factor $\alpha \ge 1$ (with state variables held fixed), and must also be a convex function of $\sigma_e, \sigma_{III}$.

Examples of flow potentials: Von Mises flow potential with power-law rate sensitivity

Creep is often modeled using a flow potential of the form
$$g\left(\sigma_e, \{\sigma_0^{(i)}\}\right) = \sum_{i=1}^{N}\dot{\epsilon}_0^{(i)}\exp\left(-Q_i/kT\right)\left(\frac{\sigma_e}{\sigma_0^{(i)}}\right)^{m_i}$$
where $\dot{\epsilon}_0^{(i)}$, $Q_i$ and $m_i$, $i = 1 \ldots N$ are material properties ($Q_i$ are the activation energies for the various mechanisms that contribute to creep); $k$ is the Boltzmann constant and $T$ is temperature. The model is most often used with $N=1$, but more terms are required to fit material behavior over a wide range of temperatures and strain rates. The potential has several state variables, $\sigma_0^{(i)}$. To model steady state creep, you can take $\sigma_0$ to be constant; to model transient creep, $\sigma_0$ must increase with strain. An example of an evolution law for $\sigma_0$ is given below.
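To make the single-term ($N=1$) version of this potential concrete, here is a minimal Python sketch (mine, not part of the text). The numbers are the approximate polycrystalline-Al parameters quoted in Section 3.8.4 below; the 600 K / 30 MPa test point and the function names are illustrative assumptions only.

```python
import numpy as np

# Sketch of the single-term (N = 1) power-law von Mises creep potential
#   g = edot0 * exp(-Q/(k*T)) * (sigma_e/sigma0)**m

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mises_stress(sigma):
    """Von Mises effective stress sigma_e = sqrt(3/2 * S_ij * S_ij) of a 3x3 stress tensor."""
    S = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric stress
    return np.sqrt(1.5 * np.sum(S * S))

def creep_rate(sigma_e, edot0=1.3e8, Q=2.3e-19, m=4.0, sigma0=20.0, T=600.0):
    """Effective plastic strain rate (1/s); stresses in MPa to match sigma0."""
    return edot0 * np.exp(-Q / (K_B * T)) * (sigma_e / sigma0) ** m

sigma = np.diag([30.0, 0.0, 0.0])   # uniaxial tension, MPa
se = mises_stress(sigma)
print(f"sigma_e = {se:.1f} MPa, creep rate at 600 K = {creep_rate(se):.2e} 1/s")
```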
High strain rate deformation is also modeled using a power law Mises flow potential: the following form is sometimes used:
$$g\left(\sigma_e, \sigma_0\right) = \begin{cases} 0 & 0 < \sigma_e/\sigma_0 < 1 \\ \dot{\epsilon}_0^{(1)}\left[\left(\sigma_e/\sigma_0\right)^{m_1} - 1\right] & 1 < \sigma_e/\sigma_0 < \alpha \\ \dot{\epsilon}_0^{(2)}\left[\left(\sigma_e/\sigma_0\right)^{m_2} - 1\right] & \alpha < \sigma_e/\sigma_0 \end{cases}$$
where $\dot{\epsilon}_0^{(2)} = \dot{\epsilon}_0^{(1)}\left(\alpha^{m_1} - 1\right)/\left(\alpha^{m_2} - 1\right)$ and $\dot{\epsilon}_0^{(1)}, \alpha, m_1, m_2$ are material properties, while $\sigma_0$ is a strain, strain rate and temperature dependent state variable, which represents the quasi-static yield stress of the material and evolves with deformation as described below. In this equation, $m_2 < m_1$ so as to model the transition in strain rate sensitivity at high strain rates, while $\alpha$ controls the point at which the transition occurs.
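A rough sketch of this two-branch potential as code (mine, not the book's). The parameter values are the 1100-0 Al numbers tabulated in Section 3.8.5, except that $\dot{\epsilon}_0^{(2)}$ is recomputed here from the relation quoted above, which makes $g$ continuous at $\sigma_e/\sigma_0 = \alpha$.

```python
# Two-branch power-law flow potential; edot0_2 chosen for continuity at the transition.
def flow_potential(sigma_e, sigma_0, edot0_1, alpha, m1, m2):
    """Plastic strain rate magnitude g(sigma_e, sigma_0), in 1/s."""
    edot0_2 = edot0_1 * (alpha**m1 - 1.0) / (alpha**m2 - 1.0)
    x = sigma_e / sigma_0
    if x <= 1.0:
        return 0.0
    if x <= alpha:
        return edot0_1 * (x**m1 - 1.0)
    return edot0_2 * (x**m2 - 1.0)

params = dict(sigma_0=50.0, edot0_1=100.0, alpha=1.6, m1=15.0, m2=0.1)
for ratio in (0.5, 1.2, 1.599, 1.601, 3.0):
    g = flow_potential(ratio * 50.0, **params)
    print(f"sigma_e/sigma_0 = {ratio:5.3f} -> g = {g:.4g} 1/s")
```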
Plastic flow rule

The plastic flow rule specifies the components of plastic strain rate resulting from a multiaxial state of stress. It is constructed so that:
1. The plastic strain rate satisfies the Levy-Mises plastic flow rule;
2. The viscoplastic stress-strain law satisfies the Drucker stability criterion;
3. The flow rule predicts a plastic strain magnitude consistent with the flow potential.
Both (1) and (2) are satisfied by taking the plastic strain rate in the direction of the stress gradient of the flow potential, $\partial g/\partial\sigma_{ij}$, normalized so that the effective plastic strain rate equals $g$. If $g$ depends on stress only through the von Mises effective stress, this expression can be simplified to
$$\dot{\epsilon}_{ij}^{p} = g\left(\sigma_e\right)\frac{3S_{ij}}{2\sigma_e}$$
For the particular case of the power-law von Mises flow potential this gives
$$\dot{\epsilon}_{ij}^{p} = \sum_{n=1}^{N}\dot{\epsilon}_0^{(n)}\exp\left(-Q_n/kT\right)\left(\frac{\sigma_e}{\sigma_0^{(n)}}\right)^{m_n}\frac{3S_{ij}}{2\sigma_e}$$

Hardening rule

The hardening rule specifies the evolution of state variables with plastic straining. Many different forms of hardening rule are used (including kinematic hardening laws such as those discussed in 3.6.5). A simple example of an isotropic hardening law which is often used to model transient creep is
$$\sigma_0^{(i)} = Y_i\left(1 + \frac{\epsilon_e^{(i)}}{\epsilon_0^{(i)}}\right)^{1/n_i}$$
where
$$\epsilon_e^{(i)} = \int \dot{\epsilon}_0^{(i)}\exp\left(-Q_i/kT\right)\left(\sigma_e/\sigma_0^{(i)}\right)^{m_i} dt$$
is the accumulated strain associated with each mechanism of creep, and $Y_i$, $\epsilon_0^{(i)}$ and $n_i$ are material constants. The law is usually used only with $N=1$.

Similar hardening laws are used in constitutive equations for high strain rate deformation, but in this case the flow strength is made temperature dependent. The following formula is sometimes used
$$\sigma_0 = Y\left[1 - \beta\left(T - T_0\right)\right]\left(1 + \frac{\epsilon_e}{\epsilon_0}\right)^{1/n}$$
where $\epsilon_e = \int\sqrt{3\dot{\epsilon}_{ij}^{p}\dot{\epsilon}_{ij}^{p}/2}\,dt$ is the total accumulated strain, $T$ is temperature, and $Y, n, T_0, \beta$ are material properties. More sophisticated hardening laws make the flow stress a function of strain rate; see Clifton, "High strain rate behavior of metals," Applied Mechanics Reviews 43 (5) (1990) S9-S22 for more details.

3.8.4 Representative values of parameters for viscoplastic models of creeping solids

Fitting material parameters to test data is conceptually straightforward: the flow potential has been constructed so that for a uniaxial tension test with $\sigma_{11} = \sigma$, all other stress components zero, the uniaxial plastic strain rate is
$$\dot{\epsilon}_{11}^{p} = \sum_{n=1}^{N}\dot{\epsilon}_0^{(n)}\exp\left(-Q_n/kT\right)\left(\frac{\sigma}{\sigma_0^{(n)}}\right)^{m_n}$$
so the properties can be fit directly to the results of a series of uniaxial tensile tests conducted at different temperatures and applied stresses. To model steady-state creep, $\sigma_0^{(n)}$ can be taken to be constant. One or two terms in the series is usually sufficient to fit material behavior over a reasonable range of temperature and stress.
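The simplified flow rule is easy to exercise numerically. The following sketch (my own; the power-law magnitude function and the stress state are made up for illustration) evaluates $\dot{\epsilon}_{ij}^{p} = g(\sigma_e)\,3S_{ij}/(2\sigma_e)$ and checks two properties stated earlier: the plastic strain rate is traceless (volume preserving) and its effective magnitude equals $g$.

```python
import numpy as np

# Check that the Levy-Mises flow rule is volume preserving and magnitude-consistent.
def deviatoric(sigma):
    return sigma - np.trace(sigma) / 3.0 * np.eye(3)

def flow_rule(sigma, g_of_sigma_e):
    S = deviatoric(sigma)
    sigma_e = np.sqrt(1.5 * np.sum(S * S))      # von Mises effective stress
    return g_of_sigma_e(sigma_e) * 1.5 * S / sigma_e, sigma_e

g_fun = lambda se: 1e-6 * (se / 20.0) ** 4      # an arbitrary power-law magnitude, 1/s

sigma = np.array([[40.0, 10.0,  0.0],
                  [10.0, -5.0,  0.0],
                  [ 0.0,  0.0, 15.0]])          # an arbitrary stress state, MPa

edot_p, sigma_e = flow_rule(sigma, g_fun)
print("trace of plastic strain rate:", np.trace(edot_p))                      # ~ 0
print("effective strain rate       :", np.sqrt(2.0/3.0 * np.sum(edot_p**2)))  # equals g
print("g(sigma_e)                  :", g_fun(sigma_e))
```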
Creep rates are very sensitive to the microstructure and composition of a material, so for accurate predictions you will need to find data for the particular material you plan to use. Frost and Ashby, "Deformation Mechanism Maps," Pergamon Press, 1982, provide approximate expressions for creep rates of a wide range of materials, as well as references to experimental data. As a rough guide, approximate values for a 1-term fit to creep data for polycrystalline Al alloys subjected to stresses in the range 5-60 MPa are listed below.

Approximate creep parameters for polycrystalline Al alloys: $\dot{\epsilon}_0 = 1.3\times 10^{8}$ sec$^{-1}$, $\sigma_0 = 20$ MPa, $m = 4$, $Q = 2.3\times 10^{-19}$ J.

3.8.5 Representative values of parameters for viscoplastic models of high strain rate deformation

The material parameters in constitutive models for high strain rate deformation can also be fit to the results of a uniaxial tension or compression test. For the model described in Section 3.8.3, the steady-state uniaxial strain rate as a function of stress is
$$\dot{\epsilon} = \begin{cases} 0 & 0 < \sigma/\sigma_0 < 1 \\ \dot{\epsilon}_0^{(1)}\left[\left(\sigma/\sigma_0\right)^{m_1} - 1\right] & 1 < \sigma/\sigma_0 < \alpha \\ \dot{\epsilon}_0^{(2)}\left[\left(\sigma/\sigma_0\right)^{m_2} - 1\right] & \alpha < \sigma/\sigma_0 \end{cases}$$
The material constants $m_1, m_2, \alpha, \dot{\epsilon}_0^{(1)}$, and the flow stress $\sigma_0$ can be determined from a series of uniaxial tension tests conducted at different temperatures and levels of applied stress, while $\dot{\epsilon}_0^{(2)}$ can be found from $\dot{\epsilon}_0^{(2)} = \dot{\epsilon}_0^{(1)}\left(\alpha^{m_1}-1\right)/\left(\alpha^{m_2}-1\right)$. If strain hardening can be neglected, $\sigma_0$ is a temperature dependent constant, which could be approximated crudely as $\sigma_0 = Y\left(1 - \beta\left(T - T_0\right)\right)$, where $Y, \beta, T_0$ are constants and $T$ is temperature. Viscoplastic properties of materials are very strongly dependent on their composition and microstructure, so for accurate predictions you will need to find data for the actual material you intend to use. Clifton, R.J., "High strain rate behavior of metals," Applied Mechanics Reviews 43 (5) (1990) S9-S22, describes several experimental techniques for testing material at high strain rates, and contains references to experimental data. As a rough guide, parameter values for 1100-0 Al alloy (fit to data in Clifton's paper cited earlier) are listed below. The value of $\beta$ was estimated by assuming that the solid loses all strength at the melting point of Al (approximately 650 C).

Approximate constitutive parameters for high strain rate behavior of Al alloy: $\dot{\epsilon}_0^{(1)} = 100$ s$^{-1}$, $\dot{\epsilon}_0^{(2)} = 2.4\times 10^{7}$ s$^{-1}$, $\alpha = 1.6$, $m_1 = 15$, $m_2 = 0.1$, $Y = 50$ MN m$^{-2}$, $\beta = 0.00157$ K$^{-1}$, $T_0 = 298$ K.
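As a quick consistency check on these values (a sketch of mine, not from the text), the linear temperature dependence $\sigma_0 = Y\left[1-\beta(T-T_0)\right]$ with the parameters above should drive the flow strength to roughly zero near the melting point of Al (about 923 K), which is how $\beta$ was estimated.

```python
# Temperature dependence of the quasi-static flow strength, sigma_0 = Y*(1 - beta*(T - T0)),
# evaluated with the tabulated 1100-0 Al values.
def flow_strength(T, Y=50.0, beta=0.00157, T0=298.0):
    return Y * (1.0 - beta * (T - T0))

for T in (298, 400, 600, 800, 923):
    print(f"T = {T:4d} K  ->  sigma_0 = {flow_strength(T):6.2f} MN/m^2")
```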
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 77, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.922696590423584, "perplexity": 1288.7165011463276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514162.67/warc/CC-MAIN-20181021161035-20181021182535-00523.warc.gz"}
http://mathhelpforum.com/number-theory/186293-number-positive-divisors-n.html
# Math Help - Number of positive divisors of n 1. ## Number of positive divisors of n Is it true that for all positive integers n, the inequality $d(n)\leq 1 + log_2 n$ holds? 2. ## Re: Number of positive divisors of n Originally Posted by alexmahone Is it true that for all positive integers n, the inequality $d(n)\leq 1 + log_2 n$ holds? I assume from the title that $d(n)$ is the number of divisors of $n$. Does that hold for $n=144~?$ 3. ## Re: Number of positive divisors of n Originally Posted by Plato I assume from the title that $d(n)$ is the number of divisors of $n$. Does that hold for $n=144~?$ 144 = 12^2 d(144) = 2 + 1 = 3 $1 + log_2 144 \approx 1 + 7.17 = 8.17$ It does hold for n = 144. 4. ## Re: Number of positive divisors of n Uh... no! 1 2 3 4 6 8 9 12 16 ... all divide 144, so the left hand side is larger than the right hand side 5. ## Re: Number of positive divisors of n Originally Posted by TheChaz Uh... no! 1 2 3 4 6 8 9 12 16 ... all divide 144, so the left hand side is larger than the right hand side Oops ... I should have done: 144 = 2^4 * 3^2 d(n) = (4 + 1)(2 + 1) = 15 6. ## Re: Number of positive divisors of n Originally Posted by alexmahone 144 = 12^2 d(144) = 2 + 1 = 3 $1 + log_2 144 \approx 1 + 7.17 = 8.17$ It does hold for n = 144. Because $144=2^4\cdot 3^2$ then $d(144)=(4+1)(3+1)=15$. 7. ## Re: Number of positive divisors of n Perhaps there is a mistake in the book: Is $d(n)\geq 1 + log_2 n$ true? 8. ## Re: Number of positive divisors of n Originally Posted by alexmahone Perhaps there is a mistake in the book: Is $d(n)\geq 1 + log_2 n$ true? The divisor bound « What’s new
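A quick numerical check of the counterexample discussed in this thread (my own sketch in Python; the trial-division divisor count is only meant for small n). It confirms that $d(144)=15$ exceeds $1+\log_2 144 \approx 8.17$, so the stated inequality fails; the reversed inequality also fails, e.g. at $n=5$.

```python
from math import log2

def d(n):
    """Number of positive divisors of n (trial division; fine for small n)."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

for n in (5, 12, 144, 720):
    print(n, d(n), round(1 + log2(n), 2))
```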
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9463949799537659, "perplexity": 838.0007663031671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119647884.33/warc/CC-MAIN-20141024030047-00243-ip-10-16-133-185.ec2.internal.warc.gz"}
https://proofwiki.org/wiki/Cauchy_Sequence_Is_Eventually_Bounded_Away_From_Non-Limit
# Cauchy Sequence Is Eventually Bounded Away From Non-Limit

## Theorem

Let $\struct {R, \norm {\, \cdot \,} }$ be a normed division ring. Let $\sequence {x_n}$ be a Cauchy sequence in $R$. Suppose $\sequence {x_n}$ does not converge to $l \in R$. Then:

$\exists K \in \N$ and $C \in \R_{\gt 0}: \forall n \gt K: C \lt \norm {x_n - l}$

## Proof

Since $\sequence {x_n}$ does not converge to $l$:

$\exists \epsilon \in \R_{\gt 0}: \forall n \in \N: \exists m \ge n: \norm {x_m - l} \ge \epsilon$

Since $\sequence {x_n}$ is a Cauchy sequence:

$\exists K \in \N: \forall n, m \ge K: \norm {x_n - x_m} \lt \dfrac \epsilon 2$

Choose $M \ge K$ such that $\norm {x_M - l} \ge \epsilon$. Then for all $n \gt K$:

$\epsilon \le \norm {x_M - l} = \norm {x_M - x_n + x_n - l} \le \norm {x_M - x_n} + \norm {x_n - l} \lt \dfrac \epsilon 2 + \norm {x_n - l}$

where the second inequality is Axiom (N3) of the norm (Triangle Inequality) and the final step uses $n, M \ge K$. Subtracting $\dfrac \epsilon 2$ from both sides of this inequality:

$\dfrac \epsilon 2 \lt \norm {x_n - l}$

Let $C = \dfrac \epsilon 2$ and the result follows.

$\blacksquare$
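A concrete numerical illustration (my own, not part of the ProofWiki page): take $R$ to be the reals with the usual absolute value, $x_n = 1/n$ (a Cauchy sequence converging to $0$), and the non-limit $l = 1$. With $\epsilon = 1/2$ the proof yields $C = \epsilon/2 = 1/4$, and indeed no term past $K$ comes within $C$ of $l$.

```python
# Numerical illustration of the theorem with x_n = 1/n, l = 1, epsilon = 1/2, C = 1/4.
def x(n):
    return 1.0 / n

l, epsilon = 1.0, 0.5
C = epsilon / 2.0

# K from the Cauchy property: |x_n - x_m| < epsilon/2 for all n, m >= K.
K = 5   # for n, m >= 5 we have |1/n - 1/m| <= 1/5 < 1/4

violations = [n for n in range(K + 1, 10_000) if abs(x(n) - l) <= C]
print("C =", C, " violations past K:", violations)   # -> no violations
```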
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9978950619697571, "perplexity": 73.05963474391854}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256082.54/warc/CC-MAIN-20190520162024-20190520184024-00072.warc.gz"}
https://www.physicsforums.com/threads/friction-help.143655/
# Homework Help: Friction help 1. Nov 13, 2006 ### BigMann stupid question because I forgot the equation What is the equation for the minimum value of the coefficient of friction? is it m(v^2/r) or v^2/gr 2. Nov 13, 2006 ### physics girl phd check your units on them -- think about the units of the coefficient of friction -- then does one or the other make sense? 3. Nov 13, 2006 ### BigMann the correct equation is v^2/gr right? 4. Nov 13, 2006 ### physics girl phd well -- what are the units of that, and what are the units of the coefficient of friction? 5. Nov 13, 2006 ### BigMann I know that the coefficient of friction is Newtons 6. Nov 13, 2006 ### physics girl phd 7. Nov 13, 2006 ### HallsofIvy It's what we "know" that hurts! The Newton is a unit of force. While friction is a force, the coefficient of friction is not. 8. Nov 13, 2006 ### BigMann This then means that the first equation would have to be the right one based off of the units. And if Im incorrect than I guess I just forgot it all 9. Nov 13, 2006 ### physics girl phd The units of the first are mass x velocity^2/length -- kg/m^2/s^2/m -- is that the same as the units of the coefficient of friction? Then analyze the second in the same way. Analyzing units always helps -- but of course doesn't guarantee the right answer. 10. Nov 13, 2006 ### BigMann maybe I'm not understanding the units of the coefficient of friction. Could you explain to me what they are? 11. Nov 13, 2006 ### OlderDan The coefficient of friction is the ratio of two forces, the force of friction divided by the normal force. It has no units; it is a pure number.
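For reference, here is the flat-curve setup the thread appears to be circling around (a sketch of mine; the car mass, speed and radius are made-up numbers): friction supplies the centripetal force, so $\mu m g \ge m v^2/r$ and the minimum coefficient is $\mu_{min} = v^2/(gr)$, a pure number, while $m v^2/r$ is a force in newtons.

```python
# Minimum friction coefficient for a flat curve vs. the centripetal force expression.
g = 9.81          # m/s^2

def mu_min(v, r):
    """Minimum (dimensionless) friction coefficient to round a flat curve of radius r at speed v."""
    return v**2 / (g * r)

def centripetal_force(m, v, r):
    """The quantity m*v^2/r: a force in newtons, not a coefficient."""
    return m * v**2 / r

v, r, m = 20.0, 50.0, 1200.0      # 20 m/s on a 50 m curve, 1200 kg car (illustrative)
print(f"mu_min  = {mu_min(v, r):.2f} (dimensionless)")
print(f"m*v^2/r = {centripetal_force(m, v, r):.0f} N")
```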
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9292267560958862, "perplexity": 1565.4447605038788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948738.65/warc/CC-MAIN-20180427002118-20180427022118-00639.warc.gz"}
https://ie.boun.edu.tr/publications
Export 433 results: Author Title Type [ Year] 2021 System Dynamics Review, vol. 36, pp. 397–446, 2021. Pattern Recognition Letters, vol. 148, pp. 36-42, 2021. Pattern Recognition, vol. 120, pp. 108144, 2021. Turkish Journal of Electrical Engineering & Computer Sciences, vol. 29, pp. 2186–2201, 2021. Engineering Science and Technology, an International Journal, 2021. Pattern Recognition, vol. 120, pp. 108097, 2021. 2020 Edali, M., and G. Yücel, Systems Research and Behavioral Science, vol. 37, pp. 936-958, 2020. Pattern Recognition, vol. 107, pp. 107507, 2020. Networks, vol. 75, pp. 137-157, 2020. Discrete Mathematics, vol. 343, pp. 111665, 2020. IEEE Communications Letters, vol. 24, pp. 25-28, 2020. M. Aydin, A., and Z. C. Taşkın, RAIRO-Oper. Res., vol. 54, pp. 1835-1861, 2020. Mathematical Biosciences, vol. 325, pp. 108363, 2020. European Journal of Operational Research, vol. 287, pp. 410-437, 2020. Küçükaydin, H., and N. Aras, Computers & Industrial Engineering, vol. 147, pp. 106577, 2020. Discrete Mathematics, vol. 343, pp. 111943, 2020. IISE Transactions, vol. 52, pp. 850-863, 2020. System Dynamics Review, vol. 36, pp. 467-496, 2020. Khaniyev, T., E. Kayış, and R. Güllü, European Journal of Operational Research, vol. 286, pp. 49-62, 2020. Energies, vol. 13, 2020. International Transactions in Operational Research, vol. 27, pp. 835-866, 2020. Environmental and Ecological Statistics, vol. 27, pp. 73–94, 2020. Journal of the Operational Research Society, vol. 71, pp. 2042-2052, 2020. Computers & operations research, vol. 123, pp. 104996, 2020. 2019 Computers & Operations Research, vol. 111, pp. 214-229, 2019. Annals of Operations Research, vol. 279, pp. 1–42, 2019. Turkish Journal of Electrical Engineering & Computer Sciences, vol. 27, pp. 832–846, 2019. Physics in Medicine {&} Biology, vol. 64, pp. 205024, oct, 2019. Şeker, O., T. Ekim, and Z. C. Taşkın, Networks, vol. 73, pp. 145-169, 2019. European Journal of Operational Research, vol. 272, pp. 372-388, 2019.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9444242119789124, "perplexity": 3951.4706068297746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00042.warc.gz"}
http://mathhelpforum.com/advanced-statistics/11252-probability-questions-please-kindly-help.html
# Math Help - Probability questions. Please kindly help.

1. ## Probability questions. Please kindly help.

1. Let Y1 and Y2 be independent random variables that are both uniformly distributed on the interval (0,1). Find P(Y1 < 2*Y2 | Y1 < 3*Y2).

2. Let Z ~ N(0,1) and Y ~ χ2(ν) (Chi-square with ν degrees of freedom). Assume that Z is independent of Y and let W = Z/√Y. Obtain E(W) and Var(W).

3. The joint density of Y1 and Y2 is given by f(y1,y2) = y1 + y2 for 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1, and 0 otherwise. Obtain Var(30*Y1 + 25*Y2).

Thank you very much.

2. Originally Posted by Ruichan
1. Let Y1 and Y2 be independent random variables that are both uniformly distributed on the interval (0,1). Find P(Y1 < 2*Y2 | Y1 < 3*Y2).

The event [(Y1 < 2*Y2) or (Y1 < 3*Y2)] is the same as the event (Y1 < 3*Y2). This is because the event (Y1 < 2*Y2) corresponds to points above the line Y2 = (1/2)Y1 in the Y1, Y2 plane, and inside the unit square, and the event (Y1 < 3*Y2) corresponds to points above the line Y2 = (1/3)Y1 in the Y1, Y2 plane, and inside the unit square. The latter region includes all of the first region. Also, as Y1 and Y2 are uniformly distributed on the unit square, the probability of the conditioning event corresponds to the area of that region, which is: 1 - (1/6) = 5/6. (The area not in the region is a triangle of base 1 and altitude 1/3.)

RonL
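For question 1, the computation can be completed and checked by simulation (my own sketch): since the event (Y1 < 2*Y2) is contained in (Y1 < 3*Y2), the conditional probability is P(Y1 < 2*Y2)/P(Y1 < 3*Y2) = (3/4)/(5/6) = 9/10, with the 5/6 computed above as the denominator.

```python
import random

# Monte Carlo check of P(Y1 < 2*Y2 | Y1 < 3*Y2) for independent Uniform(0,1) variables.
random.seed(0)
N = 1_000_000
num = den = 0
for _ in range(N):
    y1, y2 = random.random(), random.random()
    if y1 < 3 * y2:
        den += 1
        if y1 < 2 * y2:
            num += 1

print("P(Y1 < 3*Y2)             ~", den / N)      # ~ 0.8333 = 5/6
print("P(Y1 < 2*Y2 | Y1 < 3*Y2) ~", num / den)    # ~ 0.9    = 9/10
```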
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9525220990180969, "perplexity": 1615.2476528470888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447913.86/warc/CC-MAIN-20151124205407-00280-ip-10-71-132-137.ec2.internal.warc.gz"}
https://terrytao.wordpress.com/2009/04/02/polymath1-and-three-new-proofs-of-the-density-hales-jewett-theorem/
This week I gave a talk at UCLA on some aspects of polymath1 project, and specifically on the three new combinatorial proofs of the density Hales-Jewett theorem that have been obtained as a consequence of that project. (There have been a number of other achievements of this project, including some accomplished on the polymath1 threads hosted on this blog, but I will have to postpone a presentation of those to a later post.) It seems that at least two of the proofs will extend to prove this theorem for arbitrary alphabet lengths, but for this talk I will focus on the ${k=3}$ case. The theorem to prove is then Theorem 1 (${k=3}$ density Hales-Jewett theorem) Let ${0 < \delta \leq 1}$. Then if ${n}$ is a sufficiently large integer, any subset ${A}$ of the cube ${[3]^n = \{1,2,3\}^n}$ of density ${|A|/3^n}$ at least ${\delta}$ contains at least one combinatorial line ${\{ \ell(1), \ell(2), \ell(3)\}}$, where ${\ell \in \{1,2,3,x\}^n \backslash [3]^n}$ is a string of ${1}$s, ${2}$s, ${3}$s, and ${x}$‘s containing at least one “wildcard” ${x}$, and ${\ell(i)}$ is the string formed from ${\ell}$ by replacing all ${x}$‘s with ${i}$‘s. The full density Hales-Jewett theorem is the same statement, but with ${[3]}$ replaced by ${[k]}$ for some ${k \geq 1}$. (The case ${k=1}$ is trivial, and the case ${k=2}$ follows from Sperner’s theorem.) This theorem was first proven by Furstenberg and Katznelson, by first converting it to a statement in ergodic theory; the original paper of Furstenberg-Katznelson argument was for the ${k=3}$ case only, and gave only part of the proof in detail, but in a subsequent paper a full proof in general ${k}$ was provided. The remaining components of the original ${k=3}$ argument were later completed in unpublished notes of McCutcheon. One of the new proofs is essentially a finitary translation of this ${k=3}$ argument; in principle one could also finitise the significantly more complicated argument of Furstenberg and Katznelson for general ${k}$, but this has not been properly carried out yet (the other two proofs are likely to generalise much more easily to higher ${k}$). The result is considered quite deep; for instance, the general ${k}$ case of the density Hales-Jewett theorem already implies Szemerédi’s theorem, which is a highly non-trivial theorem in its own right, as a special case. Another of the proofs is based primarily on the density increment method that goes back to Roth, and also incorporates some ideas from a paper of Ajtai and Szemerédi establishing what we have called the “corners theorem” (and which is also implied by the ${k=3}$ case of the density Hales-Jewett theorem). A key new idea involved studying the correlations of the original set ${A}$ with special subsets of ${[3]^n}$, such as ${ij}$-insensitive sets, or intersections of ${ij}$-insensitive and ${ik}$-insensitive sets. Work is currently ongoing to extend this argument to higher ${k}$, and it seems that there are no major obstacles in doing so. This correlations idea inspired a new ergodic proof of the density Hales-Jewett theorem for all values of ${k}$ by Austin, which is in the spirit of the triangle removal lemma (or hypergraph removal lemma) proofs of Roth’s theorem (or the multidimensional Szemerédi theorem). A finitary translation of this argument in the ${k=3}$ case has been sketched out; I believe it also extends in a relatively straightforward manner to the higher ${k}$ case (in analogy with some proofs of the hypergraph removal lemma ). — 1. 
Simpler cases of density Hales-Jewett — In order to motivate the known proofs of the density Hales-Jewett theorem, it is instructive to consider some simpler theorems which are implied by this theorem. The first is the corners theorem of Ajtai and Szemerédi: Theorem 2 (Corners theorem) Let ${0 < \delta \leq 1}$. Then if ${n}$ is a sufficiently large integer, any subset ${A}$ of the square ${[n]^2}$ of density ${|A|/n^2}$ at least ${\delta}$ contains at least one right-angled triangle (or “corner”) ${\{ (x,y), (x+r,y), (x,y+r) \}}$ with ${r \neq 0}$. The ${k=3}$ density Hales-Jewett theorem implies the corners theorem; this is proven by utilising the map ${\phi: [3]^n \rightarrow [n]^2}$ from the cube to the square, defined by mapping a string ${x \in [3]^n}$ to a pair ${(a,b)}$, where ${a, b}$ are the number of ${1}$s and ${2}$s respectively in ${x}$. The key point is that ${\phi}$ maps combinatorial lines to corners. (Strictly speaking, this mapping only establishes the corners theorem for dense subsets of ${[n/3 - \sqrt{n}, n/3 + \sqrt{n}]^2}$, but it is not difficult to obtain the general case from this by replacing ${n}$ by ${n^2}$ and using translation-invariance.) (The corners theorem is also closely related to the problem of finding dense sets of points in a triangular grid without any equilateral triangles, a problem which we have called Fujimura’s problem.) The corners theorem in turn implies Theorem 3 (Roth’s theorem) Let ${0 < \delta \leq 1}$. Then if ${n}$ is a sufficiently large integer, any subset ${A}$ of the interval ${[n]}$ of density ${|A|/n}$ at least ${\delta}$ contains at least one arithmetic progression ${a, a+r, a+2r}$ of length three. Roth’s theorem can be deduced from the corners theorem by considering the map ${\psi: [n]^2 \rightarrow [3n]}$ defined by ${\psi(a,b) := a+2b}$; the key point is that ${\psi}$ maps corners to arithmetic progressions of length three. There are higher ${k}$ analogues of these implications; the general ${k}$ version of the density Hales-Jewett theorem implies a general ${k}$ version of the corners theorem known as the multidimensional Szemerédi theorem, which in turn implies a general version of Roth’s theorem known as Szemerédi’s theorem. — 2. The density increment argument — The strategy of the density increment argument, which goes back to Roth’s proof of Theorem 3 in 1956, is to perform a downward induction on the density ${\delta}$. Indeed, the theorem is obvious for high enough values of ${\delta}$; for instance, if ${\delta > 2/3}$, then partitioning the cube ${[3]^n}$ into lines and applying the pigeonhole principle will already give a combinatorial line. So the idea is to deduce the claim for a fixed density ${\delta}$ from the corresponding claim at a higher density. A key concept here is that of an ${m}$-dimensional combinatorial subspace of ${[3]^n}$ – a set of the form ${\phi([3]^m)}$, where ${\phi \in \{1,2,3,*_1,\ldots,*_m\}^n}$ is a string formed using the base alphabet and ${m}$ wildcards ${*_1,\ldots,*_m}$ (with each wildcard appearing at least once), and ${\phi(a_1 \ldots a_m)}$ is the string formed by substituting ${a_i}$ for ${*_i}$ for each ${i}$. (Thus, for instance, a combinatorial line is a combinatorial subspace of dimension ${1}$.) The identification ${\phi}$ between ${[3]^m}$ and the combinatorial space ${\phi([3]^m)}$ maps combinatorial lines to combinatorial lines. Thus, to prove Theorem 1, it suffices to show Proposition 4 (Lack of lines implies density increment) Let ${0 < \delta \leq 1}$.
Then if ${n}$ is a sufficiently large integer, and ${A \subset [3]^n}$ has density at least ${\delta}$ and has no combinatorial lines, then there exists an ${m}$-dimensional subspace ${\phi([3]^m)}$ of ${[3]^n}$ on which ${A}$ has density at least ${\delta + c(\delta)}$, where ${c(\delta) > 0}$ depends only on ${\delta}$ (and is bounded away from zero on any compact range of ${\delta}$), and ${m \geq m_0(n,\delta)}$ for some function ${m_0(n,\delta)}$ that goes to infinity as ${n \rightarrow \infty}$ for fixed ${\delta}$. It is easy to see that Proposition 4 implies Theorem 1 (for instance, one could consider the infimum of all ${\delta}$ for which the theorem holds, and show that having this infimum non-zero would lead to a contradiction). Now we have to figure out how to get that density increment. The original argument of Roth relied on Fourier analysis, which in turn relies on an underlying translation-invariant structure which is not present in the density Hales-Jewett setting. (Arithmetic progressions are translation-invariant, but combinatorial lines are not.) It turns out that one can proceed instead by adapting a (modification of) an argument of Ajtai and Szemerédi, which gave the first proof of Theorem 2. The (modified) Ajtai-Szemerédi argument uses the density increment method, assuming that ${A}$ has no right-angled triangles and showing that ${A}$ has an increased density on a subgrid – a product ${P \times Q}$ of fairly long arithmetic progressions with the same spacing. The argument proceeds in two stages, which we describe slightly informally (in particular, glossing over some technical details regarding quantitative parameters such as ${\varepsilon}$) as follows: • Step 1. If ${A \subset [n]^2}$ is dense but has no right-angled triangles, then ${A}$ has an increased density on a cartesian product ${U \times V}$ of dense sets ${U, V \subset [n]}$ (which are not necessarily arithmetic progressions). • Step 2. Any Cartesian product ${U \times V}$ in ${[n]^2}$ can be partitioned into reasonably large grids ${P \times Q}$, plus a remainder term of small density. From Step 1, Step 2 and the pigeonhole principle we obtain the desired density increment of ${A}$ on a grid ${P \times Q}$, and then the density increment argument gives us the corners theorem. Step 1 is actually quite easy. If ${A}$ is dense, then it must also be dense on some diagonal ${D = \{ (x,y): x+y = const \}}$, by the pigeonhole principle. Let ${U}$ and ${V}$ denote the rows and columns that ${A \cap D}$ occupies. Every pair of points in ${A \cap D}$ forms the hypotenuse of some corner, whose third vertex lies in ${U \times V}$. Thus, if ${A}$ has no corners, then ${A}$ must avoid all the points formed by ${U \times V}$ (except for those of the diagonal ${D}$). Thus ${A}$ has a significant density decrease on the Cartesian product ${U \times V}$. Dividing the remainder ${[n]^2 \backslash (U \times V)}$ into three further Cartesian products ${U \times ([n]\backslash V)}$, ${([n] \backslash U) \times V}$, ${([n] \backslash U) \times ([n] \backslash V)}$ and using the pigeonhole principle we obtain the claim (after redefining ${U, V}$ appropriately). Step 2 can be obtained by iterating a one-dimensional version: • Step 2a. Any set ${U \subset [n]}$ can be partitioned into reasonably long arithmetic progressions ${P}$, plus a remainder term of small density. 
Indeed, from Step 2a, one can partition ${U \times [n]}$ into products ${P \times [n]}$ (plus a small remainder), which can be easily repartitioned into grids ${P \times Q}$ (plus small remainder). This partitions ${U \times V}$ into sets ${P \times (V \cap Q)}$ (plus small remainder). Applying Step 2a again, each ${V \cap Q}$ can be partitioned further into progressions ${Q'}$ (plus small remainder), which allows us to partition each ${P \times (V \cap Q)}$ into grids ${P' \times Q'}$ (plus small remainder). So all one has left to do is establish Step 2a. But this can be done by the greedy algorithm: locate one long arithmetic progression ${P}$ in ${U}$ and remove it from ${U}$, then locate another to remove, and so forth until no further long progressions remain in the set. But Szemerédi’s theorem then tells us the remaining set has low density, and one is done! This argument has the apparent disadvantage of requiring a deep theorem (Szemerédi’s theorem) in order to complete the proof. However, interestingly enough, when one adapts the argument to the density Hales-Jewett theorem, one gets to replace Szemerédi’s theorem by a more elementary result – one which in fact follows from the (easy) ${k=2}$ version of the density Hales-Jewett theorem, i.e. Sperner’s theorem. We first need to understand the analogue of the Cartesian products ${U \times V}$. Note that ${U \times V}$ is the intersection of a “vertically insensitive set” ${U \times [n]}$ and a “horizontally insensitive set” ${[n] \times V}$. By “vertically insensitive” we mean that membership of a point ${(x,y)}$ in that set is unaffected if one moves that point in a vertical direction, and similarly for “horizontally insensitive”. In a similar fashion, define a “${12}$-insensitive set” to be a subset of ${[3]^n}$, membership in which is unaffected if one flips a coordinate from a ${1}$ to a ${2}$ or vice versa (e.g. if ${1223}$ lies in the set, then so must ${1213}$, ${1113}$, ${2113}$, etc.). Similarly define the notion of a “${13}$-insensitive set”. We then define a “complexity ${1}$ set” to be the intersection ${E_{12} \cap E_{13}}$ of a ${12}$-insensitive set ${E_{12}}$ and a ${13}$-insensitive set ${E_{13}}$; these are analogous to the Cartesian products ${U \times V}$. (For technical reasons, one actually has to deal with local versions of insensitive sets and complexity ${1}$ sets, in which one is only allowed to flip a moderately small number of the ${n}$ coordinates rather than all of them. But to simplify the discussion let me ignore this (important) detail, which is also a major issue to address in the other two proofs of this theorem.) The analogues of Steps 1, 2 for the density Hales-Jewett theorem are then • Step 1. If ${A \subset [3]^n}$ is dense but has no combinatorial lines, then ${A}$ has an increased density on a (local) complexity ${1}$ set ${E_{12} \cap E_{13}}$. • Step 2. Any (local) complexity ${1}$ set ${E_{12} \cap E_{13} \subset [3]^n}$ can be partitioned into moderately large combinatorial subspaces (plus a small remainder). We can sketch how Step 1 works as follows. Given any ${x \in [3]^n}$, let ${\pi_{1 \rightarrow 2}(x)}$ denote the string formed by replacing all ${1}$s with ${2}$s, e.g. ${\pi_{1 \rightarrow 2}(1321) = 2322}$. Similarly define ${\pi_{1 \rightarrow 3}(x)}$. Observe that ${x, \pi_{1 \rightarrow 2}(x), \pi_{1 \rightarrow 3}(x)}$ forms a combinatorial line (except in the rare case when ${x}$ doesn’t contain any ${1}$s). 
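As a quick concrete check of this observation (a code sketch of mine, not from the post), the wildcard positions of the line through ${x}$, ${\pi_{1 \rightarrow 2}(x)}$, ${\pi_{1 \rightarrow 3}(x)}$ are exactly the positions of the ${1}$s in ${x}$:

```python
# Build the combinatorial line x, pi_{1->2}(x), pi_{1->3}(x) for a string over {1,2,3}.
def substitute(template, i):
    """ell(i): replace every wildcard 'x' in the template by the digit i."""
    return template.replace("x", str(i))

def line_through(x):
    """Line template obtained by turning every '1' of x into a wildcard."""
    return x.replace("1", "x")

x = "1321"
template = line_through(x)                       # 'x32x'
points = [substitute(template, i) for i in (1, 2, 3)]
print(template, points)
# -> x32x ['1321', '2322', '3323'], i.e. x, pi_{1->2}(x), pi_{1->3}(x)
```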
Thus if we let ${E_{12} := \{ x: \pi_{1 \rightarrow 2}(x) \in A\}}$, ${E_{13} := \{ x: \pi_{1 \rightarrow 3}(x) \in A\}}$, we see that ${A}$ must avoid essentially all of ${E_{12} \cap E_{13}}$. On the other hand, observe that ${E_{12}}$ and ${E_{13}}$ are ${12}$-insensitive and ${13}$-insensitive sets respectively. Taking complements and using the same sort of pigeonhole argument as before, we obtain the claim. (Actually, this argument doesn’t quite work because ${E_{12}}$, ${E_{13}}$ could be very sparse; this problem can be fixed, but requires one to use local complexity ${1}$ sets rather than global ones, and also to introduce the concept of “equal-slices measure”; I will not discuss these issues here.) Step 2 can be reduced, much as before, to the following analogue of Step 2a: • Step 2a. Any ${12}$-insensitive set ${E_{12} \subset [3]^n}$ can be partitioned into moderately large combinatorial subspaces (plus a small remainder). Identifying the letters ${1}$ and ${2}$ together, one can quotient ${[3]^n}$ down to ${[2]^n}$; the preimages of this projection are precisely the ${12}$-insensitive sets. Because of this, Step 2a is basically equivalent (modulo some technicalities about measure) to • Step 2a’. Any ${E \subset [2]^n}$ can be partitioned into moderately large combinatorial subspaces (plus a small remainder). By the greedy algorithm, we will be able to accomplish this step if we can show that every dense subset of ${[2]^n}$ contains moderately large subspaces. But this turns out to be possible by carefully iterating Sperner’s theorem (which shows that every dense subset of ${[2]^n}$ contains combinatorial lines). This proof of Theorem 1 seems to extend without major difficulty to the case of higher ${k}$, though details are still being worked out. — 3. The triangle removal argument — The triangle removal lemma of Ruzsa and Szemerédi is a graph-theoretic result which implies the corners theorem (and hence Roth’s theorem). It asserts the following: Lemma 5 (Triangle removal lemma) For every ${\varepsilon > 0}$ there exists ${\delta > 0}$ such that if a graph ${G}$ on ${n}$ vertices has fewer than ${\delta n^3}$ triangles, then the triangles can be deleted entirely by removing at most ${\varepsilon n^2}$ edges. Let’s see how the triangle removal lemma implies the corners theorem. A corner is, of course, already a triangle in the geometric sense, but we need to convert it to a triangle in the graph-theoretic sense, as follows. Let ${A}$ be a subset of ${[n]^2}$ with no corners; the aim is to show that ${A}$ has small density. Let ${V_h}$ be the set of all horizontal lines in ${[n]^2}$, ${V_v}$ the set of vertical lines, and ${V_d}$ the set of diagonal lines (thus all three sets have size about ${n}$). We create a tripartite graph ${G}$ on the vertex sets ${V_h \cup V_v \cup V_d}$ by joining a horizontal line ${h \in V_h}$ to a vertical line ${v \in V_v}$ whenever ${h}$ and ${v}$ intersect at a point in ${A}$, and similarly connecting ${V_h}$ or ${V_v}$ to ${V_d}$. Observe that a triangle in ${G}$ corresponds either to a corner in ${A}$, or to a “degenerate” corner in which the horizontal, vertical, and diagonal line are all concurrent. In particular, there are very few triangle in ${G}$, which can then be deleted by removing a small number of edges from ${G}$ by the triangle removal lemma. But each edge removed can delete at most one degenerate corner, and the number of degenerate corners is ${|A|}$, and so ${|A|}$ is small as required. 
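The reduction just described is easy to make concrete on a small example (a sketch of mine, with a made-up set ${A}$): each triangle ${(x,y,s)}$ in the tripartite graph decodes to the corner ${\{(x,y), (x+r,y), (x,y+r)\}}$ with ${r = s - x - y}$, and the degenerate (${r=0}$) triangles are in bijection with the points of ${A}$.

```python
from itertools import product

# Tripartite graph of the Ajtai-Szemeredi reduction: vertical lines x, horizontal lines y,
# diagonals s = x + y; triangles correspond to corners (degenerate when r = s - x - y = 0).
n = 6
A = {(0, 0), (0, 2), (2, 0), (1, 3), (4, 1), (3, 3)}   # a made-up subset of [n]^2

def is_triangle(x, y, s):
    """Edges v(x)-h(y), v(x)-d(s), h(y)-d(s) are all present."""
    return (x, y) in A and (x, s - x) in A and (s - y, y) in A

corners, degenerate = [], []
for x, y, s in product(range(n), range(n), range(2 * n - 1)):
    if 0 <= s - x < n and 0 <= s - y < n and is_triangle(x, y, s):
        r = s - x - y
        (degenerate if r == 0 else corners).append(((x, y), (x + r, y), (x, y + r)))

print("degenerate corners (= |A|):", len(degenerate))
print("genuine corners found     :", corners)
```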
All known proofs of the triangle removal lemma proceed by some version of the following three steps: • “Regularity lemma step”: Applying tools such as the Szemerédi regularity lemma, one can partition the graph ${G}$ into components ${G_{ij}}$ between cells ${V_i, V_j}$ of vertices, such that most of the ${G_{ij}}$ are “pseudorandom”. One way to define what pseudorandom means is to view each graph component ${G_{ij}}$ as a subset of the Cartesian product ${V_i \times V_j}$, in which case ${G_{ij}}$ is pseudorandom if it does not have a significant density increment on any smaller Cartesian product ${V'_i \times V'_j}$ of non-trivial size. • ”Counting lemma step”: By exploiting the pseudorandomness property, one shows that if ${G}$ has a triple ${G_{ij}, G_{jk}, G_{ki}}$ of dense pseudorandom graphs between cells ${V_i, V_j, V_k}$ of non-trivial size, then this triple must generate a large number of triangles; hence, if ${G}$ has very few triangles, then one cannot find such a triple of dense pseudorandom graphs. • ”Cleaning step”: If one then removes all components of ${G}$ which are too sparse or insufficiently pseudorandom, one can thus eliminate all triangles. Pulling this argument back to the corners theorem, we see that cells such as ${V_i, V_j, V_k}$ will correspond either to horizontally insensitive sets, vertically insensitive sets, or diagonally insensitive sets. Thus this proof of the corners theorem proceeds by partitioning ${[n]^2}$ in three different ways into insensitive sets in such a way that ${A}$ is pseudorandom with respect to many of the cells created by any two of these partitions, counting the corners generated by any triple of large cells in which A is pseudorandom and dense, and cleaning out all the other cells. It turns out that a variant of this argument can give Theorem 1; this was in fact the original approach studied by the polymath1 project, though it was only after a detour through ergodic theory (as well as the development of the density-increment argument discussed above) that the triangle-removal approach could be properly executed. In particular, an ergodic argument based on the infinitary analogue of the triangle removal lemma (and its hypergraph generalisations) was developed by Austin, which then inspired the combinatorial version sketched here. The analogue of the vertex cells ${V_i}$ are given by certain ${12}$-insensitive sets ${E_{12}^a}$, ${13}$-insensitive sets ${E_{13}^b}$, and ${23}$-insensitive sets ${E_{23}^c}$. Roughly speaking, a set ${A \subset [3]^n}$ would be said to be pseudorandom with respect to a cell ${E_{12}^a \cap E_{13}^b}$ if ${A \cap E_{12}^a \cap E_{13}^b}$ has no further density increment on any smaller cell ${E'_{12} \cap E'_{13}}$ with ${E'_{12}}$ a ${12}$-insensitive subset of ${E_{12}^a}$, and ${E'_{13}}$ a ${13}$-insensitive subset of ${E_{13}^b}$. (This is an oversimplification, glossing over an important refinement of the concept of pseudorandomness involving the discrepancy between global densities in ${[3]^n}$ and local densities in subspaces of ${[3]^n}$.) There is a similar notion of ${A}$ being pseudorandom with respect to a cell ${E_{13}^b \cap E_{23}^c}$ or ${E_{23}^c \cap E_{12}^a}$. We briefly describe the “regularity lemma” step. 
By modifying the proof of the regularity lemma, one can obtain three partitions $\displaystyle [3]^n = E_{12}^1 \cup \ldots \cup E_{12}^{M_{12}} = E_{13}^1 \cup \ldots \cup E_{13}^{M_{13}} = E_{23}^1 \cup \ldots \cup E_{23}^{M_{23}}$ into ${12}$-insensitive, ${13}$-insensitive, and ${23}$-insensitive components respectively, where ${M_{12}, M_{13}, M_{23}}$ are not too large, and ${A}$ is pseudorandom with respect to most cells ${E_{12}^a \cap E_{13}^b}$, ${E_{13}^b \cap E_{23}^c}$, and ${E_{23}^c \cap E_{12}^a}$. In order for the counting step to work, one also needs an additional “stationarity” reduction, which is difficult to state precisely, but roughly speaking asserts that the “local” statistics of sets such as ${E_{12}^a}$ on medium-dimensional subspaces are close to the corresponding “global” statistics of such sets; this can be achieved by an additional pigeonholing argument. We will gloss over this issue, pretending that there is no distinction between local statistics and global statistics. (Thus, for instance, if ${E_{12}^a}$ has large global density in ${[3]^n}$, we shall assume that ${E_{12}^a}$ also has large density on most medium-sized subspaces of ${[3]^n}$.) Now for the “counting lemma” step. Suppose we can find ${a,b,c}$ such that the cells ${E_{12}^a, E_{13}^b, E_{23}^c}$ are large, and that ${A}$ intersects ${E_{12}^a \cap E_{13}^b}$, ${E_{13}^b \cap E_{23}^c}$, and ${E_{23}^c \cap E_{12}^a}$ in a dense pseudorandom manner. We claim that this will force ${A}$ to have a large number of combinatorial lines ${\ell}$, with ${\ell(1)}$ in ${A \cap E_{12}^a \cap E_{13}^b}$, ${\ell(2)}$ in ${A \cap E_{23}^c \cap E_{12}^a}$, and ${\ell(3)}$ in ${A \cap E_{13}^b \cap E_{23}^c}$. Because of the dense pseudorandom nature of ${A}$ in these cells, it turns out that it will suffice to show that there are a lot of lines ${\ell(1)}$ with ${\ell(1) \in E_{12}^a \cap E_{13}^b}$, ${\ell(2) \in E_{23}^c \cap E_{12}^a}$, and ${\ell(3) \in E_{13}^b \cap E_{23}^c}$. One way to generate a line ${\ell}$ is by taking the triple ${\{ x, \pi_{1 \rightarrow 2}(x), \pi_{1 \rightarrow 3}(x) \}}$, where ${x \in [3]^n}$ is a generic point. (Actually, as we will see below, we would have to to a subspace of ${[3]^n}$ before using this recipe to generate lines.) Then we need to find many ${x}$ obeying the constraints $\displaystyle x \in E_{12}^a \cap E_{13}^b; \quad \pi_{1 \rightarrow 2}(x) \in E_{23}^c \cap E_{12}^a; \quad \pi_{1 \rightarrow 3}(x) \in E_{13}^b \cap E_{23}^c.$ Because of the various insensitivity properties, many of these conditions are redundant, and we can simplify to $\displaystyle x \in E_{12}^a \cap E_{13}^b; \quad \pi_{1 \rightarrow 2}(x) \in E_{23}^c.$ Now note that the property “${\pi_{1 \rightarrow 2}(x) \in E_{23}^c}$” is 123-insensitive; it is simultaneously 12-insensitive, 23-insensitive, and 13-insensitive. As ${E_{23}^c}$ is assumed to be large, there will be large combinatorial subspaces on which (a suitably localised version of) this property “${\pi_{1 \rightarrow 2}(x) \in E_{23}^c}$” will be always true. Localising to this space (taking advantage of the stationarity properties alluded to earlier), we are now looking for solutions to $\displaystyle x \in E_{12}^a \cap E_{13}^b.$ We’ll pick ${x}$ to be of the form ${\pi_{2 \rightarrow 1}(y)}$ for some ${y}$. 
We can then rewrite the constraints on ${y}$ as

$\displaystyle y \in E_{12}^a; \quad \pi_{2 \rightarrow 1}(y) \in E_{13}^b.$

The property "${\pi_{2 \rightarrow 1}(y) \in E_{13}^b}$" is 123-insensitive, and ${E_{13}^b}$ is large, so by arguing as before we can pass to a large subspace where this property is always true. The largeness of ${E_{12}^a}$ then gives us a large number of solutions.

Taking contrapositives, we conclude that if ${A}$ in fact has no combinatorial lines, then there does not exist any triple ${E_{12}^a, E_{13}^b, E_{23}^c}$ of large cells with respect to which ${A}$ is dense and pseudorandom. This forces ${A}$ to be confined either to very small cells, or to very sparse subsets of cells, or to the rare cells which fail to be pseudorandom. None of these cases can contribute much to the density of ${A}$, and so ${A}$ itself is very sparse – contradicting the hypothesis in Theorem 1 that ${A}$ is dense (this is the "cleaning step"). This concludes the sketch of the triangle-removal proof of this theorem. The ergodic version of this argument, due to Austin, works for all values of ${k}$, so I expect the combinatorial version to do so as well.

— 4. The finitary Furstenberg-Katznelson argument —

In 1989, Furstenberg and Katznelson gave the first proof of Theorem 1, by translating it into a recurrence statement about a certain type of stationary process indexed by an infinite cube ${[3]^\omega := \bigcup_{n=1}^\infty [3]^n}$. This argument was inspired by a long string of other successful proofs of density Ramsey theorems via ergodic means, starting with the initial 1977 paper of Furstenberg giving an ergodic theory proof of Szemerédi's theorem. The latter proof was transcribed into a finitary language by myself, so it was reasonable to expect that the Furstenberg-Katznelson argument could similarly be translated into a combinatorial framework.

Let us first briefly describe the original strategy of Furstenberg to establish Roth's theorem, but phrased in an informal, and vaguely combinatorial, language. The basic task is to get a non-trivial lower bound on averages of the form

$\displaystyle {\Bbb E}_{a,r} f(a) f(a+r) f(a+2r) \ \ \ \ \ (1)$

where we will be a bit vague about what ${a,r}$ are ranging over, and where ${f}$ is some non-negative function of positive mean. It is then natural to study more general averages of the form

$\displaystyle {\Bbb E}_{a,r} f(a) g(a+r) h(a+2r). \ \ \ \ \ (2)$

Now, it turns out that certain types of functions ${f,g,h}$ give a negligible contribution to expressions such as (2). In particular, if ${f}$ is weakly mixing, which roughly means that the pair correlations

$\displaystyle {\Bbb E}_a f(a) f(a+r)$

are small for most ${r}$, then the average (2) is small no matter what ${g, h}$ are (so long as they are bounded). This can be established by some applications of the Cauchy-Schwarz inequality (or its close cousin, the van der Corput lemma). As a consequence of this, all weakly mixing components of ${f}$ can essentially be discarded when considering an average such as (1).

After getting rid of the weakly mixing components, what is left? Being weakly mixing is like saying that almost all the shifts ${f(\cdot+r)}$ of ${f}$ are close to orthogonal to each other. At the other extreme is that of periodicity – the shifts ${f(\cdot + r)}$ periodically recur to become equal to ${f}$ again.
There is a slightly more general notion of almost periodicity – roughly, this means that the shifts ${f(\cdot + r)}$ don't have to recur exactly to ${f}$ again, but they are forced to range in a precompact set, which basically means that for every ${\varepsilon > 0}$, ${f(\cdot+r)}$ lies within ${\varepsilon}$ (in some suitable norm) of some finite-dimensional space. A good example of an almost periodic function is an eigenfunction, in which we have ${f(a+r) = \lambda_r f(a)}$ for each ${r}$ and some quantity ${\lambda_r}$ independent of ${a}$ (e.g. one can take ${f(a) = e^{2\pi i \alpha a}}$ for some ${\alpha \in {\mathbb R}}$). In this case, the finite-dimensional space is simply the scalar multiples of ${f(a)}$ (and one can even take ${\varepsilon=0}$ in this special case).

It is easy to see that non-trivial almost periodic functions are not weakly mixing; more generally, any function which correlates non-trivially with an almost periodic function can also be seen to not be weakly mixing. In the converse direction, it is also fairly easy to show that any function which is not weakly mixing must have non-trivial correlation with an almost periodic function. Because of this, it turns out that one can basically decompose any function into almost periodic and weakly mixing components.

For the purposes of getting lower bounds on (1), this allows us to essentially reduce matters to the special case when ${f}$ is almost periodic. But then the shifts ${f(\cdot + r)}$ are almost ranging in a finite-dimensional set, which allows one to essentially assign each shift ${r}$ a colour from a finite range of colours. If one then applies the van der Waerden theorem, one can find many arithmetic progressions ${a,a+r,a+2r}$ which have the same colour, and this can be used to give a non-trivial lower bound on (1). (Thus we see that the role of a compactness property such as almost periodicity is to reduce density Ramsey theorems to colouring Ramsey theorems.)

This type of argument can be extended to more advanced recurrence theorems, but certain things become more complicated. For instance, suppose one wanted to count progressions of length ${4}$; this amounts to lower bounding expressions such as

$\displaystyle {\Bbb E}_{a,r} f(a) f(a+r) f(a+2r) f(a+3r). \ \ \ \ \ (3)$

It turns out that ${f}$ being weakly mixing is no longer enough to give a negligible contribution to expressions such as (3). For that, one needs the stronger property of being weakly mixing relative to almost periodic functions; roughly speaking, this means that for most ${r}$, the expression ${f(\cdot) f(\cdot+r)}$ is not merely of small mean (which is what weak mixing would mean), but that this expression furthermore does not correlate strongly with any almost periodic function (i.e. ${{\Bbb E}_a f(a) f(a+r) g(a)}$ is small for any almost periodic ${g}$). Once one has this stronger weak mixing property, then one can discard all components of ${f}$ which are weakly mixing relative to almost periodic functions. One then has to figure out what is left after all these components are discarded. Because we strengthened the notion of weak mixing, we have to weaken the notion of almost periodicity to compensate.
The correct notion is no longer that of almost periodicity – in which the shifts ${f(\cdot + r)}$ almost take values in a finite-dimensional vector space – but that of almost periodicity relative to almost periodic functions, in which the shifts almost take values in a finite-dimensional module over the algebra of almost periodic functions. A good example of such a beast is that of a quadratic eigenfunction, in which we have ${f(a+r) = \lambda_r(a) f(a)}$ where ${\lambda_r(a)}$ is itself an ordinary eigenfunction, and thus almost periodic in the ordinary sense; here, the relative module is the one-dimensional module formed by almost periodic multiples of ${f}$. (A typical example of a quadratic eigenfunction is ${f(a) = e^{2\pi i \alpha a^2}}$ for some ${\alpha \in {\mathbb R}}$.)

It turns out that one can "relativise" all of the previous arguments to the almost periodic "factor", and decompose an arbitrary ${f}$ into a component which is weakly mixing relative to almost periodic functions, and another component which is almost periodic relative to almost periodic functions. The former type of components can be discarded. For the latter, we can once again start colouring the shifts ${f(\cdot+r)}$ with a finite number of colours, but with the caveat that the colour assigned is no longer independent of ${a}$, but depends in an almost periodic fashion on ${a}$. Nevertheless, it is still possible to combine the van der Waerden colouring Ramsey theorem with the theory of recurrence for ordinary almost periodic functions to get a lower bound on (3) in this case.

One can then iterate this argument to deal with arithmetic progressions of longer length, but one now needs to consider even more intricate notions of almost periodicity, e.g. almost periodicity relative to (almost periodic functions relative to almost periodic functions), etc.

It turns out that these types of ideas can be adapted (with some effort) to the density Hales-Jewett setting. It's simplest to begin with the ${k=2}$ situation rather than the ${k=3}$ situation. Here, we are trying to obtain non-trivial lower bounds for averages of the form

$\displaystyle {\Bbb E}_{\ell} f(\ell(1)) f(\ell(2)) \ \ \ \ \ (4)$

where ${\ell}$ ranges in some fashion over combinatorial lines in ${[2]^n}$, and ${f}$ is some non-negative function with large mean. The analogues of weakly mixing and almost periodic in this setting are the ${12}$-uniform and ${12}$-low influence functions respectively. Roughly speaking, a function is ${12}$-low influence if its value usually doesn't change much if a ${1}$ is flipped to a ${2}$ or vice versa (e.g. the indicator function of a ${12}$-insensitive set is ${12}$-low influence); conversely, a ${12}$-uniform function is a function ${g}$ such that ${{\Bbb E}_{\ell} f(\ell(1)) g(\ell(2))}$ is small for all (bounded) ${f}$. One can show that any function can be decomposed, more or less orthogonally, into a ${12}$-uniform function and a ${12}$-low influence function, with the upshot being that one can basically reduce the task of lower bounding (4) to the case when ${f}$ is ${12}$-low influence. But then ${f(\ell(1))}$ and ${f(\ell(2))}$ are approximately equal to each other, and it is straightforward to get a lower-bound in this case.

Now we turn to the ${k=3}$ setting, where we are looking at lower-bounding expressions such as

$\displaystyle {\Bbb E}_{\ell} f(\ell(1)) g(\ell(2)) h(\ell(3)) \ \ \ \ \ (5)$

with ${f=g=h}$.
It turns out that ${g}$ (say) being ${12}$-uniform is no longer enough to give a negligible contribution to the average (5). Instead, one needs the more complicated notion of ${g}$ being ${12}$-uniform relative to ${23}$-low influence functions; this means that not only are the averages ${{\Bbb E}_{\ell} f(\ell(1)) g(\ell(2))}$ small for all bounded ${f}$, but furthermore ${{\Bbb E}_{\ell} f(\ell(1)) g(\ell(2)) h(\ell)}$ is small for all bounded ${f}$ and all ${23}$-low influence ${h}$ (there is a minor technical point here that ${h}$ is a function of a line rather than of a point, but this should be ignored). Any component of ${g}$ in (5) which is ${12}$-uniform relative to ${23}$-low influence functions is negligible and so can be removed.

One then needs to figure out what is left in ${g}$ when these components are removed. The answer turns out to be functions ${g}$ that are ${12}$-almost periodic relative to ${23}$-low influence. The precise definition of this concept is technical, but very roughly speaking it means that if one flips a digit from a ${1}$ to a ${2}$, then the value of ${g}$ changes in a manner which is controlled by ${23}$-low influence functions. Anyway, the upshot is that one can reduce ${g}$ in (5) from ${f}$ to the components of ${f}$ which are ${12}$-almost periodic relative to ${23}$-low influence. Similarly, one can reduce ${h}$ in (5) from ${f}$ to the components of ${f}$ which are ${13}$-almost periodic relative to ${23}$-low influence.

At this point, one has to use a colouring Ramsey theorem – in this case, the Graham-Rothschild theorem – in conjunction with the relative almost periodicity to locate lots of places in which ${g(\ell(2))}$ is close to ${g(\ell(1))}$ while ${h(\ell(3))}$ is simultaneously close to ${h(\ell(1))}$. This turns (5) into an expression of the form ${{\Bbb E}_x f(x) g(x) h(x)}$, which turns out to be relatively easy to lower bound (because ${g, h}$, being projections of ${f}$, tend to be large wherever ${f}$ is large).
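To make the notion of a quadratic eigenfunction discussed earlier a little more concrete, here is a short worked computation (a standard calculation added for illustration, not part of the original argument). Taking ${f(a) = e^{2\pi i \alpha a^2}}$, we have

$\displaystyle f(a+r) = e^{2\pi i \alpha (a+r)^2} = e^{2\pi i \alpha (2ar + r^2)} e^{2\pi i \alpha a^2} = \lambda_r(a) f(a), \qquad \lambda_r(a) := e^{2\pi i \alpha (2ar + r^2)},$

and for each fixed ${r}$ the multiplier ${\lambda_r(a)}$ has linear phase in ${a}$, so it is an ordinary eigenfunction and in particular almost periodic in the ordinary sense – exactly the behaviour required of the relative module described above.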
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 499, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.960300087928772, "perplexity": 224.35504559675562}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049273946.43/warc/CC-MAIN-20160524002113-00148-ip-10-185-217-139.ec2.internal.warc.gz"}
http://theinfolist.com/html/ALL/s/electromagnetism.html
Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force is carried by electromagnetic fields composed of electric fields and magnetic fields, and it is responsible for electromagnetic radiation such as light. It is one of the four fundamental interactions (commonly called forces) in nature, together with the strong interaction, the weak interaction, and gravitation. At high energy, the weak force and electromagnetic force are unified as a single electroweak force.
Electromagnetic phenomena are defined in terms of the electromagnetic force, sometimes called the Lorentz force, which includes both electricity and magnetism as different manifestations of the same phenomenon. The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. The electromagnetic attraction between atomic nuclei and their orbital electrons holds atoms together. Electromagnetic forces are responsible for the chemical bonds between atoms which create molecules, and for intermolecular forces. The electromagnetic force governs all chemical processes, which arise from interactions between the electrons of neighboring atoms. Electromagnetism is very widely used in modern technology, and electromagnetic theory is the basis of electric power engineering and electronics, including digital technology.
There are numerous mathematical descriptions of the electromagnetic field. Most prominently, Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. The theoretical implications of electromagnetism, particularly the establishment of the speed of light based on properties of the "medium" of propagation (permeability and permittivity), led to the development of special relativity by Albert Einstein in 1905.

# History of the theory

Originally, electricity and magnetism were considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 ''A Treatise on Electricity and Magnetism'', in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:

1. Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: unlike charges attract, like ones repel (a rough numerical illustration of this inverse-square behaviour appears after this list).
2. Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole.
3. An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire.
4. A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved toward or away from it; the direction of the current depends on that of the movement.
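The following small script is an illustration added here (the charge and separation values are arbitrary examples, not from the article; the constant is the standard Coulomb constant) showing the inverse-square behaviour of the first effect:

```python
# Coulomb constant k = 1/(4*pi*eps_0), in N*m^2/C^2
COULOMB_CONSTANT = 8.9875517923e9

def coulomb_force(q1, q2, r):
    """Signed magnitude of the electrostatic force between point charges
    q1, q2 (in coulombs) a distance r (in metres) apart.
    Positive values mean repulsion (like charges), negative mean attraction."""
    return COULOMB_CONSTANT * q1 * q2 / r ** 2

# Like charges repel, unlike charges attract, and the force falls off as 1/r^2:
print(coulomb_force(1e-6, 1e-6, 0.1))    # ~ +0.899 N (repulsive)
print(coulomb_force(1e-6, -1e-6, 0.1))   # ~ -0.899 N (attractive)
print(coulomb_force(1e-6, 1e-6, 0.2))    # ~ +0.225 N (four times weaker at twice the distance)
```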
In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The unit of magnetic induction (oersted) is named in honor of his contributions to the field of electromagnetism. His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy. This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz,
is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.

Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, so it is uncertain whether any current flowed across the needle. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community.

An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated: "A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ...
and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ..."

E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning to be "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars."

# Fundamental forces

The electromagnetic force is one of the four known fundamental forces. The other fundamental forces are:

* the strong nuclear force, which binds quarks to form nucleons, and binds nucleons to form nuclei;
* the weak nuclear force, which binds to all known particles in the Standard Model, and causes certain forms of radioactive decay (in particle physics, though, the electroweak interaction is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction);
* the gravitational force.

All other forces (e.g., friction, contact forces) are derived from these four fundamental forces and they are known as non-fundamental forces. The electromagnetic force is responsible for practically all phenomena one encounters in daily life above the nuclear scale, with the exception of gravity. Roughly speaking, all the forces involved in interactions between atoms can be explained by the electromagnetic force acting between the electrically charged atomic nuclei and electrons of the atoms. Electromagnetic forces also explain how these particles carry momentum by their movement. This includes the forces we experience in "pushing" or "pulling" ordinary material objects, which result from the intermolecular forces that act between the individual molecules in our bodies and those in the objects. The electromagnetic force is also involved in all forms of chemical phenomena. A necessary part of understanding the intra-atomic and intermolecular forces is the effective force generated by the momentum of the electrons' movement, such that as electrons move between interacting atoms they carry momentum with them.
As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behaviour of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves.

# Classical electrodynamics

In 1600, William Gilbert proposed, in his ''De Magnete'', that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752. One of the first to discover and publish a link between man-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to produce a theory of electromagnetism that set the subject on a mathematical foundation. A theory of electromagnetism, known as classical electromagnetism, was developed by various physicists during the period between 1820 and 1873, when it culminated in the publication of a treatise by James Clerk Maxwell, which unified the preceding developments into a single theory and discovered the electromagnetic nature of light.
In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law (Purcell, p. 278, Chapter 6.1, "Definition of the Magnetic Field"; Lorentz force and force equation). One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in a vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether.
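The claim that the vacuum speed of light is fixed by the permittivity and permeability of free space can be checked numerically; the short snippet below is an illustration added here (it is not part of the original article) using the standard CODATA values of the two constants:

```python
import math

MU_0 = 4 * math.pi * 1e-7        # vacuum permeability, in H/m (classical defined value)
EPSILON_0 = 8.8541878128e-12     # vacuum permittivity, in F/m

# Maxwell's equations give c = 1 / sqrt(mu_0 * epsilon_0)
c = 1.0 / math.sqrt(MU_0 * EPSILON_0)
print(f"c = {c:,.0f} m/s")        # ~ 299,792,458 m/s
```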
After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.) In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)

# Extension to nonlinear phenomena

The Maxwell equations are ''linear'', in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations.

# Quantities and units

Electromagnetic units are part of a system of electrical units based primarily upon the magnetic properties of electric currents, the fundamental SI unit being the ampere. The units are:

* ampere (electric current)
* coulomb (electric charge)
* farad (capacitance)
* henry (inductance)
* ohm (resistance)
* siemens (conductance)
* tesla (magnetic flux density)
* volt (electric potential)
* watt (power)
* weber (magnetic flux)

In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in a vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system. Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units.
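One standard textbook illustration of this unit-system dependence (added here for concreteness; it is not taken from the article) is Coulomb's law, which takes different forms in SI and Gaussian units:

$\displaystyle F = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2} \ \text{(SI)}, \qquad F = \frac{q_1 q_2}{r^2} \ \text{(Gaussian)},$

so both the numerical factors and the dimensions assigned to electric charge differ between the two systems.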
# See also

* Abraham–Lorentz force
* Aeromagnetic surveys
* Computational electromagnetics
* Double-slit experiment
* Electromagnet
* Electromagnetic induction
* Electromagnetic wave equation
* Electromagnetic scattering
* Electromechanics
* Geophysics
* Introduction to electromagnetism
* Magnetostatics
* Magnetoquasistatic field
* Optics
* Relativistic electromagnetism
* Wheeler–Feynman absorber theory
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8750099539756775, "perplexity": 1416.8552393070818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573667.83/warc/CC-MAIN-20220819100644-20220819130644-00462.warc.gz"}
https://byjus.com/rd-sharma-solutions/class-9-maths-chapter-2-exponents-of-real-numbers/
# RD Sharma Solutions Class 9 Exponents Of Real Numbers

## RD Sharma Solutions Class 9 Chapter 2

RD Sharma class 9 maths solutions chapter 2 is available here. The chapter "Exponents of Real Numbers" deals with topics such as representing real numbers on the number line, operations on real numbers, and the laws of exponents for real numbers. The class 9 maths solutions also contain important questions and exercises that are asked during exams.

Throughout the solved examples, we will be working with real numbers, so it is desirable to define powers of real numbers. The powers of real numbers are defined in the same way as powers of rational numbers, and the same laws of indices (exponents) apply. a^n is called the nth power of a; the real number a is called the base and n is called the exponent or index of the nth power of a. The solutions give a clear idea of the different laws of integral exponents through examples solved in a step-by-step manner. Further, with the RD Sharma exercises students can understand the meaning of the principal nth root of a real number. They will also learn to apply the laws of rational exponents in the solved examples given below. Learn these concepts easily by practising the questions from the exercises given in the RD Sharma solutions for the chapter "Exponents of Real Numbers".

Class: Class 9
Chapter: Chapter 2
Name: Exponents of Real Numbers
Exercise: All

RD Sharma Class 9 Maths Solutions – Chapter 2: find detailed RD Sharma solutions for class 9 maths chapter 2 – exponents of real numbers below. Several math exercises are also given in the following table.

#### Practise This Question

The denominator of a rational number is greater than its numerator by 3. If the numerator is increased by 7 and the denominator is decreased by 1, the new number becomes 3/2. The original number will be:
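For quick reference, the laws of exponents that this chapter revolves around can be summarised as follows (a standard summary added here, not quoted from the RD Sharma text). For real numbers a, b > 0 and rational exponents m, n:

$\displaystyle a^m \cdot a^n = a^{m+n}, \qquad \frac{a^m}{a^n} = a^{m-n}, \qquad (a^m)^n = a^{mn}, \qquad a^0 = 1,$

$\displaystyle (ab)^n = a^n b^n, \qquad \left(\frac{a}{b}\right)^n = \frac{a^n}{b^n}, \qquad a^{-n} = \frac{1}{a^n}, \qquad a^{p/q} = \sqrt[q]{a^p}.$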
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.854589581489563, "perplexity": 605.0743563527953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258849.89/warc/CC-MAIN-20190526045109-20190526071109-00293.warc.gz"}
http://www.boundaryvalueproblems.com/content/2011/1/23/
Research # Periodic solutions for nonautonomous second order Hamiltonian systems with sublinear nonlinearity Zhiyong Wang1* and Jihui Zhang2 Author Affiliations 1 Department of Mathematics, Nanjing University of Information Science and Technology, Nanjing 210044, Jiangsu, People's Republic of China 2 Jiangsu Key Laboratory for NSLSCS, School of Mathematics Sciences, Nanjing Normal University, Nanjing 210097, Jiangsu, People's Republic of China For all author emails, please log on. Boundary Value Problems 2011, 2011:23 doi:10.1186/1687-2770-2011-23 Received: 14 May 2011 Accepted: 13 September 2011 Published: 13 September 2011 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. ### Abstract Some existence and multiplicity of periodic solutions are obtained for nonautonomous second order Hamiltonian systems with sublinear nonlinearity by using the least action principle and minimax methods in critical point theory. Mathematics Subject Classification (2000): 34C25, 37J45, 58E50. ##### Keywords: Control function; Periodic solutions; The least action principle; Minimax methods ### 1 Introduction and main results Consider the second order systems (1.1) where T > 0 and F : [0, T] × ℝ→ ℝ satisfies the following assumption: (A) F (t, x) is measurable in t for every x ∈ ℝand continuously differentiable in x for a.e. t ∈ [0, T], and there exist a C(ℝ+, ℝ+), b L1(0, T ; ℝ+) such that for all x ∈ ℝand a.e. t ∈ [0, T]. The existence of periodic solutions for problem (1.1) has been studied extensively, a lot of existence and multiplicity results have been obtained, we refer the readers to [1-13] and the reference therein. In particular, under the assumptions that the nonlinearity ∇F (t, x) is bounded, that is, there exists p(t) ∈ L1(0, T ; ℝ+) such that (1.2) for all x ∈ ℝand a.e. t ∈ [0, T], and that (1.3) Mawhin and Willem in [3] have proved that problem (1.1) admitted a periodic solution. After that, when the nonlinearity ∇F (t, x) is sublinear, that is, there exists f(t), g(t) ∈ L1(0, T ; ℝ+) and α ∈ [0, 1) such that (1.4) for all x ∈ ℝand a.e. t ∈ [0, T], Tang in [7] have generalized the above results under the hypotheses (1.5) Subsequently, Meng and Tang in [13] further improved condition (1.5) with α ∈ (0, 1) by using the following assumptions (1.6) (1.7) Recently, authors in [14] investigated the existence of periodic solutions for the second order nonautonomous Hamiltonian systems with p-Laplacian, here p > 1, it is assumed that the nonlinearity ∇F (t, x) may grow slightly slower than |x|p-1, a typical example with p = 2 is (1.8) solutions are found as saddle points to the corresponding action functional. Furthermore, authors in [12] have extended the ideas of [14], replacing in assumptions (1.4) and (1.5) the term |x| with a more general function h(|x|), which generalized the results of [3,7,10,11]. Concretely speaking, it is assumed that there exist f(t), g(t) ∈ L1(0, T; ℝ+) and a nonnegative function h C([0, +∞), [0, +∞)) such that for all x ∈ ℝand a.e. t ∈ [0, T], and that where h be a control function with the properties: if α = 0, h(t) only need to satisfy conditions (a)-(c), here C*, K1 and K2 are positive constants. Moreover, α ∈ [0, 1) is posed. Under these assumptions, periodic solutions of problem (1.1) are obtained. 
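The displayed formulas in this extract did not survive extraction. For orientation, problem (1.1), the sublinearity condition (1.4) and the action functional are usually written in this literature (cf. Mawhin and Willem [3]; Tang [7]) as follows; this is a reconstruction from context, not the authors' exact text:

$$\ddot{u}(t)=\nabla F(t,u(t))\quad\text{a.e. } t\in[0,T],\qquad u(0)-u(T)=\dot{u}(0)-\dot{u}(T)=0,$$

$$|\nabla F(t,x)|\le f(t)|x|^{\alpha}+g(t)\quad\text{for all } x\in\mathbb{R}^{N}\text{ and a.e. } t\in[0,T],$$

$$\varphi(u)=\frac{1}{2}\int_{0}^{T}|\dot{u}(t)|^{2}\,dt+\int_{0}^{T}F(t,u(t))\,dt,$$

where $\varphi$ is defined on the space $H_{T}^{1}$ of absolutely continuous $T$-periodic functions $u:[0,T]\to\mathbb{R}^{N}$ with $\dot{u}\in L^{2}(0,T;\mathbb{R}^{N})$.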
In addition, if the nonlinearity ∇F (t, x) grows more faster at infinity with the rate like , f(t) satisfies some certain restrictions and α is required in a more wider range, say, α ∈ [0,1], periodic solutions have also been established in [12] by minimax methods. An interesting question naturally arises: Is it possible to handle both the case such as (1.8) and some cases like (1.4), (1.5), in which only f(t) ∈ L1(0, T ; ℝ+) and α ∈ [0, 1)? In this paper, we will focus on this problem. We now state our main results. Theorem 1.1. Suppose that F satisfies assumption (A) and the following conditions: (S1) There exist constants C ≥ 0, C* > 0 and a positive function h C(ℝ+, ℝ+) with the properties: where . Moreover, there exist f L1(0, T; ℝ+) and g L1(0, T; ℝ+) such that for all x ∈ ℝand a.e. t ∈ [0, T]; (S2) There exists a positive function h C(ℝ+, ℝ+) which satisfies the conditions (i)-(iv) and Then, problem (1.1) has at least one solution which minimizes the functional φ given by on the Hilbert space defined by with the norm Theorem 1.2. Suppose that (S1) and assumption (A) hold. Assume that Then, problem (1.1) has at least one solution in . Theorem 1.3. Suppose that (S1), (S3) and assumption (A) hold. Assume that there exist δ > 0, ε > 0 and an integer k > 0 such that (1.9) for all x ∈ ℝand a.e. t ∈ [0, T], and (1.10) for all |x| ≤ δ and a.e. t ∈ [0, T], where . Then, problem (1.1) has at least two distinct solutions in . Theorem 1.4. Suppose that (S1), (S2) and assumption (A) hold. Assume that there exist δ > 0, ε > 0 and an integer k ≥ 0 such that (1.11) for all |x| ≤ δ and a.e. t ∈ [0, T]. Then, problem (1.1) has at least three distinct solutions in . Remark 1.1. (i) Let α ∈ [0, 1), in Theorems 1.1-1.4, ∇F(t, x) does not need to be controlled by |x|2α at infinity; in particular, we can not only deal with the case in which ∇F(t, x) grows slightly faster than |x|2α at infinity, such as the example (1.8), but also we can treat the cases like (1.4), (1.5). (ii) Compared with [12], we remove the restriction on the function f(t) as well as the restriction on the range of α ∈ [0, 1] when we are concerned with the cases like (1.8). (iii) Here, we point out that introducing the control function h(t) has also been used in [12,14], however, these control functions are different from ours because of the distinct characters of h(t). Remark 1.2. From (i) of (S1), we see that, nonincreasing control functions h(t) can be permitted. With respect to the detailed example on this assertion, one can see Example 4.3 of Section 4. Remark 1.3. There are functions F(t, x) satisfying our theorems and not satisfying the results in [1-14]. For example, consider function where f(t) ∈ L1(0, T; ℝ+) and f(t) > 0 for a.e. t ∈ [0, T]. It is apparent that (1.12) for all x ∈ ℝand t ∈ [0, T]. (1.12) shows that (1.4) does not hold for any α ∈ [0, 1), moreover, note f(t) only belongs to L1(0, T; ℝ+) and no further requirements on the upper bound of are posed, then the approach of [12] cannot be repeated. This example cannot be solved by earlier results, such as [1-13]. On the other hand, take , , C = 0, C* = 1, then by simple computation, one has and Hence, (S1) and (S2) are hold, by Theorem 1.1, problem (1.1) has at least one solution which minimizes the functional φ in . What's more, Theorem 1.1 can also deal with some cases which satisfy the conditions (1.4) and (1.5). For instance, consider function where q(t) ∈ L1(0, T; ℝ). It is not difficult to see that for all x ∈ ℝand a.e. t ∈ [0, T]. 
Choose , , C = 0, C* = 1, and g(t) = |q(t)|, then (S1) and (S2) hold, by Theorem 1.1, problem (1.1) has at least one solution which minimizes the functional φ in . However, we can find that the results of [14] cannot cover this case. More examples are drawn in Section 4. Our paper is organized as follows. In Section 2, we collect some notations and give a result regrading properties of control function h(t). In Section 3, we are devote to the proofs of main theorems. Finally, we will give some examples to illustrate our results in Section 4. ### 2 Preliminaries For , let and , then one has and where . It follows from assumption (A) that the corresponding function φ on given by is continuously differentiable and weakly lower semi-continuous on (see[2]). Moreover, one has for all . It is well known that the solutions of problem (1.1) correspond to the critical point of φ. In order to prove our main theorems, we prepare the following auxiliary result, which will be used frequently later on. Lemma 2.1. Suppose that there exists a positive function h which satisfies the conditions (i), (iii), (iv) of (S1), then we have the following estimates: Proof. It follows from (iv) of (S1) that, for any ε > 0, there exists M1 > 0 such that (2.1) By (iii) of (S1), there exists M2 > 0 such that (2.2) which implies that (2.3) where M := max{M1, M2}. Hence, we obtain (2.4) for all t > 0 by (i) of (S1). Obviously, h(t) satisfies (1) due to the definition of h(t) and (2.4). Next, we come to check condition (2). Recalling the property (iv) of (S1) and (2.2), we get Therefore, condition (2) holds. Finally, we show that (3) is also true. By (iii) of (S1), one arrives at, for every β > 0, there exists M3 > 0 such that (2.5) Let θ ≥ 1, using (2.5) and integrating the relation over an interval [1, S] ⊂ [1, +∞), we obtain Thus, since by (iv) of (S1), one has for all t M3. That is, which completes the proof. □ ### 3 Proof of main results For the sake of convenience, we will denote various positive constants as Ci, i = 1, 2, 3,.... Now, we are ready to proof our main results. Proof of Theorem 1.1. For , it follows from (S1), Lemma 2.1 and Sobolev's inequality that (3.1) which implies that (3.2) Taking into account Lemma 2.1 and (S2), one has (3.3) As ||u|| → +∞ if and only if , for ε small enough, (3.2) and (3.3) deduce that Hence, by the least action principle, problem (1.1) has at least one solution which minimizes the function φ in . □ Proof of Theorem 1.2. First, we prove that φ satisfies the (PS) condition. Suppose that is a (PS) sequence of φ, that is, φ'(un) → 0 as n → +∞ and {φ(un)} is bounded. In a way similar to the proof of Theorem 1.1, we have for all n. Hence, we get (3.4) for large n. On the other hand, it follows from Wirtinger's inequality that (3.5) for all n. Combining (3.4) with (3.5), we obtain (3.6) for all large n. By (3.1), (3.6), Lemma 2.1 and (S3), one has This contradicts the boundedness of {φ(un)}. So, is bounded. Notice (3.6) and (1) of Lemma 2.1, hence {un} is bounded. Arguing then as in Proposition 4.1 in [3], we conclude that the (PS) condition is satisfied. In order to apply the saddle point theorem in [2,3], we only need to verify the following conditions: (φ1) φ(u) → +∞ as ||u|| → +∞in , where , (φ2) φ(u) → -∞ as |u(t)| → +∞. In fact, for all , by (S1), Sobolev's inequality and Lemma 2.1, we have which implies that (3.7) By Wirtinger's inequality, one has Hence, for ε small enough, (φ1) follows from (3.7). 
On the other hand, by (S3) and Lemma 2.1, we get which implies that Thus, (φ2) is verified. The proof of Theorem 1.2 is completed. □ Proof of Theorem 1.3. Let , and ψ = -φ. Then, ψ C1(E, ℝ) satisfies the (PS) condition by the proof of Theorem 1.2. In view of Theorem 5.29 and Example 5.26 in [2], we only need to prove that We see that for all x ∈ ℝand a.e. t ∈ [0, T]. By (S1) and Lemma 2.1, one has for all |x| ≥ δ, a.e. t ∈ [0, T] and some Q(t) ∈ L1(0, T; ℝ+) given by Now, it follows from (1.10) that for all x ∈ ℝand a.e. t ∈ [0, T]. Hence, we obtain for all u Hk. Then, (ψ1) follows from the above inequality. For , by (1.9), one has So, (ψ2) is obtained. At last, (ψ3) follows from (φ1) which are appeared in the proof of Theorem 1.2. Then the proof of Theorem 1.3 is completed. □ Proof of Theorem 1.4. From the proof of Theorem 1.1, we know that φ is coercive which implies that φ satisfies the (PS) condition. With the similar manner to [4,7], we can get the multiplicity results, here we omit the details. □ ### 4 Examples In this section, we give some examples to illustrate our results. Example 4.1. Consider the function where d(t) ∈ L1(0, T; ℝ). Let , then , by a direct computation, (S1) and (S3) hold. Then, by Theorem 1.2, we conclude that problem (1.1) has one solution in . However, as the reason of Remark 1.3, the results in [1-13] cannot be applied. Example 4.2. Consider the function where A(t), B(t) are suitable functions which insure assumption (A) hold. Also, put , , we see that (S1), (S2) and (1.11) hold. By virtue of Theorem 1.4, problem (1.1) has at least three distinct solutions in . Example 4.3. Consider the function We observe that which means ∇F(t, x) is bounded, moreover, one has Then, by the results in [3,7,12], problem (1.1) has one solution which minimizes the functional φ in . In fact, our Theorem 1.1 can also handle this case. In this situation, let , , and choose C = 2, C* = 1 , g(t) ≡ 0, we infer and So, by Theorem 1.1, problem (1.1) has one solution which minimizes the functional φ in . Remark 4.1. Unlike the control functions in [12], where h(t) is nondecreasing, here control function is bounded but not increasing. Example 4.4. Consider the function where k(t) ∈ L1(0, T; ℝ). It is easy to check that The above inequality leads to (1.4) hold with Take , then So, by the theorems in [3,7,12,13], problem (1.1) has at least one solution in . Indeed, our Theorem 1.2 can also deal with this case. Let , , and choose C = 0, C* = 1, , g(t) = |k(t)|, we know Furthermore, one has Hence, (S1) and (S3) are true, by Theorem 1.2, problem (1.1) has at least one solution in . However, we can find that the results in [14] cannot deal with this case. ### Competing interests The authors declare that they have no competing interests. ### Authors' contributions All authors typed, read and approved the final manuscript. ### Acknowledgements The authors would like to thank Professor Huicheng Yin for his help and many valuable discussions, and the first author takes the opportunity to thank Professor Xiangsheng Xu and the members at Department of Mathematics and Statistics at Mississippi State University for their warm hospitality and kindness. This Project is Supported by National Natural Science Foundation of China (Grant No. 11026213, 10871096), Natural Science Foundation of the Jiangsu Higher Education Institutions (Grant No. 10KJB110006) and Foundation of Nanjing University of Information Science and Technology (Grant No. 20080280). ### References 1. 
Berger, MS, Schechter, M: On the solvability of semilinear gradient operators. Adv Math. 25, 97–132 (1977). Publisher Full Text 2. Rabinowitz, PH: Minimax methods in critical point theory with applications to differential equations. CBMS Reg Conf Ser in Math, Providence, RI: American Mathematical Society (1986) 3. Mawhin, J, Willem, M: Critical Point Theory and Hamiltonian Systems, New York: Springer-Verlag (1989) 4. Brezis, H, Nirenberg, L: Remarks on finding critical points. Comm Pure Appl Math. 44, 939–963 (1991). Publisher Full Text 5. Long, YM: Nonlinear oscillations for classical Hamiltonian systems with bi-even subquadratic potentials. Nonlinear Anal. 24, 1665–1671 (1995). Publisher Full Text 6. Tang, CL: Periodic solutions for non-autonomous second order systems. J Math Anal Appl. 202, 465–469 (1996). Publisher Full Text 7. Tang, CL: Periodic solutions for nonautonomous second order systems with sublinear nonlinearity. Proc Amer Math Soc. 126, 3263–3270 (1998). Publisher Full Text 8. Tang, CL, Wu, XP: Periodic solutions for second order systems with not uniformly coercive potential. J Math Anal Appl. 259, 386–397 (2001). Publisher Full Text 9. Ekeland, I, Ghoussoub, N: Certain new aspects of the calculus of variations in the large. Bull Amer Math Soc. 39, 207–265 (2002). Publisher Full Text 10. Zhao, F, Wu, X: Periodic solutions for a class of non-autonomous second order systems. J Math Anal Appl. 296, 422–434 (2004). Publisher Full Text 11. Zhao, F, Wu, X: Existence and multiplicity of periodic solution for a class of non-autonomous second-order systems with linear nonlinearity. Nonlinear Anal. 60, 325–335 (2005) 12. Wang, Z, Zhang, J: Periodic solutions of a class of second order non-autonomous Hamiltonian systems. Nonlinear Anal. 72, 4480–4487 (2010). Publisher Full Text 13. Meng, Q, Tang, XH: Solutions of a second-order Hamiltonian with periodic boundary conditions. Comm Pure Appl Anal. 9, 1053–1067 (2010) 14. Wang, Z, Zhang, J: Periodic solutions of non-autonomous second order Hamiltonian systems with p-Laplacian. Election J Differ Equ. 2009, 1–12 (2009)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9546819925308228, "perplexity": 1136.030064267938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705939136/warc/CC-MAIN-20130516120539-00022-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.ques10.com/p/52979/find-the-work-done-in-moving-a-particle-in-a-force/
Find the work done in moving a particle in a force field given by $\bar{F}=3xy\,\hat{i}-5z\,\hat{j}+10x\,\hat{k}$ along the curve $x=t^2+1,\ y=2t^2,\ z=t^3$ from $t=1$ to $t=2$.

$\bar{r}=x\hat{i}+y\hat{j}+z\hat{k}$

$d\bar{r}=dx\,\hat{i}+dy\,\hat{j}+dz\,\hat{k}=2t\,dt\,\hat{i}+4t\,dt\,\hat{j}+3t^2\,dt\,\hat{k}$

$\bar{F}=3xy\,\hat{i}-5z\,\hat{j}+10x\,\hat{k}=3\left(t^2+1\right)\left(2t^2\right)\hat{i}-5t^3\,\hat{j}+10\left(t^2+1\right)\hat{k}$

$\bar{F}\cdot d\bar{r}=6t^2\left(t^2+1\right)2t\,dt-20t^4\,dt+30t^2\left(t^2+1\right)dt=\left(12t^5+10t^4+12t^3+30t^2\right)dt$

$\text{Work done}=\int_C\bar{F}\cdot d\bar{r}=\int_1^2\left(12t^5+10t^4+12t^3+30t^2\right)dt$

$={\left[2t^6+2t^5+3t^4+10t^3\right]}_1^2=\left[2\times 2^6+2\times 2^5+3\times 2^4+10\times 2^3\right]-\left[2+2+3+10\right]=320-17=303$

Hence the work done by $\bar{F}$ is $303$ units.
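As a quick sanity check on the arithmetic above (not part of the original answer), the line integral can be evaluated symbolically; the snippet below assumes SymPy is available:

```python
from sympy import symbols, integrate

t = symbols('t')

# Parametrize the curve and the force field, then integrate F . dr from t = 1 to 2.
x, y, z = t**2 + 1, 2*t**2, t**3
F = (3*x*y, -5*z, 10*x)
dr = (x.diff(t), y.diff(t), z.diff(t))

work = integrate(sum(Fi*dri for Fi, dri in zip(F, dr)), (t, 1, 2))
print(work)   # -> 303
```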
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8215914964675903, "perplexity": 689.5464669399679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058263.20/warc/CC-MAIN-20210927030035-20210927060035-00239.warc.gz"}
https://math.stackexchange.com/questions/2368580/equivalent-definitions-of-perfect-field
# Equivalent definitions of perfect field.

With $T$ some field, I'd like to prove the following implication:

If all irreducible polynomials are separable (have distinct roots), then $T$ is either of characteristic $0$, or its characteristic is $p>0$ and the Frobenius endomorphism $x \mapsto x^p$ is an automorphism.

I already have the opposite implication, but I'd appreciate a hint on how to prove the one I wrote in this question.

## 2 Answers

If $T$ has characteristic $p>0$ and $a \in T$ is not a $p$th power in $T$, then any irreducible factor of $x^p-a$ is inseparable.

Suppose that $T$ has characteristic $p>0,$ and that $x \mapsto x^p$ is not an automorphism. Since a ring homomorphism from a field into itself is injective or zero, then $x \mapsto x^p$ is not surjective. Let $a \in T$ be such that $a \neq b^p$ for every $b \in T$. Let $f(x)=x^p-a$. We prove that $f$ is irreducible and inseparable. If $\alpha$ is a root of $x^p-a$ (in some extension), then $x^p-a = (x-\alpha)^p$, so $\alpha$ is a multiple root of $f$ (with multiplicity $p$), hence $f$ is inseparable. Now, let $g(x)$ be an irreducible factor of $f(x)$. Note that $\alpha \not\in T$, otherwise $a=\alpha^p$, contrary to the assumption. Then $g(x)=(x-\alpha)^k$ for some $1<k \leq p$. Using the binomial theorem, we have $$g(x)=(x-\alpha)^k=x^k - k\alpha x^{k-1} + \cdots + (-\alpha)^k.$$ Therefore, $k\alpha \in T$. Since $\alpha \not\in T$, we must have $p \mid k$ (otherwise $k$ is invertible in $T$ and $\alpha = k^{-1}\cdot k\alpha \in T$), so $k=p$ and $g=f$. Hence, $f$ is irreducible.

• Thanks for the detailed answer. – John P Jul 23 '17 at 17:39
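A standard concrete illustration of the argument above (not from the original thread): let $T=\mathbb{F}_p(t)$, the field of rational functions over $\mathbb{F}_p$. The element $t$ is not a $p$th power in $T$, so $f(x)=x^p-t$ is irreducible over $T$, yet in a splitting field $f(x)=(x-t^{1/p})^p$, so $f$ is inseparable. Accordingly $T$ is not perfect, and indeed the Frobenius map $x\mapsto x^p$ on $T$ is not surjective (its image is $\mathbb{F}_p(t^p)$).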
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985314011573792, "perplexity": 55.351247433905655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572517.50/warc/CC-MAIN-20190916100041-20190916122041-00198.warc.gz"}
http://mathhelpforum.com/trigonometry/208411-inner-loop-limacon-print.html
# Inner Loop of a limacon

• November 25th 2012, 05:33 PM
wowitsangie03

Inner Loop of a limacon

The polar function, r(theta) = 2 cos(theta) - 1, is a limacon. It has an outer and inner loop. Find the exact values, a and b, such that a <= theta <= b creates the inner loop of the limacon.

I know that r = a ± b cos(theta), so I guess that means it's -1/2 and that means 2pi/3 and 4pi/3. I need help.

• November 25th 2012, 07:18 PM
MarkFL

Re: Help.

If you look at a plot of the curve, you see you want to solve for: $2\cos(\theta)-1=0$ where $-\frac{\pi}{2}<\theta<\frac{\pi}{2}$
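Carrying the hint through (my own addition, not part of the original thread):

$$2\cos\theta-1=0 \;\Longrightarrow\; \cos\theta=\tfrac{1}{2} \;\Longrightarrow\; \theta=\pm\tfrac{\pi}{3}\ \text{on}\ \left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right),$$

so $r\ge 0$ exactly for $-\tfrac{\pi}{3}\le\theta\le\tfrac{\pi}{3}$, and this interval traces the inner loop, giving $a=-\tfrac{\pi}{3}$ and $b=\tfrac{\pi}{3}$.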
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.923089325428009, "perplexity": 2206.3012907729826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298020.57/warc/CC-MAIN-20150323172138-00174-ip-10-168-14-71.ec2.internal.warc.gz"}
https://ethresear.ch/t/rsa-accumulators-for-plasma-cash-history-reduction/3739
# RSA Accumulators for Plasma Cash history reduction #1 Special thanks to Justin Drake for discussion One of the challenges for all Plasma flavors, Plasma Cash too, is the amount of history data that users need to keep around, and send to recipients if they want to send coins. For example, suppose a Plasma chain has one block committed to chain per minute, with \approx 2^{16} transactions per minute (~= 1000 transactions per second). We assume each user has their coins split across \approx 10 fragments. To have the full information needed to win an exit game, each fragment requires a Merkle branch of length \approx 16 per block, so that’s 16 hashes * 32 bytes * 500000 minutes * 10 fragments \approx 2.5 GB for one year. In a transfer or atomic swap or similar operation, this is data that must be transferred. We can increase the efficiency here with RSA accumulators. An RSA accumulator is a data structure providing similar functionality to a Merkle tree: a large amount of data can be compressed into a single fixed-size “root”, such that given the data and this root, one can construct a “witness” to prove that any specific value from the data is part of the data represented by that root. This is done as follows. We choose a large value N whose factorization is not fully known, and a generator g (eg. g = 3) which represents the empty accumulator. To add a value v to an accumulator A, we set the new accumulator A' = A^v. To prove that a value v is part of an accumulator A, we can use a proof of knowledge of exponent scheme (see below), proving that we know a cofactor x such that (g^v)^x = A (note that x may be very large; it is linear in the number of elements, hence why we need a proof scheme rather than providing x directly). For example, suppose that we want to add the values 3,5,11 into the accumulator. Then, A = g^{165}. To prove that 3 is part of A (we’ll use the notation "part of [g...A]" from now on for reasons that should become clear soon), we use a proof of knowledge scheme to prove that (g^3)^x = A for some known cofactor x. In this case, x = 55, but in cases involving many thousands of values x could get very large, which is why the proof of knowledge schemes are necessary. One example of a proof of knowledge scheme, described by Wesolowski (see also this talk by Benedikt Bünz) is as follows. Let h = g^v and z = h^x. Select B = hash(h, z) \mod N. Compute b = h^{floor(\frac{x}{B})}, and r = x \mod B. A proof is simply (b, z), which we can verify by checking b^B * h^r = z (proof of completeness: b^B * h^r = h^{B * floor(\frac{x}{B}) + x \mod B} = h^x). Note that this proof is constant-sized. To prove that a value is v not part of [g...A], we need only prove that we know r such that 0 < r < v where A * g^r is a known power of g^v using the algorithm above. For example, suppose we want to prove that 7 is not part of [g...A] in the example above. 7 is not a factor of 165, so there is no integer x such that (g^7)^x = g^{165} = A. But g^7 is a known root of A * g^3 = g^{168}, so we can provide the values r=3 (satisfying the desired 0 < r < 7) and cofactor x = 24, and we see (g^7)^{24} = A * g^3 = g^{168}. Now, here is how we can use this technique. We require Plasma Cash roots to come with both a Merkle tree of transactions and an RSA accumulator of coin indices that are modified in that block. Using a transaction in an exit game requires both the Merkle proof and the RSA accumulator proof of membership. 
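To make the accumulate/prove/verify flow described above concrete, here is a minimal Python sketch. Everything in it is illustrative: the modulus is a tiny toy semiprime (a real deployment needs a large modulus of unknown factorization such as RSA-2048), the challenge is a plain hash rather than a hash-to-prime, the proof is returned as (b, z, r) per the correction in the comments below, and all function names are mine rather than from any existing library.

```python
import hashlib
from math import prod

# Toy parameters: a real accumulator needs a large modulus of UNKNOWN
# factorization (e.g. RSA-2048) and elements mapped to distinct primes.
N = 61 * 53   # insecure toy modulus, for illustration only
g = 3         # generator, i.e. the empty accumulator

def accumulate(gen, values):
    """A = gen^(v1 * v2 * ... * vk) mod N."""
    return pow(gen, prod(values), N)

def challenge(h, z):
    """Fiat-Shamir challenge B = hash(h, z); a real scheme hashes to a prime."""
    d = hashlib.sha256(f"{h},{z}".encode()).digest()
    return int.from_bytes(d, "big") % (N - 2) + 2

def prove_membership(gen, values, v):
    """Constant-size proof that v is part of [gen...A]: returns (b, z, r)."""
    x = prod(values) // v              # cofactor: product of the other elements
    h = pow(gen, v, N)
    z = pow(h, x, N)                   # equals the accumulator A
    B = challenge(h, z)
    return pow(h, x // B, N), z, x % B

def verify_membership(gen, A, v, proof):
    """Check b^B * h^r == z and z == A, without ever seeing the cofactor x."""
    b, z, r = proof
    h = pow(gen, v, N)
    B = challenge(h, z)
    return z == A and (pow(b, B, N) * pow(h, r, N)) % N == z

coins = [5, 7, 11]                     # coin indices touched in this block
A = accumulate(g, coins)
assert verify_membership(g, A, 7, prove_membership(g, coins, 7))
```

In the construction above, it is exactly this membership proof, alongside the Merkle branch, that a transaction needs before it can be used in an exit game.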
This means that a RSA proof of non-membership is sufficient to satisfy a user that there is no data in that block that could be used as a transaction in an exit game. However, the RSA accumulator that we use is cumulative; that is, in the first block, the generator that we use can be 3, but in every subsequent block, the generator used is the accumulator output of the previous block. This allows us to batch proofs of non-membership: to prove that a given coin index was not touched in blocks n .... n+k, we make a proof of non-membership based on the post-state of the accumulator after block n+k using the pre-state of block n as the generator. If a user receives this proof of non-membership, they can be convinced that no proof of membership can be generated for any of the blocks in the range n ... n+k. This means that the history proof size for a coin goes down from one Merkle branch per Plasma block to two RSA accumulator proofs per transaction of that coin: one proof of membership for when the transaction takes place, and one proof of non-membership for the range where it does not. If each coin gets transacted on average once per day, and an RSA proof of non-membership is ~1 kB, then this means \approx 1 Kb * 365 days * 10 fragments \approx 3.6 MB for one year. If there are many coins, then we can optimize further. Proofs of non-membership can be batched: for example, if you prove that A * g^{50} is a power of g^{143}, then you know that log_g(A) = -50 \mod 143, which implies it is not a multiple of 11 or 13. This works well up to a few hundred indices, but if we want to have very fine denominations it may be possible to batch much more, see Log(coins)-sized proofs of inclusion and exclusion for RSA accumulators 11 Likes Log(coins)-sized proofs of inclusion and exclusion for RSA accumulators Plasma Cash with Sparse Merkle Trees, Bloom filters, and Probabilistic Transfers Accumulators, scalability of UTXO blockchains, and data availability Plasma World Map - the hitchhiker’s guide to the plasma Improving UX for Storage Rent Towards Efficient RSA Accumulator Proof Generation - Plasma Prime Short RSA exclusion proofs for Plasma Prime Compact RSA inclusion/exclusion proofs Short S[NT]ARK exclusion proofs for Plasma Let's assign prime numbers to transactions instead of coins Plasma using the RSA hash accumulator Plasma Prime design proposal #2 Here is a great talk about RSA accumulators by Benedikt Bünz from ‘Scaling Bitcoin Kaizen’ a few days ago. Is this scheme still secure if the operator who provided the RSA setup knows factors of N? Wouldn’t they be able to provide non-membership proof for any coin? According to Bünz, there have been suggested an ideas of trustless RSA setup based on modules over Euclidean rings. I could not find any more information on the security of this approach, unfortunately. 4 Likes #3 So if i am following correctly you are allowing accumulating, not just prime numbers. Which means that a user can make a fake proof. Example: Say they commit 14. They can then fake their witness that either 7 or 2 or 14 have been accumulated. This means that the history proof size for a coin goes down from one Merkle branch per Plasma block to two RSA accumulator proofs per transaction of that coin : one proof of membership for when the transaction takes place, and one proof of non-membership for the range where it does not. 
If each coin gets transacted on average once per day, and an RSA proof of non-membership is ~1 kB, then this means ≈1 Kb * 365 days * 10 fragments ≈3.6 MB for one year. There is also the need of each user to update their witness as the accumulator is updated. Which adds data availability requirements as well as extra bandwidth to keep their witness up date. But i guess this is less than the merkle tree bandwidth requirements. 1 Like #4 Apparently you need to hash committed values to primes. We can also use predefined sequence of primes as a mapping, maybe that will be cheaper for execution in the EVM. 0 Likes #5 Apparently you need to hash committed values to primes. We can also use predefined sequence of primes as a mapping, maybe that will be cheaper for execution in the EVM. Definitely the latter. Just use the sequence of primes starting from the beginning: 2, 3, 5, 7, 11, 13, 17, 19… This way a batched proof of inclusion or exclusion only requires ~2-4 bytes per coin included, and not >=32. 1 Like #6 Please correct me if I’m wrong, but wouldn’t proof be (b, z, r)? In the Weselowski scheme x = 2^\tau and is publicly known, so the verifier computes r \leftarrow x \mod B, while in your scheme x is only known to the prover. To prove that a value is v not part of an accumulator A, we need only prove that we know r such that 0 < r < v where A * g^r is a known power of g^v using the algorithm above. Could you please elaborate this part a little more? How do we find r? Why does the proof hold? 1 Like #7 Please correct me if I’m wrong, but wouldn’t proof be (b,z,r)? In the Weselowski scheme x=2τ and is publicly known, so the verifier computes r←xmodB, while in your scheme x is only known to the prover. Ah yes, I think you’re right. Could you please elaborate this part a little more? How do we find r? Why does the proof hold? Suppose I prove that A * g^3 is a power of g^{10}. Then, I know that DLOG_g(A) equals 7 mod 10, and so it is NOT a multiple of 10. Replace 3 with r and 10 with v. 0 Likes #8 It seems this proof would have to be verified on-chain. Do you have a ballpark for the cost of the full operation? I guess it needs logarithmic exponentiations with the size of the exponent + modulo the large RSA prime (?) Don’t have a strong intuition for the gas cost of that. 0 Likes #9 Do you have a ballpark for the cost of the full operation? The check is b^B * h^r = z. B is a 256-bit value, as is r as r < B. b and h are both RSA-modulus-sized values. So it’s two calls to MODEXP with 256-bit exponents, and two calls with a one-bit exponent plus a manually implemented adder (this is using the formula ab = \frac{(a+b)^2 - (a-b)^2}{4}). 0 Likes #10 Can we instead have g^x be the inclusion witness and have the verification be that (g^x)^v = A? (the proof of knowledge of exponent scheme would still be needed to prove that the accumulator output in a block uses the previous block’s accumulator output as generator) I wonder if the the following scheme works instead: supposing a coin (whose id is q) has been spent t times, and the genesis block accumulator generator is g=3. Then the latest block accumulator value should be A = g^{q^t {p_1}^{e_1} {p_2}^{e_2} \ldots } where q \ne p_i for all i. The coin validity proof consists of a proof that q^{t+1} is not part of A and 1 RSA inclusion proof and merkle inclusion proof per transaction of the coin (as before). 
2 Likes #11 I wonder if the the following scheme works instead: supposing a coin (whose id is q) has been spent t times, and the genesis block accumulator generator is g=3. Then the latest block accumulator value should be A=gqtp1e1p2e2… where q≠pi for all i. The coin validity proof consists of a proof that qt+1 is not part of A and 1 RSA inclusion proof and merkle inclusion proof per transaction of the coin (as before). What’s the goal here? To reduce the size of the history proof by a further ~50%? If so it does seem like that does it. 0 Likes #12 https://bitcointalk.org/index.php?topic=1064860.0 1 Like #13 Given that the accumulator is ever-increasing, wouldn’t we very quickly run into problems with integer overflow when representing these values (both on-chain and off-chain)? 1 Like #14 The multiplication is modulo some N for which we don’t know the factorization. 1 Like #15 How do you pick N with uknown factorization without a trusted dealer? None of the methods mentioned in Bünz’s talk seem reasonable. Is there a well established way to do that? 0 Likes #16 You can do either a secure multi party computation, or pick an RSA number such as RSA-2048. I wrote a blogpost about accumulators and the proof of exponentiation schemes here by the way: 4 Likes #17 Note that the verifier needs to additionally check that A’ accumulates from A, ie that the product of the newly accumulated elements {x1x2 … xn} is the discrete log of A’ base A. The proof has to be the NI-PoKE2 from Stanford’s paper, and not just the b^B * h^r = z check. 0 Likes #18 This check should be done for every published accumulator value to make sure the operator is only doing additions. See my comment on this post:Let's assign prime numbers to transactions instead of coins If this is done, I think the b^B * h^r = z check is sufficient. 0 Likes #19 In @gakonst 's deep-dive, he cautions against using @vbuterin 's non-membership due to the lack of a formal proof. I just wanted to chime in for those not convinced by Vitalik’s argument above, with a direct constructive reduction from Vitalik’s NMP to the LLX NMP. Suppose we’re trying to prove that a prime x is not committed to in A, and we are able to produce q,r where 0<r<x and (g^x)^q=Ag^r as in Vitalik’s scheme. First of all we can replace q\gets q-1 and r\gets x-r, so that we instead have (g^x)^q\cdot g^r=A. Since x is prime, we know \gcd(x,xq+r)=1, since otherwise x|(xq+r)\implies x|r, but 0<r<x by assumption, contradiction. Thus by running the Euclidean algorithm on x,xq+r, we can find a,b such that ax+b(xq+r)=1. But g^{xq+r}=A by assumption, so g^{ax}A^b=g^1, which exactly means that (g^a,b) is a valid LLX proof. EDIT/P.S.: For batched NMPs of x_1,...,x_n, it is not enough to simply make sure 0<r<\prod x_i. You further need to verify that \gcd(r,\prod x_i)=1. If x is in the accumulator, then for any arbitrary m you can produce a batch NMP for x,m that’s valid except for the fact that x|r. 3 Likes #20 As @gakonst explains in his deep dive, the value B should be prime and is calculated via a hash to prime function. Is there an efficient way to do a hash to prime function on chain? Or is there an efficient method to verify that a given B is calculated correctly? 0 Likes
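To round this out, here is a sketch (continuing the toy example from the snippet further up, so `N`, `g`, `prod`, `coins` and `A` are reused) of the exclusion proof described in the original post — find 0 < r < v with (g^v)^x = A·g^r, taking r = (−P) mod v and x = (P + r)/v where P is the product of the accumulated primes — followed by the comment #19 reduction to an LLX-style proof via the extended Euclidean algorithm (computing the Bézout coefficients directly from P rather than from a given (q, r) pair). The cofactor x is returned raw for brevity; in practice it would be wrapped in the constant-size proof of exponentiation, and the block-range variant is obtained by using block n's pre-state accumulator as the generator. Negative exponents are handled with modular inverses (Python 3.8+), which exist because g is coprime to N.

```python
def prove_non_membership(gen, values, v):
    """Exclusion proof for a prime v not among `values`: (x, r) with
    (gen^v)^x = A * gen^r and 0 < r < v.  x shown raw for simplicity."""
    P = prod(values)
    r = (-P) % v                 # nonzero because the prime v does not divide P
    x = (P + r) // v
    return x, r

def verify_non_membership(gen, A, v, proof):
    x, r = proof
    return 0 < r < v and pow(pow(gen, v, N), x, N) == (A * pow(gen, r, N)) % N

def egcd(a, b):
    """Extended Euclid: (gcd, s, t) with s*a + t*b == gcd."""
    if b == 0:
        return a, 1, 0
    gcd_, s, t = egcd(b, a % b)
    return gcd_, t, s - (a // b) * t

def mod_pow_signed(base, e, n):
    """base^e mod n, allowing negative e via a modular inverse."""
    return pow(base, e, n) if e >= 0 else pow(pow(base, -e, n), -1, n)

def llx_non_membership(gen, values, v):
    """LLX-style exclusion proof (d, b) with d^v * A^b = gen, per comment #19."""
    _, a, b = egcd(v, prod(values))      # a*v + b*P == 1 since gcd(v, P) == 1
    return mod_pow_signed(gen, a, N), b

def verify_llx(gen, A, v, proof):
    d, b = proof
    return (pow(d, v, N) * mod_pow_signed(A, b, N)) % N == g % N

# 13 was never accumulated into A above, so both exclusion proofs verify:
assert verify_non_membership(g, A, 13, prove_non_membership(g, coins, 13))
assert verify_llx(g, A, 13, llx_non_membership(g, coins, 13))
```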
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8031818866729736, "perplexity": 1429.827424680855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578743307.87/warc/CC-MAIN-20190425233736-20190426015736-00207.warc.gz"}
https://brilliant.org/discussions/thread/work-inspired-by-a-wiki-page/
# Work Inspired by a wiki page: This wiki page page inspired me to do discover $$\sum^n_{i=1}i^a$$ with $$a$$ being a natural number; I will just start using $$\sum^n_{i=1}i^0=1^0+2^0+3^0+...+n^0=n$$ so I can discover the next ones; Using the same method that the linked page had done with $$n^2$$ and $$n^3$$, I will discover the polynomial that is equal to $$\sum^n_{i=1}i^a$$; First, I will use binomial theorem: $(n+1)^{a+1}=\sum^{a+1}_{i=0}\binom{a+1}{i}n^i$ The $1^{a+1-i}$ will be always 1, so it will not change the result if not written Therefore: $(n+1)^{a+1}=n^{a+1}+\sum^a_{i=0}\binom{a+1}{i}n^i$ (I took the "$n^{a+1}$" out of the sum notation) $(n+1)^{a+1}-n^{a+1}=\sum^a_{i=0}\binom{a+1}{i}n^i$ As the page did, I will substitute n by 1,2,3....n $2^{a+1}-1^{a+1}=\sum^a_{i=0}\binom{a+1}{i}1^i$ $3^{a+1}-2^{a+1}=\sum^a_{i=0}\binom{a+1}{i}2^i$ $4^{a+1}-3^{a+1}=\sum^a_{i=0}\binom{a+1}{i}3^i$ $\vdots$ $n^{a+1}-(n-1)^{a+1}=\sum^a_{i=0}\binom{a+1}{i}(n-1)^i$ $(n+1)^{a+1}-n^{a+1}=\sum^a_{i=0}\binom{a+1}{i}n^i$ Then I will sum everything, notice that everything on the left side, except $(n+1)^{a+1}$ and $-1^{a+1}$ cancels themselves, for the right side, every sum has now another sum inside it! $(n+1)^{a+1}-1=\sum^a_{i=0}\left (\binom{a+1}{i}\sum^n_{j=1}j^i \right )$ I will work the left side now, remember what $(n+1)^{a+1}$ is, but now, we will have to take the $\binom{a+1}{0}n^0=1$, because it will cancel with $-1$: $\sum^{a+1}_{i=1}\binom{a+1}{i}n^i=\sum^a_{i=0}\left (\binom{a+1}{i}\sum^n_{j=1}j^i \right )$ Now, I will take the term that appears on the sum of the right when i=a out of the notation: $\sum^{a+1}_{i=1}\binom{a+1}{i}n^i=\binom{a+1}{a}\sum^n_{j=1}j^a+\sum^{a-1}_{i=0}\left (\binom{a+1}{i}\sum^n_{j=1}j^i \right )$ Notice that this term is exactally what we are searching, so I will isolate it: $\binom{a+1}{a}\sum^n_{j=1}j^a=\sum^{a+1}_{i=1}\binom{a+1}{i}n^i-\sum^{a-1}_{i=0}\left (\binom{a+1}{i}\sum^n_{j=1}j^i \right )$ Remember that $\binom{a+1}{a}=\binom{a+1}{1}=a+1$: $\sum^n_{j=1}j^a=\frac{\sum^{a+1}_{i=1}\binom{a+1}{i}n^i-\sum^{a-1}_{i=0}\left (\binom{a+1}{i}\sum^n_{j=1}j^i \right )}{a+1}$ Now, Remember that, when $a=0$, we have a polynomial expression $n$, then if $a=2$ also "is"(As in: there is a polynomial expression that has the same values) a polynomial expression, and, by a strong induction, if a is natural, then the expression $\sum^n_{i=1}i^a$ also "is". I also made a program, in Python, that returned the expression of $\sum^n_{i=1}i^a$, but to much more values other than 0, 1, 2, 3(Which the original page did), some of the results are: $\sum^n_{i=1}i^{1}=\frac{n^2 + n}{2}$ $\sum^n_{i=1}i^{2}=\frac{2n^3 + 3n^2 + n}{6}$ $\sum^n_{i=1}i^{3}=\frac{n^4 + 2n^3 + n^{2}}{4}$ $\sum^n_{i=1}i^{4}=\frac{6n^5 + 15n^4 + 10n^3 - n}{30}$ $\sum^n_{i=1}i^{5}=\frac{2n^6 + 6n^5 + 5n^4 - 1n^2}{12}$ $\sum^n_{i=1}i^{6}=\frac{6n^7 + 21n^6 + 21n^5 - 7n^3 + n}{42}$ $\sum^n_{i=1}i^{7}=\frac{3n^{8} + 12n^{7} + 14n^{6} - 7n^{4} + 2n^{2}}{24}$ $\sum^n_{i=1}i^{8}=\frac{10n^{9} + 45n^{8} + 60n^{7} - 42n^{5} + 20n^{3} - 3n}{90}$ $\sum^n_{i=1}i^{9}=\frac{2n^{10} + 10n^{9} + 15n^{8} - 14n^{6} + 10n^{4} - 3n^{2}}{20}$ There are patterns there, but I will make another note to that, as this is getting huge Note by Matheus Jahnke 3 years, 10 months ago This discussion board is a place to discuss our Daily Challenges and the math and science related to those challenges. Explanations are more than just a solution — they should explain the steps and thinking strategies that you used to obtain the solution. 
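The note mentions a Python program that produced the closed forms listed above. A small sketch of how such a program might look, using SymPy and the recursion derived in the note (this is my reconstruction, not the author's actual code):

```python
from sympy import Symbol, binomial, expand

n = Symbol('n')

def power_sum(a):
    """Closed form of 1^a + 2^a + ... + n^a, via the recursion derived above."""
    if a == 0:
        return n
    lead = sum(binomial(a + 1, i) * n**i for i in range(1, a + 2))
    tail = sum(binomial(a + 1, i) * power_sum(i) for i in range(0, a))
    return expand((lead - tail) / (a + 1))

print(power_sum(2))   # -> n**3/3 + n**2/2 + n/6, i.e. (2n^3 + 3n^2 + n)/6
print(power_sum(4))   # matches (6n^5 + 15n^4 + 10n^3 - n)/30 from the note
```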
This is amazing, i never knew we can generalize it !! Before reading this note , i used to approximate the answer. $S_{k} = 1^{k}+2^{k}+3^{k}+\cdots+n^{k} =\displaystyle\sum_{r=1}^{n} r^{k}$ For a large value of $n$ (probably greater than $1000$ for negligible error) we can convert the sum in an integral $\displaystyle\sum_{r=0}^{n}r^{k} = n^{k+1} \int_{0}^{1} x^{k} dx \\ S_{k} \approx \dfrac{n^{k+1}}{k+1}$ For example take $k =1$ and $n = 2000$ $S_{1} \approx \dfrac{2000^{2}}{2} = 2000000 \\ \text{exact sum = } 2001000$ an error of only $1000$ . the bigger the value of $n$ , more accurate is the approximation

- 3 years, 10 months ago

That is one of the patterns: The term with the highest degree of the equivalent of $\sum^n_{i=1}i^a$ is $\frac{n^{a+1}}{a+1}$ We can prove it by induction, as in: it works for $a = 1$, because $\sum^n_{i=1} i^1=\frac{n^2}{1+1} + ...$ If it works for $a = k, k-1,k-2, ...$, then for $a = k + 1$, we have: $\sum^n_{i=1}i^{k+1}=\frac{\sum^{k+2}_{i=1}\binom{k+2}{i}n^i-\left(\sum^{a}_{i=0}\binom{k+2}{i}\sum^n_{j=1}j^i\right )}{k+2}$ The term with the highest degree is $\frac{n^{(k+1)+1}}{(k+1)+1}$ due the first sum highest degree, no term of the other sum has a equal or higher degree, because the highest one is $\sum^n_{i=1}i^k$, which gives $\frac{n^{k+1}}{k+1}$ And as the others sums have a lower exponent, then they will have a lower degree of term(this is a strong induction)

- 3 years, 10 months ago

There's a (somewhat) general result for this involving Bernoulli numbers. It's called Faulhaber's formula.

- 3 years, 10 months ago

That is a very good comment, thanks for linking it, even though it make everything I did useless...

- 3 years, 10 months ago
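The integral approximation in the first comment is easy to check numerically; a quick script (my addition, not from the thread):

```python
def exact(n, k):
    """Exact value of 1^k + 2^k + ... + n^k."""
    return sum(i**k for i in range(1, n + 1))

def approx(n, k):
    """Integral approximation n^(k+1) / (k+1)."""
    return n**(k + 1) / (k + 1)

# k = 1, n = 2000: exact 2001000 vs approx 2000000, as noted in the comment.
print(exact(2000, 1), approx(2000, 1))
```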
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 63, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9582144021987915, "perplexity": 1143.9099010624614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876500.43/warc/CC-MAIN-20201021122208-20201021152208-00575.warc.gz"}
https://academicjournals.org/journal/JECE/article-full-text/B2695B767187
### Journal ofEnvironmental Chemistry and Ecotoxicology • Abbreviation: J. Environ. Chem. Ecotoxicol. • Language: English • ISSN: 2141-226X • DOI: 10.5897/JECE • Start Year: 2009 • Published Articles: 195 ## Potential for anaerobic digestion by incubation and anaerobic fermentation of landfill waste: A case of the city of Lomé (Togo) ##### Edem Komi Koledzi • Edem Komi Koledzi • Laboratory of Waste Management, Treatment and Recovery (GTVD), Faculty of Sciences, University of Lomé P. O. Box 1515 Lomé, Togo. ##### Kokou Semeho Hundjoe • Kokou Semeho Hundjoe • Laboratory of Waste Management, Treatment and Recovery (GTVD), Faculty of Sciences, University of Lomé P. O. Box 1515 Lomé, Togo. ##### Nitale M’Balikine Krou • Nitale M’Balikine Krou • Laboratory of Sanitation, Water Science and Environment (LASEE), Faculty of Science and Technology, University of Kara, Togo •  Accepted: 28 June 2021 •  Published: 31 July 2021 ABSTRACT The objective of this work is to determine the methanogenic potential of landfill waste. To do this, two types of tests were carried out. The first one is the pre-test of anaerobic digestion by incubation carried out in a 500 mL flask in which 50 g of waste is placed in contact with 350 mL of water and the second one is the pre-test of the anaerobic digestion by fermentation also carried out under the same conditions as the test for anaerobic digestion by incubation with the difference that in the latter case, inoculum (cow dung) is added up to 21.54 g, that is, 19.23 g of MV in order to respect an I/S ratio of 2. The production of biogas is measured by displacement of an acidified water solution to pH = 2 using an inverted test tube. The results showed that the waste is more favorable for anaerobic digestion by fermentation than for anaerobic digestion by incubation and that all the digesters operated at mesophilic temperature. Indeed, at the end of the experiment, a mass of 20 g produced; with the digester by fermentation 83.895 mL of biogas or 77.75 mL of methane for the waste and 34.307 mL of biogas or 30.25 mL of methane for the sand. With the digester by incubation 4.6 mL of biogas or 2.1 mL of methane for the waste and 4.35 mL of biogas or 2.05 mL of methane for the sand. These results clearly show that landfill waste can be recovered by anaerobic digestion and therefore the establishment of a biogas installation on site and will not only reduce GHGs but also recover the biogas in the form of electricity. Key words: Anaerobic digestion, methanogenic potential, biogas, landfill. INTRODUCTION Landfill is an easy and inexpensive technique for disposing of household and similar waste without adequate management. However, it can cause various problems in terms of hygiene and health as well as the environment (Thonart  et  al.,  1998).  Landfill  waste  is  a source of soil and groundwater contamination due to leachate on the one hand and on the other hand leads to significant biogas emissions (Amarante and Lima, 2010; Adjiri et al., 2018). Biogas mainly consists of combustible methane (50 to 70%) and inert carbon dioxide (20 to 50%). Other gases can be added in a minority way in the composition of biogas: hydrogen, hydrogen sulphide (H2S), ammonia (NH3), and nitrogen (N2). The content of these gases depends closely on the waste treated and the degree of anaerobic digestion (Buffiere 2007; Jung, 2013). In Togo, 80% of household and similar waste is sent to landfill (Tcha-Thom, 2014). 
The abandoned Agoè-Nyivé landfill has an area of ??65,552 m2 with a landfill capacity of 272,041 m3 and receives around 800 tonnes of waste per day, the majority of which is organic (around 30%) (Koledzi, 2011). In Togo, as in most developing countries, the amount of energy produced is less than consumption (Samah, 2015). Biomass, which is the main source of energy, represents 76% of final energy consumption, with mainly wood fuels, in particular fuelwood, charcoal and certain agricultural residues. Despite the fact that energy production remains below demand, the energy supplying countries in Togo only reduce the amount over time as demand only exacerbates. In order to fill this ever-widening void, the use of renewable energies has become essential (Tcha-Thom, 2019). In this context, the production of biogas from fermentable waste is an opportunity for the diversification of energy resources and sustainable management of the environment. The production of biogas through organic waste and its use could increase the economic feasibility of sanitation projects through the production of electricity. The use of biogas will directly contribute to meeting the challenges of climate change and to meeting Togo's current energy challenges. Thus, the objective of this work is to determine the methanogenic potential of the waste from the Agoè-Nyivé landfill with a view to setting up an anaerobic digestion unit on site. MATERIALS AND METHODS Presentation of the discharge and characterization of substrates Agoè-Nyivé landfill, the site of this study, is located in the district of Agoè-Nyivé, a locality located in the northern suburbs of the city of Lomé. That landfill has an area of 65,552 m2 with a landfill capacity of 272,041 m3. Lomé is located at 10 m above sea level. Lomé's dominant climate is known to be tropical. In winter, there is much less rainfall than in summer. The average temperature in Lomé is 26.6°C. Figure 1 shows the overview of the Agoè-Nyivé landfill in the city of Lomé. Fermentable waste and sand to which cow dung is added are studied. The sample of cow dung (40 kg) primarily used as inoculum for the experiments is taken from the agronomic farm of the Higher School of Agronomy of the University of Lomé. These samples are characterized in terms of pH, dry matter (DM) and organic matter (OM) necessary to launch the digestion tests. The moisture percentage of the various organic waste is determined by the difference in weight of the sample before and after drying until the mass stabilizes (Equation 1): with %H: percentage of humidity and MS: dry matter. The organic matter content (OM) is obtained by weighing difference between the mass of the dry waste (M1) and the mass of the waste calcined at 550°C (M2) up to a constant weight for more than 6 h. with M2: the final mass of waste calcined at 550°C. BMP pretest Two types of tests are carried out. This is firstly the pre-test of the anaerobic digestion by incubation carried out in a 500 mL flask (PVC bottle) in which 50 g of waste is contacted with 350 mL of tap water at 27°C. The bottle is filled to 80% of the total volume and then closed tightly. Then a second pre-test of the anaerobic digestion by fermentation is also carried out under the same conditions as the test of the anaerobic digestion by incubation with the addition of the inoculum (cow dung) in the amount of 21.54 g, that is, 19.23 g of MV in order to respect an I/S ratio of 2. 
The production of biogas is measured by displacement of an acidified water solution to pH = 2 using an inverted test tube (Figure 2). The use of acidified water prevents the dissolution of CO2 from the biogas (Tcha-Thom, 2019; Lacour, 2012). At the end of these pretests, 48 and 16 mL of biogas were produced, respectively by the anaerobic digestion by fermentation and the anaerobic digestion by incubation. Implementation of anaerobic digestion tests In anaerobic digestion by incubation, 20 g of wet waste are introduced into a 100 mL flask and make up with tap water to 80% of the total volume of the flask and then sealed. In the first week, the production of biogas and methane is measured daily using the device in Figure 3, then after 5 days the measurements were carried out every two days. The methane content is measured by displacing the liquid using NaOH solution (pH = 13) which traps the CO2 and that of the biogas using a HCl solution, pH = 2. The entire device is kept at room temperature to stimulate tropical conditions. The test is carried out over 30 days and in duplicate. A total of 4 flasks were used, 2 for the measurement of the biogas and 2 for the measurement of the methane content for each test: (1) Test 1 noted BAC1 (Wet waste from the landfill): 20 g of waste from the landfill; (2) Test 2 noted BAC2 (Wet sand from the landfill): 20 g of sand from the landfill. For anaerobic digestion by fermentation, the inoculum used is cow dung. Thus, 20 g of sample is brought into contact with inoculum (cow dung) so that the ratio Inoculum/Substrate (I/S) g of MV is equal to 2. Tap water is used to bring the whole to 80% of the volume of the bottle and then hermetically closed. The volume of production measurements were made under the same conditions as the test for anaerobic digestion by incubation. The test also lasts 30 days. Three tests were carried out all in duplicate: (1) Test 1 noted BAC3 (Landfill waste + inoculum + water): 20 g of landfill waste, that is, 3.84 g of MV with 8.6 g of cow dung or 7.68 g of MV; (2) Test 2 noted BAC4 (Sands + inoculum + water): 20 g of sand from the landfill, that is, 7.8 g of MV with 17.47 g of cow dung, or 15.6 g of MV; (3)  A  test  3  noted  BAC5  (Inoculum + water): 20 g of cow dung is carried out, anaerobic digestion of the inoculum to evaluate the production of biogas therein. The device (Figure 3) makes it possible to produce the volume of biogas and methane. Monitoring of the reaction environment of the digestion medium Monitoring the reaction environment requires large amounts of liquid phase which taken from the 100 mL vials could alter the volume of productions monitored. This is why the larger 25 L reactors (Figure 4) are set up and allow the sampling of the liquid phase and the monitoring of anaerobic digestion parameters such as pH, temperature, TAC and AGVs. Thus, in these 25 L reactors are placed 5 kg of wet waste and 15 L of water. Anaerobic digestion by incubation and fermentation are also carried out. For anaerobic digestion by incubation, two tests were carried out: (1) The first noted BAG1 containing the waste (2) The second denoted BAG2 containing sand The anaerobic digestion of the inoculum (cow dung) is added so that the I/S ratio of MV is equal to 2. 
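The I/S = 2 dosing and the later yield figures (mL of biogas converted to m3 per kg of volatile solids) follow simple arithmetic; a small helper sketch is given below. The VS fractions are taken from the set-ups described above (waste at about 19.2% VS and cow dung at about 89.3% VS, on a wet basis), and the function names are illustrative only.

```python
def inoculum_mass(substrate_wet_g, substrate_vs_frac, inoculum_vs_frac, ratio=2.0):
    """Wet mass of inoculum needed for an inoculum/substrate ratio (VS basis) of `ratio`."""
    return ratio * substrate_wet_g * substrate_vs_frac / inoculum_vs_frac

def specific_yield(biogas_mL, substrate_vs_g):
    """Measured volume (mL) converted to m3 of biogas per kg of volatile solids."""
    return (biogas_mL * 1e-6) / (substrate_vs_g * 1e-3)

# BAC3 set-up: 20 g of waste (3.84 g VS) with cow dung at 89.3% VS -> ~8.6 g of dung.
print(round(inoculum_mass(20, 0.192, 0.893), 1))      # 8.6
# BAC1 result: 4.6 mL of biogas from 3.84 g VS -> ~1.2e-3 m3/kg VS, as reported below.
print(round(specific_yield(4.6, 3.84), 5))            # 0.0012
```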
The tests carried out were: (1) The first, noted BAG3, containing 5 kg of waste from the landfill, that is, 960 g of MV, with 2.15 kg of cow dung, or 1920 g of MV, so that the I/S ratio of MV is equal to 2; (2) The second, noted BAG4, containing 5 kg of sand from the landfill, that is, 1950 g of MV, with 4.37 kg of cow dung, or 3900 g of MV, so that the I/S ratio of MV is equal to 2; (3) A test noted BAG5 containing 5 kg of inoculum. To monitor the parameters, a tap is placed on the digester to recover the liquid phase. Depending on the different matrices related to the stages of the anaerobic digestion process, the following parameters were analyzed: temperature, VFA, TAC, pH, %H, DM and VM. RESULTS AND DISCUSSION Physico-chemical characterization of substrates The preliminary characterization of the substrates is essential because it makes it possible to identify the difficulties inherent in the anaerobic digestion of the substrates in order to remedy them, or the opportunities to optimize them. It also makes it possible to fix the experimental conditions for the experiments to be developed in the later phase. The physicochemical characterizations of the substrates (sand and waste) and of the inoculum (cow dung) are shown in Table 1. All the substrates studied have a basic pH varying from 8.4 to 9.7; that of the waste from the landfill is more pronounced than that of the cow dung. Such pHs are not a priori favorable for anaerobic digestion. At least 70% of the substrates are dry matter, showing their low water content. The organic matter content of the substrates is higher for cow dung (89.3%) than for the landfill waste, which contains 39% (sand) and 19.2% (organic waste). The results on cow dung are similar to those presented by Tcha-Thom (2019), who showed that cow dung had a pH of 8.5, a dry matter content of between 15 and 30% and a high organic matter content of the order of 71% DM. The high pH of the waste from the landfill is believed to be primarily due to the reactions taking place within the landfill, which would cause significant volatilization of ammonia. Study of the inoculum response The objective of the study of the inoculum response is to assess the effectiveness of the microbial flora present within the cow dung substrate and its ability to degrade any substrate dedicated to methanization. The evolution of the cumulative volume of biogas and methane produced (Figure 5) shows an increase over time. After 30 days, the total volume of biogas produced reaches 97.75 mL, that is, 5.2×10-3 m3 biogas/kg of MV, and that of methane is 91.75 mL, that is, 5.5×10-3 m3/kg of MV. The cumulative curve for biogas is similar to that for methane, and the difference between methane and biogas is due to the fact that the CO2 has been stripped from the biogas. The methane content is 93.8%, whereas under optimal conditions the methane content varies between 30 and 55%. This discrepancy is due to the fact that the reaction in the reactor did not proceed until the production of biogas had stopped. Reactors by incubation and fermentation Figure 6 shows the evolution of the cumulative volume of biogas and methane from the BAC1 and BAC2 digesters by incubation. Figure 7 shows the change in the cumulative volume of biogas and methane from BAC3 and BAC4, the digesters by fermentation. The change in the cumulative volume of biogas and methane produced in BAC1 and BAC2 (Figure 6) shows that it increases over time.
After 30 days, the total volume of biogas produced for BAC1 reaches 4.6 mL, that is, 1.2×10-3 m3 biogas/kg of MV; that of BAC2 reaches 4.35 mL, that is, 5.6×10-4 m3 biogas/kg of MV. The total volume of methane produced for BAC1 reaches 2.1 mL, or 5.5×10-4 m3 methane/kg of MV; that of BAC2 reaches 2.05 mL, that is, 2.6×10-4 m3/kg of MV. The results showed that the biogas and methane production of BAC1 is slightly higher than that of BAC2. This is because BAC1 contains the organic fraction while BAC2 contains sand. The methane content for BAC1 is 45.7% and that of BAC2 is 47.1%, which respects the usual methane content of biogas. The change in the cumulative volume of biogas and methane produced in BAC3 and BAC4 (Figure 7) shows that it increases over time. After 30 days, the total volume of biogas produced for BAC3 reaches 126 mL, that is, 1.1×10-2 m3 biogas/kg of MV; that of BAC4 reaches 119.75 mL, that is, 5.1×10-3 m3 biogas/kg of MV. The total volume of methane produced for BAC3 reaches 79.85 mL, or 6.9×10-3 m3 methane/kg of MV; that of BAC4 reaches 110.45 mL, or 4.7×10-3 m3/kg of MV. The results showed that the biogas production (volume) of BAC3 is slightly higher than that of BAC4, and that the methane production of BAC4 is much higher than that of BAC3. The methane content for BAC3 is 63.4% and that of BAC4 is 92.2%, whereas under optimal conditions the content is between 30 and 55%. This difference is related to the presence of the inoculum and to the fact that the reaction in the digester did not proceed until the production of biogas had stopped. Physico-chemical characterization of the different reactors The temperatures recorded vary between 23.9 and 28.3°C for the BAG1 (D) reactor (Figure 8a), between 24 and 27.9°C for the BAG2 (S) reactor (Figure 8b), between 24.2 and 28.2°C for the BAG3 (DB) reactor (Figure 8c), between 24.3 and 28.3°C for the BAG4 (SB) reactor (Figure 8d), and between 24 and 27.6°C for the BAG5 (B) reactor (Figure 8e). The experiments were carried out at room temperature to simulate natural conditions in a tropical environment. The analysis results showed that all the reactors operated in the mesophilic fermentation range, between 23 and 29°C with an average of 26 ± 1°C, that is, a deficit of around 9°C compared to the optimum temperature for mesophilic digestion. This temperature is far from the optimum, which is between 35 and 37°C under mesophilic conditions (Tcha-Thom, 2019). This deficit was not observed by Morais and Joácio (2006), where the average temperature inside the bioreactors was 35.0 ± 2°C throughout incubation. The trials took place between September and October, and at this time of year the ambient temperature is around 27°C; this explains the deficit of around 9°C. Evolution of pH and VFAs The recorded pH values vary between 7.7 and 8.2 for the BAG1 (D) reactor (Figure 9a), between 7.3 and 7.8 for the BAG2 (S) reactor (Figure 9b), between 6.7 and 8.3 for the BAG3 (DB) reactor (Figure 9c), between 6.8 and 8 for the BAG4 (SB) reactor (Figure 9d), and between 6.4 and 7.4 for the BAG5 (B) reactor (Figure 9e). The test results show that all of the reactors operated in the pH range of 6.4 to 8.3. The pH must be between 6 and 8.5 for good fermentation and for pH not to act as an inhibitor, which confirms the proper functioning of the digesters (Tcha-Thom, 2019). Figure 10 illustrates the evolution of VFAs in the reactors.
From the start to the 15th day, the VFAs increased in the reactors, except in the BAG2 reactor where there was a decrease until the 8th day followed by a slight increase until the 15th day. This increase in VFAs in the four reactors up to day 15 is reflected in a decrease in pH; indeed, the main cause of acidification of the medium is the accumulation of volatile fatty acids (Buffiere et al., 2007; Berthe, 2006; Bautista, 2019). The decrease of VFAs in the BAG2 reactor up to day 8 is likewise reflected in an increase in pH. From day 15 to day 30 there was a decrease in VFAs in all the reactors; the re-increase of the pH and its stabilization are explained by the consumption of VFAs for the production of biogas. After 30 days the VFAs are less than 3 g/L in all the reactors. A VFA concentration of less than 3 g/L is recommended for proper operation of the reactors (Farquhar and Rovers, 2013). Methanogenic potential of reactors by incubation and fermentation After 30 days, the incubation reactor containing 20 g of waste (BAC1) produced 4.6 mL of biogas, while the fermentation reactor containing 20 g of waste and 8.6 g of cow dung (BAC3) produced 126 mL of biogas. Since 20 g of cow dung alone gives 97.75 mL, the contribution of 8.6 g of cow dung is about 42 mL; subtracting it, 84 mL of biogas is obtained for the waste. It was deduced that the biogas production of the waste during fermentation is 18.3 times greater than during incubation. The incubation digester containing 20 g of sand (BAC2) produced 4.35 mL of biogas, while the fermentation digester containing 20 g of sand and 17.47 g of cow dung (BAC4) produced 119.75 mL of biogas. The contribution of 17.47 g of cow dung is 85.4 mL (again based on 97.75 mL for 20 g); subtracting it, 34.35 mL of biogas is obtained for the sand. It was deduced that the biogas production of the sand during fermentation is 7.9 times higher than during incubation. After 30 days, the digester by incubation containing 20 g of waste (BAC1) produced 2.1 mL of methane, while the digester by fermentation containing 20 g of waste and 8.6 g of cow dung (BAC3) produced 79.85 mL of methane. Since 20 g of cow dung gives 91.75 mL of methane, the contribution of 8.6 g of cow dung is 39.45 mL; subtracting it, 40.40 mL of methane is obtained for the waste. It was deduced that the methane production of the waste during fermentation is 19.2 times greater than during incubation. The incubation digester containing 20 g of sand (BAC2) produced 2.05 mL of methane, while the fermentation digester containing 20 g of sand and 17.47 g of cow dung (BAC4) produced 110.45 mL of methane. The contribution of 17.47 g of cow dung is 80.1 mL; subtracting it, 30.35 mL of methane is obtained for the sand. It was deduced that the methane production during fermentation is 14.8 times greater than during incubation. Consequently, anaerobic digestion by fermentation produces more biogas and methane than anaerobic digestion by incubation, starting from the same mass and over the same duration (Table 2). The values obtained are comparable to those of Tanios (2017). The results of the calculation of the potential of biogas and methane in m3/kg MV are presented in Table 3. The results of biogas production are shown in Table 4. CONCLUSION The objective of this study is to assess the potential for anaerobic digestion by incubation and by fermentation of waste from an abandoned landfill in Lomé, Togo.
The results obtained show that the waste from the Agoè-Nyivé landfill can be treated by anaerobic digestion. The experimental results showed that all the reactors operated at mesophilic temperature and that the presence of inoculum in a reactor is essential in the fermentation process of biodegradable waste. To optimize the anaerobic digestion of waste, it is essential to control several factors including pH, TAC, VFA, agitation as well as temperature. In landfill, these parameters cannot be controlled. On the other hand, in anaerobic digestion they are controllable. This study has shown that the waste from an abandoned landfill has methanogenic potential and the biogas could be recovered in the form of energy and thus reduce GHGs. Indeed,  thanks   to  this  study,  the  determination of  the potential for anaerobic digestion by incubation and fermentation of waste from an abandoned landfill was possible. This study opens up new horizons for better recovery of organic waste by deploying a biogas installation on site for conversion into electricity. CONFLICT OF INTERESTS The authors have not declared any conflict of interests. ACKNOWLEDGEMENT The authors thank the Laboratory of Waste Management, Treatment and Recovery (LWMTR) in which this work was carried out. REFERENCES Adjiri O, Koudou A, Soro G, Biémi J (2018). Study of the energy recovery potential of biogas from the Akouédo landfill (Abidjan, Ivory Coast). Waste, science and technology, n°77.  Crossref Amarante J, Lima A (2010). "Anaerobic digestion of putrescible municipal waste - available technologies and challenges for Quebec", 99. Bautista A (2019). "Feasibility study of micro-methanization by co-digestion at the district level". IMT Atlantique University. Berthe C (2006). Study of the Organic Matter contained in leachate from different household and similar waste treatment channels. Thesis, University of Limoges. Buffiere P, Carrere M, Lemaire O, Vasquez J (2007). "Methodological guide for the operation of solid waste methanization units", Repport 40 p. Farquhar G, Rovers F (2013). Gas production during refuses decomposition. Water, Air, and Soil Pollution. Solid Waste Treatment Routes 2(4):483 495. Crossref Jung C (2013). Solid waste treatment methods: material and energy recovery. Bull. Sci. Inst. Natl. Conserv. Nat., 50-54. http://bi.chm-cbd.net/biodiversity. Koledzi E (2011). "Valorization of solid urban waste in the districts of Lomé (Togo): Methodological approach for a sustainable production of compost.," PhD,thesis, University of Lomé,Togo P 224. Lacour J (2012). Valorization of the organic fraction of agricultural residues and other similar wastes using anaerobic biological treatments. INSA of Lyon; Quisqueya University (Port-au Prince). NNT: 2012ISAL0026ff. tel-00825479:218. Morais D, Joácio J (2006). "Influence of the mechanical and biological pre-treatment of Residual Household Waste (OMR) on their bio-physico-chemical behavior in Waste Storage Facilities (ISD)". National Institute of Applied Sciences of Lyon. Samah K (2015). Diagnosis of the energy situation in Togo. Ministry of the Environment, Department of Water and Forests. Togolese Republic. Tanios C (2017). Characterization, evaluation of the toxicity of biogas from household waste and recovery by catalytic reforming. Cote d'Opale Coastal University and Lebanese University. Tcha-Thom M (2014). Assessment of the impact of organic matter fractions on the characteristics of Togolese and French soils. Research master's thesis, University of Limoges. 
Tcha-Thom M (2019). Search for a sustainable sector for the methanization of fruit and slaughterhouse waste from Togo: Assessment of the agronomic potential of digestates on soils in the Kara region. Geochemistry, 205 p. Thonart P, Steyer E, Drion R, Hiligsmann S (1998). Biological management of a landfill. Report pp. 590-591.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8886421322822571, "perplexity": 4263.519834283178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500303.56/warc/CC-MAIN-20230206015710-20230206045710-00343.warc.gz"}
https://mathoverflow.net/questions/167764/status-of-baranys-conjecture
# Status of Barany's conjecture? One of Barany's most intriguing conjectures is about the $f$-vectors of convex polytopes. It asks: Let $P$ be a convex $d$-polytope. Is it always true that $f_k \geq \min(f_0, f_{d-1})$? A convex $d$-polytope is the convex hull of finitely many points $v_1, \ldots, v_n \in \mathbb{R}^d$. A face of the polytope is whatever you can shave off with a hyperplane. Its dimension is the dimension of the affine hull it generates. The numbers of faces are collected in the so-called $f$-vector $f = (f_0, \ldots, f_{d-1})$, where $f_k$ denotes the number of $k$-dimensional faces. Further information on polytopes can be found in Ziegler's "Lectures on Polytopes". My question is: What's the status of Barany's conjecture? For which classes of polytopes is it known to hold? Here is what I know: By double counting it is easy to see that the conjecture is true for simple and simplicial polytopes. Furthermore, it is nothing but a boring calculation to prove that the basic polytope constructions like taking products, prisms, pyramids, etc. retain the conjectured inequality. Given that the face lattice of a convex $d$-polytope is a graded, atomic, coatomic lattice of rank $d+2$, it makes sense to ask a far more general question: Given such a lattice, denote the number of elements of the $k$th level again by $f_k$. Is it still true that $f_k \geq \min(f_0, f_{d-1})$? Here the answer is no, since I was able to construct a counterexample. But if the lattice also has what Ziegler calls the "diamond property" (every interval of length two looks like a diamond), then I don't know the answer.
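For a quick sanity check, the inequality is easy to verify on the most familiar examples: the 3-cube has $f = (8, 12, 6)$, so $f_1 = 12 \geq \min(f_0, f_2) = 6$, and the octahedron has $f = (6, 12, 8)$, so again $f_1 = 12 \geq 6$. Of course both are already covered by the simple/simplicial case; the interesting polytopes are those that are neither.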
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8306543827056885, "perplexity": 105.2748683595602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667262.54/warc/CC-MAIN-20191113140725-20191113164725-00092.warc.gz"}
https://slideplayer.com/slide/3525390/
# Ch. 8.3 Newton’s Laws of Motion ## Presentation transcript: Ch. 8.3 Newton’s Laws of Motion — Motion and Forces Section 8.3 Objectives State Newton’s three laws of motion, and apply them to physical situations. Calculate force, mass, and acceleration with Newton’s second law. Recognize that the free-fall acceleration near Earth’s surface is independent of the mass of the falling object. Explain the difference between mass and weight. Identify paired forces on interacting objects. Newton’s First Law Newton's 1st Law states: An object at rest remains at rest and an object in motion maintains its velocity unless it experiences an unbalanced force. Newton’s 1st Law is also known as the Law of Inertia. Inertia is the tendency of an object to remain at rest or in motion with a constant velocity. Newton’s Second Law Newton's 2nd Law states: The unbalanced force acting on an object equals the object’s mass times its acceleration. This means: force = mass x acceleration, or F = ma, where F is in newtons (N), m is in kilograms (kg), and a is in meters per second squared (m/s2). Free fall and Weight Free fall is the motion of a body when only the force of gravity is acting on it. Free-fall acceleration near Earth’s surface is constant. Disregarding air resistance, all objects on Earth’s surface (regardless of their mass) accelerate at 9.8 m/s2. Weight is the force on an object due to gravity. Weight equals mass times free-fall acceleration, or w = mg. Weight is a force and has SI units of newtons (N). Terminal Velocity Terminal velocity is the maximum velocity reached by a falling object; it occurs when the resistance of the medium is equal to the force due to gravity. At terminal velocity, air resistance and the force of gravity are equal. Sky divers reach a terminal velocity of about 320 km/h (200 mi/h). Newton’s Third Law Newton’s Third Law states: For every action force, there is an equal and opposite reaction force. Forces always occur in pairs but act on different objects. The upward push on the rocket equals the downward push on the exhaust gases. Section 8.3 Summary An object at rest remains at rest and an object in motion maintains a constant velocity unless it experiences an unbalanced force (Newton’s 1st law). The unbalanced force acting on an object equals the object’s mass times its acceleration, or F = ma (Newton’s 2nd Law). The SI unit for force is the newton (N). Weight equals mass times free-fall acceleration, or w = mg. For every action force, there is an equal and opposite reaction force (Newton’s 3rd law).
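To make the second-law arithmetic concrete, here is a minimal sketch (illustrative numbers, not from the slides) applying F = ma and w = mg:

```python
g = 9.8  # free-fall acceleration near Earth's surface, m/s^2

def net_force(mass_kg, acceleration_ms2):
    """Newton's second law: F = m * a (result in newtons)."""
    return mass_kg * acceleration_ms2

def weight(mass_kg):
    """Weight is the force due to gravity: w = m * g."""
    return mass_kg * g

# A 70 kg skydiver at terminal velocity: air resistance balances weight,
# so the net force (and hence the acceleration) is zero.
print(weight(70.0))          # ~686 N downward, matched by ~686 N of drag
print(net_force(70.0, 0.0))  # 0 N at terminal velocity

# The same skydiver just after jumping (negligible drag) accelerates at g.
print(net_force(70.0, g))    # ~686 N, i.e. the weight alone
```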
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9548929929733276, "perplexity": 385.9548727067017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057427.71/warc/CC-MAIN-20210923165408-20210923195408-00018.warc.gz"}
https://en.wikipedia.org/wiki/Refinable_function
# Refinable function In mathematics, in the area of wavelet analysis, a refinable function is a function which fulfils some kind of self-similarity. A function $\varphi$ is called refinable with respect to the mask $h$ if $\varphi(x)=2\cdot\sum_{k=0}^{N-1} h_k\cdot\varphi(2\cdot x-k)$ This condition is called the refinement equation, dilation equation or two-scale equation. Using the convolution (denoted by a star, *) of a function with a discrete mask and the dilation operator $D$ one can write more concisely: $\varphi=2\cdot D_{1/2} (h * \varphi)$ This means that one obtains the function again by convolving it with a discrete mask and then scaling it back. There is a similarity to iterated function systems and de Rham curves. The operator $\varphi\mapsto 2\cdot D_{1/2} (h * \varphi)$ is linear. A refinable function is an eigenfunction of that operator. Its normalization is not uniquely determined: if $\varphi$ is a refinable function, then for every $c$ the function $c\cdot\varphi$ is refinable, too. These functions play a fundamental role in wavelet theory as scaling functions. ## Properties ### Values at integral points A refinable function is defined only implicitly. It may also be that there are several functions which are refinable with respect to the same mask. If $\varphi$ is to have finite support and the function values at integer arguments are wanted, then the two-scale equation becomes a system of simultaneous linear equations. Let $a$ be the minimum index and $b$ be the maximum index of non-zero elements of $h$; then one obtains $\begin{pmatrix} \varphi(a)\\ \varphi(a+1)\\ \vdots\\ \varphi(b) \end{pmatrix} = \begin{pmatrix} h_{a } & & & & & \\ h_{a+2} & h_{a+1} & h_{a } & & & \\ h_{a+4} & h_{a+3} & h_{a+2} & h_{a+1} & h_{a } & \\ \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\ & h_{b } & h_{b-1} & h_{b-2} & h_{b-3} & h_{b-4} \\ & & & h_{b } & h_{b-1} & h_{b-2} \\ & & & & & h_{b } \end{pmatrix} \cdot \begin{pmatrix} \varphi(a)\\ \varphi(a+1)\\ \vdots\\ \varphi(b) \end{pmatrix}.$ Using the discretization operator, call it $Q$ here, and the transfer matrix of $h$, named $T_h$, this can be written concisely as $Q\varphi = T_h \cdot Q\varphi. \,$ This is again a fixed-point equation, but it can now be considered as an eigenvector-eigenvalue problem. That is, a finitely supported refinable function exists only (but not necessarily) if $T_h$ has the eigenvalue 1. From the values at integral points you can derive the values at dyadic points, i.e. points of the form $k\cdot 2^{-j}$, with $k\in\mathbb{Z}$ and $j\in\mathbb{N}$. $\varphi = D_{1/2} (2\cdot (h * \varphi))$ $D_2 \varphi = 2\cdot (h * \varphi)$ $Q(D_2 \varphi) = Q(2\cdot (h * \varphi)) = 2\cdot (h * Q\varphi)$ The star denotes the convolution of a discrete filter with a function. With this step you can compute the values at points of the form $\frac{k}{2}$. By iteratedly replacing $\varphi$ by $D_2 \varphi$ you get the values at all finer scales. $Q(D_{2^{j+1}}\varphi) = 2\cdot (h * Q(D_{2^j}\varphi))$ ### Convolution If $\varphi$ is refinable with respect to $h$, and $\psi$ is refinable with respect to $g$, then $\varphi*\psi$ is refinable with respect to $h*g$. ### Differentiation If $\varphi$ is refinable with respect to $h$, and the derivative $\varphi'$ exists, then $\varphi'$ is refinable with respect to $2\cdot h$. This can be interpreted as a special case of the convolution property, where one of the convolution operands is a derivative of the Dirac impulse.
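To make the "values at integral points" computation above concrete, here is a small NumPy sketch for the hat-function mask $h = (1/4, 1/2, 1/4)$. Note the convention assumed here: with the factor 2 from the two-scale equation folded into the matrix entries (entry $2\,h_{2i-j}$), the integer samples of $\varphi$ appear as the eigenvector for eigenvalue 1.

```python
import numpy as np

# Hat-function mask for phi(x) = 2 * sum_k h_k * phi(2x - k), support [0, 2]
h = np.array([0.25, 0.5, 0.25])    # h_0, h_1, h_2
a, b = 0, 2                        # minimum and maximum index of non-zero mask entries

# Transfer matrix acting on (phi(a), ..., phi(b)); entry (i, j) is 2*h_{2i-j},
# with row/column indices running over the integers a..b.
n = b - a + 1
T = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        k = 2 * (i + a) - (j + a)
        if a <= k <= b:
            T[i, j] = 2 * h[k - a]

# The integer samples of phi form an eigenvector of T for the eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(T)
v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
v = v / v.sum()        # normalize so that the samples sum to 1
print(v)               # ~ [0., 1., 0.]  -- the hat (triangular) function at x = 0, 1, 2
```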
### Integration If $\varphi$ is refinable with respect to $h$, and there is an antiderivative $\Phi$ with $\Phi(t) = \int_0^{t}\varphi(\tau)\mathrm{d}\tau$, then the antiderivative $t \mapsto \Phi(t) + c$ is refinable with respect to mask $\frac{1}{2}\cdot h$ where the constant $c$ must fulfill $c\cdot(1 - \sum_j h_j) = \sum_j h_j \cdot \Phi(-j)$. If $\varphi$ has bounded support, then we can interpret integration as convolution with the Heaviside function and apply the convolution law. ### Scalar products Computing the scalar products of two refinable functions and their translates can be broken down to the two above properties. Let $T$ be the translation operator. It holds $\langle \varphi, T_k \psi\rangle = \langle \varphi * \psi^*, T_k\delta\rangle = (\varphi*\psi^*)(k)$ where $\psi^*$ is the adjoint of $\psi$ with respect to convolution, i.e. $\psi^*$ is the flipped and complex conjugated version of $\psi$, i.e. $\psi^*(t) = \overline{\psi(-t)}$. Because of the above property, $\varphi*\psi^*$ is refinable with respect to $h*g^*$, and its values at integral arguments can be computed as eigenvectors of the transfer matrix. This idea can be easily generalized to integrals of products of more than two refinable functions.[1] ### Smoothness A refinable function usually has a fractal shape. The design of continuous or smooth refinable functions is not obvious. Before dealing with forcing smoothness it is necessary to measure smoothness of refinable functions. Using the Villemoes machine[2] one can compute the smoothness of refinable functions in terms of Sobolev exponents. In a first step the refinement mask $h$ is divided into a filter $b$, which is a power of the smoothness factor $(1,1)$ (this is a binomial mask) and a rest $q$. Roughly spoken, the binomial mask $b$ makes smoothness and $q$ represents a fractal component, which reduces smoothness again. Now the Sobolev exponent is roughly the order of $b$ minus logarithm of the spectral radius of $T_{q*q^*}$. ## Generalization The concept of refinable functions can be generalized to functions of more than one variable, that is functions from $\R^d \to \R$. The most simple generalization is about tensor products. If $\varphi$ and $\psi$ are refinable with respect to $h$ and $g$, respectively, then $\varphi\otimes\psi$ is refinable with respect to $h\otimes g$. The scheme can be generalized even more to different scaling factors with respect to different dimensions or even to mixing data between dimensions.[3] Instead of scaling by scalar factor like 2 the signal the coordinates are transformed by a matrix $M$ of integers. In order to let the scheme work, the absolute values of all eigenvalues of $M$ must be larger than one. (Maybe it also suffices that $|\det M|>1$.) Formally the two-scale equation does not change very much: $\varphi(x)=|\det M|\cdot\sum_{k\in\Z^d} h_k\cdot\varphi(M\cdot x-k)$ $\varphi=|\det M|\cdot D_{M^{-1}} (h * \varphi)$ ## Examples • If the definition is extended to distributions, then the Dirac impulse is refinable with respect to the unit vector $\delta$, that is known as Kronecker delta. The $n$-th derivative of the Dirac distribution is refinable with respect to $2^n\cdot\delta$. • The Heaviside function is refinable with respect to $\frac{1}{2}\cdot\delta$. • The truncated power functions with exponent $n$ are refinable with respect to $\frac{1}{2^{n+1}}\cdot\delta$. 
• The triangular function is a refinable function.[4] B-spline functions with successive integral nodes are refinable, because of the convolution theorem and the refinability of the characteristic function for the interval $[0,1)$ (a boxcar function). • All polynomial functions are refinable. For every refinement mask there is a polynomial that is uniquely defined up to a constant factor. For every polynomial of degree $n$ there are many refinement masks that all differ by a mask of type $v * (1,-1)^{n+1}$ for any mask $v$ and the convolutional power $(1,-1)^{n+1}$.[5] • A rational function $\varphi$ is refinable if and only if it can be represented using partial fractions as $\varphi(x) = \sum_{i\in\mathbb{Z}} \frac{s_i}{(x-i)^k}$, where $k$ is a positive natural number and $s$ is a real sequence with finitely many non-zero elements (a Laurent polynomial) such that $s | (s \uparrow 2)$ (read: $\exists h(z)\in\mathbb{R}[z,z^{-1}]\ h(z)\cdot s(z) = s(z^2)$). The Laurent polynomial $2^{k-1}\cdot h$ is the associated refinement mask.[6] ## References 1. Dahmen, Wolfgang; Micchelli, Charles A. (1993). "Using the refinement equation for evaluating integrals of wavelets". SIAM Journal on Numerical Analysis 30: 507–537. 2. Villemoes, Lars. "Sobolev regularity of wavelets and stability of iterated filter banks" (PostScript). Retrieved 2006. 3. Berger, Marc A.; Wang, Yang (1992). "Multidimensional two-scale dilation equations (chapter IV)". In Chui, Charles K. (ed.), Wavelet Analysis and its Applications 2, Academic Press, Inc., pp. 295–323. 4. Berglund, Nathanael. "Reconstructing Refinable Functions". Retrieved 2010-12-24. 5. Thielemann, Henning (2012-01-29). "How to refine polynomial functions". arXiv:1012.2453. 6. Gustafson, Paul; Savir, Nathan; Spears, Ely (2006-11-14). "A Characterization of Refinable Rational Functions" (PDF). American Journal of Undergraduate Research 5 (3): 11–20.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 96, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9686212539672852, "perplexity": 472.6511021767361}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645339704.86/warc/CC-MAIN-20150827031539-00043-ip-10-171-96-226.ec2.internal.warc.gz"}
https://slideplayer.com/slide/6004803/
# Induced emf ## Presentation transcript: Induced emf Experiment: –When the switch is closed, the ammeter deflects momentarily –There is an induced emf in the secondary coil (producing an induced current) due to the changing magnetic field through this coil When there is a steady current in the primary coil, the ammeter reads zero current –Steady magnetic fields cannot produce a current Magnetic Flux In general, induced emf is actually due to a change in magnetic flux Magnetic flux is defined similarly to electric flux: –Units of flux are T·m2 (or Wb) –Flux can be positive or negative The value of the magnetic flux through a loop/surface is proportional to the total number of field lines passing through the loop/surface Faraday’s Law Faraday’s Law says that the magnitude of the induced emf in a circuit is equal to the time rate of change of the magnetic flux through the circuit The average emf ε induced in a circuit containing N tightly wound loops is given by ε = −N(ΔΦ_B/Δt) –The magnetic flux changes by an amount ΔΦ_B during the time interval Δt The minus sign in the equation concerns the sense of the polarity of the induced emf in the loop (and how it makes current flow in the loop) –The direction follows Lenz's law: The direction of the induced emf is such that it produces a current whose magnetic field opposes the change in magnetic flux through the loop The induced current tends to maintain the original flux through the loop Example Problem #20.58 Solution (details given in class): 0.121 A clockwise The square loop shown is made of wires with a total series resistance of 10.0 Ω. It is placed in a uniform 0.100-T magnetic field directed perpendicular into the plane of the paper. The loop, which is hinged at each corner, is pulled as shown until the separation between points A and B is 3.00 m. If this process takes 0.100 s, what is the average current generated in the loop? What is the direction of the current? Interactive Example Problem: Flux and Induced emf in a Loop Animation and solution details given in class. (PHYSLET Physics Exploration 29.3, copyright Pearson Prentice Hall, 2004) Example Problem #20.29 Solution (details given in class): (a) Right to left (b) Right to left (c) Left to right (d) Left to right Find the direction of the current in the resistor R shown in the figure after each of the following steps (taken in the order given): (a) The switch is closed. (b) The variable resistance in series with the battery is decreased. (c) The circuit containing resistor R is moved to the left. (d) The switch is opened. Interactive Example Problem: Lenz’s Law Animation and solution details given in class.
(PHYSLET Physics Exploration 29.1, copyright Pearson Prentice Hall, 2004) Applications of Faraday’s Law Ground fault interrupter (GFI) Electric guitar pickups Magnetoencephalography (brain), magnetocardiograms (heart and surrounding nerves) Motional emf Consider a straight conducting bar moving perpendicularly to the direction of a uniform external magnetic field –Free charges (e–) experience a downward force F –This creates a charge separation in the conductor –The charge separation leads to a downward electric field E –Eventually equilibrium is reached and the (downward) magnetic force qvB on the electrons balances the (upward) electric force qE: qE = qvB, so E = vB and ΔV = El = Blv (since E is uniform) –A potential difference is maintained across the conductor (of length l) as long as there is motion through the field If the motion is reversed, the polarity of the potential difference is also reversed Motional emf Now consider the same bar as part of a closed conducting path –Since the area of the current loop changes as the bar moves, an induced emf is created due to a changing magnetic flux through the loop –An induced current is created –The direction of the current is counterclockwise (Lenz’s law) –Assume the bar moves a distance Δx in time Δt, so ΔΦ_B = B·ΔA = Bl·Δx From Faraday’s Law (and noting that N = 1): ε = ΔΦ_B/Δt = Blv (motional emf) Motional emf Notes on this experiment: –The moving bar acts like a battery with an emf given by the previous formula –Although the current tends to deplete the accumulated charge at each end of the bar, the magnetic force pumps more charge to maintain a constant potential difference –At right is the equivalent circuit diagram –The work done by the force pulling the rod is the source of the electrical energy –This setup is not practical; a more practical use of motional emf is through a rotating coil of wire (electric generators) –Note that a battery can be used to drive a current, which establishes a magnetic field, which can propel the bar Inductive Magnetoreception in Sharks Physics Today, March 2008 Launching Mechanism (“Rail Gun”) Example Problem #20.21 Solution (details given in class): (a) Toward the east (b) 4.58×10−4 V A car has a vertical radio antenna 1.20 m long. The car travels at 65.0 km/h on a horizontal road where the Earth’s magnetic field is 50.0 μT, directed toward the north and downwards at an angle of 65.0° below the horizontal. (a) Specify the direction the car should move in order to generate the maximum motional emf in the antenna, with the top of the antenna positive relative to the bottom. (b) Calculate the magnitude of this induced emf. Electric Generators Alternating current (AC) generators Direct current (DC) generators ε = NBAω sin(ωt), where N = number of turns in the loop coil, B = magnetic field strength, A = area of the loop coil, ω = angular rotation speed ε_max = NBAω (plane of loop parallel to magnetic field), ε_min = 0 (plane of loop perpendicular to magnetic field) f = 60 Hz (U.S.) (ω = 2πf) (θ = ωt) (note similarity to DC motor – Chap. 19)
Motors and Back emf Generators convert mechanical energy to electrical energy; motors convert electrical energy to mechanical energy, so motors are essentially generators run in reverse As the coil in the motor rotates, the changing magnetic flux through it induces an emf –This emf acts to reduce the current in the coil (in agreement with Lenz’s law) –For this reason, the emf is called a back emf –Power requirements for starting a motor or running it under heavy loads are larger than for running under average loads Self-Inductance Consider the effect of current flow in a simple circuit with a battery and a resistor –Faraday’s law prevents the current from reaching its maximum value (ε/R) immediately after the switch is closed The magnetic flux through the loop changes as the current increases The resulting induced emf acts to reduce the total emf of the circuit –When the rate of current change lessens as the current increases, the induced emf decreases Another example: increasing and decreasing current in a solenoid coil Self-Inductance These effects are called self-induction The emf that is produced is called a self-induced emf (or back emf): ε = −L(ΔI/Δt) –L is a constant called the inductance –The minus sign indicates that a changing current induces an emf that is in opposition to that change –The unit of inductance is the henry (H): 1 H = 1 V·s/A An expression for inductance can be found by equating Faraday’s law with the above: L = N(ΔΦ_B/ΔI) RL Circuits A circuit element having a large inductance is called an inductor From ε = −L(ΔI/Δt) we see that an inductor acts as a current stabilizer –The larger the inductance in a circuit, the larger the opposition to the rate of change of the current –Remember that resistance is a measure of the opposition to current Consider a series circuit consisting of a battery, resistor, and inductor (a series RL circuit) –The current changes most rapidly when the switch is first closed, so the back emf is largest at this point RL Circuits Using calculus, we can derive an expression for the current in the circuit as a function of time: I = (ε/R)(1 − e^(−t/τ)) (rise of current) –Similar to the charge q as a function of time for a charging capacitor –Like RC circuits, there is a time constant for the circuit: τ = L/R When the switch is opened, the current in the circuit decreases exponentially with time: I = I0·e^(−t/τ) (decay of current) Example Problem #20.53 Solution (details given in class): (a) 20.0 ms (b) 37.9 V (c) 1.52 mV (d) 51.8 mA An 820-turn wire coil of resistance 24.0 Ω is placed on top of a 12500-turn, 7.00-cm-long solenoid as shown. Both coil and solenoid have cross-sectional areas of 1.00×10−4 m2. (a) How long does it take the solenoid current to reach 0.632 times its maximum value? (b) Determine the average back emf caused by the self-inductance of the solenoid during this interval. (c) The magnetic field produced by the solenoid at the location of the coil is ½ as strong as the field at the center of the solenoid. Determine the average rate of change in magnetic flux through each turn of the coil during the stated interval. (d) Find the magnitude of the average induced current in the coil.
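Example Problem #20.21 above can be checked numerically. The sketch below (our own, not part of the slides) uses ε = B_horizontal·l·v: for a vertical antenna moving horizontally, only the horizontal component of the Earth's field contributes, and the emf is largest when driving east, perpendicular to that northward component.

```python
import math

B = 50.0e-6                 # magnitude of Earth's field, T
dip = math.radians(65.0)    # field points 65 degrees below the horizontal
L = 1.20                    # antenna length, m
v = 65.0 * 1000 / 3600      # car speed, m/s (65.0 km/h)

# For a vertical antenna moving horizontally, emf = B_horizontal * L * v.
B_horizontal = B * math.cos(dip)
emf = B_horizontal * L * v
print(f"{emf:.2e} V")       # about 4.58e-04 V, matching the stated solution
```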
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9038072228431702, "perplexity": 964.9022037149953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00234.warc.gz"}
http://tex.stackexchange.com/questions/34249/minitoc-on-the-part-page
# \minitoc on the \part-page I would like to modify the \part page such that it shows a table of contents of that part. Preferably also one or two levels deeper than the toc in the beginning of the document. In Styling the \part page, Alan Munn shows how to edit the page, \documentclass{report} \usepackage{titlesec,minitoc} \titleclass{\part}{top} \titleformat{\part} {\centering\normalfont\Huge\bfseries} {} {0pt} {} \begin{document} \tableofcontents \part{This is the Part} \minitoc %\parttoc,\doparttoc \chapter{This is its first chapter} \chapter{This is its second chapter} \end{document} but how do I add the 'minitoc'? In the above snippet neither \minitoc nor \parttoc nor \doparttoc have any effect. \minitoc only works for chapters - not parts. Is there any other option or a smart modification I did not find? - ... edited the code snippet –  D.Roepo Nov 9 '11 at 17:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.90151047706604, "perplexity": 4215.074316522125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657138501.67/warc/CC-MAIN-20140914011218-00088-ip-10-234-18-248.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/190037-finding-limit-print.html
# Finding the limit • October 10th 2011, 06:06 PM bhuang Finding the limit Hi, can anyone help me approach the following problems: Find the limit, or explain why it does not exist. 1) lim (1-|x|) / (x-1) as x-->1 2) lim ((sq. rt(1-h))-1) / h as h-->0 Using a computer or calculator, estimate the following limits. Use the technology to sketch the graph of the function to check the accuracy of your estimate. lim (e^(2x) -1 ) / x as x-->0 I know for this last one it says to use a calculator or computer.. but then I really find no point in even doing the problem at all if we can just use a calculator. So is there a way to solve this limit algebraically? Is there an easy way to graph this as well? Or do I just have to sub in values and graph from the table of values? Thanks! • October 10th 2011, 06:18 PM TheChaz Re: Finding the limit For the third one, you'll have more tools for computing this limit later (L'Hospital's rule)... • October 10th 2011, 07:04 PM Prove It Re: Finding the limit Quote: Originally Posted by bhuang Hi, can anyone help me approach the following problems: Find the limit, or explain why it does not exist. 1) lim (1-|x|) / (x-1) as x-->1 2) lim ((sq. rt(1-h))-1) / h as h-->0 Using a computer or calculator, estimate the following limits. Use the technology to sketch the graph of the function to check the accuracy of your estimate. lim (e^(2x) -1 ) / x as x-->0 I know for this last one it says to use a calculator or computer.. but then I really find no point in even doing the problem at all if we can just use a calculator. So is there a way to solve this limit algebraically? Is there an easy way to graph this as well? Or do I just have to sub in values and graph from the table of values? Thanks! For the first, note that $\displaystyle |x| = x \textrm{ if }x \geq 0$, so since you are making the limit approach 1, $\displaystyle 1 - |x| = 1 - x = -(x - 1)$. Can you go from here? For the second, try multiplying numerator and denominator by the numerator's conjugate... • October 10th 2011, 07:07 PM Deveno Re: Finding the limit with regards to question number 3, sort of: if you are willing to accept that: $e^x = 1 + x + x^2/2 + x^3/6 + ....+ x^n/n! +....$ (which you probably have not demonstrated yet, but it is true) then, $e^{2x} = 1 + 2x + 4x^2/2 + 8x^3/6 +....+ 2^nx^n/n! +....$ and $e^{2x} - 1 = 2x + 4x^2/2 + 8x^3/6 +....+ 2^nx^n/n! +....$ so $\frac{e^{2x}-1}{x} = 2 + x(2x + 4x^3/3 +....+2^nx^{n-1}/n!+....)$ if, further, you are willing to accept that the stuff in the parentheses is finite (even though it has an infinite number of terms) then: $\lim_{x \to 0} \frac{e^{2x}-1}{x} = 2$ now, that's a lot of stuff to take on faith, and so its appropriate to just make a guess based on how it behaves. my suggestion is that you guess "2", and remember this limit, so you can check later, when you have more theorems under your belt. • October 11th 2011, 08:50 AM HallsofIvy Re: Finding the limit For (2), $\lim_{h\to 0} \frac{\sqrt{1- h}- 1}{h}$, multiply both numerator and denominator by $\sqrt{1- h}+ 1$.
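Carrying the hints above to completion (a summary worked out from the thread's suggestions): for (1), near $x=1$ we have $|x|=x$, so $\frac{1-|x|}{x-1}=\frac{-(x-1)}{x-1}=-1$ for every $x\neq 1$ close to $1$, and the limit is $-1$. For (2), multiplying by the conjugate gives $\frac{\sqrt{1-h}-1}{h}=\frac{(1-h)-1}{h(\sqrt{1-h}+1)}=\frac{-1}{\sqrt{1-h}+1}\to-\frac{1}{2}$ as $h\to 0$. For (3), the series argument (or, later, L'Hôpital's rule) gives $\lim_{x\to 0}\frac{e^{2x}-1}{x}=2$.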
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8834043145179749, "perplexity": 570.2946256723225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463122.1/warc/CC-MAIN-20150226074103-00247-ip-10-28-5-156.ec2.internal.warc.gz"}
http://joaoff.com/2008/02/11/direct-proofs/?replytocom=135
Calculational proofs are usually direct jd2718 asked in his blog if anyone knew a direct proof of the irrationality of $\sqrt{2}$. In this post I present a proof that, even if some don’t consider it direct, is a nice example of the effectiveness of calculational proof. But first, there are two concepts that need to be clarified: direct proof and irrational number. Direct proofs The concept of direct proof can vary slightly from person to person. For instance, Wikipedia defines it as: In mathematics and logic, a direct proof is a way of showing the truth or falsehood of a given statement by a straightforward combination of established facts, usually existing lemmas and theorems, without making any further assumptions. Alternatively, in Larry W. Cusick’s website we can read: A direct poof [sic] should be thought of as a flow of implications beginning with “P” and ending with “Q”. P -> … -> Q Most proofs are (and should be) direct proofs. Always try direct proof first, unless you have a good reason not to. I consider the wording ‘without making any further assumptions‘ in the first definition ambiguous and I don’t understand why the second definition only applies to implications. But anyway, with these definitions in mind, a direct proof for the irrationality of $\sqrt{2}$ can be something like: $\beginproof \pexp{\text{\sqrt{2} is irrational}} \hint{=}{justification} \pexp{true~~.} \endproof$ Or, alternatively, we can also use a proof of the following shape: $\beginproof \pexp{\text{\sqrt{2} is irrational}} \hint{\Leftarrow}{justification} \pexp{true~~.} \endproof$ Irrational numbers An irrational number is a real number that can’t be expressed as a simple fraction. Therefore, the number $\sqrt{2}$ is irrational because for all integers $m$ and $n$, with $n$ non-negative, we have that: $\sqrt{2} \neq \frac{m}{n} ~~.$ A direct proof for the irrationality of $\sqrt{2}$ Now that we have clarified the concepts above, we prove that $\sqrt{2}$ is irrational. For all integers $m$ and $n$, with $n$ non-negative, we have: $\beginproof \pexp{\sqrt{2} \neq \frac{m}{n}} \hint{=}{Use arithmetic to eliminate the square root operator.} \pexp{{n^2{\times}2}\neq{m^2}} \hintf{\Leftarrow}{Two values are different if applying the same function to them} \hintl{yields different values.} \pexp{\fapp{exp}{(n^2{\times}2)} \neq \fapp{exp}{m^2}} \hintf{=}{Now we choose the function exp.} \hintm{Let \fapp{exp}{k} be the number of times that 2 divides k.} \hintm{The function exp has two important properties:} \hintm{~~~\fapp{exp}{2}=1 and} \hintm{~~~\fapp{exp}{k{\times}l} = \fapp{exp}{k} + \fapp{exp}{l}} \hintl{We apply these properties to simplify the left and right sides.} \pexp{1{\,+\,}2{\times}\fapp{exp}{n} ~\neq~ 2{\times}\fapp{exp}{m}} \hintf{=}{The left side is an odd number and the right side is an even} \hintl{number. Odd numbers and even numbers are different.} \pexp{true ~~.} \endproof$ Note that, unlike traditional proofs, we don’t assume that $m$ and $n$ are co-prime, nor that $\sqrt{2}$ is a rational number. We simply derive the boolean value of the expression $\sqrt{2}~{\neq}~{\frac{m}{n}}$. Note: I learnt the contrapositive of this proof from Roland Backhouse (page 38, Program Construction — Calculating Implementations from Specifications). 17 thoughts on “Calculational proofs are usually direct” 1. Slick. Thank you. 2. No problem 3. by the way, comments on wordpress support the wordpress subset of LaTeX. But since there’s no preview, if you goof, you have to ask the host to nicely fix things up. 
Let’s try: $latex e^{ipi}+1=0$ 4. nope 5. Hmmm. I’m using a plugin called wp-latexrenderer. I have to investigate whether it works for comments. 6. Did you keep that sqrt on a little longer than necessary, or am I completely misunderstanding this proof? 7. Dear Erik, of course I have. It was a typo, I’m sorry. It is corrected now. Thanks a lot! 8. Hi. delicious!truly elegant! isn’t the first equivalence actually an implication? m/n could be minus sqrt(2)… tks 9. Mike, The so-called Leibniz rule (also called ‘substitution of equals by equals’) states that, for all a, b, and function f: a = b => f.a = f.b . Hence, you can see the first step as a mutual implication where, in one direction, function f is taking the square root, and in the other direction, function f corresponds to squaring. 10. I am just a layperson who’s interested in logic. Your proof looks very intuitive and direct, and I have to admit it’s more elegant. However, I have a small question from my ignorance on your notation. In-between the statements, there are lines that explain the transformation. All of them start with an equal sign except the second one, which is left-arrow. What does that mean and why is this the only one? 11. Jane, Each step in the proof format I use has the form: A R { hint } B . This means that A is related with B by relation R, and the reason is shown in the hint. If I instantiate R with equality, I get: A = { hint } B , which means that A=B. Similarly, if I use the arrow, as in A < = {hint} B , then it means that B implies A. *** In this particular example, I am using the contrapositive of the so-called Leibniz rule: f(a) =/= f(b) => a =/= b . In words, if the value of f(a) is different from f(b), then a must be different from b. I hope this helps, Joao 12. And why are two values different if applying the same function to them yields different values? Because if they were same, function would return same values 13. Dear XSLX, that’s the contrapositive of Leibniz’s rule. Leibniz rule states that, for all a, b, and function f: a=b => f.a = f.b The contrapositive is obtained by negating both sides and changing the direction of the implication: a=/=b <= f.a =/= f.b Joao 14. Argh, I don’t completely understand. Why must exp(n) and exp(m) be integers? Why cant exp(m)=exp(n)+0.5? Thanks. =) • Hi Simon, By definition, exp(m) is the number of times that 2 divides m; this means that exp(m) is always a natural number. For example, exp(4)=2, exp(5)=0, exp(6)=1, exp(8)=3. Hence, you can never find m and n, such that exp(m)=exp(n)+0.5, because the left-hand side is a natural number and the right-hand side is not. I hope this helps, Joao 15. I’m a little late to the game here, but I think it’s very interested that both of us had early forays into the calculational style, inspired by Roland Backhouse’s calculation in Program Construction. My effort at the same thing was recorded in JAW2, which can be read here: http://mathmeth.com/jaw/main/jaw0xx/jaw2.pdf . Cheers!
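As a small aside to the proof above: the function exp used there (the number of times 2 divides its argument, often written $\nu_2$) is easy to implement, and the parity argument can be spot-checked by brute force. The sketch below is only an illustration of the odd-versus-even step, not a replacement for the calculation:

```python
def exp2(k):
    """Number of times 2 divides the positive integer k."""
    count = 0
    while k % 2 == 0:
        k //= 2
        count += 1
    return count

# exp2(2 * n**2) = 1 + 2*exp2(n) is odd, while exp2(m**2) = 2*exp2(m) is even,
# so the two values differ and hence 2*n**2 != m**2, i.e. sqrt(2) != m/n.
for m in range(1, 200):
    for n in range(1, 200):
        assert exp2(2 * n * n) == 1 + 2 * exp2(n)
        assert exp2(m * m) == 2 * exp2(m)
        assert 2 * n * n != m * m
```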
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.874864399433136, "perplexity": 872.3427794781652}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250593937.27/warc/CC-MAIN-20200118193018-20200118221018-00274.warc.gz"}
https://www.reproducibility.org/RSF/book/tccs/emdpf/paper_html/node2.html
Random noise attenuation by - empirical mode decomposition predictive filtering Next: Empirical mode decomposition Up: Chen & Ma: EMD Previous: Introduction # f-x predictive filtering Let be the signal of trace and be the number of traces. If the slope of a linear event with constant amplitude in a seismic section is , then: (1) where denotes the trace interval. Equation 1 can be transformed into the frequency domain in order to give: (2) For a specific frequency , from equation 2 we can obtain a linear recursion, which is given by: (3) where . This recursion is a first-order difference equation, also known as an auto-regressive (AR) model of order 1. Similarly, superposition of linear events in the domain can be represented by an AR model of order (Harris and White, 1997; Tufts and Kumaresan, 1980) as the following equation: (4) where denotes the predictive error filter, with a length of . The prediction error energy is given by the following equation: (5) where symbol denotes convolution, and denotes the least-squares energy. By minimizing the prediction error energy , we can get the filtering operator . Applying this operator to the spatial trace yields the denoised results for the frequency slice . predictive filtering works perfectly on a single event. Figures 1(a)-1(c) show and compare the denoised results for a single flat synthetic event. The denoised result (Figure 1(b)) is quite good, with the random noise in Figure 1(a) largely removed and only a small amount of the useful component in the noise section, Figure 1(c). For a single dipping event, Figures 1(d)-1(f), the results are similar. However, when the number of different dips is increased, the seismic section becomes more complex and predictive filtering is not as effective. Figure 1(j) shows a synthetic section containing four events with differing dips. In the removed noise section, Figure 1(i), there remains a significant amount of residual useful energy. The synthetic data shown in Figures 1(a), 1(d), and 1(j) were all generated by SeismicLab (Sacchi, 2008), with a signal-to-noise ratio (SNR) of 2.0 for all of them. Here we define the SNR as the ratio of maximum amplitude of useful energy and the maximum amplitude of Gaussian white noise. Note that the same parameters were used for the predictive filters in each case shown in Figure 1. We now conclude that the effectiveness of predictive filtering deteriorates as the number of different dips increases, mainly because the total of leaked useful energy increases at the same time. In particular, when the number of dips is extremely large, as occurs with hyperbolic events, predictive filtering fails to achieve acceptable results. It is natural to infer that if we can first reduce the number of dips, or in other words pick the very steep events and total random noise out, then by applying the same predictive filtering, the predictive precision will improve. That is the subject of the section on empirical mode decomposition predictive filtering. syn01-flat,syn01-flat-fxdecon,syn01-flat-fxdecon-noise,syn01-dip,syn01-dip-fxdecon,syn01-dip-fxdecon-noise,syn01-complex,syn01-complex-fxdecon,syn01-complex-fxdecon-noise,syn01-complex,syn01-complex-fxemdpf,syn01-complex-fxemdpf-noise Figure 1. Demonstration of predictive filtering (a-i) and EMDPF (j-l) on synthetic section with different number of dip components. (a) Single flat event. (b) Denoised single flat event. (c) Removed noise section corresponding to (a) and (b). (d) Single dipping event. (e) Denoised single dipping event. 
(f) Removed noise section corresponding to (d) and (e). (g) Complex events section. (h) Denoised complex events section. (i) Removed noise section corresponding to (g) and (h). (j) Same as (g). (k) Denoised result by EMDPF. (l) Removed noise section corresponding to (j) and (k).
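As a rough illustration of the prediction idea described above, here is a short Python/NumPy sketch: transform to the f-x domain, fit a short forward prediction filter to each frequency slice by least squares, and keep the predictable (coherent) part. This is not the authors' code and is much cruder than the fxdecon routine referenced in the figure names (which also uses backward prediction, windowing and noise subtraction); the function name fx_predict and the default filter length are arbitrary choices.

```python
import numpy as np

def fx_predict(data, filter_len=4):
    """Crude f-x prediction sketch. data: 2-D array (time samples x traces).
    Each frequency slice is modelled as an AR process across traces and
    replaced by its least-squares forward prediction (the coherent part)."""
    nt, nx = data.shape
    spec = np.fft.rfft(data, axis=0)              # to the f-x domain
    out = np.zeros_like(spec)
    p = filter_len
    for iw in range(spec.shape[0]):               # loop over frequency slices
        s = spec[iw, :]
        if nx <= p:
            out[iw, :] = s
            continue
        # forward prediction: s[k] ~ sum_j a[j] * s[k-1-j]
        A = np.array([s[k - p:k][::-1] for k in range(p, nx)])
        rhs = s[p:]
        a, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        pred = s.copy()
        pred[p:] = A @ a                          # predicted (signal) estimate
        out[iw, :] = pred
    return np.fft.irfft(out, n=nt, axis=0)        # back to the t-x domain
```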
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9502111077308655, "perplexity": 1680.860726650773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00291.warc.gz"}
http://mymathforum.com/math-software/340421-solving-equation-matlab.html
My Math Forum Solving equation in Matlab Math Software Math Software - Mathematica, Matlab, Calculators, Graphing Software May 9th, 2017, 04:25 AM #1 Newbie   Joined: Nov 2016 From: Aus Posts: 11 Thanks: 0 Solving equation in Matlab Hi All I'm newbie and do not have that experience in Matlab but really in urgent help with this question? Could you please help with solving this equation for X in Matlab? S*e=(C.^2)/E*X^b+f*C*(X)^(b+a) S,e, E are constants C,f,b, and a are known row vectors of 12 elements, so expect 12 value for X S=328, e=0.0033, E=204000 this code comes with errors C= [777.5 777.5 798.75 820 832.75 832.75 841.25 862.5 883.75 905 926.25 947.] f=[0.630796209 0.652441176 0.642583333 0.632803922 0.626973922 0.626973922 0.623102941 0.613480392 0.603936275 0.594470588 0.585083333 0.57577451] b=[-0.09 -0.09 -0.09 -0.09 -0.09 -0.156 -0.09 -0.09 -0.09 -0.09 -0.09 -0.09 ] a=[-0.56 -0.56 -0.56 -0.56 -0.56 -0.485 -0.56 -0.56 -0.56 -0.56 -0.56 -0.56 ] b=[-0.09 -0.09 -0.09 -0.09 -0.09 -0.156 -0.09 -0.09 -0.09 -0.09 -0.09 -0.09 ]; c=[-0.56 -0.56 -0.56 -0.56 -0.56 -0.485 -0.56 -0.56 -0.56 -0.56 -0.56 -0.56 ]; N=[1 12]; SF=[777.5 777.5 798.75 820 832.75 832.75 841.25 862.5 883.75 905 926.25 947.5 ]; ef=[0.630796209 0.652441176 0.642583333 0.632803922 0.626973922 0.626973922 0.623102941 0.613480392 0.603936275 0.594470588 0.585083333 0.57577451 ]; E=204000; SMx=328.4; de2=0.033; for k=1:12 eqn=(de2*SMx-((SF(k))^2/E)*((2*N(k))^b(k))-ef(k)*SF(k)*((2*N(k))^((b(k)+c(k))))== 0); vpasolve(eqn,N(k)) end; Thank you so much Last edited by Naz Ar; May 9th, 2017 at 05:11 AM. May 9th, 2017, 05:27 AM #2 Senior Member   Joined: Feb 2016 From: Australia Posts: 1,591 Thanks: 546 Math Focus: Yet to find out. Could you please write out what you are trying to do exactly, and what errors you are getting. This is a mess. May 9th, 2017, 05:58 AM   #3 Newbie Joined: Nov 2016 From: Aus Posts: 11 Thanks: 0 Quote: Originally Posted by Joppy Could you please write out what you are trying to do exactly, and what errors you are getting. This is a mess. I want to solve the equation in terms of different inputs and get the N value for each input I do not how to define the vector of the output, and how to recall the output from the loop The error is ((Index exceeds matrix dimensions. Error in sym/subsref (line 776) R_tilde = builtin('subsref',L_tilde,Idx); Error in equ_tion (line 14) eqn=(de2*SMx-((SF(k))^2/E)*((2*N(k))^b(k))-ef(k)*SF(k)*((2*N(k))^((b(k)+c(k))))== 0); ))) the code is (( b=[-0.09 -0.09 -0.09 -0.09 -0.09 -0.156 -0.09 -0.09 -0.09 -0.09 -0.09 -0.09 ]; c=[-0.56 -0.56 -0.56 -0.56 -0.56 -0.485 -0.56 -0.56 -0.56 -0.56 -0.56 -0.56 ]; syms N ; SF=[777.5 777.5 798.75 820 832.75 832.75 841.25 862.5 883.75 905 926.25 947.5 ]; ef=[0.630796209 0.652441176 0.642583333 0.632803922 0.626973922 0.626973922 0.623102941 0.613480392 0.603936275 0.594470588 0.585083333 0.57577451 ]; E=204000; SMx=328.4; de2=0.033; for k=1:12 eqn=(de2*SMx-((SF(k))^2/E)*((2*N(k))^b(k))-ef(k)*SF(k)*((2*N(k))^((b(k)+c(k))))== 0); vpasolve(eqn,N(k)) end; ))) May 9th, 2017, 06:17 AM   #4 Math Team Joined: Oct 2011 Posts: 12,415 Thanks: 830 Quote: Originally Posted by Naz Ar S*e=(C.^2)/E*X^b+f*C*(X)^(b+a) As Joppy says: what a mess! Is that same as: S*e = [C^2 / E] * [X^b] + [f*C*X^(b+a)] ? Simplify it a bit: u = S*e, v = f*C, w = b+a to get: u = [C^2 / E] * [X^b] + [v*X^w] Anyhow, with X^b and X^(b+a), can't be solved directly... Agree Sir Joppy? 
May 13th, 2017, 07:47 PM #5 Newbie   Joined: Nov 2016 From: Aus Posts: 11 Thanks: 0
Please help with this problem. I tried to make it clearer
''
syms N
b, c, e, and s =row vectors of ten elements
A=6; (scalar value)
B=e*s;
M=b+c;
eqn=(A-(s^2)*((2*N)^b)-B*((2*N)^(M))== 0);
for k=1:10
vpasolve(eqn,N(k));
end;
''
how to solve N for all the vector elements (inputs)
This loop does not work as error come as follows
Error using ^
Inputs must be a scalar and a square matrix. To compute elementwise POWER, use POWER (.^) instead. (tried but did not work)
Error in equ_tion (line 16)
eqn=(A-(SF^2)*((2*N)^b)-B*((2*N)^(M))== 0); (cannot figure out what is the wrong here)
Thanks
May 13th, 2017, 08:41 PM   #6 Senior Member Joined: Feb 2016 From: Australia Posts: 1,591 Thanks: 546 Math Focus: Yet to find out.
Quote: Originally Posted by Naz Ar Please help with this problem. I tried to make it clearer '' syms N b, c, e, and s =row vectors of ten elements A=6; (scalar value) B=e*s; M=b+c; eqn=(A-(s^2)*((2*N)^b)-B*((2*N)^(M))== 0); for k=1:10 vpasolve(eqn,N(k)); end; '' how to solve N for all the vector elements (inputs) This loop does not work as error come as follows Error using ^ Inputs must be a scalar and a square matrix. To compute elementwise POWER, use POWER (.^) instead. (tried but did not work) Error in equ_tion (line 16) eqn=(A-(SF^2)*((2*N)^b)-B*((2*N)^(M))== 0); (cannot figure out what is the wrong here) Thanks
Code:
syms N
b = randn(10,1)';
c = randn(10,1)';
e = randn(10,1)';
s = randn(10,1)';
A = 6; %(scalar value)
B = e.*s;
M = b + c;
f1 = A -(s.^2).*((2.*N).^b)
f2 = B.*((2.*N).^(M))
eqn=((f1 - f2) == 0);
I'm pretty sure you can't use vpasolve for this.
May 15th, 2017, 08:16 PM #7 Newbie   Joined: Nov 2016 From: Aus Posts: 11 Thanks: 0
Thank you Joppy for your response and suggestion
When applied your code it does not solve the equation. do you advise any function to solve this eqn please
Regards
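Since each of the 12 equations is a scalar equation in one unknown N(k), one workable approach is bracketed root finding applied element by element; in MATLAB the analogous route would be fzero inside the loop instead of vpasolve. The sketch below is in Python/SciPy purely for illustration and is not from the thread; the vectors are truncated to four entries just to keep it short. Because both exponents are negative, the residual decreases monotonically in N, so a single bracketed root exists.

```python
import numpy as np
from scipy.optimize import brentq

# Constants and (truncated) vectors from the thread; use all 12 entries in practice.
E, SMx, de2 = 204000.0, 328.4, 0.033
SF = np.array([777.5, 777.5, 798.75, 820.0])
ef = np.array([0.630796209, 0.652441176, 0.642583333, 0.632803922])
b  = np.array([-0.09, -0.09, -0.09, -0.09])
c  = np.array([-0.56, -0.56, -0.56, -0.56])

N = np.empty_like(SF)
for k in range(len(SF)):
    # residual of  de2*SMx = (SF^2/E)*(2N)^b + ef*SF*(2N)^(b+c)  for element k
    f = lambda n, k=k: ((SF[k]**2 / E) * (2*n)**b[k]
                        + ef[k] * SF[k] * (2*n)**(b[k] + c[k]) - de2 * SMx)
    N[k] = brentq(f, 1e-12, 1e12)   # residual is monotone, so one root in the bracket
print(N)
```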
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8025285005569458, "perplexity": 4998.515329644577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865830.35/warc/CC-MAIN-20180523215608-20180523235608-00610.warc.gz"}
https://mlr3measures.mlr-org.com/reference/prauc.html
Computes the area under the Precision-Recall curve (PRC). The PRC can be interpreted as the relationship between precision and recall (sensitivity), and is considered to be a more appropriate measure for unbalanced datasets than the ROC curve. The PRC is computed by integration of the piecewise function.

prauc(truth, prob, positive, na_value = NaN, ...)

## Arguments

truth (factor()) True (observed) labels. Must have exactly the same two levels and the same length as prob.

prob (numeric()) Predicted probability for the positive class. Must have exactly the same length as truth.

positive (character(1)) Name of the positive class.

na_value (numeric(1)) Value that should be returned if the measure is not defined for the input (as described in the note). Default is NaN.

... (any) Additional arguments. Currently ignored.

## Value

Performance value as numeric(1).

## Note

This measure is undefined if the true values are either all positive or all negative.

## Meta Information

• Type: "binary"
• Range: $$[0, 1]$$
• Minimize: FALSE
• Required prediction: prob

## References

Davis J, Goadrich M (2006). "The relationship between precision-recall and ROC curves." In Proceedings of the 23rd International Conference on Machine Learning. ISBN 9781595933836.
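As an illustrative aside (in Python rather than R, and not the mlr3measures implementation), the "integration of the piecewise function" amounts to sorting by decreasing predicted probability, tracing out precision and recall, and summing precision times the recall increments. Tie handling and the interpolation convention below may differ from the package's exact result, and at least one observation of each class is assumed.

```python
import numpy as np

def pr_auc(truth, prob):
    """Piecewise integration of the precision-recall curve.
    truth: 0/1 labels (1 = positive class); prob: predicted P(positive)."""
    prob = np.asarray(prob, dtype=float)
    truth = np.asarray(truth)[np.argsort(-prob)]   # sort by decreasing score
    tp = np.cumsum(truth)
    fp = np.cumsum(1 - truth)
    precision = tp / (tp + fp)
    recall = tp / truth.sum()
    recall = np.concatenate(([0.0], recall))       # start the curve at recall 0
    precision = np.concatenate(([1.0], precision))
    return float(np.sum(np.diff(recall) * precision[1:]))

print(pr_auc([1, 0, 1, 1, 0, 0, 1, 0],
             [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]))
```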
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8997389078140259, "perplexity": 3437.47279047774}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879305.68/warc/CC-MAIN-20210419080654-20210419110654-00011.warc.gz"}
https://www.physicsforums.com/threads/peculiar-feature-of-a-commutator-can-anyone-explain.291008/
# Peculiar feature of a commutator, can anyone explain? 1. Feb 9, 2009 ### haitao23 http://img209.imageshack.us/img209/4922/14662031eo8.jpg [Broken] Last edited by a moderator: May 4, 2017 2. Feb 9, 2009 ### weejee 1. Note that we are only interested in wavefunctions such that $$f(\phi) = f(\phi+2\pi)$$. 2. When the operator $$-i\hbar \frac{\partial}{\partial\phi}$$ is applied to a $$2\pi$$ periodic function, the resulting function is also $$2\pi$$ periodic. -> No problem 3. Yet, the operator $$\phi$$ applied to a $$2\pi$$ periodic function doesn't give a periodic function. -> This is problematic. $$\phi$$ is not a legitimate operator in the Hilbert space considered. 3-2. Instead, one should understand "$$\phi$$" as a periodic function which has period $$2\pi$$ and coincide with $$\phi$$ in the interval $$[0,2\pi]$$ (Just imagine the sawtooth wave) 4. Now, if one evaluates the commutator with more carefully defined "$$\phi$$", additional delta function term follows, and everything should agree with the other method(utilizing Bra-Ket notations). 5. I think my explanation isn't rigorous in mathematicians' sense but still suffices to give the right answer. Hope it helped. Last edited: Feb 9, 2009 3. Feb 9, 2009 ### xepma What you proved is that the expectation value of the commutator with respect to some spherical symmetric eigenstate of L_z is zero. But this is always the case, namely, suppose if: $[A,B] = C$ For some hermitian operators A, B and a third operator C. Now take an eigenstate of the operator B, let's name it $|b\rangle$, such that $B|b\rangle = b|b\rangle$. Then: $\langle b|[A,B]|b\rangle = \langle b|AB|b\rangle - \langle b|BA|b\rangle =b\langle b|A|b\rangle - b\langle b|A|b\rangle = 0$ Which is precisely what you showed, and I haven't even defined the operators yet. The fact is that we cannot conclude that the commutator is zero, because we have only considered a very particular set of expecation values. For instance, consider the following (b and b' correspond to different states): $\langle b'|[A,B]|b\rangle = b'\langle b'|A|b\rangle - b\langle b'|A|b\rangle$ In general, this will not be identical to zero. Last edited: Feb 9, 2009 4. Feb 9, 2009 ### haitao23 Hey weejee! You are really astute! I think You pinpointed the problem that has been bothering me for a month! I believe you got the point, just to clarify How is the wavefunction in spherical coordinate defined? Is it defined with phi belonging to [0,2pi) or is it defined by imposing the condition f(phi+2pi)=f(phi)? And please give me a book (where it is to refered to also... I did not find a definition even in the dictionary like Cohen-Tannoudji book) What I am wondering is that if the wavefunction is indeed defined with phi belonging to [0,2pi) (which in my humble opinion sounds far more natural to associate each point in space with a single point in the function) then the periodity problem of phi operator really dosen't matter as we would never get out of the 2pi domain... Last edited: Feb 9, 2009 5. Feb 9, 2009 ### haitao23 Hi,xepma I do not agree with you. If the operator is indeed as the coordinate representation calculated, i.e. ih(bar), then no matter what state we use, we should always get the expectation value is ih(bar), since the commutator is essentially the operator ih(bar)I. 
So if we can show that in a PARTICULAR state, the expectation value is NOT ih(bar), then we have shown that the representation is wrong (Proof by finding a counterexample) So I am not saying that the operator is 0, but I am say the operator is definitely not ih(bar) as calculated in the coordinate representation 6. Feb 9, 2009 ### jensa Hi, I think I can add a reference, although it is not a textbook. There is a nice old article by Carruthers et al. in Reviews of Modern Physics, 40, 411 (1968) that discusses this issue. If you do not have access to PROLA just google "phase and angle variables in quantum mechanics" and you should be able to find a downloadable version around 6th place (I don't want to give direct link due to forum policy) The base manifold (space on which function is defined) is a circle, i.e. the points phi and phi+2pi are the same points, so it does not really make sense to say that the wave-function is defined outside [0,2pi). The topological constraint imposes the periodicity of the wave-functions, and any operator on this (reduced)Hilbert space must transform a periodic function to another periodic function. This is not satisfied by phi as pointed out by weejee, so it must be made periodic. The paradox in the commutation relation calculation you showed is a direct consequence of the fact that phi*f(phi) is not periodic. It is easy to see that in fact L_z=id/dphi is not self adjoint unless it acts on periodic functions (check it). I am not completely sure this answered your question, I realize I mostly repeated what weejee said.. anyway hope it helped in some way. 7. Feb 9, 2009 ### weejee Well, thanks. Actually I had pondered over this problem for months and reached this conclusion after debates with my friends. I think the condition ( $$f(\phi)=f(\phi+2\pi)$$) should be imposed, as $$\phi$$ and $$\phi+2\pi$$ corresponds to the same physical point. To fit the function $$f(\phi)=\phi$$ in such scheme, the sudden $$2\pi$$ drop between $$2\pi-\epsilon$$ and $$0+\epsilon$$ should be accounted for. Therefore, its derivative is not 1 but additional delta function term follows. Last edited: Feb 9, 2009 8. Feb 9, 2009 ### haitao23 Yes, you definitely answered my question. And your reference makes excellent reading. Thanks a lot. Although I am still reading the article, I am eager to clarify a bit more on what u just said, So do u mean by what u said in red that the wavefunction is NOT defined outside [0,2pi)? Or from what u said in blue u seems to be defining the wavefunction outside [0,2pi) by imposing periodicity? So after all is the wavefunction defined outside [0,2pi) ? 9. Feb 9, 2009 ### lbrits You can also ask what is the operator conjugate to the number operator of the harmonic oscillator. Supposedly this is how Dirac found the creation and annihilation operators (though I'm not sure). In any event, you might want to look at the Weyl representation of the canonical commutation relations. 10. Feb 9, 2009 ### jensa I was afraid that I was not very clear in what I said. The reason for this is that the issue involves topology which I am only vaguely familiar with. But I will try to explain what I meant. As I mentioned before, the space that the wavefunction is defined on is the circle. The problem is to find a proper coordinate system for this space. If we simply use the angle $\phi$ then we run into the problem that the points $\phi=0, \phi=2\pi$ describe the same point but have different "labels". 
Hence it does not correctly represent the non-trivial topology of the circle. So how do we go about dealing with this difficulty? In topology one makes use of several coordinate systems on different "patches" of the manifold and "glue them together" (via a map/relation between the two coordinate systems). In most cases it is much simpler to do the following: Define the angle to be in the interval [0,2pi), i.e. excluding the point 2pi, and then impose the limiting behaviour $\psi(\phi\rightarrow 2\pi)=\psi(0)$. This makes sure the wave-function knows about the topological properties of the space it is defined on (this is what I mean when I say impose periodic boundary conditions). Expectation values and such can then be obtained by integrating over [0,R] and taking the limit as R goes to 2pi. Effectively this last step just means to integrate over [0,2pi]. The important thing is just to make sure the wave-functions obey the topological constraints.
Last edited: Feb 9, 2009
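To make weejee's delta-function remark concrete, here is a short summary added for reference (it is not a post from the thread; see the review cited by jensa for the careful treatment). With the $2\pi$-periodic "sawtooth" angle operator

$$\Phi(\phi)=\phi-2\pi\left\lfloor\frac{\phi}{2\pi}\right\rfloor,\qquad \frac{d\Phi}{d\phi}=1-2\pi\sum_{n}\delta(\phi-2\pi n),$$

one finds, acting on periodic wavefunctions,

$$[\Phi,L_z]\,f=\Phi(-i\hbar f')+i\hbar(\Phi f)'=i\hbar\,\frac{d\Phi}{d\phi}\,f=i\hbar\left(1-2\pi\sum_{n}\delta(\phi-2\pi n)\right)f.$$

In an $L_z$ eigenstate $\psi_m=e^{im\phi}/\sqrt{2\pi}$ the constant term contributes $i\hbar$ and the boundary delta term contributes $-i\hbar$, so the diagonal matrix element vanishes, consistent with xepma's general argument that such diagonal elements are always zero.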
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9208280444145203, "perplexity": 503.11327111394945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891808539.63/warc/CC-MAIN-20180217224905-20180218004905-00573.warc.gz"}
http://www.eels.info/how/quantification/plural-scattering-and-sample-thickness
# Plural scattering and sample thickness Plural scattering occurs when a significant fraction of incident electrons that pass through a sample are scattered inelastically more than once. The inelastic mean free path $\lambda$ represents the mean distance between inelastic scattering events for these electrons. When you regard inelastic scattering as a random event, the probability of n-fold inelastic scattering follows a Poisson distribution. $$P_{n}=\frac{I_{n}}{I_{t}}=\frac{\left ( \frac{t}{\lambda } \right )^{n}}{n!}exp\left ( \frac{-t}{\lambda } \right )$$ where • $$I_{n}$$ = intensity of n-fold scattering • $$I_{t}$$ = total intensity For $$n=0$$, we get the simple relationship $$t/\lambda = -ln( I_{o}/I_{t} )$$ where • $$I_{o}$$ = intensity of 0-fold scattering (the elastic scattering signal) This method is known as the log-ratio method. From a spectrum, the integral of the ZLP will give $$I_{o}$$, while the integral of the entire spectrum will give $$I_{t}$$. You can compute this easily from low-loss spectrum via the Thickness button located in the EELS processing palette in the Techniques panel. GMS will fit the ZLP to extract the $$I_{o}$$ contribution. While the total spectrum is never truly measured, $$I_{t}$$ can be approximated from a measured energy range since a majority of the signal is contained in the first 50 – 100 eV. GMS uses a power law tail beyond the measured spectrum to estimate its contribution to $$I_{t}$$. When you perform this analysis, you can determine if the region is thin enough for EELS and if plural scattering has significant impact. Typically, just the ratio of $$t/\lambda$$ is needed, but methods are available to estimate the value of the mfp for your material. Another method to determine the absolute thickness of the sample is the Kramers-Kronig Sum Rule. ## References Malis, T.; Cheng, S. C.; Egerton, R. F. EELS log ratio technique for specimen-thickness measurement in the TEM. J. Electron Microscope Technique. 8:193; 1988. Iakoubovskii, K.; Mitsuishi, K.; Nakayama, Y.; Furuya, K. Thickness measurements with electron energy loss spectroscopy. Microscopy Research and Technique. 71(8):626 – 31; 2008. Iakoubovskii, K.; Mitsuishi, K.; Nakayama, Y.; Furuya, K. Mean free path of inelastic electron scattering in elemental solids and oxides using transmission electron microscopy: Atomic number dependent oscillatory behavior. Physical Review B. 77(10):104102; 2008. Egerton, R. F., Cheng, S. C. Measurement of local thickness by electron energy-loss spectroscopy. Ultramicroscopy. 21(3):231 – 244; 1987.
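As a concrete (though simplified) illustration of the log-ratio formula above, here is a short Python sketch. It replaces the ZLP fit that GMS performs with a plain summation window around 0 eV and omits the power-law tail correction for $$I_{t}$$, so treat it as a rough estimate; the function name and default window are my own choices.

```python
import numpy as np

def log_ratio_thickness(energy, counts, zlp_window=(-2.0, 2.0)):
    """Relative thickness t/lambda from a low-loss spectrum (log-ratio method).
    energy     : energy-loss axis in eV, zero-loss peak centred near 0 eV
    counts     : spectrum intensity per channel (uniform channel width assumed)
    zlp_window : range summed as the zero-loss intensity I0 (a crude stand-in
                 for the ZLP fit that GMS performs)"""
    energy = np.asarray(energy, dtype=float)
    counts = np.asarray(counts, dtype=float)
    in_zlp = (energy >= zlp_window[0]) & (energy <= zlp_window[1])
    I0 = counts[in_zlp].sum()
    It = counts.sum()              # no power-law tail extrapolation in this sketch
    return -np.log(I0 / It)        # t / lambda
```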
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9459991455078125, "perplexity": 1986.5342304863505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249504746.91/warc/CC-MAIN-20190223142639-20190223164639-00122.warc.gz"}
http://arxiv.org/abs/1303.0289
hep-ph (what is this?) # Title: Higgs diphoton rate and mass enhancement with vector-like leptons and the scale of supersymmetry Abstract: Analysis of contributions from vector-like leptonic supermultiplets to the Higgs diphoton decay rate and to the Higgs boson mass is given. Corrections arising from the exchange of the new leptons and their super-partners, as well as their mirrors are computed analytically and numerically. We also study the correlation between the enhanced Higgs diphoton rate and the Higgs mass corrections. Specifically, we find two branches in the numerical analysis: on the lower branch the diphoton rate enhancement is flat while on the upper branch it has a strong correlation with the Higgs mass enhancement. It is seen that a factor of 1.4-1.8 enhancement of the Higgs diphoton rate on the upper branch can be achieved, and a 4-10 GeV positive correction to the Higgs mass can also be obtained simultaneously. The effect of this extra contribution to the Higgs mass is to release the constraint on weak scale supersymmetry, allowing its scale to be lower than in the theory without extra contributions. The vector-like supermultiplets also have collider implications which can be tested at the LHC and at the ILC. Comments: 20 pages, 4 figures Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Experiment (hep-ex) Journal reference: Phys.Rev.D87:075018,2013 DOI: 10.1103/PhysRevD.87.075018 Cite as: arXiv:1303.0289 [hep-ph] (or arXiv:1303.0289v2 [hep-ph] for this version) ## Submission history From: Wan-Zhe Feng [view email] [v1] Fri, 1 Mar 2013 21:00:02 GMT (427kb,D) [v2] Mon, 20 May 2013 15:48:26 GMT (421kb,D)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9249591827392578, "perplexity": 2131.4542995651022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00117-ip-10-164-35-72.ec2.internal.warc.gz"}
https://research.tue.nl/nl/publications/probing-the-local-electronic-structure-of-isovalent-bi-atoms-in-i
Probing the local electronic structure of isovalent Bi atoms in InP
C.M. Krammel, A.R. da Cruz, M.E. Flatté, M. Roy, P.A. Maksym, L.Y. Zhang, K. Wang, Y.Y. Li, S.M. Wang, P. M. Koenraad
Abstract
Cross-sectional scanning tunneling microscopy (X-STM) is used to experimentally study the influence of isovalent Bi atoms on the electronic structure of InP. We map the spatial pattern of the Bi impurity state, which originates from Bi atoms down to the sixth layer below the surface, in topographic, filled state X-STM images on the natural $\{110\}$ cleavage planes. The Bi impurity state has a highly anisotropic bowtie-like structure and extends over several lattice sites. These Bi-induced charge redistributions extend along the $\left\langle 110\right\rangle$ directions, which define the bowtie-like structures we observe. Local tight-binding calculations reproduce the experimentally observed spatial structure of the Bi impurity state. In addition, the influence of the Bi atoms on the electronic structure is investigated in scanning tunneling spectroscopy measurements. These measurements show that Bi induces a resonant state in the valence band, which shifts the band edge towards higher energies. This is in good agreement with first principles calculations. Furthermore, we show that the energetic position of the Bi induced resonance and its influence on the onset of the valence band edge depend crucially on the position of the Bi atoms relative to the cleavage plane.
Original language: English
1906.01790 10 arXiv
Published - 5 Jun 2019
Keywords
• cond-mat.mtrl-sci
• cond-mat.mes-hall
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8894358277320862, "perplexity": 4704.143123232349}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016949.77/warc/CC-MAIN-20220528154416-20220528184416-00720.warc.gz"}
https://ru.kattis.com/problems/anotherbrick
OpenKattis Reykjavik University # Another Brick in the Wall The construction worker previously known as Lars has many bricks of height $1$ and different lengths, and he is now trying to build a wall of width $w$ and height $h$. Since the construction worker previously known as Lars knows that the subset sum problem is $\mathsf{NP}$-hard, he does not try to optimize the placement but he just lays the bricks in the order they are in his pile and hopes for the best. First he places the bricks in the first layer, left to right; after the first layer is complete he moves to the second layer and completes it, and so on. He only lays bricks horizontally, without rotating them. If at some point he cannot place a brick and has to leave a layer incomplete, then he gets annoyed and leaves. It does not matter if he has bricks left over after he finishes. Yesterday the construction worker previously known as Lars got really annoyed when he realized that he could not complete the wall only at the last layer, so he tore it down and asked you for help. Can you tell whether the construction worker previously known as Lars will complete the wall with the new pile of bricks he has today? ## Input The first line contains three integers $h$, $w$, $n$ ($1 \leq h \leq 100$, $1 \leq w \leq 100$, $1 \leq n \leq 10\, 000$), the height of the wall, the width of the wall, and the number of bricks respectively. The second line contains $n$ integers $x_ i$ ($1 \leq x_ i \leq 10$), the length of each brick. ## Output Output YES if the construction worker previously known as Lars will complete the wall, and NO otherwise. Sample Input 1 Sample Output 1 2 10 7 5 5 5 5 5 5 5 YES Sample Input 2 Sample Output 2 2 10 7 5 5 5 3 5 2 2 NO
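A direct simulation of the statement, sketched below in Python, keeps a running width for the layer being built, starts a new layer when it is exactly filled, and fails as soon as a brick overshoots the width. This follows my reading of the problem and has not been run against the judge.

```python
import sys

def solve():
    data = sys.stdin.read().split()
    h, w, n = int(data[0]), int(data[1]), int(data[2])
    bricks = map(int, data[3:3 + n])
    layers_done, current = 0, 0          # finished layers, filled width of current layer
    for x in bricks:
        if layers_done == h:
            break                        # wall finished; leftover bricks are fine
        current += x
        if current == w:
            layers_done += 1
            current = 0
        elif current > w:                # brick overshoots: layer cannot be completed
            print("NO")
            return
    print("YES" if layers_done >= h else "NO")

solve()
```

On the two samples this prints YES and NO respectively: the second input fails because 5 + 3 + 5 overshoots the width 10 in the second layer.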
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8173988461494446, "perplexity": 570.6417378633091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662560022.71/warc/CC-MAIN-20220523163515-20220523193515-00728.warc.gz"}
https://en.m.wikipedia.org/wiki/Gauss%E2%80%93Bonnet_theorem
# Gauss–Bonnet theorem The Gauss–Bonnet theorem, or Gauss–Bonnet formula, is a relationship between surfaces in differential geometry. It connects the curvature of a surface (from geometry) to its Euler characteristic (from topology). An example of a complex region where Gauss–Bonnet theorem can apply. Shows the sign of geodesic curvature. In the simplest application, the case of a triangle on a plane, the sum of its angles is 180 degrees.[1] The Gauss–Bonnet theorem extends this to more complicated shapes and curved surfaces, connecting the local and global geometries. The theorem is named after Carl Friedrich Gauss, who developed a version but never published it, and Pierre Ossian Bonnet, who published a special case in 1848.[not verified in body] ## Statement Suppose ${\displaystyle M}$  is a compact two-dimensional Riemannian manifold with boundary ${\displaystyle \partial M}$ . Let ${\displaystyle K}$  be the Gaussian curvature of ${\displaystyle M}$ , and let ${\displaystyle k_{g}}$  be the geodesic curvature of ${\displaystyle \partial M}$ . Then[2][3] ${\displaystyle \int _{M}K\;dA+\int _{\partial M}k_{g}\;ds=2\pi \chi (M),\,}$ where dA is the element of area of the surface, and ds is the line element along the boundary of M. Here, ${\displaystyle \chi (M)}$  is the Euler characteristic of ${\displaystyle M}$ . If the boundary ${\displaystyle \partial M}$  is piecewise smooth, then we interpret the integral ${\displaystyle \int _{\partial M}k_{g}\;ds}$  as the sum of the corresponding integrals along the smooth portions of the boundary, plus the sum of the angles by which the smooth portions turn at the corners of the boundary. Many standard proofs use the theorem of turning tangents, which states roughly that the winding number of a Jordan curve is exactly ±1.[2] ## Interpretation and significance The theorem applies in particular to compact surfaces without boundary, in which case the integral ${\displaystyle \int _{\partial M}k_{g}\;ds}$ can be omitted. It states that the total Gaussian curvature of such a closed surface is equal to 2π times the Euler characteristic of the surface. Note that for orientable compact surfaces without boundary, the Euler characteristic equals ${\displaystyle 2-2g}$ , where ${\displaystyle g}$  is the genus of the surface: Any orientable compact surface without boundary is topologically equivalent to a sphere with some handles attached, and ${\displaystyle g}$  counts the number of handles. If one bends and deforms the surface ${\displaystyle M}$ , its Euler characteristic, being a topological invariant, will not change, while the curvatures at some points will. The theorem states, somewhat surprisingly, that the total integral of all curvatures will remain the same, no matter how the deforming is done. So for instance if you have a sphere with a "dent", then its total curvature is 4π (the Euler characteristic of a sphere being 2), no matter how big or deep the dent. Compactness of the surface is of crucial importance. Consider for instance the open unit disc, a non-compact Riemann surface without boundary, with curvature 0 and with Euler characteristic 1: the Gauss–Bonnet formula does not work. It holds true however for the compact closed unit disc, which also has Euler characteristic 1, because of the added boundary integral with value 2π. As an application, a torus has Euler characteristic 0, so its total curvature must also be zero. 
If the torus carries the ordinary Riemannian metric from its embedding in R3, then the inside has negative Gaussian curvature, the outside has positive Gaussian curvature, and the total curvature is indeed 0. It is also possible to construct a torus by identifying opposite sides of a square, in which case the Riemannian metric on the torus is flat and has constant curvature 0, again resulting in total curvature 0. It is not possible to specify a Riemannian metric on the torus with everywhere positive or everywhere negative Gaussian curvature. ## For triangles Sometimes the GB formula is stated as ${\displaystyle \int _{T}K=2\pi -\sum \alpha -\int _{\partial T}\kappa _{g}}$ where T is a geodesic triangle. Here we define a "triangle" on M to be a simply-connected region whose boundary consists of three geodesics. We can then apply GB to the surface T formed by the inside of that triangle and the piecewise boundary of the triangle. The geodesic curvature the bordering geodesics is 0, and the Euler characteristic of T being 1. Hence the sum of the turning angles of the geodesic triangle is equal to 2π minus the total curvature within the triangle. Since the turning angle at a corner is equal to π minus the interior angle, we can rephrase this as follows:[4] The sum of interior angles of a geodesic triangle is equal to π plus the total curvature enclosed by the triangle. ${\displaystyle \sum (\pi -\alpha )=\pi +\int _{T}K}$ In the case of the plane (where the Gaussian curvature is 0 and geodesics are straight lines), we recover the familiar formula for the sum of angles in an ordinary triangle. On the standard sphere, where the curvature is everywhere 1, we see that the angle sum of geodesic triangles is always bigger than π. ## Special cases A number of earlier results in spherical geometry and hyperbolic geometry, discovered over the preceding centuries, were subsumed as special cases of Gauss–Bonnet. ### Triangles In spherical trigonometry and hyperbolic trigonometry, the area of a triangle is proportional to the amount by which its interior angles fail to add up to 180°, or equivalently by the (inverse) amount by which its exterior angles fail to add up to 360°. The area of a spherical triangle is proportional to its excess, by Girard's theorem – the amount by which its interior angles add up to more than 180°, which is equal to the amount by which its exterior angles add up to less than 360°. The area of a hyperbolic triangle, conversely is proportional to its defect, as established by Johann Heinrich Lambert. ### Polyhedra Descartes' theorem on total angular defect of a polyhedron is the polyhedral analog: it states that the sum of the defect at all the vertices of a polyhedron which is homeomorphic to the sphere is 4π. More generally, if the polyhedron has Euler characteristic ${\displaystyle \chi =2-2g}$  (where g is the genus, meaning "number of holes"), then the sum of the defect is ${\displaystyle 2\pi \chi .}$  This is the special case of Gauss–Bonnet, where the curvature is concentrated at discrete points (the vertices). Thinking of curvature as a measure, rather than as a function, Descartes' theorem is Gauss–Bonnet where the curvature is a discrete measure, and Gauss–Bonnet for measures generalizes both Gauss–Bonnet for smooth manifolds and Descartes' theorem. ## Combinatorial analog There are several combinatorial analogs of the Gauss–Bonnet theorem. We state the following one. Let ${\displaystyle M}$  be a finite 2-dimensional pseudo-manifold. 
Let ${\displaystyle \chi (v)}$  denote the number of triangles containing the vertex ${\displaystyle v}$ . Then ${\displaystyle \sum _{v\in {\mathrm {int} }{M}}\left(6-\chi (v)\right)+\sum _{v\in \partial M}\left(3-\chi (v)\right)=6\chi (M),\ }$ where the first sum ranges over the vertices in the interior of ${\displaystyle M}$ , the second sum is over the boundary vertices, and ${\displaystyle \chi (M)}$  is the Euler characteristic of ${\displaystyle M}$ . Similar formulas can be obtained for 2-dimensional pseudo-manifold when we replace triangles with higher polygons. For polygons of n vertices, we must replace 3 and 6 in the formula above with n/(n − 2) and 2n/(n − 2), respectively. For example, for quadrilaterals we must replace 3 and 6 in the formula above with 2 and 4, respectively. More specifically, if ${\displaystyle M}$  is a closed 2-dimensional digital manifold, the genus turns out [5] ${\displaystyle g=1+{\frac {M_{5}+2M_{6}-M_{3}}{8}},}$ where ${\displaystyle M_{i}}$  indicates the number of surface-points each of which has ${\displaystyle i}$  adjacent points on the surface. This is the simplest formula of Gauss–Bonnet theorem in 3D digital space. ## Generalizations The Chern theorem (after Shiing-Shen Chern 1945) is the 2n-dimensional generalization of GB (also see Chern–Weil homomorphism). The Riemann–Roch theorem can also be seen as a generalization of GB to complex manifolds. An extremely far-reaching generalization that includes all the above-mentioned theorems is the Atiyah–Singer index theorem, which won both Michael Atiyah and Isadore Singer the Abel Prize. A generalization to 2-manifolds that need not be compact is the Cohn-Vossen's inequality. ## In popular culture In Greg Egan's novel Diaspora, two characters discuss the derivation of this theorem. The theorem can be used directly as a system to control sculpture. For example in work by Edmund Harriss in the collection of the University of Arkansas Honors College.[6] Sculpture made from flat materials using the Gauss-Bonnet Theorem 4. ^ Weeks, Jeffrey R. (2001-12-12). "The Shape of Space". CRC Press. doi:10.1201/9780203912669. ISBN 9780203912669 – via Taylor & Francis. Cite journal requires |journal= (help)
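As a tiny numerical illustration of the polyhedral (Descartes) special case discussed above, the following Python snippet, added here as an example rather than part of the article, checks that the angular defects of a cube and of a regular tetrahedron each sum to 4π = 2πχ.

```python
import math

# Cube: 8 vertices, three squares (interior angle pi/2) meet at each vertex.
cube_defect = 8 * (2 * math.pi - 3 * (math.pi / 2))
# Regular tetrahedron: 4 vertices, three equilateral triangles (angle pi/3) meet at each.
tetra_defect = 4 * (2 * math.pi - 3 * (math.pi / 3))

assert math.isclose(cube_defect, 4 * math.pi)
assert math.isclose(tetra_defect, 4 * math.pi)
print(cube_defect, tetra_defect)   # both equal 4*pi, about 12.566
```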
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 31, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9335668087005615, "perplexity": 380.5811923076943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703564029.59/warc/CC-MAIN-20210125030118-20210125060118-00537.warc.gz"}
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Multinomial_theorem
# Multinomial theorem In mathematics, the multinomial formula is an expression of a power of a sum in terms of powers of the addends. For any positive integer m and any nonnegative integer n, the multinomial formula is $(x_1 + x_2 + x_3 + \cdots + x_m)^n = \sum_{k_1,k_2,k_3,\ldots,k_m} {n \choose k_1, k_2, k_3, \ldots, k_m} x_1^{k_1} x_2^{k_2} x_3^{k_3} \cdots x_m^{k_m}$ The summation is taken over all combinations of the indices k1 through km such that k1 + k2 + k3 + ... + km = n; some or all of the nonnegative indices may be zero. The numbers ${n \choose k_1, k_2, k_3, \ldots, k_m} = \frac{n!}{k_1! k_2! k_3! \cdots k_m!}$ are the multinomial coefficients. The multinomial coefficients have a direct combinatorial interpretation, as the number of ways of depositing n distinguished objects in m bins, with k1 in the first, and so on. This is an equivalent assertion. The binomial theorem and binomial coefficient are special cases, for m = 2, of the multinomial formula and multinomial coefficient, respectively. Therefore this is also called the multinomial theorem.
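To make the formula concrete, here is a short Python check, added as an example and not part of the encyclopedia entry, that evaluates both sides of the multinomial formula at a specific point; the helper name multinomial is arbitrary.

```python
from math import factorial
from itertools import product

def multinomial(n, ks):
    """n! / (k1! k2! ... km!) for nonnegative integers ks summing to n."""
    assert sum(ks) == n
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

# Check (x1 + x2 + x3)^n against the expanded sum at a specific point.
x, n = (2, 3, 5), 4
lhs = sum(x) ** n
rhs = 0
for ks in product(range(n + 1), repeat=len(x)):
    if sum(ks) == n:
        term = multinomial(n, ks)
        for xi, ki in zip(x, ks):
            term *= xi ** ki
        rhs += term
print(lhs, rhs)   # both 10**4 = 10000
assert lhs == rhs
```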
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8412906527519226, "perplexity": 1066.7314052706897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707436824/warc/CC-MAIN-20130516123036-00002-ip-10-60-113-184.ec2.internal.warc.gz"}
https://kullabs.com/class-10/compulsory-mathematics/sets/cardinality-of-a-set?type=video
## Cardinality of a set Subject: Compulsory Mathematics #### Overview The number of distinct element in a given set A is called the cardinal number of A. It is denoted by n(A). If A = { 1, 2, 3, 4, 5 }, then the cardinality of set A is denoted by n (A) = 5. ##### Cardinality of a set The cardinality of set A is defined as the number of elements in the set A and is denoted by n(A). For example, if A = {a,b,c,d,e} then cardinality of set A i.e.n(A) = 5 Let A and B are two subsets of a universal set U. Their relation can be shown in Venn-diagram as: $$n(A) = n_o( A) + n(A \cap B)$$ $$\text{or,}\: n(A) - n (A \cap B)= n_o(A)$$ $$n(B) = n_o(B) + n(A \cap B)$$ $$\text {or,}\: n(B) - n(A \cap B) = n_o(B)$$ Also, \begin{align*} n(A∪B) &= n_o(A) + n(A∩B) + n_o(B)\\ n(A∪B) &= n(A) - n(A∩B) + n(A∩B) + n(B) - n(A∩B)\\ n(A∪B) &= n(A) + n(B)- n(A∩B)\\ \therefore n(A∪B) &= n(A) + n(B) - n(A∩B)\\ \end{align*} If A and B are disjoint sets then: $n(A \cap B) = 0, n(A \cup B) =n(A) + n(B)$ Again, $n(U) = n(A \cup B) + n(\overline {A\cup B)}$ If $n(\overline {A \cup B)}$=0, then $n(U) = n(A \cup B)$ #### Problems involving three sets Let A, B and C are three non-empty and intersecting sets, then: $n(A \cup B \cup C) = n(A) + n(B) +n(C) - n(A \cap B) -n(B \cap C) -n(C \cap A) +n(A \cap B \cap C).$ In Venn-diagram $n(A)$ = Number of elements in set A. $n(B)$ = Number of elements in set B. $n(C)$=Number of element in set C. $n_o(A)$ = Number of elements in set A only. $n_o(B)$ = Number of elements in set B only. $n_o(C)$ = Number of elements in set C only. $n_o(A \cap B)$ = Number of elements in set A and B only. $n_o(B \cap C)$ = Number of elements in set B and C only. $n_o(C \cap A)$ = Number of elements in set A and C only. $n(A \cap B \cap C)$ = Number of elements in set A, B and C. #### From the Venn-diagram \begin{align*} n(A \cup B \cup C) &= n_o(A) +n_o(B) +n_o(C) +n_o(A \cap B) +n_o(B \cap C) +n_o(C \cap A) + n(A \cap B \cap C)\\ &= n(A) - n_o(A \cap B) - n_o(C \cap A) - n(A \cap B \cap C) + n(B) - n_o(B \cap C) - n_o(C\cap B) - n(A \cap B \cap C) + n(C) - n_o(A \cap C) - n_o(B \cap C) - n(A \cap B \cap C) + n_o(A \cap B) +n_o(B \cap C) +n_o(C \cap A) + n(A \cap B \cap C)\\ &= n(A) + n(B) + n(C) - [n_o(A \cap B) +n(A \cap B \cap C)] - [n_o(A \cap B) +n(A \cap B \cap C)] - [n_o(B \cap C) +n(A \cap B \cap C)] - [n_o(C \cap A) +n(A \cap B \cap C)]+n(A \cap B \cap C)\\ &= n(A) + n(B) + n(C) - n(A \cap B) -n(B \cap C) -n(A \cap C) +n(A \cap B \cap C)\\ \end{align*} $$\boxed{\therefore (A \cup B \cup C)= n(A) + n(B) + n(C) - n(A \cap B) -n(B \cap C) -n(C \cap A) +n(A \cap B \cap C)}$$ If A, B and C are disjoint sets, $n(A \cup B \cup C) = n(A) + n(B) + n(C)$ ##### Things to remember • The cardinality of a set is a positive integer but it is not decimal. So, n(A)  is not equal to 50% because  50% = 0.5. • If A, B and C are disjoint sets, $n(A \cup B \cup C) = n(A) + n(B) + n(C).$ • $(A \cup B \cup C)= n(A) + n(B) + n(C) - n(A \cap B) -n(B \cap C) -n(C \cap A) +n(A \cap B \cap C)$ • It includes every relationship which established among the people. • There can be more than one community in a society. Community smaller than society. • It is a network of social relationships which cannot see or touched. • common interests and common objectives are not necessary for society. ##### Venn Diagrams and Sets Solution: n(U) = 43 n(A) = 25 n(B) = 18 n(A ∩ B)= 7 The Venn-diagram of above information is as follow: \begin{align*}\overline {A \cup B} &= ? 
\\ n(A \cup B) &= n(A) + n(B) - n(A \cap B) \\ &= 25 + 18 -7 \\ &= 43 - 7 \\ &= 36\\ Again, n(\overline {A \cup B}) &= n(U) - n(A \cup B) \\ &= 43 - 36\\ &= 7 Ans. \end{align*} Solution: Let N and H be the set of people who like to see Nepali movies and Hindi movies respectively From question, n(U) = 80, n(N) = 47, n(H) = 31 , $(\overline {N \cup H})$ = 21 (i) \begin{align*} n(N \cup H) &= n(U) - (\overline {N \cup H}) \\ &= 80 - 21 \\ &= 59 \\ Again, n(N \cup H) &= n(N) + n(H) - n(N \cap H)\\ or \: 59 &= 47 + 31 -n(N \cap H) \\ or, \: 59 &= 78 -n(N \cap H) \\ or, \:n(N \cap H) &= 78 - 59 \\ &= 19 \\ n_o(N) &= n(N) -n(N \cap H) \\ &= 47 -19 \\ &= 28 Ans. \end{align*} (ii) The no. of people who like to see Hindi movies only \begin{align*} n_o (H) &= n(H) -n(N \cap H) \\ &= 31 - 19 \\ &= 12 \end{align*} (iii) The above information is shown in the Venn-diagram. solution: Let F and V represent the sets of people who like football and volleyball respectively and U be the universal sets. From question, $n(F) = 40, n(F \cap V) = 10, \: n(U) = n(F \cup V) = 65 , \: n(V) = ?$ (i) We know that, \begin{align*} n(F \cup V) &= n(F) + n(V) - n(F \cap V) \\ or, \: 65 &= 40 + n(V) - 10 \\ or, \: 65 &= 30 + n(V)\\ or, \: n(V) &= 65 - 30 \\ \therefore n(V) &= 35 Ans. \end{align*} (ii) $n_o (F) = n(F) - n(F \cap V) \: = 40 -10 \: = 30 Ans.$ (iii) $n_o (V) = n(V) -n(F \cap V) \: = 35 - 10 \: = 25 Ans.$ Solution: Let, F & M denote the set of people who like folk and modern song respectively. U be the universal set. From question, n(U) = 100 n(F) = 65 n(M) = 55 n(F∩M) = 35 (i) The above information represents in Venn diagram as follow: From Venn-diagram \begin{align*} n(F \cup M) &= n(F) + n(M) - n(F \cap M)\\ &= 65 + 55 - 35 \\&= 85 \: Ans. \end{align*} (ii) \begin{align*} n(\overline {F \cup M} ) &= 100 - 85 \\ &= 15 Ans. \end{align*} Solution: Let, C and V represent the sets of student who like to play cricket and volleyball. Let, U be the universal set. From question, $n(C) = 20, \: n(V) = 15, \: n(U) = n(C \cup V) = 30, \: n(C \cap V) = ?$ We know that, \begin{align*} n(C \cup V) &= n(C) + n(V) - n(C \cap V) \\ or, \: 30 &= 20 + 15 -n(C \cap V) \\or, \:n(C \cap V) &= 35 - 30\\ \therefore \: n(C \cap V) &= 5 \: Ans. \end{align*} The above information is shown in the following venn-diagram. Solution: Let, T and C represent the set of students who like to drink tea & coffee respectively. Let, U be the universal set. From question, $n(U) = 120, \: n(T) = 88, \: n(C) = 26, \: n(\overline{T \cup C})= 17, \: Let \: n(T \cap C)= x$ The above information is shown in Venn-diagram. We know that, \begin{align*} n(T \cup C) &= n(U) - n(\overline{T \cup C})\\ n(T \cup C) &= 120 - 17 \\ &= 103 \\ Again, \: n(T \cup C) &= n(T) + n(C) - n(T \cap C)\\ or, \: 103 &= 88 + 26 -n(T \cap C) \\ or, \: 103 &= 114 - n(T \cap C)\\ \therefore \:n(T \cap C) &= 114 - 103 \\ &= 11 \: Ans. \end{align*} Solution: Let, T and C denote sets of students who like tea and coffee respectively. Let, U be the universal sets. From question, $n(U)= 130,\: n(T)=70, \: n(C)=40, n(\overline {T \cup C})= 30$ We know that, \begin{align*} n(T \cup C ) &= n(U) - n(\overline{T \cup C})\\ n(T \cup C ) &= 130 - 30 \\ &= 100 \: Ans. \\ Again, \\n(T \cup C ) &= n(T) + n(C) - n(T \cap C)\\ or, \: 100 &= 40 + 70 -n(T \cap C)\\ or, \: 100 &= 110 -n(T \cap C) \\ \therefore n(T \cap C) &= 10 \: Ans. \\ \end{align*} The Venn-diagram showing the given information as follows. Solution: Let, A and B denote the set of students used the autorickshaw and bus. 
Let, U be the universal set, From question, $n(U) = 131, \: n(A) = 56, \: n(B) = 103, \: n_o (B) = 65 \\ Let, n(A \cap B )= x, \: n(\overline{A \cup B}) = y$ The above information is shown in Venn-diagram as follows. From Venn-diagram, \begin{align*} x + 65 &= 103 \\ or, \: x &= 103 - 65 \\ \therefore x= 38 \\ \end{align*} (i) \begin{align*} \text {The no. of students used autorickshaw only} \: n_o (A) &= 56 - x \\ &= 56 - 38 \\ &= 18 \: Ans. \end{align*} (ii)\begin{align*} n(\overline {A \cup B}) &= n(U) - n(A \cup B)\\ &= 131 - (65 + 38 + 18) \\ &= 131 - 121\\ &= 10 \: Ans. \end{align*} Solution: Let, B and L denote the set of tourist who like to visit Bhaktapur and Lalitpur respectively. Let, U be the universal set. From question, $n(U) = 2400, \: n(B) = 1650, \: n(L) = 850, \: n(\overline{B \cup L}) = 150$ (i) The above information is shown in Venn-diagram as follows. (ii) \begin{align*} n(B \cup L) &= n(U) - n(\overline{B \cup L})\\ or, \:n(B \cup L) &= 2400 - 150 \\ &= 2250 \\ Again, \\n(B \cup L) &= n(B) + n(L) - n(B \cap L) \\ or, \: 2250 &= 1650 + 850 -n(B \cap L)\\ or, \:n(B \cap L) &= 2500 - 2250 \\ \therefore \: n(B \cap L) &= 250 \: Ans. \\ \end{align*} (iii) \begin{align*} n_o (B) &= n(B) - n(B \cap L)\\ &= 850 -250 \\ &= 600 \: Ans. \end{align*} Solution: Let, M and S denote the set of students who passed in mathematics and Science respectively. Let, U be the universal set, From question, $n(M) = 60 \: n(S) = 45, \: n(M \cap S )= 30$ (i) \begin{align*} n_o (M) &= n(M) - n(M \cap S)\\ &= 60 -30 \\ &= 30 \: Ans. \end{align*} (ii)\begin{align*} n_o (S) &= n(S) - n(M \cap S)\\ &= 45 -30 \\ &= 15 \: Ans. \end{align*} (iii) \begin{align*}n(M \cup S) &= n(M) + n(S) -n(M \cap S)\\ &= 60 + 45 - 30 \\ &= 75 \: Ans. \end{align*} (iv)The above information is shown in Venn-diagram as follows. Solution: Let, M and E denote the set of students who like Maths and English respectively. Let, U be the universal set. From question, $n(U)=55, \: n_o(M) = 15, \: n_o(E) = 18, \: n(\overline{M \cup E} )= 5 \\ Let \: n(M \cap E)= x$ Now, \begin{align*} n(M \cup E) &= n(U) - n(\overline{M \cup E})\\ &= 55 -5 \\ &= 50 \end{align*} The above information is shown in Venn-diagram as follows. From Venn-diagram, \begin{align*} 15 + x + 18 &= 50\\ or, \: x + 33 &= 50\\ or, \: x &= 50 - 33 \\ &= 17 \end{align*} $\therefore \text {The no. of students who like both subject is 17.}$ Solution: Let, M and S denote the set of students who like Mathematics and Science respectively. Let, U be the universal set, From question, $n(U)=50 \: = n(M \cup S), \: n(M \cap S) = 20, \: Let, \: n(M)=3x \: and \: n(S)=2x$ We know that, \begin{align*} n(M \cup S) &= n(M) + n(S) - n(M \cap S)\\ 50 &= 3x + 2x - 20 \\ or, \: 5x &= 50 + 20\\ or, \: x &= \frac{70}{5}\\ &= 14 \end{align*} (i)\begin{align*} \text {The no. of student who like Mathematics, n(M)} &= 3x \\ &= 3 \times 14 \\ &= 42 \: Ans. \end{align*} (ii)\begin{align*} \text {The no. of student who like Science, n(S)} &= 2x \\ &= 2 \times 14 \\ &= 28 \end{align*} So, $n_o(S) = n(S) - n(M \cap S) \: \:= 28 - 20 \: \: = 8\: Ans.$ (iii) The above information is shown in Venn-diagram as follows: Solution: Let, M and B denote the set of students who have taken Mathematics and Biology respectively. Let, U be the universal set. From question, $n(M \cup B)=25 = n(U), \: n(M) = 12, \: n_o(M) = 8$ Now, \begin{align*} n(M \cap B)&= n(M) - n_o(M) \\ &= 12 - 8 \\ &= 4 \end{align*} \begin{align*} n_o(B) &= n(M \cup B) - n(M) \\ &= 25 - 12 \\ &= 13 \end{align*} The no. 
of students who have taken Maths & Biology = 4 Ans. The no. of students who have taken Biology but not Math = 13 Ans. The above information is shown in Venn-diagram as follows: Solution: Let, M and S denote the set of students who passed in mathematics and Science respectively. Let, U be the universal set, From question, $n_o(M) = 40\%, \: n_o(S) = 30\%, \: n(\overline{M \cup S })= 10\%, \: n(U) = 100\%$ \begin{align*}n(M \cup S)&= n(U) -n(\overline{M \cup S }) \\ &= 100\% - 10\% \\ &= 90\% \end{align*} (i) \begin{align*} n(M \cup S) &=n_o(M) +n_o(S) + n(M \cap S)\\ 90\% &= 40\% + 30\% + n(M \cap S)\\ or, \: 90\% - 70\% &= n(M \cap S)\\ \therefore n(M \cap S) &= 20\% \end{align*} (ii) \begin{align*} n(M) &= n_o(M) + n(M \cap S)\\ &= 40\% + 20\% \\ &= 60\% \end{align*} (iii) The above information is shown in Venn-diagram as follows. Solution: Let F & M denote the set of people who liked folk & modern song respectively. Let U be the universal set. From question, $n(F)= 70\% , \: n(M)= 60\%, \: n(\overline{F \cup M})= 10\%, \: n(F \cap M)= 4000, \: n(U)= 100\%$ The above information represents in Venn-diagram as follows: \begin{align*} n(F \cup M) &= n(U) - n(\overline{F \cup M}) \\ &= 100\% -10\% \\ &= 90\% \end{align*} From Venn-diagram \begin{align*} 70\% - x\% + x\% + 60\% - x\% &=90\% \\ or, \: x &= (130 - 90)\% \\ \therefore x &= 40\% \end{align*} Let total number of people be 'y' \begin{align*} 40\% \; of\; y &= 4000 \\ or, \: y \times \frac{40}{100} &= 4000\\ or, \: y &= \frac{4000 \times 100}{40} \\ \therefore y &= 10000 \: Ans. \end{align*} Solution: From question, $n(U)=30, \: n(A)=20,\: n(B)= 10$ $3x + y = n(A) \\ 3x + y = 20 \: \: \: \: \: .............(1)$ $x + y = n(B) \\ x + y = 10 \: \: \: \: \: \: ..............(2)$ Subtracting eqn (2) from eqn (1) \begin{array}{rrrr} 3x&+ &y&=&20 \\ x&+&y&=&10 \\ -&&-&&-\\ \hline\\ &&2x&=&10\\\end{array} \begin{align*} x &= \frac{10}{2} \\ \therefore x &= 5 \end{align*} Putting value of x in eqn (1) \begin{align*} 3x + y &= 20 \\ or, \: 3 \times 5 + y &= 20 \\ or, \: y &= 20 - 15 \\ y &= 5 \end{align*} (i) \begin{align*} n(A \cap B) &= y\\ &= 5 \: Ans.\end{align*} (ii) \begin{align*} n(A \cup B) &= 3x + y + x \\ &= 3 \times 5 + 5 + 5 \\ &= 15 + 10\\ &= 25 \: Ans. \end{align*} (iii) \begin{align*}n(\overline {A \cup B}) &= n(U) - n(A \cup B) \\ &= 30 - 25 \\ &= 5 \end{align*} Solution: Let, F and M be the sets of people who liked folk and modern songs. Let U be the universal set. From Question, $n(U) = 100\%, \: n(F) = 77\%, \: n(M) = 63\%, \: n(\overline {F \cup M} )= 5\%$, n(M∩N) = 135 $Let, \: n(M \cap N) = x\%$ We know that, \begin{align*} n(F \cup M) &= n(U) - n(\overline {F \cup M})\\ &= 100\% - 5\% \\ &= 95\% \\ Again,\\ n(F \cup M) &= n(F) + n(M) - n(F \cap M)\\ 95\% &= 77\% + 63\% -x\%\\ or, \: x\% &= 140\% - 95\%\\ \therefore x &= 45\%\end{align*} (i) From question, \begin{align*} 45\% \: of \: n(U) &= 135\\ or, \: n(U) \times \frac{45}{100} &= 135\\ or, \: n(U) &= \frac {135 \times 100}{45}\\ \therefore n(U) &= 300 \: Ans.\end{align*} (ii) \begin{align*}n(F) &= 300 \:of \: 77\%\\ &= 300 \times \frac {77}{100}\\ &= 231\\ Now, \\ n_o (F) &= n(F) - n(F \cap M)\\ &= 231 - 135 \\ &=96 \: Ans. \end{align*} (iii) The above information represent in Venn-diagram as follow: Solution: Let M and S be the student who liked math and science respectively. 
Let U be the universal sets then, From question, $n(U)= 95, \: Let\: n(M)= 4x, \: n(S)= 5x, \: n(M \cap S)= 10, \: n(\overline{M \cup S})= 15$ The above information is shown in the venn-diagram. \begin{align*} n(M \cup S) &=n(U) - n(\overline{M \cup S})\\ &= 95 -15 \\ &= 80 \end{align*} \begin{align*} n(M \cup S) &=n_o(M)+n_o(S) + n(M \cap S)\\ 80 &= 4x - 10 + 10 + 5x -10\\ or, \: 80 &= 9x - 10\\ \therefore x &= \frac{90}{9} &= 10 \end{align*} (a) The number of students who liked mathematics only \begin{align*} 4x - 10 &= 4 \times 10 - 10 \\ &= 40 - 10 \\ &= 30 \: Ans. \end{align*} (b) The number of students who liked science only \begin{align*} 5x - 10 &= 5 \times 10 - 10 \\ &= 50 - 10 \\ &= 40 \: Ans. \end{align*} Solution: Let E, M and N represents the set of students who passed in English, Mathematics and Nepali respectively. Let U be the universal set. $n(U) = 200, \: \: \: \: \: n(E) = 70, \: \: \:\: \: n(M) = 80\: \:\: \: \: n(N) = 60, \\ n(E \cap M)=35, \: \:\: \: \: n(E \cap N)=25 \: \:\: \: n(M \cap N)=35 \: \:\: \: n(E \cap M \cap N) = 10$ (a) The above information is shown in venn-diagram as follows. (b) \begin{align*} n(E \cap M \cap N) &= n(E) + n(M) + n(N) - n(E \cap M) - n(M \cap N) - n(N \cap E) + n(E \cap M \cap N)\\ &= 70 + 80 + 60 - 35 -25 -35 + 10 \\ &= 125 \end{align*} \begin{align*} n(\overline{E \cap M \cap N}) &= n(U) -n(E \cap M \cap N) \\ &=200 - 125 \\ &= 75 \: Ans. \end{align*} Let E, A and S denote the set of students who failedin English, Account and statistics respectively. Let U be the universal set. From question, $n(U) = 100\% , \: n(E)=58\%, \: n(A) = 39\%, \: n(S)= 25\%, \: n(E \cap A)=32\%, \\ \: n(A \cap S)= 17\%, \: n(E \cap S)=19\% \: n(E \cap A \cap S)= 13\%$ (a)The above information is shown in the venn-diagram as follow: \begin{align*}n(E \cup A \cup S) &= n(E) + n(A) + n(S) - n(E \cap A) - n(A \cap S) - n(S \cap E) + n(E \cap A \cap S)\\ &= (58 + 39 + 25 - 32 - 17 - 19 - 13)\% \\ &= 67\% \end{align*} (b) \begin{align*} n(\overline {E \cup A \cup S}) &= n(U) - n(E \cup A \cup S )\\ &= 100\% -67\% \\ &= 33\% \: Ans. \end{align*}
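The two-set and three-set counting identities used throughout these solutions are easy to sanity-check on concrete sets. The short Python snippet below is an editorial illustration (it is not part of the original exercise set, and the particular sets A, B, C are arbitrary examples):

```python
# Verify n(A ∪ B) = n(A) + n(B) - n(A ∩ B) and the three-set analogue on sample sets
A = {1, 2, 3, 4, 5}
B = {4, 5, 6, 7}
C = {1, 5, 7, 8, 9}

# Two-set identity
assert len(A | B) == len(A) + len(B) - len(A & B)

# Three-set identity:
# n(A∪B∪C) = n(A)+n(B)+n(C) - n(A∩B) - n(B∩C) - n(C∩A) + n(A∩B∩C)
lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(B & C) - len(C & A)
       + len(A & B & C))
assert lhs == rhs
print(lhs, rhs)  # both print 9 for these sample sets
```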
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 3506.2225319009463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107866404.1/warc/CC-MAIN-20201019203523-20201019233523-00608.warc.gz"}
https://research-information.bris.ac.uk/en/publications/conformal-dimension-via-subcomplexes-for-small-cancellation-and-r
# Conformal dimension via subcomplexes for small cancellation and random groups

Research output: Contribution to journal › Article (Academic Journal), peer-reviewed

## Abstract

We find new bounds on the conformal dimension of small cancellation groups. These are used to show that a random few-relator group has conformal dimension 2 + o(1) asymptotically almost surely (a.a.s.). In fact, if the number of relators grows like l^K in the length l of the relators, then a.a.s. such a random group has conformal dimension 2 + K + o(1). In Gromov’s density model, a random group at density d < 1/8 a.a.s. has conformal dimension ≍ dl / |log d|. The upper bound for C′(1/8) groups has two main ingredients: ℓp-cohomology (following Bourdon–Kleiner), and walls in the Cayley complex (building on Wise and Ollivier–Wise). To find lower bounds we refine the methods of Mackay (Geom Funct Anal 22(1):213–239, 2012) to create larger ‘round trees’ in the Cayley complex of such groups. As a corollary, in the density model at d < 1/8, the density d is determined, up to a power, by the conformal dimension of the boundary and the Euler characteristic of the group.

Original language: English. Mathematische Annalen 364(3), pp. 937–982 (46 pages). Accepted 13 Jun 2015; published April 2016. https://doi.org/10.1007/s00208-015-1234-8

## Keywords

• Primary 20F65
• Secondary 20F06
• 20F67
• 20P05
• 57M20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8855035305023193, "perplexity": 3137.753067464875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00218.warc.gz"}
https://www.statease.com/docs/v11/navigation/post-analysis/
# Post Analysis

The sections below will be empty until at least one model has been fit to a response.

Point prediction allows the analyst to set the factor values. The models are used to provide the predictions and interval estimates.

Confirmation compares the prediction interval of the model to a follow-up sample's average. If the sample's average is inside the prediction interval, then the model is confirmed. Confirmation is usually done at or near the factor settings recommended by numerical optimization.

Coefficients Table shows the coefficient magnitudes and significance for all analyzed responses. It is color-coded by significance, making it easy to see which terms are common to all the response models.
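As an illustration of how a confirmation check works, here is a minimal normal-theory sketch in Python. It is not Design-Expert code; the least-squares model, the prediction-interval formula for the mean of m follow-up runs, and all variable names are assumptions made for this example.

```python
import numpy as np
from scipy import stats

def confirmed(X, y, x0, followup_mean, m, alpha=0.05):
    """Is the average of m follow-up runs inside the model's prediction
    interval at the confirmation settings x0? (normal-theory sketch)

    X  : (n, p) model matrix used for the fit (including intercept column)
    y  : (n,)   observed responses
    x0 : (p,)   model-matrix row built from the confirmation factor settings
    """
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # least-squares fit
    s2 = np.sum((y - X @ beta) ** 2) / (n - p)         # residual variance
    y_hat = x0 @ beta                                  # point prediction
    # Prediction interval for the mean of m future runs at x0
    se = np.sqrt(s2 * (1.0 / m + x0 @ np.linalg.inv(X.T @ X) @ x0))
    t = stats.t.ppf(1 - alpha / 2, df=n - p)
    low, high = y_hat - t * se, y_hat + t * se
    return low <= followup_mean <= high, (low, y_hat, high)
```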
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8692408800125122, "perplexity": 867.5949966257302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304872.21/warc/CC-MAIN-20220125190255-20220125220255-00279.warc.gz"}
http://math.stackexchange.com/questions/222210/f-n-in-l-2-mu-the-limit-f-in-l-2-mu
# $f_n \in L_2(\mu)$, is the limit $f \in L_2(\mu)$?

If $f_n \in L_2(\mu)$ and $f_n\rightarrow f$ almost everywhere, this is not enough to conclude $f\in L_1(\mu)$. But is it enough to conclude that $f\in L_2(\mu)$, or that $$\lim_{n \to \infty}\int_{R}{|f_n(x)-f(x)|}^2<\infty\,?$$ What if the assumption is changed to $\sup_n\int_{R}{|f_n(x)|}^2\,d\mu<\infty$?

If you change the assumption to $$\sup_n \int_{\mathbb{R}} |f_n(x)|^2 dx < \infty$$ then you can at least conclude that $f\in L^2$ by Fatou's lemma. –  Chris Janjigian Oct 27 '12 at 19:04

No, it is not true. $$f_n(x) = \begin{cases} n & x \in [0,1/n)\\ 0 & \text{else}\end{cases}$$

• Take $\Bbb R$ with Lebesgue measure, and $f_n=\chi_{(0,n)}$. Then $f_n\in L^2$ and $f_n\to \chi_{(0,+\infty)}$, which is not (square-)integrable.
• The fact that $\sup_n\int_X f_n^2\,d\mu<\infty$ implies, by Fatou's lemma, that $f\in L^2$.
• But even in this case, we do not necessarily have convergence in $L^2$. Taking $f_n:=\sqrt n\, \chi_{(n,\,n+n^{-1})}$, we can see that $\int_Xf_n^2=1$ for all $n$ while $f_n\to 0$ almost everywhere.

Thanks. I am considering the case when $\mu(X)$ is finite; does convergence in $L^2$ hold then? –  user46007 Oct 28 '12 at 1:00

I got it: one just needs a small change to your counterexample. When $\mu(X)$ is finite, take $f_n:=\sqrt{ n(n+1) }\,\chi_{(1/(n+1),\,1/n)}$ on $[0,1]$. –  user46007 Oct 28 '12 at 1:08
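The counterexample $f_n=\sqrt n\,\chi_{(n,\,n+1/n)}$ from the answer above is easy to check numerically. The snippet below is an illustrative addition (not from the thread): the squared $L^2$ norm stays equal to 1 for every $n$, while $f_n(x)\to 0$ at any fixed $x$.

```python
import numpy as np

def f(n, x):
    """f_n = sqrt(n) times the indicator of the interval (n, n + 1/n)."""
    return np.sqrt(n) * ((x > n) & (x < n + 1.0 / n))

for n in [1, 10, 100, 1000]:
    dx = 1.0 / (n * 100_000)                    # grid step inside the support
    x = n + dx * (np.arange(100_000) + 0.5)     # midpoint Riemann sum
    norm_sq = np.sum(f(n, x) ** 2) * dx         # approximates ∫ f_n^2 dμ, ≈ 1
    print(n, round(float(norm_sq), 4), float(f(n, 5.0)))  # f_n(5) = 0 for these n
```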
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9892481565475464, "perplexity": 385.7869485393754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658061.59/warc/CC-MAIN-20150417045738-00072-ip-10-235-10-82.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1546961/partial-fraction-for-complex-roots
# partial fraction for complex roots

While solving a Laplace transform using partial fraction expansion, I am confused about handling partial fractions with complex roots. I have this expression $$\frac {2s^2+5s+12} {(s^2+2s+10)(s+2)}$$ Can anyone help me understand the steps for the partial fraction expansion when the roots are complex? Thank you.

It is the same story as with real roots. Consider $$A=\frac {2s^2+5s+12} {(s^2+2s+10)(s+2)}$$ the first partial fraction decomposition gives $$A=\frac{s+1}{s^2+2 s+10}+\frac{1}{s+2}$$ So, now, just focus on $$B=\frac{s+1}{s^2+2 s+10}=\frac{s+1}{(s+1+3i)(s+1-3i)}=\frac \alpha {(s+1+3i)}+\frac \beta {(s+1-3i)}$$ Reducing to the same denominator $$s+1=\alpha(s+1+3i)+\beta (s+1-3i)=(\alpha+\beta)(s+1)+3(\alpha-\beta)i$$ Identifying the real and imaginary parts then gives $$\alpha+\beta=1 \quad \alpha-\beta=0$$ that is to say $$\alpha=\beta=\frac 12$$.

Edit To clarify the first partial fraction decomposition $$\frac {2s^2+5s+12} {(s^2+2s+10)(s+2)}=\frac a{s+2}+\frac {b+cs}{s^2+2s+10}$$ Reducing to the same denominator and removing the common denominator $$2s^2+5s+12=a(s^2+2s+10)+(b+cs)(s+2)$$ Expanding the rhs and grouping terms $$2s^2+5s+12=(a+c)s^2+ (2 a+b+2 c) s+(10 a+2 b)$$ Comparing the coefficients $$a+c=2, \quad 2a+b+2c=5, \quad 10a+2b=12$$ Solving the three equations for the three unknowns leads to $$a=b=c=1$$.

• Here I want to ask 2 things: 1) how you separated the numerator after the first partial fraction decomposition; 2) how can I make pairs of $s^2+2s+10$ in the form of $i$? – Shinning Eyes Nov 26 '15 at 7:39
• The first point is the standard decomposition. For the second, just solve the quadratic equation as usual. – Claude Leibovici Nov 26 '15 at 7:59
• I am asking about $2s^2 + 5s +12$: how did you decompose it to get $s+1$? – Shinning Eyes Nov 26 '15 at 9:48
• $$\frac {2s^2+5s+12} {(s^2+2s+10)(s+2)}=\frac a{s+2}+\frac {b+cs}{s^2+2s+10}$$ Reduce to the same denominator and identify the coefficients for the same powers of $s$. This will give $a,b,c$. – Claude Leibovici Nov 26 '15 at 9:57
• How can we find the values of $a, b,$ and $c$? – Shinning Eyes Nov 26 '15 at 10:39

You would write it as $${s+1\over (s+1)^2+3^2}+{1\over s+2}$$ and use the formula $$L(e^{at}\cos(bt))={s-a\over (s-a)^2+b^2}$$
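For readers who want to double-check the algebra above, SymPy reproduces both the decomposition and the resulting inverse Laplace transform. This snippet is an editorial illustration, not part of the original answers; the expected outputs in the comments are approximate renderings.

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = (2*s**2 + 5*s + 12) / ((s**2 + 2*s + 10) * (s + 2))

# Partial fraction decomposition over the reals
print(sp.apart(F, s))
# expected: (s + 1)/(s**2 + 2*s + 10) + 1/(s + 2)

# Decomposition over the complex roots s = -1 ± 3i
print(sp.apart(F, s, full=True).doit())

# Inverse Laplace transform, matching e^{-t} cos(3t) + e^{-2t} (times Heaviside(t))
print(sp.inverse_laplace_transform(F, s, t))
```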
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9574456810951233, "perplexity": 308.9917912428576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147116.85/warc/CC-MAIN-20200228073640-20200228103640-00555.warc.gz"}
http://mathmisery.com/wp/2017/03/08/simple-but-evil-1-integrals-to-test-your-calculus-students/?replytocom=20865
# Simple But Evil #1 — Integrals To Test Your Calculus Students This article is the first in a new series of “Simple But Evil” articles [and hopefully soon, videos]. The idea of the “Simple But Evil” series is to help your students [or you, if you are the student] fill in some inevitable knowledge gaps in math content. This article will be about “Simple But Evil” integrals with the purpose of closing some Algebra and Pre-Calculus gaps. A typical Calculus I course covers integration techniques up to $$u$$-substitution. In Calculus II, a larger set of integration techniques are introduced. These are typically integration by parts, trig integrals, trig substitutions, partial fraction decomposition, and working with improper integrals. Within these broad topics, there are numerous tricks [not the bad kind aka gimmicks] that help to simplify matters. However, if you’ve taught a Calculus sequence you know that when it comes time to techniques of integration, there can be cognitive overload for students. Part of this overload is a result of knowledge gaps with content covered in Algebra and Pre-Calculus. Below is a list of simple but evil integrals to help close these knowledge gaps. The purpose of this is to expose them to new patterns that often get overlooked in a Calculus course in a similar way that fractions are overlooked in Algebra. That is, these are problems that, while in textbooks in some form, are not emphasized enough. It’s as much a matter of giving these problems as it is in giving them in a good sequential exposition. Techniques of integration are often taught disjointed without revisiting previously taught techniques. The initial separation is fine, but eventually, students have to be able to mix concepts from previous courses or just even from the previous week. This article is a “living” article. I will be updating it from time to time as I rediscover things from one class to another or from one student to another. ### Constantly Confusing With high confidence, the majority of you will have the majority of your students get the majority of these problems wrong. Every one of these problems can be integrated directly. The stumbling block for many students is that they haven’t had enough exposure to $$e$$, $$\pi$$, and other versions of constants thrown together the way I’ve put them here. There is a chronic misreading and misunderstanding of $$e$$ versus $$e^{x}$$, for example. 1. $$\displaystyle \int 3 \ dx$$ 2. $$\displaystyle \int e \ dx$$ 3. $$\displaystyle \int \pi \ dx$$ 4. $$\displaystyle \int e^{\pi} \ dx$$ 5. $$\displaystyle \int \pi e \ dx$$ 6. $$\displaystyle \int \frac{1}{e} \ dx$$ 7. $$\displaystyle \int \frac{\pi}{e} \ dx$$ 8. $$\displaystyle \int e^{2} \ dx$$ 9. $$\displaystyle \int e^{e} \ dx$$ 10. $$\displaystyle \int ex \ dx$$ 11. $$\displaystyle \int x^{e} \ dx$$ 12. $$\displaystyle \int x^{-\pi} \ dx$$ 13. $$\displaystyle \int x^{e + \pi} \ dx$$ 14. $$\displaystyle \int e^{\pi} x \ dx$$ 15. $$\displaystyle \int e e^{-x} \ dx$$ 16. $$\displaystyle \int \pi e^{x} \ dx$$ 17. $$\displaystyle \int \cos(1) \ dx$$ 18. $$\displaystyle \int \sin(\cos(1)) \ dx$$ 19. $$\displaystyle \int \sin(\cos(1))x \ dx$$ 20. $$\displaystyle \int e^{\sin(\cos(1))} \ dx$$ 21. $$\displaystyle \int z \ dx$$ You get the idea. Mix and match these constants and toss in a multiplication by $$x$$ or $$x^{n}$$ and you’ll see who sees the integrands correctly. ### Similar But Different Give these three integrals in the order given as one exercise once you’ve covered integration by parts. 
Too often, students see a polynomial with an exponential and jump immediately to integration by parts because integration by parts exercises are filled with these types. But as you can see, the third integral is a $$u$$-substitution. While an integral like the third one is covered when introducing $$u$$-substitution, the form is quickly forgotten. You can add another layer of difficulty by changing the exponent to $$ax$$ or $$ax^{2}$$ for $$a \neq 0$$. 1. $$\displaystyle \int xe^{x}\ dx$$ 2. $$\displaystyle \int x^{2}e^{x}\ dx$$ 3. $$\displaystyle \int xe^{x^{2}}\ dx$$ ### It’s Natural Natural logs are confusing. In the first, the integrand is a constant, the second and third are integration by parts, the fourth and fifth are $$u$$-substitutions. For the second problem, students often can’t see that “$$dv = dx$$” is the choice in integration by parts. Also, pay close attention to your students in the fourth and fifth problems. Many may get the fourth problem correct, but then they’ll botch the fifth one because they got in their head that $$\frac{d[\ln(f(x))]}{dx} = \frac{1}{f(x)}$$ — a common mistake. 1. $$\displaystyle \int \ln(4) \ dx$$ 2. $$\displaystyle \int \ln(x) \ dx$$ 3. $$\displaystyle \int x\ln(x) \ dx$$ 4. $$\displaystyle \int \frac{\ln(x)}{x} \ dx$$ 5. $$\displaystyle \int \frac{2x\ln(x^{2}+1)}{x^{2}+1} \ dx$$ ### Not The Fractions! Fractions still remain a hopeless mess for a number of students. Put them together with integrals and you have chaos that rivals government politics. The first problem tends to be more confusing than the second problem even though they are identical. Vary this by using $$e^{x}$$ or $$\sin(x)$$ or any other elementary function in place $$x$$. For sake of completeness, partial fractions are a disaster for students and there are a variety of simple but evil integrals. However, fractions by themselves are enough to befuddle students. So, the standard battery of partial fraction integrals are often enough evil! Here is very simple but slightly evil involving fractions. 1. $$\displaystyle \int \frac{x}{2} \ dx$$ 2. $$\displaystyle \int \frac{1}{2}x \ dx$$ ### The Arguments of Functions Students get confused about the difference between $$f(x)$$, $$f(2x)$$, and $$2f(x)$$. These are concepts that should’ve been clarified in a pre-Calculus course and resolved completely by the first part of a Calculus I course, but alas, there are cracks through which things slip. What brings these misunderstandings to bear is some trigonometry. Toss in some fractions for good measure. 1. $$\displaystyle \int \sin(x) \ dx$$ 2. $$\displaystyle \int \frac{1}{2}\sin(x) \ dx$$ 3. $$\displaystyle \int \sin\Big(\frac{1}{2}x\Big) \ dx$$ These integrals begin to confuse students once they are introduced to integration by trig substitution because of the hyper-conditioning on the $$\sqrt{x^{2} \pm a^{2}}$$ or $$\sqrt{a^{2}-x^{2}}$$ forms. There are other more elementary mistakes students make like $$\sqrt{x^{2} + a^{2}} = x + a$$, but that’s a whole ‘nother problem. 1. $$\displaystyle \int x^{2}+1 \ dx$$ — I have seen enough students try $$x = \tan(\theta)$$. It’s the long way around the barn. 2. $$\displaystyle \int \frac{x}{\sqrt{x^{2}+1}} \ dx$$ — They miss the $$u$$ substitution in favor of $$x = \tan(\theta)$$ 3. $$\displaystyle \int \frac{x^{3}}{\sqrt{x^{2}+1}} \ dx$$ — They miss the $$u$$ substitution in favor of $$x = \tan(\theta)$$ 4. 
$$\displaystyle \int \frac{1}{\sqrt{-x^{2}+1}} \ dx$$ — The negative sign in front of $$x^{2}$$ effectively forbids them from seeing a standard inverse trig integral or the substitution $$x = \sin(\theta)$$.

I'll save improper integrals for a separate write-up. In the meantime, try these with your class and let me know how it goes!

## 2 thoughts on “Simple But Evil #1 — Integrals To Test Your Calculus Students”

1. Joseph Bravo. I will be eagerly awaiting the rest of this series. In regards to the “constantly confusing” set: I frequently ask my calculus students to take the derivative of a weird constant, like e^pi. Somehow, it never occurred to me that I should have them integrate a weird constant. I’ll have to give that a try sometime.
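(Editorial addition, not part of the post or its comments: as a quick answer key for the "Similar But Different" trio above, SymPy integrates all three directly and makes the contrast between the two integrations by parts and the lone u-substitution plain.)

```python
import sympy as sp

x = sp.symbols('x')

# Two integrations by parts and one u-substitution, as discussed in the post
for f in (x * sp.exp(x), x**2 * sp.exp(x), x * sp.exp(x**2)):
    print(f, '->', sp.integrate(f, x))
# x*exp(x)      ->  (x - 1)*exp(x)
# x**2*exp(x)   ->  (x**2 - 2*x + 2)*exp(x)
# x*exp(x**2)   ->  exp(x**2)/2
```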
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.858683168888092, "perplexity": 749.2141841718629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688103.0/warc/CC-MAIN-20170922003402-20170922023402-00644.warc.gz"}
https://research-repository.uwa.edu.au/en/publications/heterogeneous-proportionality-constants-a-challenge-to-taylors-po
# Heterogeneous ‘proportionality constants’ – A challenge to Taylor's Power Law for temporal fluctuations in abundance

M. Kiflawi, O. Mann, Mark Meekan

Research output: Contribution to journal › Article

## Abstract

© 2016 Elsevier Ltd. Taylor's Power Law for the temporal fluctuation in population size (TL) posits that the variance in abundance scales according to aM^b, where M is the mean abundance and a and b are the ‘proportionality’ and ‘scaling’ coefficients. As one of the few empirical rules in population ecology, TL has attracted substantial theoretical and empirical attention. Much of this attention has focused on the scaling coefficient, particularly its ubiquitous deviation from the null value of 2. Here we present a line of reasoning that challenges the power-law interpretation of the empirical log-linear relationship between the mean and variance of population size. At the core of our reasoning is the proposition that populations vary not only with respect to M but also with respect to a, which leaves the log-linear relationship intact but forfeits its power-law interpretation. Using the stochastic logistic-growth model as an example, we show that ignoring among-population variation in a is akin to ignoring the variation in the intrinsic rate of growth (r). Accordingly, we show that the slope of the log-linear relationship (b) is a function of the among-population (co)variation in r and the carrying capacity. We further demonstrate that local environmental stochasticity is sufficient to generate the full range of observed values of b, and that b can in fact be insensitive to substantial differences in the balance between variance-generating and stabilizing processes.

Original language: English. Journal of Theoretical Biology 407, pp. 155–160 (2016). https://doi.org/10.1016/j.jtbi.2016.07.014
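The abstract's central claim, that among-population variation in a (equivalently, in the intrinsic growth rate r) feeds into the fitted slope b of the log-variance versus log-mean regression, can be illustrated with a toy simulation. The sketch below is an editorial illustration, not the authors' code; the model form, noise level, and parameter ranges are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(r, K, sigma=0.1, steps=2000, burn=500):
    """Discrete-time stochastic logistic growth with environmental noise."""
    n, traj = K / 2, []
    for step in range(steps):
        n = max(n + r * n * (1 - n / K) + sigma * n * rng.normal(), 1e-6)
        if step >= burn:
            traj.append(n)
    return np.array(traj)

# Populations that differ in r (hence in the 'proportionality constant') and in K
means, variances = [], []
for _ in range(50):
    x = simulate(r=rng.uniform(0.2, 1.0), K=rng.uniform(50, 500))
    means.append(x.mean())
    variances.append(x.var())

# Fitted scaling coefficient b from the cross-population log-log regression
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"fitted b ≈ {b:.2f}")
```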
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9086928963661194, "perplexity": 1457.6816457325326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989914.60/warc/CC-MAIN-20210516201947-20210516231947-00322.warc.gz"}
https://zbmath.org/?q=an:0858.34045
## Existence and uniqueness of solutions of a semilinear evolution nonlocal Cauchy problem (English). Zbl 0858.34045

Without proofs and references, the author presents some results concerning the existence and uniqueness of solutions of the Cauchy problem for a semilinear evolution equation in Banach spaces.

### MSC:

- 34G20 Nonlinear differential equations in abstract spaces
- 35G10 Initial value problems for linear higher-order PDEs
- 35K25 Higher-order parabolic equations
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9420607089996338, "perplexity": 464.2651207253682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711017.45/warc/CC-MAIN-20221205132617-20221205162617-00648.warc.gz"}
http://mathhelpforum.com/algebra/220255-quadratic-function-question.html
Math Help - Quadratic function question Question: Given that f(x) = x^2 + 6x - 5 and g(x) = 8x + k. Find the range of values of k for which f(x) > g(x). How should I approach this question? Thank you $x^2 + 6x - 5 > 8x + k\iff x^2 - 2x - (5+k) > 0$. Do you know how to determine if a quadratic polynomial has no roots? Another way is to sketch the graphs of f(x) and g(x) and to understand that the line that is the graph of g(x) with the critical value of k is a tangent to the parabola that is the graph of f(x). Find the slope of f(x) (it depends on x) and equate it to the slope of g(x) (which is constant). From this equation, find x and y = f(x) and then find k such that g(x) passes through (x, y). $x^2 + 6x - 5 > 8x + k\iff x^2 - 2x - (5+k) > 0$. Do you know how to determine if a quadratic polynomial has no roots?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8639750480651855, "perplexity": 145.47830434699353}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447543571.19/warc/CC-MAIN-20141224185903-00046-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.starlanguageblog.com/how-to-write-a-matrix-in-latex/
0 2 # How To Write A Matrix In Latex? There are a few different ways to type a matrix in LaTeX. Some are easier than others.A matrix is a rectangular arrangement of numbers into rows and columns. Matrices can be used to compactly write and work with multiple linear equations.Matrixes are a useful tool in technical documentation, as they can be easily understood by readers. However, there are some rules when writing matrices in latex. ## & And Symbols When writing matrices in latex, it is important to follow the right rules. This can be a confusing formatting area for authors new to LaTeX, but once you understand the basics, it will become much easier. The first rule is to enclose all mathematical material within dollar signs. Math formulas should be enclosed in a dollar sign pair (or equivalent environment), while nonmathematical material should be left outside the dollar sign pair. A common mistake is to include punctuation signs within the dollar sign pair, delimiting a math expression. This is incorrect and may result in a strange typeset output. Instead, the correct way to do it is to leave the punctuation signs outside the dollar sign pair, as they are part of the nonmathematical text. Similarly, large expressions that occur “inline” (i.e., embedded within a paragraph of regular text) are typeset differently than expressions that appear in displayed formulas. Specifically, fractions set inline have their numerator and denominator reduced, while displayed fractions have their normal size. This ensures that math expressions set inline do not protrude too much into the surrounding text or consume too much vertical space. It also prevents the display of fractions or sums as separate symbols that consume too much horizontal space, as they would if the expression were embedded in a displayed equation. Another important rule is to use the correct format for functions such as sin, cos, tan, log, and ln. They are typically expressed using a forward slash followed by a function name. In addition, integrals and derivatives are typically expressed using a forward slash with a sub and superscript. Limits for these symbols can be set using the same command for ordinary functions. Finally, a special symbol can be used to represent square roots. The square root symbol is written as $sqrtexpression$, and the nn-th root is written as $sqrtnn$. Other commonly used symbols can be derived from these examples and are found in the master list of mathematical symbols below. These lists are categorized by function and topic, so you can quickly locate the appropriate symbols for your project. ## Brackets Brackets are symbols used to group expressions and clarify the order of operations in an algebraic expression. These include parentheses, square brackets, and curly brackets. They are also used to denote certain mathematical operations, such as the commutator in group theory or ring theory. In addition, they may be used to represent Lie brackets in associative algebra. For instance, the commutator [g,h] of a group is commonly defined as g-1h-1gh. In ring theory, the commutator is often represented by square brackets. There are many different types of brackets in LaTeX, each with its own set of rules for writing them. The most common ones are parentheses, square brackets, and curly brackets. When writing a matrix in latex, it is important to understand the structure of the brackets and how to write them correctly. 
You need to define the environment where the matrix will be bound by which bracket and then pass this environment as an argument between begin and end commands. In addition, you need to pay attention to the spacing between the brackets and the cells of the matrix. You can use the array command to control this. Using this command, you can adjust the spacing between the cells of a matrix to make them look better. This can be especially useful if you have more complicated terms in your matrix, such as fractions. Some of the most common brackets in LaTeX are parentheses and square brackets, but you can also use curly brackets and angle brackets. You can even use a combination of these for a more complex symbol. You can also use several types of dots inside or outside the brackets. For example, if you want to show that y is greater than or less than mx+c, you can use a multiplication-dot; or if you want to show the nth product of two numbers, you can use the symbol xx. Besides these, there are a few other symbols that you can use in the brackets of your matrix. These include int for integrals, the sum for sigma-notation, lim for limits, and prod products. ## Array Environment Using an array environment is the easiest way to write a matrix. But it is important to understand its rules deeply. This will help you to make your matrix as elegant and as correct as possible. Arrays are used for writing matrices in math mode (see Modes). They can have different column alignments: left, center, or right, and optional vertical lines separating the columns. They are dynamic, meaning they expand and change their size according to what they hold. In the array environment, cells are typeset in math mode, except if they have the p… and m… and b… and c… specifiers, which switch them to text mode. This is a good way to save some typing if you have lots of mathematical material in your table. A special feature of the array environment is that it uses arraycolsep to govern how much space is allocated between columns. This differs from tabular’s talcose, which gives half the width between two columns. For example, if an array has three columns, beginarrayrl tells LaTeX to give each column a first flush right, a second centered, and a third flush left. However, it is also possible to create an array with one or more columns that are not aligned either left or right. This can be done by inserting a |cbar or a line between the column parameters at the beginning of the array. The amsmath package provides many environments that can turn an array into a matrix, as well as commands to convert the matrix back to its original format. For instance, it has a command for converting an array with parentheses into a matrix surrounded by parentheses, a command for converting an array with square brackets into a matrix surrounded by square brackets, and a command for converting an array with curly braces into a matrix surrounded by curly braces. Besides these standard commands, the amsmath package provides several other special ones useful for typing matrices. The most popular is lots, which creates a row of dots spanning n columns, centered relative to the height of a letter. The command is sometimes called a “double lot.” In addition, there is a dot operator that creates a binary multiplication operation. ## Amsmath Package One of LaTeX’s greatest strengths is the ability to typeset mathematics, both in general and specifically matrices. 
Math can be complex and require a lot of care and attention, so it’s crucial to get the typesetting right. However, even when a document is typeset properly, there can be occasions where LaTeX has done its job, but you want to add some text or make an adjustment. For example, if a matrix is too long or has a lot of whitespaces, you may wish to place some comments between the equations. Fortunately, there are many latex packages available that give you the tools you need to achieve this. These can be installed using the package manager on your computer or by downloading them directly from their website. The amsmath package is one of the most popular of these. It’s loaded by using the package amsmath in the preamble of your source file (between the document class and the beginning document lines). This package has several environment settings that can be used to align equations. It’s also possible to specify the height of the equations, so they’re displayed as if they were on their line. In addition, it offers an option to prevent an equation from being numbered and a way to surround math expressions with parentheses whose height automatically matches the expressions. It also includes delimiters that can be used as auto-scaled versions of standard brackets, square brackets, curly brackets, and angle brackets. Some of these delimiters are more appropriate for specific types of mathematics, such as absolute value or fractions, and others have additional useful features in different situations. For instance, dblPipe and round-off Gauss brackets are commonly used in matrices, and floor and floor are often needed to describe the horizontal spacing of a fraction. Another feature is the ability to create a set of absolute value bars that are the proper height for a fraction. This function is particularly useful when you need to show fractions that are more than a certain number of decimal places. It’s also handy when you need to define new delimiters for a mathematical structure. ## How To Write A Matrix In Latex? A Step-By-Step Guide With Examples To Follow Sure, Here Is A Long Guide On How To write a matrix in LaTeX. LaTeX is a powerful document preparation system many professionals use for typesetting documents. One of the useful features of LaTeX is the ability to typeset matrices. Matrices can represent systems of linear equations, transformations, and other mathematical concepts. In this guide, we will explore how to write a matrix in LaTeX. Before you begin writing the matrix, you need to include some code in the preamble that tells LaTeX that you will be using the amsmath package, which provides support for mathematical notation, including matrices. ### Here Is An Example Of How To Include The Necessary Code In Your Latex Document: \documentclass{article} \usepackage{amsmath} \begin{document} ## Define The Matrix: The next step is to define the matrix. The matrix can be defined using the matrix environment. The matrix environment is a feature of the amsmath package. ### Here Is An Example Of How To Define A 3 X 3 Matrix: $\begin{matrix} a & b & c \\ d & e & f \\ g & h & i \end{matrix}$ This will produce a matrix that looks like this: | a b c | | d e f | | g h I | ### Adding Dots To The Matrix: Dots can be added to the matrix using the following commands: \begin{matrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{matrix} ## FAQ’s ### How to write a matrix? 
A matrix equation is an equation of the form Ax = b, where A is an m × n matrix, b is a vector in R^m, and x is a vector with unknown coefficients x_1 through x_n.

### How do you write an inline matrix in LaTeX?

All you need to do is place your code between single dollar signs ($ ... $). The matrix will be set on its own line if you use double dollar signs ($$ ... $$), and the remainder of the sentence will continue on the line after it.

### How to write vector and matrix in LaTeX?

When creating the vector brackets, the "bmatrix" environment is used (enclosed by \begin and \end statements). When creating a new row in a vector, we write "\\". We use the LaTeX command "\vdots" to create the vertical dots.

### How to write an array in LaTeX?

An array environment's start is indicated by the \begin{array} command. The arguments for the array's columns are stored in the curly brackets that follow this command. These parameters are any combination of the three letters l (left-align), c (center), and r (right-align). An array can have even a single column.

### How do you write an identity matrix?

In MATLAB, I = eye(n) returns an identity matrix of size n by n, with ones on the main diagonal and zeros elsewhere. I = eye(n, m) yields an n-by-m matrix with ones on the main diagonal and zeros in all other locations. I = eye(sz) returns an array with ones on the main diagonal and zeros everywhere else, where size(I) is defined by the size vector sz.
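To tie the guide together, here is a compact, self-contained example. It is an editorial illustration, though the environments it uses are standard amsmath matrix variants (pmatrix, bmatrix, vmatrix, smallmatrix).

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Parentheses, square brackets, and determinant bars around the same matrix
\[
A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad
A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad
\det A = \begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix}
\]

% An inline matrix: smallmatrix keeps it at text height
The matrix $\left(\begin{smallmatrix} 1 & 2 \\ 3 & 4 \end{smallmatrix}\right)$
sits comfortably inside a sentence.

\end{document}
```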
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9511173367500305, "perplexity": 528.894725457297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948868.90/warc/CC-MAIN-20230328170730-20230328200730-00767.warc.gz"}
https://competitive-exam.in/questions/discuss/the-ratio-of-the-thickness-of-thermal-boundary
The ratio of the thickness of the thermal boundary layer to the thickness of the hydrodynamic boundary layer is equal to (Prandtl number)^n, where n is equal to

- -1/3
- -2/3
- 1
- -1
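For context (an addition, not part of the original question): for laminar flow over a flat plate, the standard approximate relation is δ_t / δ ≈ Pr^(-1/3), so the exponent asked for here is n = -1/3.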
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9921392202377319, "perplexity": 1539.6664288194281}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178383355.93/warc/CC-MAIN-20210308082315-20210308112315-00451.warc.gz"}
https://en.wikipedia.org/wiki/Lucas%27_theorem
# Lucas's theorem (Redirected from Lucas' theorem) In number theory, Lucas's theorem expresses the remainder of division of the binomial coefficient ${\displaystyle {\tbinom {m}{n}}}$ by a prime number p in terms of the base p expansions of the integers m and n. Lucas's theorem first appeared in 1878 in papers by Édouard Lucas.[1] ## Statement For non-negative integers m and n and a prime p, the following congruence relation holds: ${\displaystyle {\binom {m}{n}}\equiv \prod _{i=0}^{k}{\binom {m_{i}}{n_{i}}}{\pmod {p}},}$ where ${\displaystyle m=m_{k}p^{k}+m_{k-1}p^{k-1}+\cdots +m_{1}p+m_{0},}$ and ${\displaystyle n=n_{k}p^{k}+n_{k-1}p^{k-1}+\cdots +n_{1}p+n_{0}}$ are the base p expansions of m and n respectively. This uses the convention that ${\displaystyle {\tbinom {m}{n}}=0}$ if m < n. Proofs There are several ways to prove Lucas's theorem. Combinatorial proof — Let M be a set with m elements, and divide it into mi cycles of length pi for the various values of i. Then each of these cycles can be rotated separately, so that a group G which is the Cartesian product of cyclic groups Cpi acts on M. It thus also acts on subsets N of size n. Since the number of elements in G is a power of p, the same is true of any of its orbits. Thus in order to compute ${\displaystyle {\tbinom {m}{n}}}$ modulo p, we only need to consider fixed points of this group action. The fixed points are those subsets N that are a union of some of the cycles. More precisely one can show by induction on k-i, that N must have exactly ni cycles of size pi. Thus the number of choices for N is exactly ${\displaystyle \prod _{i=0}^{k}{\binom {m_{i}}{n_{i}}}{\pmod {p}}}$. Proof based on generating functions — This proof is due to Nathan Fine.[2] If p is a prime and n is an integer with 1 ≤ np − 1, then the numerator of the binomial coefficient ${\displaystyle {\binom {p}{n}}={\frac {p\cdot (p-1)\cdots (p-n+1)}{n\cdot (n-1)\cdots 1}}}$ is divisible by p but the denominator is not. Hence p divides ${\displaystyle {\tbinom {p}{n}}}$. In terms of ordinary generating functions, this means that ${\displaystyle (1+X)^{p}\equiv 1+X^{p}{\pmod {p}}.}$ Continuing by induction, we have for every nonnegative integer i that ${\displaystyle (1+X)^{p^{i}}\equiv 1+X^{p^{i}}{\pmod {p}}.}$ Now let m be a nonnegative integer, and let p be a prime. Write m in base p, so that ${\displaystyle m=\sum _{i=0}^{k}m_{i}p^{i}}$ for some nonnegative integer k and integers mi with 0 ≤ mip-1. Then {\displaystyle {\begin{aligned}\sum _{n=0}^{m}{\binom {m}{n}}X^{n}&=(1+X)^{m}=\prod _{i=0}^{k}\left((1+X)^{p^{i}}\right)^{m_{i}}\\&\equiv \prod _{i=0}^{k}\left(1+X^{p^{i}}\right)^{m_{i}}=\prod _{i=0}^{k}\left(\sum _{n_{i}=0}^{m_{i}}{\binom {m_{i}}{n_{i}}}X^{n_{i}p^{i}}\right)\\&=\prod _{i=0}^{k}\left(\sum _{n_{i}=0}^{p-1}{\binom {m_{i}}{n_{i}}}X^{n_{i}p^{i}}\right)=\sum _{n=0}^{m}\left(\prod _{i=0}^{k}{\binom {m_{i}}{n_{i}}}\right)X^{n}{\pmod {p}},\end{aligned}}} where in the final product, ni is the ith digit in the base p representation of n. This proves Lucas's theorem. ## Consequence • A binomial coefficient ${\displaystyle {\tbinom {m}{n}}}$ is divisible by a prime p if and only if at least one of the base p digits of n is greater than the corresponding digit of m. 
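As a concrete check of the statement (an illustrative addition, not part of the article), the following Python function computes C(m, n) mod p digit by digit in base p and compares the result with direct computation. It uses math.comb, available in Python 3.8+.

```python
from math import comb

def lucas_binomial_mod(m: int, n: int, p: int) -> int:
    """C(m, n) mod p for a prime p, via Lucas's theorem."""
    result = 1
    while m or n:
        m, mi = divmod(m, p)                  # next base-p digit of m
        n, ni = divmod(n, p)                  # next base-p digit of n
        result = result * comb(mi, ni) % p    # comb(mi, ni) = 0 when ni > mi
    return result

# Quick check against direct computation
for m, n, p in [(10, 4, 3), (1000, 424, 7), (2024, 1000, 13)]:
    assert lucas_binomial_mod(m, n, p) == comb(m, n) % p
print("all checks passed")
```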
## Variations and generalizations • Kummer's theorem asserts that the largest integer k such that pk divides the binomial coefficient ${\displaystyle {\tbinom {m}{n}}}$ (or in other words, the valuation of the binomial coefficient with respect to the prime p) is equal to the number of carries that occur when n and m − n are added in the base p. • Andrew Granville has given a generalization of Lucas's theorem to the case of p being a power of prime.[3] • The q-Lucas theorem is a generalization for the q-binomial coefficients, first proved by J. Désarménien.[4] ## References 1. ^ • Edouard Lucas (1878). "Théorie des Fonctions Numériques Simplement Périodiques". American Journal of Mathematics. 1 (2): 184–196. doi:10.2307/2369308. JSTOR 2369308. MR 1505161. (part 1); • Edouard Lucas (1878). "Théorie des Fonctions Numériques Simplement Périodiques". American Journal of Mathematics. 1 (3): 197–240. doi:10.2307/2369311. JSTOR 2369311. MR 1505164. (part 2); • Edouard Lucas (1878). "Théorie des Fonctions Numériques Simplement Périodiques". American Journal of Mathematics. 1 (4): 289–321. doi:10.2307/2369373. JSTOR 2369373. MR 1505176. (part 3) 2. ^ Fine, Nathan (1947). "Binomial coefficients modulo a prime". American Mathematical Monthly. 54: 589–592. doi:10.2307/2304500. 3. ^ Andrew Granville (1997). "Arithmetic Properties of Binomial Coefficients I: Binomial coefficients modulo prime powers" (PDF). Canadian Mathematical Society Conference Proceedings. 20: 253–275. MR 1483922. Archived from the original (PDF) on 2017-02-02. 4. ^ Désarménien, Jacques (March 1982). "Un Analogue des Congruences de Kummer pour les q-nombres d'Euler". European Journal of Combinatorics. 3 (1): 19–28. doi:10.1016/S0195-6698(82)80005-X.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 15, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9311229586601257, "perplexity": 877.8082928725679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735989.10/warc/CC-MAIN-20200805212258-20200806002258-00219.warc.gz"}
https://www.preceden.com/timelines/37746-untitled-timeline
# Untitled timeline

## Model of the Atom

### Democritus 460 B.C.

Democritus was a Greek philosopher who decided to ask, "What happens if you break a piece of matter in half continually, until you can break it no further?" He thought there must be an end point somewhere: a smallest possible bit of matter. He called these particles "atoms".

### Dalton 1802

John Dalton was an English chemist born in 1766. Dalton is best known for his early work in the development of modern atomic theory. He did much research on gases and the relationship between temperature and pressure. Through his many experiments he was able to come up with an early atomic theory, which consisted of 5 "rules":

1. Elements are made of extremely small particles called atoms.
2. Atoms of the same element are identical in size, mass, and other properties. Atoms of different elements differ in these properties.
3. Atoms cannot be subdivided, created, or destroyed.
4. Atoms of different elements combine in simple whole-number ratios in order to form chemical compounds.
5. In chemical reactions, atoms are combined, separated, or rearranged.

### Thomson 1897

Sir Joseph John Thomson was an acclaimed British physicist from the late 1800s to the mid 1900s. Thomson was credited with discovering the electron and isotopes within atoms, and was awarded the Nobel Prize in 1906 for his discoveries. Thomson used a cathode ray to prove the existence of electrons. His experiment showed that the particles in the rays were roughly 1000 times lighter than what was then thought to be the smallest particle, the hydrogen atom. This supported the theory of subatomic particles and secured Thomson's place as a major contributor to the atomic model.

### Planck 1900

Max Karl Ernst Planck was a German theoretical physicist born in the mid 1800s. Planck was recognized for pioneering quantum theory when he received the Nobel Prize in Physics in 1918. Planck developed the theory that the frequency of a wave is related to its energy. He based this theory on the idea that the energy emitted by a resonator could only take on discrete values. Through his experiments he was able to derive the constant "h" we know today as Planck's constant.

### Rutherford 1911

Ernest Rutherford is most noted for his famous gold foil experiment. In his experiment, he used radium to produce alpha rays, which were projected onto a piece of gold foil. Behind the foil there was a detection screen where he would be able to see the alpha particles' impacts. From this he determined that there must be some form of positively charged matter scattering the particles. He also inferred that atoms were mostly empty space, and that electrons orbited the positive center, much like planets around the sun.

### Bohr 1912

Niels Bohr was a physicist born in Denmark in 1885. His model of the atom is the one that we currently use, and it best explains how atoms behave. He proposed that electrons can orbit only at certain distances from the nucleus (energy levels) and that atoms radiate energy when an electron changes energy levels.

### Millikan 1913

Robert Andrews Millikan was born in 1868 in Morrison, Illinois. Millikan won the Nobel Prize in Physics in 1923 for his work in finding the charge of an electron. Millikan was able to measure the charge in an experiment famously known as the "oil drop experiment." In this experiment Millikan suspended a charged oil drop between two metal electrodes; knowing the strength of the electric field, he was able to determine the charge on the oil drop. Since J.J.
Thomson has earlier discovered the charge to mass ratio of an electron it was then simple for Millikan to also calculate the mass of an electron. ### Pauli 1924 Wolfgang Pauli was a Austrian theoretical physicist in the early 1900’s. Wolfgang made many advancements in quantum theory, he as praised by Einstein who landed him the nobel prize 1945. Wolfgang developed the exclusion principle, and the theory of nonrelativistic spin. ### Broglie 1924 Louie deBroglie, a French physicist, is credited with the discovery of the "Wave-Particle Duality". This principle explains why light can travel as a wave, or a particele, but also, why electrons can travel like waves. That picture represents the wave nature of electrons. ### Schrodinger 1926 Erwin Schrodinger was an Austrian physicist born in the late 1800's. Schrodinger made many advancements in quantum mechanics, and developed the theory of wave mechanics. Schrodinger published a paper that gave the first wave equation for time independant systems. His famous "schrodingers cat" theory helped him to disprove the old Copenhagen theory. ### Heisenberg 1927 Werner Heisenberg was a German physicist and mathematician who was born on December 5th, 1901. His most notable discovery was the development of the matrix mechanics formulation of quantum mechanics. He worked closely with Neils Bohr, who he had first met a lecture being given by Bohr. Heisenberg's matrixes could give out either the position or momentum of a particle, but never both. Heisenberg was responsible for probability maps.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8419309258460999, "perplexity": 1189.523902785067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863259.12/warc/CC-MAIN-20180619232009-20180620012009-00359.warc.gz"}
https://brilliant.org/discussions/thread/it-can-be-done-i-just-need-to-get-a-breakthrough/
# It can be done, I just need to get a breakthrough This is not original but it is really interesting. Let $$x$$, $$y$$ and $$z$$ be positive reals such that $$xyz \geq 1$$. Prove that $\dfrac {x^5 - x^2}{x^5 + y^2 + z^2} + \dfrac {y^5 - y^2}{y^5 + x^2 + z^2}+ \dfrac {z^5 - z^2}{z^5 + x^2 + y^2} \geq 0.$ Note by Sharky Kesa 6 years, 10 months ago @Sharky Kesa I know it's not the right place to say this, but the problem of mine you just reshared has been deleted by me because I, by mistake, posted the same problem twice......maybe you'd like to reshare it again.... - 6 years, 10 months ago I had accidentally reshared it. Using a Samsung Galaxy. Was it the triangle one? - 6 years, 10 months ago Yup, the triangle one.......BTW which Samsung Galaxy? - 6 years, 10 months ago One of them, the latest. - 6 years, 10 months ago Lemme guess.......S5 ?? - 6 years, 10 months ago Probably. - 6 years, 10 months ago What d'you mean by "probably". Please don't tell me that you're not telling the model of your Galaxy for cyber-safety............BTW I posted a problem named 'Special Speed', in which we had to find a way of solving a quartic.......and it was hardly 30 seconds after I had posted the problem that you had solved it......HOW?! Are you so fast? - 6 years, 10 months ago Practice and experience are very useful for solving problems quickly and effectively. We just got the Galaxy so I don't know. :D - 6 years, 10 months ago Will you please help me with some questions.....I'm new to inequalities. For example: for any real number 'a', prove that $\normalsize 4a^4-4a^3+5a^2-4a+1\ge0$. I want to know what was the first thing that came to your mind when you saw this question......and then how you approached it. Thanks in advance.. - 6 years, 10 months ago When $a$ is positive, the whole expression is non-negative ($a = \frac {1}{2}$ is the equality case).
When $a$ is negative, the odd-power terms are being subtracted, so the minus signs turn into pluses and those terms also become positive. When $a$ is 0, the whole expression equals 1, which is positive. That's one way to approach it. A better way to approach it (another method in my mind) is to factorise it. Upon factorisation, you get $(1 - 2a)^2 (a^2 + 1) \geq 0$. Since $a$ is real, $(1 - 2a)^2$ and $a^2 + 1$ are both non-negative real numbers. Hope that helps. :D - 6 years, 10 months ago Thanks......I could do it by the first method myself. I wanted to use the second method, but I couldn't factorize it. Did you factorize it by finding out the roots? If so, then what if it didn't have any real root? - 6 years, 10 months ago Nah, I simply rearranged the expression like this: $(4a^4 + 4a^2) - (4a^3 + 4a) + (a^2 + 1) = (a^2 + 1)(1 - 4a + 4a^2) = (a^2 + 1)(1 - 2a)^2$ Finding roots is too laborious in certain equations such as this. - 6 years, 10 months ago Ah...Thanks......BTW see my new problem...Factorial Funda - 6 years, 10 months ago You will not believe this! I first attempted the number 168 (the number of primes under 1000, hence the number of numbers for which $n$ cannot divide $(n-1)!$). Then, I realised what I did wrong and did 832. It was wrong, and I thought I was including 1 and 1000 and finally did 830. You know the answer and I couldn't believe it. - 6 years, 10 months ago The question is correct. You're missing something....... :DD - 6 years, 10 months ago What? Did I need to include 1? - 6 years, 10 months ago Since you've had all your tries, I can tell you the solution......but not here. What's your mail id?......If you don't want to give it here, you may send me a test mail at '[email protected]' - 6 years, 10 months ago Whaat...? Sharky couldn't solve this? This seems familiar though. I would be thankful if you could provide me the source of this pro- @Satvik Golechha - 6 years, 10 months ago Whenever I do not mention at the end of the question that it is taken from somewhere, you can assume that the question is mine. And so, the source of this question is @Satvik Golechha I love tagging myself.. - 6 years, 10 months ago Hahahahahaahahahahahaha I just got a mail....."Satvik Golechha mentioned you on Brilliant." - 6 years, 10 months ago Try my new question Bits, Bytes and a GB - 6 years, 10 months ago Was easy.....I solved it.......Though I will really appreciate anybody who solves it without a calc. - 6 years, 10 months ago I've posted it.....it's just above yours. Learnt it in 5th class. - 6 years, 10 months ago Wow! How can you think of such an awesome simplification.... :-o? - 6 years, 10 months ago A load of practice and experience. - 6 years, 10 months ago
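The factorisation discussed in this thread is easy to spot-check by machine. Here is a quick sketch using Python's sympy library (the script and variable names are mine, not part of the original discussion):

```python
import sympy as sp

a = sp.symbols('a', real=True)
quartic = 4*a**4 - 4*a**3 + 5*a**2 - 4*a + 1
factored = (a**2 + 1)*(1 - 2*a)**2

# The two expressions agree identically:
print(sp.expand(factored - quartic))   # -> 0

# The factored form makes non-negativity obvious: a^2 + 1 > 0 and
# (1 - 2a)^2 >= 0, with equality exactly at a = 1/2.
print(sp.real_roots(quartic))          # -> [1/2, 1/2]
```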
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 21, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9670589566230774, "perplexity": 2263.921145397444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991258.68/warc/CC-MAIN-20210517150020-20210517180020-00639.warc.gz"}
http://mathhelpforum.com/algebra/135229-frequency-diagram-problem.html
# Math Help - frequency diagram problem 1. ## frequency diagram problem Question: Part c , estimate the total number of marks scored by all students in the group? Is the answer frequency X class width for each group? Regarding the class width, do we have to find the mid point of the class? Many thanks Attached Files 2. Originally Posted by nazz Question: Part c , estimate the total number of marks scored by all students in the group? Is the answer frequency X class width for each group? Regarding the class width, do we have to find the mid point of the class? Many thanks Yes, you'll need to find the mid-point of each class. For example, you've got 8 students who scored an average of 4.5 (the total of their marks is 8x4.5 or 36) and another 20 students who scored an average of 14.5 (the total of these students' marks is 20x14.5 or 290). Simply total up ALL the students' marks. 3. ## frequency diagram problem Thanks for the reply, one more question, as the bar charts are not joined together , regarding the class width do we have to link them by doing the following: 0-9.5 9.5- 19.5 19.5-29.5 29.5-39.5 39.5-49.5 Then take the mid point of each class and multiply it by number of students. Or just take the classes as the way they are shown in the diagram and then take the mid points. Both methods will give different mid points for the classes. 4. Originally Posted by nazz Thanks for the reply, one more question, as the bar charts are not joined together , regarding the class width do we have to link them by doing the following: 0-9.5 9.5- 19.5 19.5-29.5 29.5-39.5 39.5-49.5 Then take the mid point of each class and multiply it by number of students. Or just take the classes as the way they are shown in the diagram and then take the mid points. Both methods will give different mid points for the classes. Great question!! The frequency histogram you provide is actually a poorly constructed one. By definition (all the definitions I've seen anyway), the classes should be touching. And usually you would just see the center of each class listed instead of the entire range. You are correct about having to connect them prior to finding the midpoint.
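To make the estimate concrete, here is a small Python sketch of the midpoint-times-frequency calculation. Only the first two frequencies (8 and 20) are given in the thread; the remaining values below are invented purely to illustrate the arithmetic, and the class midpoints follow the first reply (4.5, 14.5, ...):

```python
# Grouped data: (class lower bound, class upper bound) and frequency.
# The frequencies after the first two are hypothetical placeholders.
classes = [(0, 9), (10, 19), (20, 29), (30, 39), (40, 49)]
freqs = [8, 20, 14, 10, 4]

total_marks = 0.0
for (lo, hi), f in zip(classes, freqs):
    midpoint = (lo + hi) / 2        # e.g. (0 + 9) / 2 = 4.5
    total_marks += f * midpoint     # 8 * 4.5 = 36, 20 * 14.5 = 290, ...

print(total_marks)
# Note: if the classes are first joined up as 0-9.5, 9.5-19.5, ... the
# midpoints shift slightly (4.75, 14.5, ...), changing the estimate a little.
```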
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.818200945854187, "perplexity": 774.8758088914706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195036337.8/warc/CC-MAIN-20150601214356-00080-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/subsets-of-two-dimensional-space.268084/
# Subsets of two-dimensional space 1. Oct 30, 2008 ### Dafe 1. The problem statement, all variables and given/known data a) construct a subset of two-dimensional space closed under vector addition and even subtraction, but not under scalar multiplication. b) construct a subset of two-dimensional space (other than two opposite quadrants) closed under scalar multiplication but not under vector addition. 3. The attempt at a solution a) A line going through a point on the y-axis. Not a line through the origin though. I figure since the line does not have a null-vector in it, it is not closed under scalar multiplication. b) I only know of two opposite quadrants :/ Anyone? Thanks! 2. Oct 30, 2008 ### HallsofIvy Staff Emeritus Un fortunately, if it does not go through the origin, that is, does not contain 0, it is not closed under "vector addition and even subtraction" because v-v= 0. 3. Nov 2, 2008 ### Dafe Maybe I've missunderstood what even subtraction is. Does that mean that I subtract a vector by the same vector? I thought it meant that it was closed under addition and also under subtraction.. If even subtraction really is v-v, then the answer to a) is a line from the origin into the first quadrant, right? Thank you 4. Nov 2, 2008 ### Office_Shredder Staff Emeritus It means if u,v are in your subset, u+v AND u-v are in the subset. One particular example is for all v in the subset, v-v = 0 is in the subset also 5. Nov 2, 2008 6. Dec 6, 2008 ### Dafe I just went back to this problem and still can't find an answer. 7. Dec 6, 2008 ### mutton Then for any v on this line, 0 - v must also be on it, but it's not because your line is only in the first quadrant. a) How about vectors with integer norm? b) Two non-parallel lines through the origin? 8. Dec 6, 2008 ### HallsofIvy Staff Emeritus You are looking for sets of numbers that are closed under addition and subtraction but not under multiplication by real numbers. Try {(n,n)} where n is an integer. 9. Dec 6, 2008 ### mutton Now I know that this is wrong; e.g. (1, 0) - (0, 1) does not have integer norm. See what HallsofIvy said. 10. Dec 7, 2008 ### Dafe HallsofIvy: does {(n,n)} mean vectors in R^2 like (1,1), (2,2), (-4,-4), etc.? 11. Dec 7, 2008 ### HallsofIvy Staff Emeritus I wrote what it meant: "Try {(n,n)} where n is an integer." Surely you know what an integer is? 12. Dec 8, 2008 ### Dafe Yes, I do know what they are. To me {(n,n)} is a line through the origin, 45 degrees on the x-axis, which is a subspace closed under addition and scalar multiplication. You do not have to reply to these stupid questions of mine HallsofIvy, so there is no need to be an ***. Thank you. 13. Dec 8, 2008 ### HallsofIvy Staff Emeritus Do you understand that insults will get you kicked of this board permanently? You were the one who asked if "Try {(n,n)} where n is an integer" meant "(1,1), (2,2), (-4,-4), etc." so my question was perfectly reasonable. In any case, do you now see what the answer to your problem is? 14. Dec 8, 2008 ### mutton {(n, n) : n is an integer} is not a line. If you connected the points for some reason, then it would be a line. 15. Dec 9, 2008 ### Dafe So it is closed by addition and subtraction but not by multiplication by real numbers because then they might not be integers any more? I apologize for being an ***. It sometimes happens without me knowing it :p 16. Dec 9, 2008 ### HallsofIvy Staff Emeritus Yes, that is correct. (n, n)+ (m, m)= (n+ m, n+ m) and (n, n)- (m, m)= (n-m, n-m), both of the form (k, k) for k integer. 
But (1/2)(1, 1)= (1/2, 1/2) which is NOT. 17. Dec 10, 2008 ### Dafe I am obviously quite bad at this and am not able to figure out the second problem. (subset closed under multiplication but not under addition) Am I allowed to say that it is closed under multiplication by integers or something like that? Thank you. 18. Dec 10, 2008 ### Office_Shredder Staff Emeritus For part (b) you know you need a non-zero point (call it A)... then as it's closed under scalar multiplication, you need the line through the origin and A. This is a subspace, so we can't use this. So pick another point B that's not on the line. Again, you need the line passing through the origin and B. Do you need any more points? 19. Dec 10, 2008 ### Dafe Two points in the 1st quadrant and two points in the 3. quadrant. I draw lines through the origin between the points in opposite quadrants. Then I take linear combinations of the vectors on the two lines, so I get a sector in the first quadrant, and a mirror image of the same sector in the 3. quadrant. Then it is closed under multiplication but not under addition. Is this right? I guess this would work for all "mirrored sectors".. Am I totally wrong? Thank you!
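Both constructions that come out of this thread can be sanity-checked with a few lines of Python; the particular sample points below are arbitrary choices for illustration:

```python
from fractions import Fraction

# (a) S = {(n, n) : n an integer} is closed under addition and subtraction
#     but not under multiplication by arbitrary real scalars.
def in_S(v):
    x, y = v
    return x == y and float(x).is_integer()

u, v = (3, 3), (-5, -5)
print(in_S((u[0] + v[0], u[1] + v[1])))                       # True: sum stays in S
print(in_S((u[0] - v[0], u[1] - v[1])))                       # True: difference stays in S
print(in_S((Fraction(1, 2) * u[0], Fraction(1, 2) * u[1])))   # False: (3/2, 3/2) escapes

# (b) The union of two distinct lines through the origin (say the two axes)
#     is closed under scalar multiplication but not under addition.
def in_axes(v):
    return v[0] == 0 or v[1] == 0

p, q = (1, 0), (0, 1)
print(in_axes((2.5 * p[0], 2.5 * p[1])))           # True: scaling stays on an axis
print(in_axes((p[0] + q[0], p[1] + q[1])))         # False: (1, 1) leaves the union
```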
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375570178031921, "perplexity": 1093.1448668165788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00024-ip-10-171-10-70.ec2.internal.warc.gz"}
https://planetmath.org/RamanujansFormulaForPi
# Ramanujan’s formula for pi Around $1910$, Ramanujan proved the following formula: ###### Theorem. The following series converges and the sum equals $\frac{1}{\pi}$: $\frac{1}{\pi}=\frac{2\sqrt{2}}{9801}\sum_{n=0}^{\infty}\frac{(4n)!(1103+26390n)}{(n!)^{4}396^{4n}}.$ Needless to say, the convergence is extremely fast. For example, if we only use the term $n=0$ we obtain the following approximation: $\pi\approx\frac{9801}{2\cdot 1103\cdot\sqrt{2}}=3.14159273001\ldots$ and the error is (in absolute value) equal to $0.0000000764235\ldots$ In $1985$, William Gosper used this formula to calculate the first 17 million digits of $\pi$. Another similar formula can be easily obtained from the power series of $\arctan x$. Although the convergence is good, it is not as impressive as in Ramanujan’s formula: $\pi=2\sqrt{3}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)3^{n}}.$
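A quick numerical check of the convergence claims above; this is a sketch using Python's mpmath at 50-digit working precision (the precision and number of terms are arbitrary choices):

```python
from mpmath import mp, mpf, sqrt, factorial, pi

mp.dps = 50  # work with 50 significant digits

def ramanujan_pi(n_terms):
    """Approximate pi from the first n_terms terms of Ramanujan's series."""
    s = mpf(0)
    for n in range(n_terms):
        s += factorial(4*n) * (1103 + 26390*n) / (factorial(n)**4 * mpf(396)**(4*n))
    return 1 / (2*sqrt(2)/9801 * s)

for k in (1, 2, 3):
    approx = ramanujan_pi(k)
    print(k, approx, abs(approx - pi))
# One term already gives an error of about 7.64e-8, matching the text;
# each additional term adds roughly eight more correct digits.
```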
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 10, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9865738153457642, "perplexity": 772.0000405255284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743521.59/warc/CC-MAIN-20181117123417-20181117145417-00459.warc.gz"}
https://scicomp.stackexchange.com/questions/25287/cg-question-is-symmetry-always-necessary
# CG question: is symmetry always necessary? Consider the 1D Poisson equation $$\nabla^2 u = f.$$ Using the finite difference method on cell-corner data and a uniform grid with ghost points, I think we can write the system of equations with Neumann BCs as: $$Au = f-Au_{BC},$$ where \begin{aligned} A &= \frac{1}{\Delta x^2} \left[\begin{array}{ccccccccc} -2 & 2 & & & & & \\ 1 & -2 & 1 & & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & & 1 & -2 & 1 \\ & & & & & 2 & -2 \\ \end{array} \right] \\ Au_{BC} &= \frac{1}{\Delta x^2} \left[\begin{array}{ccccccccc} 2 \hat{n} \theta \Delta x \\ 0 \\ \\ \vdots \\ \\ 0 \\ 2 \hat{n} \theta \Delta x \\ \end{array} \right] \end{aligned} Here $\hat{n}$ and $\theta$ are the outward-facing normal and the prescribed slope (derivative) at the boundaries. Notably, the system is singular, which can be addressed by removing the mean of the RHS. ### Question I know that $A$ must be symmetric, positive definite. I've done some tests without multiplying the first and last rows by 0.5 and they seem to work fine, so my question is: Must the first and last rows of the left and right hand side be multiplied by 0.5? In other words, are there cases where it will not work without the multiplication by 0.5? ### Notes I imagine that both sides of the equation would be balanced the same with and without this multiplication. As a final note, I know that the code / algorithm may be clearer with the multiplication, since the equation then explicitly satisfies symmetry, but I'm more interested in whether the multiplication is necessary or not. Any help is greatly appreciated. • Due to the minimization properties the CG algorithm is based on, it will produce approximations to the solution of $\frac12(A+A^T)x = b$ -- which is a solution of $Ax=b$ iff $A$ is symmetric, although in your case the difference is probably very small, so you don't notice. For the algorithm to converge, of course, the symmetric part $\frac12(A+A^T)$ of $A$ needs to be positive definite. – Christian Clason Oct 24 '16 at 7:22
Now note that you can write any matrix $A$ as $$A = \frac12 (A+A^T) + \frac12(A-A^T) =: A_{\text{sym}} + A_{\text{skew}},$$ where $A_{\text{sym}}$ is the symmetric part and $A_{\text{skew}}$ the skew-symmetric part of $A$. Furthermore, $x^TA_{\text{skew}}x=0$ and hence $x^TAx = x^TA_{\text{sym}}x$ for all $x\in\mathbb{R}^n$. So the algorithm works as long as the symmetric part $A_{\text{sym}}$ of $A$ is positive definite. Conveniently, this implies that the minimization problem \eqref{cg1} is strictly convex and hence has a unique minimizer, which is the only solution to \eqref{cg2}.
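The effect discussed in this answer can be poked at numerically. The sketch below is only illustrative: the grid size, reference solution, and tolerance are arbitrary choices of mine, the sign of the operator is flipped so that the symmetric version is positive semidefinite, and it simply reports whatever final residuals a textbook CG produces on the unscaled (nonsymmetric) and half-scaled (symmetric) Neumann matrices:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=1000):
    """Textbook conjugate gradient; note it never checks that A is symmetric."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, np.linalg.norm(b - A @ x)

d = 50
dx = 1.0 / (d - 1)
# Ghost-point Neumann Laplacian with the sign flipped (discretizing -u''):
# interior rows (-1, 2, -1), boundary rows (2, -2) before any scaling.
A_ns = (2*np.eye(d) - np.diag(np.ones(d-1), 1) - np.diag(np.ones(d-1), -1)) / dx**2
A_ns[0, 1] = -2 / dx**2    # unscaled boundary rows -> A_ns is NOT symmetric
A_ns[-1, -2] = -2 / dx**2
D = np.eye(d)
D[0, 0] = D[-1, -1] = 0.5
A_s = D @ A_ns             # boundary rows halved -> symmetric

u_star = np.cos(np.pi * np.linspace(0.0, 1.0, d))  # zero-slope ends, (near) zero mean
u_star -= u_star.mean()

x_ns, res_ns = cg(A_ns, A_ns @ u_star)
x_s,  res_s  = cg(A_s,  A_s  @ u_star)
print("final residual with unscaled boundary rows:", res_ns)
print("final residual with halved boundary rows:  ", res_s)
```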
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998586177825928, "perplexity": 120.1752628140779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375529.62/warc/CC-MAIN-20210306223236-20210307013236-00231.warc.gz"}
http://papers.nips.cc/paper/4490-learning-from-the-wisdom-of-crowds-by-minimax-entropy
# NIPS Proceedingsβ ## Learning from the Wisdom of Crowds by Minimax Entropy [PDF] [BibTeX] ### Abstract An important way to make large training sets is to gather noisy labels from crowds of nonexperts. We propose a minimax entropy principle to improve the quality of these labels. Our method assumes that labels are generated by a probability distribution over workers, items, and labels. By maximizing the entropy of this distribution, the method naturally infers item confusability and worker expertise. We infer the ground truth by minimizing the entropy of this distribution, which we show minimizes the Kullback-Leibler (KL) divergence between the probability distribution and the unknown truth. We show that a simple coordinate descent scheme can optimize minimax entropy. Empirically, our results are substantially better than previously published methods for the same problem.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9087940454483032, "perplexity": 829.8276378978646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157574.27/warc/CC-MAIN-20180921225800-20180922010200-00426.warc.gz"}
https://tex.stackexchange.com/questions/163627/customizing-the-style-of-the-table-of-contents-title
I used titlesec for customizing chapter titles. How can I put the Table of contents title above the horizontal line? (I am trying to customize the place of the title, not the title) Added figure is the result I get, from my existing code. \usepackage{titlesec} \titleformat{\chapter}[display] {\bfseries\large} {\filleft\MakeUppercase{\chaptertitlename} \large\thechapter} {2ex} {\titlerule \vspace{2ex}% \filleft} \titlespacing*{\chapter}{0pt}{120pt}{6pt} • Does this answer your question: How to change the title of toc? Mar 4 '14 at 13:41 • I tried this answer before, it doesn't work for me. I can change the title to "Table of Contents" from "Contents" but I cannot place it above the line as shown in the figure. Mar 4 '14 at 14:03 • @user47278 Yes, I've voted for re-opening the question. Mar 4 '14 at 14:07 • Thank you, I have been looking for a solution for 2 days, I read documentation and checked questions. Still I cannot worked a solution for this problem. Mar 4 '14 at 14:13 • @SahikaKoyun I already have the answer, but cannot post it while the question is closed :( Mar 4 '14 at 14:15 With your current settings, The rule is placed above the chapter title and below the string "Chapter #", for numbered chapters. You need a different specification for unnumbered chapters (such as the ToC), to make the rule appear below the title; this can be achieved using another \titleformat command with the numberless key: \documentclass{book} \usepackage{titlesec} \titleformat{\chapter}[display] {\bfseries\large} {\filleft\MakeUppercase{\chaptertitlename} \large\thechapter} {2ex} {\titlerule\vspace{2ex}\filleft} \titleformat{name=\chapter,numberless}[display] {\bfseries\large} {\titlerule} {-7ex} {\filleft\MakeUppercase}[\vspace{5ex}] \titlespacing*{\chapter}{0pt}{120pt}{6pt} \begin{document} \tableofcontents \chapter{Test chapter} \end{document} An image of the ToC: An image of a numbered chapter:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8064149022102356, "perplexity": 1276.4104390343302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363006.60/warc/CC-MAIN-20211204185021-20211204215021-00071.warc.gz"}
https://mathoverflow.net/questions/219009/standard-techniques-on-rationally-connected-varieties
# Standard techniques on rationally connected varieties Is there some standard technique or approach to determine when a (irreducible) subvariety of a rationally connected variety is again rationally connected? Any reference/text dealing with this kind of question will be most welcome. The example that I have in mind is the following: Let $P$ be the Hilbert polynomial of a complete intersection curve in $\mathbb{P}^3$ and $L$ an irreducible component of the corresponding Hilbert scheme parametrizing complete intersection curves in $\mathbb{P}^3$ with Hilbert polynomial $P$. Denote by $V \subset L$ the sublocus parametrizing curves which are not smooth. As far as I understand $L$ is rationally connected. Am I right? Then, I want to understand when is a connected component of $V$ again rationally connected. • Every projective variety embeds in projective space, and projective space is rationally connected. So, in complete generality, it cannot be easier to determine whether a general projective subvariety of a rationally connected variety is again rationally connected than it is to determine rational connectedness for an abstract projective variety. Having said this, let me be more honest: for specific pairs, of course you can try to use what you know about that pair to simplify the computation . . . – Jason Starr Sep 23 '15 at 10:29 • For instance, every singular complete intersection curve is singular at some point with a given 2-dimensional subspace of the Zariski tangent space. So you can form the incidence variety of a complete intersection curve together with a singular point and a 2-dimensional tangent space. There is a forgetful morphism from the incidence correspondence to the dual projectivized tangent bundle of $\mathbb{P}^3$ that only remembers the singular point and tangent space (a Cheshire cat leaving behind its grin) . . . – Jason Starr Sep 23 '15 at 10:32 • By homogeneity, the forgetful morphism is flat and surjective. Thus, by the rationally connected fibration theorem, it suffices to prove that the fibers are rationally connected. For a given point and tangent space, it is a linear condition on hypersurfaces to contain that point and to contain the given 2-dimensional subspace in its tangent space. Thus, the parameter space for tuples of hypersurfaces containing the given point and 2-dimensional subspace (i.e., non-reduced local Artin scheme) is unirational for the same reason that all of $L$ is rationally connected. – Jason Starr Sep 23 '15 at 10:37 • @JasonStarr Thank you very much for the answer. This is very helpful. May be you could put this as an answer. – Ron Sep 23 '15 at 11:00 Let $[x,y,z,w]$ be homogeneous coordinates on $\mathbb{P}^3$ so that $\Gamma_*(\mathcal{O}_{\mathbb{P}^3})$ equals $k[x,y,z,w]$. Let $p$ be the point $[0,0,0,1]$ in these coordinates, whose associated homogeneous ideal $\Gamma_*(\mathcal{I}_{p/\mathbb{P}^3})$ is $\langle x,y,z \rangle$. Let $A\subset \mathbb{P}^3$ be the nonreduced, local Artin scheme supported at $p$ whose associated homogeneous ideal $I=\Gamma_*(\mathcal{I}_{A/\mathbb{P}^3})$ is $\langle x,y^2,yz,z^2 \rangle$. For a given pair of positive integers, $(d,e)$, one parameter space (not the Hilbert scheme) for complete intersection curves is the open subscheme $L'$ of $$M':=\mathbb{P}k[x,y,z,w]_d \times_{\text{Spec}(k)} \mathbb{P}k[x,y,z,w]_e,$$ parameterizing pairs $([E(x,y,z,w)],[F(x,y,z,w)])$ of homogeneous polynomials of degree $d$, resp. 
$e$, such that the zero scheme $C:=\text{Zero}(E,F)\subset \mathbb{P}^3$ is a complete intersection curve. There is a dominant morphism, $$\pi: L'\to L, \ ([E(x,y,z,w)],[F(x,y,z,w)]) \mapsto [C].$$ The fiber over a point $[C]$ is a dense open subset of the product of projective spaces $$\mathbb{P} \Gamma(\mathcal{I}_{C/\mathbb{P}^3}(d))\times_{\text{Spec}(k)}\mathbb{P}\Gamma(\mathcal{I}_{C/\mathbb{P}^3}(e)).$$ With this description, the subscheme of $K'$ parameterizing curves $C$ that contain $A$ is the intersection of the open subset $L'$ with the subvariety $$N' = \mathbb{P}I_d \times_{\text{Spec}(k)}\mathbb{P}I_e.$$ In particular, $K'$ is a dense open subset of a product of projective spaces, hence $K'$ is rational. Finally, there is a natural action of the automorphism scheme $\text{Aut}(\mathbb{P}^3,\mathcal{O}(1)) = \textbf{GL}_{4,k}$ on $\Gamma_*(\mathcal{O}_{\mathbb{P}^3})$ (in the sense of GIT, $G$-linearizations, etc.) That action induces an action on every projective space $\mathbb{P}k[x,y,z,w]_m$, and thus also an action on $M'$. The open subset $L'$ of $M'$ is invariant for this action. Denote the action morphism as follows, $$m:\textbf{GL}_{4,k} \times_{\text{Spec}(k)} L' \to L'.$$ There is an induced composition morphism, $$\textbf{GL}_{4,k}\times_{\text{Spec}(k)} K' \xrightarrow{m} L' \xrightarrow{\pi} L.$$ I claim that the closure of the image is $V$. Thus, since $V$ is dominated by a rational variety, $V$ is unirational. • I am a little confused. Could you please clarify what is the role of the specific description of $A$? Is it related to the tangent space to the complete intersection curves at the singular point or the incidence correspondence you mention in your comments? In particular, I am not able to see the claim you make towards the end that "the closure of the image is $V$." – Ron Sep 27 '15 at 12:24 • By construction, the image of $\textbf{GL}_{4,k}\times K'$ parameterizes those complete intersection curves that contain the image of $A$ under some projective automorphism of $\mathbb{P}^3$, i.e., they contain a point $p$ and a $2$-dimensional subspace of the Zariski tangent space of $\mathbb{P}^3$ at $p$. This is precisely to say, the image parameterizes complete intersection curve that are singular at some point $p$. – Jason Starr Sep 27 '15 at 13:30 • To check if I am understanding correctly, are you saying that a complete intersection curve in $\mathbb{P}^3$ is singular if and only if after applying a projective automorphism the curve transforms into another curve whose ideal contains $A$? If so, is this fact obvious? – Ron Sep 27 '15 at 13:58 • I am saying that a curve $C$ in $\mathbb{P}^3$, whether or not it is a complete intersection, is singular if and only if it contains a closed subscheme that is the image of $A$ under a projective automorphism of $\mathbb{P}^3$. Indeed, being singular at $p$ precisely means that the curve contains a $2$-dimensional subspace of the Zariski tangent space of $\mathbb{P}^3$ at $p$. – Jason Starr Sep 27 '15 at 14:01 • Thank you. I had understood the last line ("Indeed, being singular at p ....") when you mentioned it in the comment itself, before the answer. What I was not sure was that having a two dimensional subspace of the Zariski tangent space is equivalent to $C$ containing $A$, upto projective automorphism. – Ron Sep 27 '15 at 14:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872891664505005, "perplexity": 133.326140767124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999041.59/warc/CC-MAIN-20190619204313-20190619230313-00124.warc.gz"}
https://mathoverflow.net/questions/270386/is-the-determinant-equal-to-a-determinant
# Is the determinant equal to a determinant? Let $\det_d = \det((x_{i,j})_{1 \leq i,j\leq d})$ be the determinant of a generic $d \times d$ matrix. Suppose $k \mid d$, $1 < k < d$. Can $\det_d$ be written as the determinant of a $k \times k$ matrix of forms of degree $d/k$? Even writing $\det_4$ as the determinant of a $2 \times 2$ matrix of quadratic forms seems impossible, just intuitively. The space of $2 \times 2$ matrices of quadratic forms in $16$ variables has dimension $4 \cdot \binom{17}{2} = 544$, while the space of quartics in $16$ variables has dimension $\binom{19}{4} = 3876$. • One can see that $\det_4$ is equal to a sum of three $2 \times 2$ determinants of quadratic forms: The Laplace expansion in say the first two rows involves $6$ terms (each of which is the product of the determinant of a $2 \times 2$ minor in the first $2$ rows with the determinant of the complementary minor); these $6$ terms can be packaged into $3$ determinants of $2 \times 2$ minors. I wanted to rule out the possibility of writing $\det_4$ as a sum of $2$ determinants but I can't even rule out $\det_4$ being equal to a single $2 \times 2$ determinant, sigh. – Zach Teitler May 22 '17 at 6:49 • IMHO the "intuitive" argument about dimensions is possible to be made precise, it would however require some basic algebraic geometry. – Dima Pasechnik May 22 '17 at 7:09 • "space of quadrics in 16 variables " must be space of quartics in 16 variables" – Dima Pasechnik May 22 '17 at 7:11 • but is is not right to count all the quartics - you must only take poly-linear quartics (ones where each monomial consists of 4 different variables). – Dima Pasechnik May 22 '17 at 7:13 • although even with the latter restriction you have to select only these poly-linear quartics which can be represented by determinants---it's not the whole space... – Dima Pasechnik May 22 '17 at 7:42 I think a result of Hochster allows to get a quick proof that it is not possible to express the determinant of the generic $d \times d$ matrix as the determinant of a $k \times k$ matrices with entries being homogeneous forms of degree $\dfrac{d}{k}$, provided that $1 <k <d$. I will work over an algebraically closed field. Relying on some cohomological methods (similar to the ones Jason Starr is using in his answer), Hochster proved that the variety of $n \times n$ matrices with $\textrm{rank} < n-1$ can be set-theoretically defined by $2n$ equations and not less (see this paper by Bruns and Schwänzl for some improvement of Hochster's result : http://www.home.uni-osnabrueck.de/wbruns/brunsw/pdf-article/NumbEqDet.published.pdf). Now, we proceed by absurd. Assume that the determinant of the generic $d \times d$ matrix can be written as the determinant of $k \times k$ matrix (say $A$) with forms homogeneous of degree $\dfrac{d}{k}$. We assume $1 < k < d$. We denote by $P_1, \ldots, P_{k^2}$ the entries $A$, which are homogeneous polynomials in $x_1, \ldots, x_{d^2}$ of degree $\dfrac{d}{k}$. Let $B$ be the generic $k \times k$ matrix with entries $Y_1, \ldots, Y_{k^2}$. Denote by $Q_1, \ldots Q_{k^2}$ the $k-1$ minors of $B$. The variety defined by the vanishing of the $\{Q_i\}_{i=1\ldots k^2}$ is non-empty of codimension $4$ in $\mathbb{A}^{k^2}$ (here I use that $k>1$). Hence, if we replace $Y_i$ by $P_i(x_1, \ldots, x_{d^2})$, we see that the scheme defined by the vanishing of the $\{Q_i(P_1,\ldots,P_{k^2})\}_{i=1 \ldots k^2}$ has codimension at most $4$ in $\mathbb{A}^{d^2}$. 
Furthermore, a simple computation of partial derivatives shows that the scheme defined by the vanishing of the $\{Q_i(P_1,\ldots,P_{k^2})\}_{i=1 \ldots k^2}$ is included in the singular locus of the variety defined by $\det A = 0$. But $\det A$ is the determinant of the generic $d \times d$ matrix, so that its singular locus is the variety of matrices of $\textrm{rank} < d-1$ : it is irreducible of codimension $4$ in $\mathbb{A}^{d^2}$. From the above, we deduce that the variety of $d \times d$ matrices of $\textrm{rank} < d-1$ is set-theoretically equal to the scheme defined by the $\{Q_i(P_1,\ldots,P_{k^2})\}_{i=1 \ldots k^2}$. By Hochster's result (the existence part), one can find $2k$ polynomials (say $T_1, \ldots, T_{2k}$) in the ideal generated by $Q_1, \ldots, Q_{k^2}$ such that $$\textrm{rad}(T_1, \ldots, T_{2k}) = \textrm{rad}(Q_1, \ldots, Q_{k^2}).$$ Replacing $Y_i$ by $P_i(x_1,\ldots, x_{d^2})$ in the $\{T_j\}_{j=1 \ldots 2k}$, we find that the vanishing of the $\{T_j(P_1, \ldots, P_{k^2}) \}_{j=1 \ldots 2k}$ defines set-theoretically the variety of $d \times d$ matrices of $\textrm{rank} < d-1$. Since $2k < 2d$, we get a contradiction with Hochster's result. • I am just making this comment for readers who may not have noticed themselves: Libli's answer also works in arbitrary positive characteristic (the answer I posted does not). – Jason Starr May 22 '17 at 22:53 • You mean "of rank $<n-1$", not "of rank $\le n-1$", I suppose. For rank $\le n-1$, you just need one equation $\det=0$ ;-) – Vladimir Dotsenko May 22 '17 at 23:06 • @VladimirDotsenko : Of course, you're perfectly right. I will correct that! – Libli May 23 '17 at 8:39 • How does one find the codimension of the variety of the $\{Q_i\}$? – Jose Brox May 28 '17 at 14:08 • @JoseBrox : For instance here : en.wikipedia.org/wiki/Determinantal_variety – Libli May 28 '17 at 15:51 I will work over a field of characteristic $0$ so that reductive algebraic groups are linearly reductive; presumably there is a way to eliminate this hypothesis. In this case, for an integer $n>1$ and for a divisor $m$ such that $n>m>1$, there does not exist $f$ as above. The point is to consider the critical locus of the determinant. The computation below proves that "cohomology with supports" has cohomological dimension equal to $2(n-1)$ for the pair of the quasi-affine scheme $\textbf{Mat}_{n\times n} \setminus \text{Crit}(\Delta_{n\times n})$ and its closed subset $\text{Zero}(\Delta_{n\times n})\setminus \text{Crit}(\Delta_{n\times n})$. Since pushforward under affine morphisms preserve cohomology of quasi-coherent sheaves, that leads to a contradiction. Denote the $m\times m$ determinant polynomial by $$\Delta_{m\times m}:\textbf{Mat}_{m\times m} \to \mathbb{A}^1.$$ For integers $m$ and $n$ such that $m$ divides $n$, you ask whether there exists a homogeneous polynomial morphism $$f:\textbf{Mat}_{n\times n}\to \textbf{Mat}_{m\times m}$$ of degree $n/m$ such that $\Delta_{n\times n}$ equals $\Delta_{m\times m}\circ f$. Of course that is true if $m$ equals $1$: just define $f$ to be $\text{Delta}_{n\times n}$. Similarly, this is true if $m$ equals $n$: just define $f$ to be the identity. Thus, assume that $2\leq m < n$; this manifests below through the fact that the critical locus of $\Delta_{m\times m}$ is nonempty. By way of contradiction, assume that there exists $f$ with $\Delta_{n\times n}$ equal to $\Delta_{m\times m}\circ f$. Lemma 1. 
The inverse image under $f$ of $\text{Zero}(\Delta_{m\times m})$ equals $\text{Zero}(\Delta_{n\times n})$. In other words, the inverse image under $f$ of the locus of matrices with nullity $\geq 1$ equals the locus of matrices with nullity $\geq 1$. Proof. This is immediate. QED Lemma 2. The inverse image under $f$ of the critical locus of $\Delta_{m\times m}$ equals the critical locus of $\Delta_{n\times n}$. In other words, the inverse image under $f$ of the locus of matrices with nullity $\geq 2$ equals the locus of matrices with nullity $\geq 2$. Proof. By the Chain Rule, $$d_A\Delta_{n\times n} = d_{f(A)}\Delta_{m\times m}\circ d_Af.$$ Thus, the critical locus of $\Delta_{n\times n}$ contains the inverse image under $f$ of the critical locus of $\Delta_{m\times m}$. Since $m\geq 2$, in each case, the critical locus is the nonempty set of those matrices whose kernel has dimension $\geq 2$, this critical locus contains the origin, and this critical locus is irreducible of codimension $4$. Thus, the inverse image under $f$ of the critical locus of $\Delta_{m\times m}$ is nonempty (it contains the origin) and has codimension $\leq 4$ (since $\textbf{Mat}_{m\times m}$ is smooth). Since this is contained in the critical locus of $\Delta_{n\times n}$, and since the critical locus of $\Delta_{n\times n}$ is irreducible of codimension $4$, the inverse image of the critical locus of $\Delta_{m\times m}$ equals the critical locus of $\Delta_{n\times n}$. QED Denote by $U_n\subset \text{Zero}(\Delta_{n\times n})$, resp. $U_m\subset \text{Zero}(\Delta_{m\times m})$, the open complement of the critical locus, i.e., the locus of matrices whose kernel has dimension precisely equal to $1$. By Lemma 1 and Lemma 2, $f$ restricts to an affine morphism $$f_U:U_n\to U_m.$$ Proposition. The cohomological dimension of sheaf cohomology for quasi-coherent sheaves on the quasi-affine scheme $U_n$ equals $2(n-1)$. Proof. The quasi-affine scheme $U_n$ admits a morphism, $$\pi_n:U_n \to \mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*,$$ sending every singular $n\times n$ matrix $A$ parameterized by $U_n$ to the ordered pair of the kernel of $A$ and the image of $A$. The morphism $\pi_n$ is Zariski locally projection from a product, where the fiber is the affine group scheme $\textbf{GL}_{n-1}$. In particular, since $\pi_n$ is affine, the cohomological dimension of $U_n$ is no greater than the cohomological dimension of $\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*$. This equals the dimension $2(n-1)$. More precisely, $U_n$ is simultaneously a principal bundle for both group schemes over $\mathbb{P}^{n-1}\times(\mathbb{P}^{n-1})^*$ that are the pullbacks via the two projections of $\textbf{GL}$ of the tangent bundle. Concretely, for a fixed $1$-dimension subspace $K$ of the $n$-dimensional vector space $V$ -- the kernel -- and for a fixed codimension $1$ subspace $I$ -- the image -- the set of invertible linear maps from $V/K$ to $I$ is simultaneously a principal bundle under precomposition by $\textbf{GL}(V/K)$ and a principal bundle under postcomposition by $\textbf{GL}(I)$. In particular, the pushforward of the structure sheaf, $$\mathcal{E}_n:=(\pi_n)_*\mathcal{O}_{U_n},$$ is a quasi-coherent sheaf that has an induced action of each of these group schemes. 
The invariants for each of these actions are just $$\pi_n^\#:\mathcal{O}_{\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*}\to \mathcal{E}_n.$$ Concretely, the only functions on an algebraic group that are invariant under pullback by every element of the group are the constant functions. The group schemes and the principal bundle are each Zariski locally trivial. Consider the restriction of $\mathcal{E}_n$ to each open affine subset $U$ where the first group scheme is trivialized and the principal bundle is trivialized. The sections on this open affine give an $\mathcal{O}(U)$-linear representation of $\textbf{GL}_{n-1}$. Because $\textbf{GL}_{n-1}$ is linearly reductive, there is a unique splitting of this representation into its invariants, i.e., $\pi_n^\#\mathcal{O}(U)$, and a complementary representation (having trivial invariants and coinvariants). The uniqueness guarantees that these splittings glue together as we vary the trivializing opens. Thus, there is a splitting of $\pi_n^\#$ as a homomorphism of quasi-coherent sheaves, $$t_n:(\pi_n)_*\mathcal{O}_{U_n} \to \mathcal{O}_{\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*}.$$ For every invertible sheaf $\mathcal{L}$ on $\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*$, this splitting of $\mathcal{O}$-modules gives rise to a splitting, $$t_{n,\mathcal{L}}:(\pi_n)_*(\pi_n^*\mathcal{L})\to \mathcal{L}.$$ In particular, for every integer $q$, this gives rise to a surjective group homomorphism, $$H^q(t_{n,\mathcal{L}}):H^q(U_n,\pi_n^*\mathcal{L}) \to H^q(\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*,\mathcal{L}) .$$ Now let $\mathcal{L}$ be a dualizing invertible sheaf on $\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*.$ This has nonzero cohomology in degree $2(n-1)$. Thus, the cohomological dimension of sheaf cohomology for quasi-coherent $\mathcal{O}_{U_n}$-modules also equals $2(n-1)$. QED Since $f_U$ is affine, the cohomological dimension for $U_n$ is no greater than the cohomological dimension for $U_m$. However, by the proposition, the cohomological dimension for $U_m$ equals $2(m-1)$. Since $1<m<n$, this is a contradiction. • Nice answer, thanks! I decided to accept the one that works in all characteristics. But I appreciate the detailed explanation you gave. – Zach Teitler May 23 '17 at 4:07 This is not an answer, but it is too big to be a comment. Let me write your matrix $X$ blockwise as $(M_{\alpha\beta})_{1\le\alpha,\beta\le k}$, where the blocks are $d/k\times d/k$. If the blocks $M_{\alpha\beta}$ commute with each other, then $$\det X=\det\bigl((d_{\alpha\beta})_{1\le\alpha,\beta\le k}\bigr),$$ where $d_{\alpha\beta}:=\det M_{\alpha\beta}$. A particular case of this property is the formula $\det(A\otimes B)=(\det A)^m(\det B)^n$ where $m$ is the size of $B$, $n$ that of $A$ (thanks to Suvrit!). As for the general case, I guess that the answer to your question is negative. • Actually, $\det(A\otimes B) \not=\det(A)\cdot \det(B)$, it equals $\det(A)^m\cdot \det(B)^n$, where $m$ and $n$ are the sizes of matrix $B$ and $A$, resp. – Suvrit May 22 '17 at 16:09 • @Suvrit. Oops! Right. I'll fix it – Denis Serre May 22 '17 at 16:43 The answer is that if $1 < k < d$, then such a matrix is equivalent to a diagonal matrix with $\det(x_{ij})=: D$ on the diagonal and the rest of the diagonal entries equal to $1$. This does not require that $k$ divides $d$.
The reason is the following: If $M$ is a $k\times k$ matrix with entries from the polynomial ring $S$ in the $x_{ij}$ over some field and $\det M = D$, then the cokernel of $M$, viewed as an $S$-linear map from $S^k$ to $S^k$ defines a maximal Cohen--Macaulay module of rank one over the hypersurface ring $R=S/(D)$, generated by at most $k$ elements. For this statement see Eisenbud's paper MR0570778 (82d:13013). In the book by Bruns and Vetter on determinantal rings MR0953963 (89i:13001) all reflexive modules of rank one over a determinantal ring are determined and in the situation here they show that only the cokernel of the generic matrix itself or its transpose gives a nontrivial maximal Cohen-Macaulay module of rank one. However these need at least $d$ generators. The trivial module is just $R$ itself and in terms of matrices it means that $k<d$ implies that $M$ is equivalent to a diagonal matrix with $D$ on the diagonal and the rest of the diagonal filled up by $1$'s. (The case I neglected in my original posting...) Another interesting source for similar questions (and some answers) is the article by G. Bergman MR2205070 (2006k:15037). • What do you mean by "regardless of the form of the entries"? The answer is yes if the $d\times d$ determinant itself is admissible as an entry. – Emil Jeřábek May 24 '17 at 9:45 • I will take a look at the Bergman article, thanks! – Zach Teitler May 24 '17 at 12:48 • as @EmilJeřábek points out, the entries have to be homogeneous. Otherwise take a diagonal matrix with only $1$ on the diagonal and $\textrm{det}_d$ as the last diagonal entry. The fact that the entries are homogeneous then forces $k$ divides $d$. – Libli May 24 '17 at 15:17 • I assume the idea is that the entries may be homogeneous of different degrees --- compatibly, of course, so that the degrees of entries satisfy $\deg(P_{i,j})+\deg(P_{k,l}) = \deg(P_{i,l})+\deg(P_{k,j})$ when $i\neq k$, $j \neq l$; and $\sum \deg(P_{i,i}) = d^2$. So for example, one might try to write the $3\times 3$ determinant as $\det \begin{pmatrix} L_1 & L_2 \\ Q_1 & Q_2 \end{pmatrix}$ for linear forms $L_i$ and quadratic forms $Q_i$, and I take it that @Ragnar's answer will rule this out. Do I understand correctly? (We had better assume strictly positive degrees to avoid silliness.) – Zach Teitler May 24 '17 at 20:31 • Even if you drop homogeneity, if you simply demand that the morphism $f:\textbf{Mat}_{n\times n} \to \textbf{Mat}_{m\times m}$ maps the origin to the origin, then the answers by Libli and myself still work because the image of $f$ intersects the critical locus of the determinant. – Jason Starr May 25 '17 at 9:13 As others fascinatingly settled negative answer for the general question, let me give a comment on certain situation when the answer is positive. So if one has dxd matrix and d=k*n, we may split the matrix into nxn matrix which elements are kxk matrices. Now the statement is - if these elements satisfy certain commutation relations (i.e. elements in each column commutate and cross-diagonal commutators are equal - see Manin matrix), then $det_d = det_k det_n$ - i.e. first calculate determinant for nxn matrix whose elements are matrices - get a kxk matrix as an answer and then calculate usual kxk determinant of the later. (See section 5.5 page 36 here - sorry for self-advertising ). • Isn't this answer just a paraphrase of Denis's answer below? – Libli May 28 '17 at 16:01 • @Libil No it is not. It can be seen as a generalization. But Manin matrices are far from being commuative in general. 
– Alexander Chervov May 28 '17 at 16:57 • In that case it might be interesting to mention in your answer that it is a generalization of Denis's, and perhaps give a concrete example where Manin matrices are not just matrices whose blocks commute with each other. – Libli May 28 '17 at 18:18 • @Libli If Manin matrices were the same as matrices with commuting entries, what would be the purpose of devising the special name? It is obvious from the definition that they are not the same. The simplest examples come from so-called Cartier-Foata matrices: elements from different rows commute, but there is absolutely no relation within the same row. (Manin matrices are more general.) – Alexander Chervov May 28 '17 at 18:35 • I don't really care what Manin matrices are or whether they are more general than matrices with commuting entries. I am just trying to give some tips for making your answer more interesting and more comprehensible to a beginner/non-expert in non-commutative algebra. – Libli May 28 '17 at 19:02
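The block-determinant identity in Denis Serre's answer and the Kronecker-product special case are easy to sanity-check numerically. The following is a small sketch, not from the original thread; it assumes NumPy and uses randomly generated matrices. The four blocks are taken to be polynomials in one fixed matrix C, so they commute pairwise, and for a 2x2 block matrix the formal expansion of the block determinant is M11 M22 - M12 M21.

```python
import numpy as np

rng = np.random.default_rng(0)

# Commuting blocks: polynomials in a single matrix C commute pairwise.
C = rng.standard_normal((3, 3))
M11, M12 = np.eye(3) + C, 2 * C - C @ C
M21, M22 = C @ C @ C, np.eye(3) - 3 * C
X = np.block([[M11, M12], [M21, M22]])

# det(X) versus det of the formally expanded block determinant.
lhs = np.linalg.det(X)
rhs = np.linalg.det(M11 @ M22 - M12 @ M21)
print(np.isclose(lhs, rhs))          # expected: True

# Kronecker product: det(A (x) B) = det(A)^m * det(B)^n,
# where A is n x n and B is m x m.
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
print(np.isclose(np.linalg.det(np.kron(A, B)),
                 np.linalg.det(A) ** m * np.linalg.det(B) ** n))  # expected: True
```

With generic (non-commuting) blocks the first comparison typically fails, which is consistent with the caveat that the identity needs the blocks to commute.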
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.943333089351654, "perplexity": 181.8441400748284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572517.50/warc/CC-MAIN-20190916100041-20190916122041-00198.warc.gz"}
https://bibbase.org/network/publication/huang-andrews-guo-heathjr-berry-spatialinterferencecancellationformultiantennamobileadhocnetworks-2012
Spatial interference cancellation for multiantenna mobile ad hoc networks. Huang, K., Andrews, J., Guo, D., Heath Jr., R., & Berry, R. IEEE Transactions on Information Theory, 2012. Interference between nodes is a critical impairment in mobile ad hoc networks. This paper studies the role of multiple antennas in mitigating such interference. Specifically, a network is studied in which receivers apply zero-forcing beamforming to cancel the strongest interferers. Assuming a network with Poisson-distributed transmitters and independent Rayleigh fading channels, the transmission capacity is derived, which gives the maximum number of successful transmissions per unit area. Mathematical tools from stochastic geometry are applied to obtain the asymptotic transmission capacity scaling and characterize the impact of inaccurate channel state information (CSI). It is shown that, if each node cancels L interferers, the transmission capacity decreases as Θ(ε^(1/(L+1))) as the outage probability ε vanishes. For fixed ε, as L grows, the transmission capacity increases as Θ(L^(1−2/α)), where α is the path-loss exponent. Moreover, CSI inaccuracy is shown to have no effect on the transmission capacity scaling as ε vanishes, provided that the CSI training sequence has an appropriate length, which we derive. Numerical results suggest that canceling merely one interferer by each node may increase the transmission capacity by an order of magnitude or more, even when the CSI is imperfect. © 2012 IEEE. @article{ title = {Spatial interference cancellation for multiantenna mobile ad hoc networks}, type = {article}, year = {2012}, volume = {58}, id = {28e1a677-9efc-3ef8-a0ac-26157ae16c95}, created = {2016-09-06T19:48:09.000Z}, file_attached = {false}, profile_id = {be62f108-1255-3a2e-9f9e-f751a39b8a03}, last_modified = {2017-03-24T19:20:02.182Z}, }
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.86825031042099, "perplexity": 1289.1332864617982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300010.26/warc/CC-MAIN-20220116180715-20220116210715-00482.warc.gz"}
http://www.aanda.org/articles/aa/abs/2005/25/aa2339-04/aa2339-04.html
Issue A&A Volume 437, Number 1, July I 2005 189 - 208 Interstellar and circumstellar matter http://dx.doi.org/10.1051/0004-6361:20042339 A&A 437, 189-208 (2005) DOI: 10.1051/0004-6361:20042339 ## A 10 μm spectroscopic survey of Herbig Ae star disks: Grain growth and crystallization R. van Boekel1, 2, M. Min1, L. B. F. M. Waters1, 3, A. de Koter1, C. Dominik1, M. E. van den Ancker2 and J. Bouwman4 1  Astronomical Institute "Anton Pannekoek", University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands e-mail: [email protected] 2  European Southern Observatory, Karl-Schwarzschildstrasse 2, 85748 Garching bei München, Germany 3  Instituut voor Sterrenkunde, Katholieke Universiteit Leuven, Celestijnenlaan 200B, 3001 Heverlee, Belgium 4  Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany (Received 9 November 2004 / Accepted 4 March 2005) Abstract We present spectroscopic observations of a large sample of Herbig Ae stars in the 10 μm spectral region. We perform compositional fits of the spectra based on properties of homogeneous as well as inhomogeneous spherical particles, and derive the mineralogy and typical grain sizes of the dust responsible for the 10 μm emission. Several trends are reported that can constrain theoretical models of dust processing in these systems: i) none of the sources consists of fully pristine dust comparable to that found in the interstellar medium; ii) all sources with a high fraction of crystalline silicates are dominated by large grains; iii) the disks around more massive stars ( , ) have a higher fraction of crystalline silicates than those around lower mass stars; iv) in the subset of lower mass stars ( ) there is no correlation between stellar parameters and the derived crystallinity of the dust. The correlation between the shape and strength of the 10 micron silicate feature reported by van Boekel et al. (2003) is reconfirmed with this larger sample. The evidence presented in this paper is combined with that of other studies to present a likely scenario of dust processing in Herbig Ae systems. We conclude that the present data favour a scenario in which the crystalline silicates are produced in the innermost regions of the disk, close to the star, and transported outward to the regions where they can be detected by means of 10 micron spectroscopy. Additionally, we conclude that the final crystallinity of these disks is reached very soon after active accretion has stopped. Key words: stars: circumstellar matter -- stars: pre-main sequence -- infrared: ISM -- ISM: lines and bands -- dust, extinction
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8701177835464478, "perplexity": 3417.1013515864074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660916.37/warc/CC-MAIN-20160924173740-00231-ip-10-143-35-109.ec2.internal.warc.gz"}
https://quant.stackexchange.com/questions/57665/hull-white-extended-vasicek-model
# Hull White Extended Vasicek model I am trying to understand a formula in the landmark paper by Hull&White "Pricing Interest Rate Derivative Securities" (1990). I cannot see how rearranging (11) and applying boundary conditions results in (13) on page 6 (as below). Please could you give me a pointer and apologies if its my rudimentary calculus that is at fault. • Please at least explain your reason for downvote whoever you are. Aug 28, 2020 at 13:10 You are right, equation (11) is derived mechanically from (7) (by taking the derivative wrt to $$T$$ and then combining is with (7)), and somehow they think that (13) can be obtained from (11) without remembering (7). Maybe by smartly integrating (note for example that $$B_tB_T - BB_{tT}$$ is the numerator of the derivative wrt to $$t$$ of fraction $$B/B_T$$) and using the boundary condition (I couldn't figure it out). Of course, what we can do is solve the first-order linear equation in $$t$$ (7) $$B_t = a(t)B-1.$$ With the usual primitive functions: $$\alpha'(t) = a(t), \; \; \beta'(t) = -{\rm e}^{-\alpha(t)},$$ the general solution to equation (7) is $$B(t,T) = c(T){\rm e}^{\alpha(t)} + {\rm e}^{\alpha(t)}\beta(t),$$ with $$c(T)$$ arbitrary function of $$T$$. As $$B(T,T)=0$$, we must have: $$c (T)= -\beta(T).$$ So: $$B(t,T) = -{\rm e}^{\alpha(t)} \left(\beta(T) - \beta(t)\right).$$ We can then easily check that this solution respects (13): $$B(0,T) = -{\rm e}^{\alpha(0)} \left(\beta(T) - \beta(0)\right)$$ $$B(0,t) = -{\rm e}^{\alpha(0)} \left(\beta(t) - \beta(0)\right)$$ $$\partial B(0,t)/\partial t = -{\rm e}^{\alpha(0)}\beta'(t) = {\rm e}^{\alpha(0)} {\rm e}^{-\alpha(t)}$$ Edit: Note that (11) can be written as: $$(B_T)_t =\frac{1-B_t}{B}B_T$$ which is equivalent to $$(\ln B_T)_t = \frac{1-B_t}{B}.$$ At this point we need to remember from (7) that the right hand side is a function of $$t$$ only, $$a(t)$$, otherwise it's getting cumbersome to progress from here. The solution is $$B_T = {\rm e}^{\alpha (t) + \gamma (T)}$$ for $$\gamma (T)$$ an arbitrary function of $$T$$. Integrating wrt to $$T$$, we get: $$B(t,T) = {\rm e}^{\alpha (t)} (\Gamma (T) + \eta (t))$$ for $$\eta (t)$$ an arbitrary function of $$t$$ and $$\Gamma^\prime = {\rm e}^{\gamma}$$. Boundary condition $$B(T,T)=0$$ then forces: $$\Gamma(T) = -\eta(T).$$ So, $$B(t,T) = -{\rm e}^{\alpha(t)} \left(\eta(T) - \eta(t)\right).$$ One more time, noting that $$B_t = -{\rm e}^{\alpha(t)}a(t)\eta(T) + {\rm e}^{\alpha(t)}a(t) \eta(t) + {\rm e}^{\alpha(t)} \eta^\prime (t),$$ (7) then implies: $$\eta(t) = \beta(t).$$ • Thank you very much. I can see the boundary conditions fit from your explanation. Readers can google Hull&White's paper for more context if interested. Would be nice to know how they thought formula 13 follows 11 but at least I now feel I'm not alone in being puzzled by that step. I emailed John Hull last week but doubt he'll respond. Aug 29, 2020 at 21:59 • I added a note where I started from (13) and obtained the same solution. I did use (7) in the process, in particular that it is stating that $(1-B_t)/B_T$ is a function of $t$ only. – ir7 Aug 30, 2020 at 0:45 • I meant starting from (11), not (13). – ir7 Aug 30, 2020 at 0:52 • Many thanks. I think there's typo ... arbitrary function of t. A lot of effort much appreciated and I wish I could vote you up for it but I'm only a minion. Aug 30, 2020 at 6:29 • Fixed it. Wish you all the best. – ir7 Aug 30, 2020 at 15:35
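The ODE manipulation in the answer above is easy to verify symbolically for a concrete mean-reversion function. The sketch below is not part of the original exchange; it assumes SymPy and takes $a(t)\equiv a$ constant, so that $\alpha(t)=at$ and $\beta(t)=e^{-at}/a$ (up to additive constants). It checks the ODE $B_t=a B-1$ from (7), the boundary condition $B(T,T)=0$, and that $B$ reduces to the familiar $(1-e^{-a(T-t)})/a$.

```python
import sympy as sp

t, T, a = sp.symbols('t T a', positive=True)

# Constant mean-reversion speed: alpha'(t) = a, beta'(t) = -exp(-alpha(t)).
alpha = a * t
beta = sp.exp(-a * t) / a

# B(t,T) = -e^{alpha(t)} * (beta(T) - beta(t))
B = -sp.exp(alpha) * (beta.subs(t, T) - beta)

# The ODE B_t = a*B - 1 should hold identically.
print(sp.simplify(sp.diff(B, t) - (a * B - 1)))          # 0

# Boundary condition B(T,T) = 0.
print(sp.simplify(B.subs(t, T)))                          # 0

# Agreement with the familiar Hull-White/Vasicek expression.
print(sp.simplify(B - (1 - sp.exp(-a * (T - t))) / a))    # 0
```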
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 33, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9317360520362854, "perplexity": 502.4250051204489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570827.41/warc/CC-MAIN-20220808122331-20220808152331-00010.warc.gz"}
http://www.wikiwaves.org/Ship_Kelvin_Wake
# Ship Kelvin Wake Wave and Wave Body Interactions Current Chapter Ship Kelvin Wake Next Chapter Linear Wave-Body Interaction Previous Chapter Wavemaker Theory # Introduction Wake created behind a ship A ship moving over the surface of undisturbed water sets up waves emanating from the bow and stern of the ship. The waves created by the ship consist of divergent and transverse waves. The divergent wave are observed as the wake of a ship with a series of diagonal or oblique crests moving outwardly from the point of disturbance. These wave were first studied by Lord Kelvin ## Translating Coordinate System We have a the standard fixed coordinate system $x, y, z$ and a moving coordinate systems which is moving in the $x$ direction with speed $U$. We denote the moving coordinate systems in the $x$ direction by $\bar{x} = x + U t$ Let $\Phi(\mathbf{x},t) \,$ be the velocity potential describing the potential flow generated by the ship relative to the earth frame. The same potential expressed relative to the ship frame is $\bar{\Phi}(\mathbf{\bar{x}},t) \,$. The relation between the two potentials is given by the identity $\Phi(x,y,z,t) = \bar{\Phi}(\bar{x},y,z,t) = \bar{\Phi}(x-Ut,y,z,t) \,$ where the relation between the coordinates of the two coordinate systems has been introduced. Note the time dependence occurs in two places in $\bar{\Phi} \,$ and in one place in $\Phi \,$. The governing equations are always derived relative to the earth coordinate system and time derivatives are initially taken on $\Phi \,$. Therefore $\frac{\mathrm{d}\Phi}{\mathrm{d}t} = \frac{\mathrm{d}}{\mathrm{d}t} \bar{\phi} ( x-Ut,y,z,t) = \frac{\partial\phi}{\partial t} - U \frac{\partial\phi}{\partial x}$ All time derivatives of the earth fixed velocity potential $\Phi \,$ which appear in the free surface condition and the Bernoulli equation can be expressed in terms of derivatives of $\bar{\Phi} \,$ using the Galilean transformation derived above. If the flow is steady relative to the ship fixed coordinate system $\frac{\partial\bar{\Phi}}{\partial t} = 0 \,$ but $\frac{\mathrm{d}\Phi}{\mathrm{d}t} = -U \frac{\partial\bar{\Phi}}{\partial x} \,$ or, the ship wake is stationary relative to the ship but not relative to an observed on the beach. ## Kelvin wake Diagram of the Kelvin Wake Local view of Kelvin wake consists approximately of a plane progressive wave group propagating in direction $\theta\,\!$. As noted above surface wave systems of general form always consist of combinations of plane progressive waves of different frequencies and directions. The same model will apply to the ship kelvin wake. Relative to the earth frame, the local plane wave in Infinite Depth takes the form $\Phi = \frac{\mathrm{i}gA}{\omega} e^{kz-\mathrm{i}k(x\cos\theta+y\sin\theta)+ \mathrm{i}\omega t}$ Relative to the ship frame $bar{\Phi} = \frac{igA}{\omega} e^{kz-\mathrm{i}k(x\cos\theta+y\sin\theta)-\mathrm{i}(kU\cos\theta-\omega)t}$ But relative to the ship frame waves are stationary, so we must have: $kU\cos\theta = \omega \,$ or $\frac{\omega}{k} = C_p = U \cos \theta \,$ This implies the following • The phase velocity of the waves in the kelvin wake propagating in direction $\theta\,$ must be equal to $U\cos\theta\,$, otherwise they cannot be stationary relative to the ship. 
• Relative to the earth system the frequency of a local system propagating in direction $\theta \,$ is given by the relation $\omega = kU \cos \theta \,$ Relative to the earth system the Infinite Depth Dispersion Relation for a Free Surface states $\omega^2 = g k \,$ so that $\lambda(\theta) = \frac{2\pi U^2 \cos^2 \theta}{g} \,$ This is the wavelength of waves in a Kelvin wake propagating in direction $\theta \,$ which are stationary relative to the ship. ## Application of the Group velocity An observer sitting on an earth fixed frame observes a local wave system propagating in direction $\theta\,$ travelling at its group velocity $\frac{d\omega}{dK}\,$ by virtue of the Rayleigh device which states that we need to focus on the speed of the energy density ($\sim \,$ wave amplitude) rather than the speed of wave crests. So, relative to the earth fixed inclined coordinate system $(X', Y') \,$: $\frac{X'}{t} = V_g = \frac{d\omega}{dK} \,$ Or $X' = \frac{d\omega}{dK} t \ \Longrightarrow \ \frac{d}{dK} (K X' - \omega t) = 0$ $X' = X \cos \theta + Y \sin \theta = x \cos\theta + y\sin\theta + Ut \cos\theta \,$ So: $KX' - \omega t = K ( x\cos\theta + y\sin\theta ) + (KU\cos\theta - \omega) t \,$ However $(KU\cos\theta - \omega) t =0$ so that the Rayleigh condition for the velocity of the group takes the form: $\frac{d}{dK} [ K(\theta) (x\cos\theta + y\sin\theta) ] = 0 \,$ By virtue of the dispersion relation derived above: $K(\theta) = \frac{g}{U^2 \cos^2 \theta} \,$ It follows from the chain rule of differentiation that Rayleigh's condition is: $\frac{d}{d\theta} \left[ \frac{g}{U^2\cos^2\theta} (x\cos\theta + y\sin\theta) ]\right] = 0 \,$ At the position of the Kelvin waves which are locally observed by an observer at the beach. • So the "visible" waves in the wake of a ship are wave groups which must travel at the local group velocity. These conditions translate into the above equation which will be solved and discussed next. More discussion and a more mathematical derivation based on the principle of stationary phase can be found in Newman 1977. ## Solution of the equation for angle Graphical image of the equations The solution of the above equation will produce a relation between $\frac{y}{x} \,$ and $\theta\,$. So local waves in a Kelvin wake can only propagate in a certain direction $\theta\,$, given $\frac{y}{x} \,$. $\frac{y}{x} = - \frac{\cos\theta\sin\theta}{1+sin^2\theta} = \frac{y}{x}(\theta)$ which implies that • $\frac{y}{x}(\theta) \,$ is anti-symmetric about $\theta = 0 \,$ and each part corresponds to the Kelvin wake in the port and starboard sides of the vessel. The physics on either side is identical due to symmetry. • $\theta = 0 \,$: waves propagating in the same direction as the ship. These waves can only exist at $Y = 0\,$ as seen above. • $\theta = \frac{\pi}{2} \,$: waves propagating at a $90^\circ\,$ angle relative to the ship direction of forward translation. • $\theta = 35^\circ 16' \,$: (or 35,26°) waves propagating at an angle $\theta = 35^\circ 16' \,$ relative to the ship axis. These are waves seen at the caustic of the Kelvin wake. Let the solution of $\frac{y}{x}(\theta) \,$ be of the form, when inverted: $\mbox{Region I}: \qquad \theta = f_1 (\frac{y}{x}) \,$ $\mbox{Region II}: \qquad \theta = f_2 (\frac{y}{x}) \,$ Note that observable waves cannot exist for values of $\frac{y}{x}\,$ that exceed the value shown in the figure or $\left. \frac{y}{x} \right|_{Max} = 2^{-3/2} \,$. 
This translates into a value for the corresponding angle equal to $19^\circ 28' \,$ (or 19,47°) which is the angle of the caustic for any speed $U\,$. "transverse" and "divergent" wave systems in the Kelvin wake The crests of the wave system trailing a ship, the Kelvin wake, are curves of constant phase of: $\frac{x\cos\theta+y\sin\theta}{\cos^2\theta} = \mathbf{C} \,$ In $\mbox{Region I} \,$: $\mathbf{C} = \frac{x\cos f_1(\frac{y}{x}) + y\sin f_1 (\frac{y}{x})}{\cos^2 f_1 (\frac{y}{x})} \equiv G_1 (\frac{y}{x})$ In $\mbox{Region II} \,$: $\mathbf{C} = \frac{x\cos f_2(\frac{y}{x}) + y\sin f_2 (\frac{y}{x})}{\cos^2 f_2 (\frac{y}{x})} \equiv G_2 (\frac{y}{x})$ Plotting these curves we obtain a visual graph of the "transverse" and "divergent" wave systems in the Kelvin wake.
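As a quick numerical illustration (not part of the original article; plain NumPy, and no ship parameters are needed since the angles are universal), the relation $\frac{y}{x}(\theta) = -\frac{\cos\theta\sin\theta}{1+\sin^2\theta}$ can be evaluated on a grid to recover both the propagation angle at the caustic ($\approx 35^\circ 16'$) and the half-angle of the Kelvin wake ($\approx 19^\circ 28'$).

```python
import numpy as np

theta = np.linspace(1e-6, np.pi / 2 - 1e-6, 200001)
ratio = np.cos(theta) * np.sin(theta) / (1.0 + np.sin(theta) ** 2)  # |y/x|(theta)

i = np.argmax(ratio)
print("max |y/x|       :", ratio[i], " (analytic value 2**-1.5 =", 2 ** -1.5, ")")
print("theta at caustic:", np.degrees(theta[i]), "deg   (about 35 deg 16')")
print("wake half-angle :", np.degrees(np.arctan(ratio[i])), "deg   (about 19 deg 28')")
```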
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9815636873245239, "perplexity": 496.97876622153797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646312602.99/warc/CC-MAIN-20150827033152-00324-ip-10-171-96-226.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/326964/is-there-a-way-to-justify-perturbation-theory-in-qft
# Is there a way to justify perturbation theory in QFT? I can accept that some problems do require a perturbation theory approach to be solved. The development of perturbation theory is perfectly rigorous. So at first, even though many people say that no one knows how to tackle QFT in an exact fashion needing to resort to perturbation theory, I didn't see this as a problem. It turns out that recently I've watched a QFT lecture on youtube where the lecturer told that in QFT the interaction Hamiltonian has infinite norm, and hence the Dyson series diverges. Now wait a minute. This invalidates perturbation theory completely. Even in free QFT some infinities appear as divergent deltas, but with some sloopy plausibility arguments they are thrown out. This is already non rigorous and a problem, however, it can even be accepted. Now trying to use perturbation theory in a scenario where it is really proved it doesn't work and certainly diverges is a whole different story. From that point onwards one is not doing mathematics anymore, just manipulating symbols according to a different set of rules than those of mathematics, because if one goes on with the series diverging and the method failing, nothing that point forward has any concrete meaning. One might say "well, it matches experiment so we better not care about that", but from a theoretical point of view I think this really demans attention. I mean it is already a miracle that thing matches experiment, but nonetheless the theory is nonsense! How can a theorist be satisfied with such a thing? My point here is: is there any known way to deal with that, even if not mainstream? How can use a method in a situation it is proven to diverge? I mean, QFT has a serious problem, there must be some work on this, after all the theory is taken seriously nonetheless. How can this theory be taken serious with an immense issue such as that? EDIT: I believe the post needed some more precise statements. First thing: I might have had the wrong understanding of the discussion about the convergence of the series and the implications. Second, I'm taking a QFT course and nothing has been discussed about justification for the perturbative method insofar. Third, what I know from perturbation theory from non-relativistic QM, is the following: we have a hamiltonian $H_0$ which we know the eigenstates and eigenvalues, and the full hamiltonian is $$H =H_0+V,$$ we then write $V = \lambda W$ where this $W$ is small, and $\lambda$ is a parameter characterizing the problem. In that setting, we can write down a solution in terms of a power series in $\lambda$. The convergence of the series is tied to the fact that $W$ is small. Now, so long as we have a convergent power series in $\lambda$, we can interpret cutting the series on a certain power $n$ of $\lambda$ as one approximate solution for when $\lambda$ is such that $\lambda^{k}$, for $k> n$ can be neglected. This is true, because since $\lambda^k$ can be neglected for $k > n$, the following terms of the series are so small that we can neglect them and approximate the solution. So the conclusion: the exact solution would be obtained by the full series, but since we don't know how to compute it, we are able to get approximate solutions depending on the magnitude of $\lambda$, considering that $W$ is small and the following terms of the series will be small. What I've heard is: this $W$ in QFT usually has infinite norm $\|W\|_{\infty}=\infty$. It certainly isn't small, and these arguments as above would fail. 
The series wouldn't converge and furthermore, we can't just neglected the further terms saying they are negligible. This discussion isn't present in most QFT textbooks I've seem, at least in the chapters I've read. Most of them just present the series without discussing this issue, which as I said, I found out watching some QFT lectures. • Well, a very strong agreement with experiment helps to take this serious. Also it is believed that at the Energy-scale where pertubation theiry fails also the current QFT theory will fail anyway (and some unified model and later GR has to be taken into account). – lalala Apr 16 '17 at 15:48 • Two comments: (1) asymptotic series, which diverge, can be dealt with perfectly rigorously in mathematics (although we don't know how to handle the ones in QFT). (2) physicists have never cared about being perfectly rigorous. – Peter Shor Apr 16 '17 at 17:25 • Possible duplicates: physics.stackexchange.com/q/27665/2451 , physics.stackexchange.com/q/6530/2451 and links therein. – Qmechanic Apr 16 '17 at 17:58 • Well, my point of view is that if the theory gives such good results, there must be some way of justifying it, even if we don't know it and don't particularly care. Also see this nice paper. – Javier Apr 16 '17 at 18:27 • @user1620696, humans are divided into two categories. Those who say that your question is important and those who shove one century's value of open questions in fundamental physics under the carpet by claiming that "this does not matter" or "this cannot be given an answer" or "this is just a tool". John Bell's ghost will haunt all in the second category :p – Helen Apr 18 '17 at 8:54 There exist a large body of literature on the topic of divergent asymptotic series. This article gives an overview of the theory from a practical point of view. This article focuses on methods that can be applied to asymptotic series of which all the terms are known, or at least the coefficients of the late terms are known to some leading approximation. In QFT one typically has just a few terms of a perturbation expansion, but even in that case one can apply certain mathematical methods to resum the series. So, a typical problem is then that given a perturbative expansion of some function $f(g)$ in powers of some coupling $g$ we want to know the behavior of $f(g)$ for large $g$, but for large $g$ the series starts to diverge really fast, and we only have a few terms. This is not as hopeless as it looks, let's consider the following example. The logarithm of the factorial function $\log(n!)$ has the following asymptotic expansion for large $n$: $$\log(n!) = n\log(n) - n + \frac{1}{2}\log(2\pi n) + \frac{1}{12 n} - \frac{1}{360 n^3} + \mathcal{O(n^{-3})}$$ Suppose that these are all the terms that we know about. We want to extract the behavior of the factorial function near $n = 0$ using only the given terms. We obviously cannot set $n = 0$, the individual terms will already diverge. But what we can do is to extrapolate to $n = 0$ using only larger values for $n$ where the series does make sense. E.g. there is no problem with inserting even not so large values like $n = 1$ in the series, moreover you can put $n = 1+\epsilon$, expand in powers of $\epsilon$ and then set $\epsilon = -1$ to jump to $n = 0$. The validity of such methods then depend on assuming that the function you are dealing with is analytic in a neighborhood of $n = 0$, and the fact that you have to use a divergent series around infinity to get there doesn't make the approximations invalid. 
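To make this concrete, here is a small sketch (not from the original answer; SciPy is assumed only for the exact comparison) that evaluates the five displayed terms of the expansion of $\log(n!)$ and compares them with the exact value $\ln\Gamma(n+1)$: even at $n=1$, where the late terms of the divergent series are useless, the truncation is accurate to better than $10^{-3}$, which is exactly the kind of information the extrapolation argument exploits.

```python
import numpy as np
from scipy.special import gammaln   # exact log(n!) = gammaln(n + 1)

def log_factorial_asymptotic(n):
    """Truncated (divergent) asymptotic series quoted above."""
    return (n * np.log(n) - n + 0.5 * np.log(2 * np.pi * n)
            + 1.0 / (12 * n) - 1.0 / (360 * n ** 3))

for n in [10.0, 2.0, 1.0, 0.5]:
    approx = log_factorial_asymptotic(n)
    exact = gammaln(n + 1)
    print(f"n = {n:4}: series = {approx:+.8f}  exact = {exact:+.8f}  "
          f"error = {abs(approx - exact):.2e}")
```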
A more sophisticated way to get to the behavior near $n = 0$ is to apply a conformal transform to the expansion parameter. If we put: $$n = \frac{1-z}{p z}$$ and expand in powers of $z$, we get: $$\begin{split} f(z) =& \frac{1}{2}\log\left(\frac{2\pi}{p}\right)+\frac{\log(p)}{p} +\frac{1-\log(p)}{p z}+\left(\frac{1}{p}-\frac{1}{2}\right)\log(z) -\frac{\log(z)}{p z} +\\&\left(\frac{p}{12}+\frac{1}{2 p}-\frac{1}{2}\right)z + \left(\frac{p}{12}+\frac{1}{6 p}-\frac{1}{4}\right)z^2 + \left(-\frac{p^3}{360}+\frac{p}{12}+\frac{1}{12 p}-\frac{1}{6}\right) z^3+\mathcal{O}(z^4) \end{split}$$ There is now no problem with inserting $z=1$ in the series which corresponds to $n = 0$. The result will depend on the auxiliary parameter $p$, the optimal choice of this parameter is to choose it in such a way that the series has the best convergence. A good choice is obtained by setting the coefficient of the last term equal to zero, and then evaluating the coefficient of the coefficient of the previous term to see which solution makes that coefficient the smallest. This yields good results because typically the error is of the order of the last omitted term. Another interpretation is that the answer doesn't depend on $p$, but now you do have a $p$ dependence due to only having a finite number of terms. If you then make the last terms as small as possible, you're going to get a result that for other values of $p$ would have had to come from higher order terms. It's also worthwhile to consider the earlier terms to see if a particular value yields a more robust series. In this case you find that $p = 0.863778859012849315368798819337$ looks to be the best value to use, it would be the the second choice when considering the value of the coefficient of $z^2$, but the other solution that makes this coefficient the smallest yields a series with a coefficient that is slightly increasing before becoming zero for the $z^3$ term. So, if we then put $p = 0.863778859012849315368798819337$, we can put $z = 1$ to estimate $0!$, but we can do more. We can expand our function of $z$ around $z = 1$, by putting $z = 1-u$ and expanding in powers of $u$. If we then also expand $n$ in powers of $u$, we can invert the series to express $u$ in powers of $n$, so we then get to a series of $\log(n!)$ in powers of $n$. The result is: $$\log(n!)= 0.0002197 - 0.577755 n + 0.823213 n^2 - 0.401347 n^3 +\cdots$$ The exact expansion with the coefficients to 8 significant figures is: $$\log(n!) = -0.57721566 n + 0.82246703 n^2-0.40068563 n^3+\cdots$$ Clearly, a great deal of information can be extracted from divergent series, even if you just have a few terms. Mathematical rigor should be used to your advantage to extract as much information as possible, you should not be intimidated by mathematical rigor pointing to roadblocks. I think you completely misinterpret the notion of divergence of a perturbation series. Suppose the result is known up to the second order, so one has something like $f\left(\epsilon\right)=1+a\epsilon+b\epsilon^{2}+\dots$ for the solution when $\epsilon\ll 1$. That's fine, and once you calculated the constants $a$ and $b$, you usually see that $b\ll a$, so you believe you got a nice perturbative expansion. What you can prove with the perturbation expansion is that for some points, you will get a next-order term larger than the previous one, say the third one $c\gg b$. The full series will diverge, yet a solution with only $a$ and $b$ are good approximation of the solution. 
This lacks rigor, but you have to know that there is no converging series in interesting problems of physics... all problems of interest are plagued with this problem. It was discovered by Poincaré at the beginning of the 20-th century when he tried to resolve the 3-body problem (say the motion of the moon cycling around the earth cycling around the sun). Usually mathematicians would have written the solution up to the third order $$f\left(\epsilon\right)=1+a\epsilon+b\epsilon^{2}+\mathcal{O}\left(\epsilon^{3}\right)$$ because they know that the next order is a small quantity. They have tools to evaluate all higher orders (usually an integral form of the complete solution when the problem is a differential equation for example) and to compare it with the first terms. Physicists write instead $$f\left(\epsilon\right)=1+a\epsilon+b\epsilon^{2}+\cdots$$ because they usually have no idea when the perturbation theory will break up. Mathematicians proved that $\mathcal{O}\left(\epsilon^{3}\right)$ (to the third order for our example here) is in fact large, and may even diverge for some physical applications. And so what ? Physicists believe that one simply has to forget these higher orders terms, since higher orders terms may well be controlled by more refined theory, taking into account extra interactions which were neglected in the diverging model. In any case, everyone is somehow correct, since no-one knows how the complete model of the entire universe looks like ... So at the heart of perturbation theory lies the very profound method of physics: one takes a model, one calculates some outcomes, one confronts with experiments... again and again. • where does renormalization enter in this argumetn? I thought it was invented in order to avoid the infinities. Is it a different story? see physics.umd.edu/courses/Phys851/Luty/notes/renorm.pdf – anna v Apr 18 '17 at 8:48 • IMHO, this answer completely neglects the nature of asymptotic series, which are the core of the question. Also, the claims about "physicists being unable to do things rigorously" is completely unjustified. – AccidentalFourierTransform Apr 18 '17 at 8:50 • @annav Indeed, I completely forget renormalisation, thanks for pointing this out. It's a shame indeed. Feel free to complete my answer. I had the feeling my answer gently introduces the concept of perturbation series. It seems the OP never opened a book on QFT so I just gave handwaving arguments. – FraSchelle Apr 18 '17 at 9:11 • @AccidentalFourierTransform Indeed, I didn't wanted to enter too much details, feel free to post your own answer to complete mine. I had the feeling my answer gently introduces the concept of perturbation series. It seems the OP never opened a book on QFT so I just gave handwaving arguments. – FraSchelle Apr 18 '17 at 9:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8983389139175415, "perplexity": 223.63668451865146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488257796.77/warc/CC-MAIN-20210620205203-20210620235203-00055.warc.gz"}
https://asmedigitalcollection.asme.org/HT/proceedings-abstract/HT2013/55478/V001T03A033/243355
The displacement of a three-dimensional immiscible blob subject to oscillatory acoustic excitation in a channel is studied with the Lattice Boltzmann method. The effects of amplitude of the force, viscosity and frequency on blob dynamics are investigated. The trend for variation of mean displacement of blob and frequency response is in agreement to that of the previous two-dimensional studies reported in literature. The response of the blob with pinned contact line shows underdamped behavior. It is also found that increasing the amplitude of the force increases the mean displacement and frequency response. This content is only available via PDF.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8782886862754822, "perplexity": 833.4127396008026}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154320.56/warc/CC-MAIN-20210802110046-20210802140046-00058.warc.gz"}
https://dantopology.wordpress.com/2009/11/08/dowkers-theorem/
# Dowker’s Theorem Let $\mathbb{I}$ be the closed unit interval $[0,1]$. In 1951, Dowker proved that for normal spaces $X$, $X \times \mathbb{I}$ is normal if and only if $X$ is countably paracompact. A space $X$ is countably paracompact if every countable open cover of $X$ has a locally finite open refinement. Dowker’s theorem is a fundamental result on products of normal spaces. With this theorem, a question was raised about the existence of a normal but not countably paracompact space (such a space became known as a Dowker space). For a detailed discussion on Dowker spaces, see the survey paper [Rudin]. The focus here is on presenting a proof for the Dowker’s theorem, laying the groundwork for future discussion. An open cover $\mathcal{V}=\lbrace{V_\alpha:\alpha \in S}\rbrace$ of a space $X$ is shrinkable if there exists an open cover$\mathcal{W}=\lbrace{W_\alpha:\alpha \in S}\rbrace$ such that $\overline{W_\alpha} \subset V_\alpha$ for each $\alpha \in S$. The open cover $\mathcal{W}$ is called a shrinking of $\mathcal{V}$. We use the following lemma in proving Dowker’s theorem. Go here to see a proof of this lemma. Lemma A space $X$ is normal if and only if every point-finite open cover of $X$ is shrinkable. Dowker’s Theorem For a normal space $X$, the following conditions are equivalent: 1. $X$ is a countably paracompact space. 2. If $\lbrace{U_n:n<\omega}\rbrace$ is an open cover of $X$, then there is a locally finite open refinement $\lbrace{V_n:n<\omega}\rbrace$ such that $\overline{V_n} \subset U_n$ for each $n$. 3. $X \times Y$ is normal for any compact metrizable space $Y$. 4. $X \times \mathbb{I}$ is normal. 5. For each sequence of closed sets such that $A_0 \supset A_1 \supset ...$ and $\cap_n A_n=\phi$, there exists open sets $B_n \supset A_n$ such that $\cap_n B_n=\phi$. Proof $1 \Longrightarrow 2$ Suppose $X$ is countably paracompact. Let $\mathcal{U}=\lbrace{U_n:n<\omega}\rbrace$ be an open cover of $X$. Then $\mathcal{U}$ has a locally open refinement $\mathcal{V}$. For each $n<\omega$, let $W_n=\bigcup \lbrace{V \in \mathcal{V}:V \subset U_n}\rbrace$. Note that $\lbrace{W_n:n<\omega}\rbrace$ is still locally finite (thus point-finite). By the lemma, it has a shrinking $\lbrace{V_n:n<\omega}\rbrace$. $2 \Longrightarrow 3$ Let $Y$ be a compact metrizable space. Let $\lbrace{E_0,E_1,E_2,...}\rbrace$ be a base of $Y$ such that it is closed under finite unions. Let $H,K$ be two disjoint closed subsets of $X \times Y$. The goal here is to find an open set $\mathcal{V}$ such that $H \subset \mathcal{V}$ and $\overline{\mathcal{V}}$ misses $K$. For each $x \in X$, consider the following: $S_x=\lbrace{y \in Y: (x,y) \in H}\rbrace$ $T_x=\lbrace{y \in Y: (x,y) \in K}\rbrace$. For each $n \in \omega$, let $O_n$ be defined by: $O_n=\lbrace{x \in X:S_x \subset E_n}$ and $T_x \subset Y-\overline{E_n}\rbrace$. Claim 1. Each $x \in X$ is an element of some $O_n$. Both $S_x$ and $T_x$ are compact. Thus we can find some $E_n$ such that $S_x \subset E_n$ and $T_x \subset Y-\overline{E_n}$. Claim 2. Each $O_n$ is open in $X$. Fix $z \in O_n$. For each $y \in Y-E_n$, $(z,y) \notin H$. Choose open set $A_y \times B_y$ such that $(z,y) \in A_y \times B_y$ and $A_y \times B_y$ misses $H$. The set of all $B_y$ is a cover of $Y-E_n$. We can find a finite subcover covering $Y-E_n$, say $B_{y(0)},B_{y(1)},...,B_{y(k)}$. Let $A=\bigcap_i A_{y(i)}$. For each $a \in \overline{E_n}$, $(z,a) \notin K$. Choose open set $C_a \times D_a$ such that $(z,a) \in C_a \times D_a$ and $C_a \times D_a$ misses $K$. 
The set of all $D_a$ is a cover of $\overline{E_n}$. We can find a finite subcover covering $\overline{E_n}$, say $D_{a(0)},D_{a(1)},...,D_{a(j)}$. Let $C=\bigcap_i C_{a(i)}$. Now, let $O=A \cap C$. Clearly $z \in O$. To show that $O \subset O_n$, pick $x \in O$. We want to show $S_x \subset E_n$ and $T_x \subset Y-\overline{E_n}$. Suppose we have $y \in S_x$ and $y \in Y-E_n$. Then $y \in B_{y(i)}$ for some $i$. As a result, $(x,y) \in A_{y(i)} \times B_{y(i)}$. This would mean that $(x,y) \notin H$. But this contradicts with $y \in S_x$. So we have $S_x \subset E_n$. On the other hand, suppose we have $y \in T_x$ and $y \in \overline{E_n}$. Then $y \in D_{a(i)}$ for some $i$. As a result, $(x,y) \in C_{a(i)} \times D_{a(i)}$. This means that $(x,y) \notin K$. This contradicts with $y \in T_x$. So we have $T_x \subset Y-\overline{E_n}$. It follows that $O \subset O_n$ and $O_n$ is open in $X$. Claim 3. We can find an open set $\mathcal{V}$ such that $H \subset \mathcal{V}$ and $\overline{\mathcal{V}}$ misses $K$. Let $\mathcal{O}=\bigcup_n O_n \times \overline{E_n}$. Note that $H \subset \mathcal{O}$ and $\mathcal{O}$ misses $K$. The open set $\mathcal{V}$ being constructed will satisfiy $H \subset \mathcal{V} \subset \overline{\mathcal{V}} \subset \mathcal{O}$. The open cover $\lbrace{O_n}\rbrace$ has a locally finite open refinement $\lbrace{V_n}\rbrace$ such that $\overline{V_n} \subset O_n$ for each $n$. Now define $\mathcal{V}=\bigcup_n V_n \times E_n$. Note that $H \subset \mathcal{V}$. We also have $\overline{\mathcal{V}}=\overline{\bigcup_n V_n \times E_n}=\bigcup_n \overline{V_n \times E_n}$. We have a closure preserving situation because the open sets $V_n$ are from a locally finite collection. Continue the derivation and we have: $=\bigcup_n \overline{V_n} \times \overline{E_n} \subset \bigcup_n O_n \times \overline{E_n}=\mathcal{O}$. $3 \Longrightarrow 4$ is obvious. $4 \Longrightarrow 5$ Let $A_0 \supset A_1 \supset ...$ be closed sets with empty intersection. We show that $A_n$ can be expanded by open sets $B_n$ such that $A_n \subset B_n$ and $\bigcap_n B_n=\phi$. Choose $p \in \mathbb{I}$ and a sequence of distinct points $p_n \in \mathbb{I}$ converging to $p$. Let $H=\cup \lbrace{A_n \times \lbrace{p_n}\rbrace:n \in \omega}\rbrace$ and $K=X \times \lbrace{p}\rbrace$. These are disjoint closed sets. Since $X \times \mathbb{I}$ is normal, we can find open set $V \subset X \times \mathbb{I}$ such that $H \subset V$ and $\overline{V}$ misses $K$. Let $B_n=\lbrace{x \in X:(x,p_n) \in V}\rbrace$. Note that each $B_n$ is open in $X$. Also, $A_n \subset B_n$. We want to show that $\bigcap_n B_n=\phi$. Let $x \in X$. The point $(x,p) \in K$ and $(x,p) \notin \overline{V}$. There exist open $E \subset X$ and open $F \subset Y$ such that $(x,p) \in E \times F$ and $(E \times F) \cap \overline{V}=\phi$. Then $p_n \in F$ for some $n$. Note that $(x,p_n) \notin V$. Thus $x \notin B_n$. It follows that $\bigcap_n B_n=\phi$. $5 \Longrightarrow 1$ Let $\lbrace{T_n:n \in \omega}\rbrace$ be an open cover of $X$. Let $E_n=\bigcup_{i \leq n} T_i$. Let $A_n=X-E_n$. Each $A_n$ is closed and $\bigcap_n A_n=\phi$. So there exist open sets $B_n \supset A_n$ such that $\bigcap_n B_n=\phi$. Since $X$ is normal, there exists open $W_n$ such that $A_n \subset W_n \subset \overline{W_n} \subset B_n$. It follows that $\bigcap_n \overline{W_n}=\phi$. Let $O_n=X-\bigcap_{i \leq n} \overline{W_n}$. Claim. $\lbrace{O_n:n \in \omega}\rbrace$ is an open cover of $X$. 
Furthermore, $\overline{O_n} \subset \bigcup_{i \leq n}T_i$. Let $x \in X$. Since $\bigcap_n B_n=\phi$, $x \notin B_n$ for some $n$. Then $x \notin \overline{W_n}$ and $x \in O_n$. To show the second half of the claim, let $y \notin \bigcup_{i \leq n}T_i$. Then $y \notin E_i$ for each $i \leq n$. This means $y \in A_i$ and $y \in W_i$ for each $i \leq n$. Then $\bigcap_{i \leq n}W_i$ is an open set containing $y$ that contains no point of $O_n$. Thus we have $\overline{O_n} \subset \bigcup_{i \leq n}T_i$. Let $S_n=T_n-\bigcup_{i < n} \overline{O_i}$. We show that $\lbrace{S_n:n \in \omega}\rbrace$ is a locally finite open refinement of $\lbrace{T_n:n \in \omega}\rbrace$. Clearly $S_n \subset T_n$. To show that the open sets $S_n$ form a cover, let $x \in X$. Choose the least $n$ such that $x \in T_n$. Then $x \notin E_i$ for each $i < n$. This means $x \in A_i$ and $x \in \overline{W_i}$ for each $i < n$, so that $x \notin O_i$ for each $i < n$. In fact, since $\overline{O_i} \subset E_i$ by the claim and $x \notin E_i$, we even have $x \notin \overline{O_i}$ for each $i < n$. This implies $x \in S_n$. To see that the open sets $S_n$ form a locally finite collection, note that each $x \in X$ belongs to some $O_n$. The open set $O_n$ misses $S_m$ for all $m>n$. Thus $O_n$ can only meet $S_i$ for $i \leq n$. We have just proved that $X$ is countably paracompact. Note. In $4 \Longrightarrow 5$, all we need is that the factor $Y$ has a non-trivial convergent sequence. That is, if $X \times Y$ is normal and $Y$ has a non-trivial convergent sequence, then $X$ satisfies condition $5$. Reference [Dowker] Dowker, C. H. [1951], On Countably Paracompact Spaces, Canad. J. Math. 3, 219-224. [Rudin] Rudin, M. E., [1984], Dowker Spaces, Handbook of Set-Theoretic Topology (K. Kunen and J. E. Vaughan, eds), Elsevier Science Publishers B. V., Amsterdam, 761-780. ____________________________________________________________________ $\copyright \ 2009-2016 \text{ by Dan Ma}$ Revised November 27, 2016
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 233, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987980723381042, "perplexity": 57.39491784509936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00264-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.emathzone.com/tutorials/calculus/integration-of-square-root-of-a2x2.html
# Integration of the Square Root of a^2+x^2

In this tutorial we shall derive the integral of the square root of a^2+x^2, and solve this integral with the help of the integration by parts method.

The integral of $\sqrt{a^2+x^2}$ is of the form
$$I = \int \sqrt{a^2+x^2}\,dx = \frac{x\sqrt{a^2+x^2}}{2} + \frac{a^2}{2}\sinh^{-1}\left(\frac{x}{a}\right) + c$$
OR
$$I = \int \sqrt{a^2+x^2}\,dx = \frac{x\sqrt{a^2+x^2}}{2} + \frac{a^2}{2}\ln\left(x + \sqrt{a^2+x^2}\right) + c.$$

This integral can be written as
$$I = \int \sqrt{a^2+x^2}\cdot 1\,dx. \qquad (i)$$
Here the first function is $\sqrt{a^2+x^2}$ and the second function is $1$.

Using the formula for integration by parts, we have
$$\int (\text{first})(\text{second})\,dx = (\text{first})\int(\text{second})\,dx - \int\left[\frac{d}{dx}(\text{first})\int(\text{second})\,dx\right]dx.$$

Using the formula above, equation (i) becomes
$$I = x\sqrt{a^2+x^2} - \int \frac{x^2}{\sqrt{a^2+x^2}}\,dx = x\sqrt{a^2+x^2} - \int \frac{(a^2+x^2)-a^2}{\sqrt{a^2+x^2}}\,dx = x\sqrt{a^2+x^2} - \int \sqrt{a^2+x^2}\,dx + a^2\int \frac{dx}{\sqrt{a^2+x^2}}.$$

Using the given integral $I = \int \sqrt{a^2+x^2}\,dx$, we get
$$2I = x\sqrt{a^2+x^2} + a^2\sinh^{-1}\left(\frac{x}{a}\right), \qquad\text{so}\qquad I = \frac{x\sqrt{a^2+x^2}}{2} + \frac{a^2}{2}\sinh^{-1}\left(\frac{x}{a}\right) + c.$$

But using the relation $\sinh^{-1}x = \ln\left(x + \sqrt{x^2+1}\right)$, and since $\sinh^{-1}\left(\frac{x}{a}\right) = \ln\left(\frac{x+\sqrt{x^2+a^2}}{a}\right)$, which differs from $\ln\left(x+\sqrt{x^2+a^2}\right)$ only by a constant that can be absorbed into $c$, we have
$$\int \sqrt{a^2+x^2}\,dx = \frac{x\sqrt{a^2+x^2}}{2} + \frac{a^2}{2}\ln\left(x+\sqrt{a^2+x^2}\right) + c.$$
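As a quick check of the result (not part of the original tutorial; SymPy assumed), differentiating the closed form recovers the integrand, and SymPy's own integrator agrees up to an additive constant:

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

F = x * sp.sqrt(a**2 + x**2) / 2 + a**2 / 2 * sp.log(x + sp.sqrt(a**2 + x**2))

# d/dx F should simplify back to sqrt(a^2 + x^2).
print(sp.simplify(sp.diff(F, x) - sp.sqrt(a**2 + x**2)))          # 0

# SymPy's antiderivative differs from F only by a constant (no x dependence).
print(sp.simplify(sp.integrate(sp.sqrt(a**2 + x**2), x) - F))
```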
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994938373565674, "perplexity": 257.77837516489285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648431.63/warc/CC-MAIN-20180323180932-20180323200932-00271.warc.gz"}
https://www.physicsforums.com/threads/mechanics-3-elastic-springs-and-strings-problems.125295/
# Mechanics 3 Elastic Springs and Strings Problems 1. Jul 5, 2006 ### garyljc Hello everyone, I'm new to this forum. I'm 16 and currently doing self-study in Mechanics 3. I've encountered a problem in the 2nd chapter of Mechanics 3, which is Elastic Springs and Strings. Here goes: A ball of mass 0.5 kg is attached to one end of a cord of unstretched length 0.6 m whose other end is fixed. When a horizontal force of magnitude Q N is applied to the ball, holding the ball in equilibrium, the cord increases in length by 10% and is inclined at an angle of arcsin 3/5 to the downwards vertical. Calculate the value of Q and the modulus of the cord. One thing that keeps me wondering is what they mean by horizontal. Does it mean it acts along the slope or parallel to the ground? Please help me, thank you =) 2. Jul 5, 2006 ### Hootenanny Staff Emeritus Hi there garyljc and welcome to PF, A horizontal force will act parallel to the ground. To answer this question, you will need to find the components of the force exerted by the cord; take vertical (y) and horizontal (x) components. Now, as the ball is in equilibrium, what can we say about the forces acting on the ball? Next you will need to identify the forces acting on the ball. You should then be able to formulate two expressions, one vertical and one horizontal, which you may solve simultaneously. I am assuming here that the cord is thin, since we have not been given a diameter. (A diagram may be helpful in answering this question) 3. Jul 5, 2006 ### garyljc If the ball is in equilibrium, the sum of the forces will be equal to zero =) Got it. But I've found 2 equations (vertical and horizontal), when I have 3 variables, which are Q, the modulus of elasticity, and R (normal reaction). Is there anything I'm missing? What does the vertical equation equal? Thanks anyway 4. Jul 5, 2006 ### vijay123 sup garyljc.....force exerted by a spring is (spring constant)*length displaced. 5. Jul 5, 2006 ### vijay123 I can solve the problem for you...but can you please tell me what arcsine means, because in my country we only express angles in terms of radians and degrees....if you would convert it into degrees....it would be a great help in my calculation. thank you! 6. Jul 5, 2006 ### garyljc arcsin 3/5 means sin θ = 3/5 7. Jul 5, 2006 ### Hootenanny Staff Emeritus This is the wrong formula to use. We are dealing with cords here, not springs. We are required to calculate Young's modulus for the cord, not the spring constant. 8. Jul 5, 2006 ### Hootenanny Staff Emeritus You should have only two unknowns in each equation (Q and a component of the tension). In this situation there are only three forces acting: (1) Q, (2) the tension and (3) the weight of the ball. There should be no reaction force. 9. Jul 5, 2006 ### garyljc But what will R be equal to? There must be a normal reaction on the particle 10. Jul 5, 2006 ### Hootenanny Staff Emeritus HINT: Look up the definition of tension Last edited: Jul 5, 2006 11. Jul 5, 2006 ### garyljc Ahhhh, I got it, thanks anyway. Kinda hard though for me doing self-study =) Cheers Hoot =) 12. Jul 5, 2006 ### Hootenanny Staff Emeritus No problem
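For readers who want to check their working, here is a short numerical sketch of the resolution the thread is hinting at. It was not posted in the original thread, and it assumes g = 9.8 m/s² and Hooke's law for an elastic cord, T = λx/l. Resolving vertically gives T cos θ = mg and horizontally T sin θ = Q, with sin θ = 3/5 and a 10% extension x = 0.06 m on natural length l = 0.6 m.

```python
import math

m, g = 0.5, 9.8            # kg, m/s^2 (assumed value of g)
l, extension = 0.6, 0.06   # natural length and 10% extension, in metres
sin_t = 3 / 5
cos_t = math.sqrt(1 - sin_t**2)   # = 4/5

T = m * g / cos_t          # vertical equilibrium: T cos(theta) = m g
Q = T * sin_t              # horizontal equilibrium: T sin(theta) = Q
lam = T * l / extension    # Hooke's law for an elastic cord: T = lambda * x / l

print(f"Tension T = {T:.3f} N")     # about 6.125 N
print(f"Force   Q = {Q:.3f} N")     # about 3.675 N
print(f"Modulus λ = {lam:.2f} N")   # about 61.25 N
```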
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9253701567649841, "perplexity": 1174.6776245393255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544140.93/warc/CC-MAIN-20161202170904-00099-ip-10-31-129-80.ec2.internal.warc.gz"}
https://worldwidescience.org/topicpages/p/poisson+encapsulation+statistics.html
#### Sample records for poisson encapsulation statistics 1. Semi-Poisson statistics in quantum chaos. Science.gov (United States) García-García, Antonio M; Wang, Jiao 2006-03-01 We investigate the quantum properties of a nonrandom Hamiltonian with a steplike singularity. It is shown that the eigenfunctions are multifractals and, in a certain range of parameters, the level statistics is described exactly by semi-Poisson statistics (SP) typical of pseudointegrable systems. It is also shown that our results are universal, namely, they depend exclusively on the presence of the steplike singularity and are not modified by smooth perturbations of the potential or the addition of a magnetic flux. Although the quantum properties of our system are similar to those of a disordered conductor at the Anderson transition, we report important quantitative differences in both the level statistics and the multifractal dimensions controlling the transition. Finally, the study of quantum transport properties suggests that the classical singularity induces quantum anomalous diffusion. We discuss how these findings may be experimentally corroborated by using ultracold atoms techniques. 2. Binomial vs poisson statistics in radiation studies International Nuclear Information System (INIS) Foster, J.; Kouris, K.; Spyrou, N.M.; Matthews, I.P.; Welsh National School of Medicine, Cardiff 1983-01-01 The processes of radioactive decay, decay and growth of radioactive species in a radioactive chain, prompt emission(s) from nuclear reactions, conventional activation and cyclic activation are discussed with respect to their underlying statistical density function. By considering the transformation(s) that each nucleus may undergo it is shown that all these processes are fundamentally binomial. Formally, when the number of experiments N is large and the probability of success p is close to zero, the binomial is closely approximated by the Poisson density function. In radiation and nuclear physics, N is always large: each experiment can be conceived of as the observation of the fate of each of the N nuclei initially present. Whether p, the probability that a given nucleus undergoes a prescribed transformation, is close to zero depends on the process and nuclide(s) concerned. Hence, although a binomial description is always valid, the Poisson approximation is not always adequate. Therefore further clarification is provided as to when the binomial distribution must be used in the statistical treatment of detected events. (orig.) 3. Limitations of Poisson statistics in describing radioactive decay. Science.gov (United States) 2015-12-01 4. Is it safe to use Poisson statistics in nuclear spectrometry? International Nuclear Information System (INIS) Pomme, S.; Robouch, P.; Arana, G.; Eguskiza, M.; Maguregui, M.I. 2000-01-01 The boundary conditions in which Poisson statistics can be applied in nuclear spectrometry are investigated. Improved formulas for the uncertainty of nuclear counting with deadtime and pulse pileup are presented. A comparison is made between the expected statistical uncertainty for loss-free counting, fixed live-time and fixed real-time measurements. (author) 5. Poisson statistics application in modelling of neutron detection International Nuclear Information System (INIS) Avdic, S.; Marinkovic, P. 1996-01-01 The main purpose of this study is taking into account statistical analysis of the experimental data which were measured by 3 He neutron spectrometer. 
The unfolding method based on the principle of maximum likelihood incorporates the Poisson approximation of counting statistics applied (author) 6. Statistics of weighted Poisson events and its applications International Nuclear Information System (INIS) Bohm, G.; Zech, G. 2014-01-01 The statistics of the sum of random weights where the number of weights is Poisson distributed has important applications in nuclear physics, particle physics and astrophysics. Events are frequently weighted according to their acceptance or relevance to a certain type of reaction. The sum is described by the compound Poisson distribution (CPD) which is briefly reviewed. It is shown that the CPD can be approximated by a scaled Poisson distribution (SPD). The SPD is applied to parameter estimation in situations where the data are distorted by resolution effects. It performs considerably better than the normal approximation that is usually used. A special Poisson bootstrap technique is presented which permits deriving confidence limits for observations following the CPD 7. Quasars, companion galaxies and Poisson statistics International Nuclear Information System (INIS) Webster, A. 1982-01-01 Arp has presented a sample of quasars lying close to the companion galaxies of bright spirals, from which he estimates a value of 10^-17 for the probability that the galaxies and quasars are sited independently on the celestial sphere; Browne, however, has found a simple fallacy in the statistics which accounts for about 10 of the 17 orders of magnitude. Here we draw attention to an obscure part of Arp's calculation which we have been unable to repeat; if it is carried out in what seems to be the most straightforward way, about five more orders may be accounted for. In consequence, it is not clear that the sample contains any evidence damaging to the popular notion that the redshifts of quasars indicate distance through the Hubble Law. (author) 8. Statistical analysis of non-homogeneous Poisson processes. Statistical processing of a particle multidetector International Nuclear Information System (INIS) Lacombe, J.P. 1985-12-01 Statistic study of Poisson non-homogeneous and spatial processes is the first part of this thesis. A Neyman-Pearson type test is defined concerning the intensity measurement of these processes. Conditions are given for which consistency of the test is assured, and others giving the asymptotic normality of the test statistics. Then some techniques of statistic processing of Poisson fields and their applications to a particle multidetector study are given. Quality tests of the device are proposed together with signal extraction methods [fr 9. A spatial scan statistic for compound Poisson data. Science.gov (United States) Rosychuk, Rhonda J; Chang, Hsing-Ming 2013-12-20 The topic of spatial cluster detection gained attention in statistics during the late 1980s and early 1990s. Effort has been devoted to the development of methods for detecting spatial clustering of cases and events in the biological sciences, astronomy and epidemiology. More recently, research has examined detecting clusters of correlated count data associated with health conditions of individuals. Such a method allows researchers to examine spatial relationships of disease-related events rather than just incident or prevalent cases. We introduce a spatial scan test that identifies clusters of events in a study region. Because an individual case may have multiple (repeated) events, we base the test on a compound Poisson model.
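As a rough illustration of the compound Poisson distribution reviewed in record 6 above ("Statistics of weighted Poisson events"), the following sketch checks the standard moment formulas E[S] = λE[w] and Var[S] = λE[w²] by simulation. The weight distribution and the value of λ are arbitrary choices made only for the example.

```python
# Monte Carlo check of the compound Poisson moment formulas
# E[S] = lam*E[w], Var[S] = lam*E[w^2]; lambda and the weight law are illustrative.
import numpy as np

rng = np.random.default_rng(0)
lam = 4.0
n_trials = 200_000

def draw_sum():
    m = rng.poisson(lam)                 # number of weighted events
    w = rng.uniform(0.2, 1.0, size=m)    # acceptance-like weights (arbitrary choice)
    return w.sum()

s = np.array([draw_sum() for _ in range(n_trials)])
Ew  = (0.2 + 1.0) / 2                    # E[w] for U(0.2, 1.0)
Ew2 = (1.0**3 - 0.2**3) / (3 * 0.8)      # E[w^2] for U(0.2, 1.0)
print("mean     sim=%.4f  theory=%.4f" % (s.mean(), lam * Ew))
print("variance sim=%.4f  theory=%.4f" % (s.var(),  lam * Ew2))
```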
We illustrate our method for cluster detection on emergency department visits, where individuals may make multiple disease-related visits. Copyright © 2013 John Wiley & Sons, Ltd. 10. Intertime jump statistics of state-dependent Poisson processes. Science.gov (United States) Daly, Edoardo; Porporato, Amilcare 2007-01-01 A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models. 11. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models Science.gov (United States) Amalia, Junita; Purhadi, Otok, Bambang Widjanarko 2017-11-01 Poisson distribution is a discrete distribution with count data as the random variables and it has one parameter defines both mean and variance. Poisson regression assumes mean and variance should be same (equidispersion). Nonetheless, some case of the count data unsatisfied this assumption because variance exceeds mean (over-dispersion). The ignorance of over-dispersion causes underestimates in standard error. Furthermore, it causes incorrect decision in the statistical test. Previously, paired count data has a correlation and it has bivariate Poisson distribution. If there is over-dispersion, modeling paired count data is not sufficient with simple bivariate Poisson regression. Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is mix Poisson regression for modeling paired count data within over-dispersion. BPIGR model produces a global model for all locations. In another hand, each location has different geographic conditions, social, cultural and economic so that Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to solve over-dispersion and to generate local models. Parameter estimation of GWBPIGR model obtained by Maximum Likelihood Estimation (MLE) method. Meanwhile, hypothesis testing of GWBPIGR model acquired by Maximum Likelihood Ratio Test (MLRT) method. 12. The choice between Ruark-Devol and Poisson statistics in the measurement of short-lived radionuclides International Nuclear Information System (INIS) Kennedy, G. 1981-01-01 The question of whether to use Poisson or Ruark-Devol statistics for radioactivity measurements in which the counting time is long compared to the half-life is discussed. Experimental data are presented which are well described by Poisson statistics. The applications of Ruark-Devol statistics are found to be very limited, in disagreement with earlier publications. (author) 13. Characterization of a Compton suppression system and the applicability of Poisson statistics International Nuclear Information System (INIS) Nicholson, G.; Landsberger, S.; Welch, L. 
2008-01-01 The Compton suppression system (CSS) has been thoroughly characterized at the University of Texas' Nuclear Engineering Teaching Laboratory (NETL). Effects of dead-time, sample displacement from primary detector, and primary energy detector position relative to the active shield detector have been measured and analyzed. Also, the applicability of Poisson counting statistics to Compton suppression spectroscopy has been evaluated. (author) 14. Statistical error in simulations of Poisson processes: Example of diffusion in solids Science.gov (United States) Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V. 2016-08-01 Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any computational method in particular, but valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations. 15. Statistical shape analysis using 3D Poisson equation--A quantitatively validated approach. Science.gov (United States) Gao, Yi; Bouix, Sylvain 2016-05-01 Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two population of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on the volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied on real shape data sets in brain structures. Copyright © 2016 Elsevier B.V. All rights reserved. 16. A random matrix approach to the crossover of energy-level statistics from Wigner to Poisson International Nuclear Information System (INIS) Datta, Nilanjana; Kunz, Herve 2004-01-01 We analyze a class of parametrized random matrix models, introduced by Rosenzweig and Porter, which is expected to describe the energy level statistics of quantum systems whose classical dynamics varies from regular to chaotic as a function of a parameter. We compute the generating function for the correlations of energy levels, in the limit of infinite matrix size. The crossover between Poisson and Wigner statistics is measured by a renormalized coupling constant. The model is exactly solved in the sense that, in the limit of infinite matrix size, the energy-level correlation functions and their generating function are given in terms of a finite set of integrals 17. Spatial statistics of pitting corrosion patterning: Quadrat counts and the non-homogeneous Poisson process International Nuclear Information System (INIS) Lopez de la Cruz, J.; Gutierrez, M.A. 2008-01-01 This paper presents a stochastic analysis of spatial point patterns as effect of localized pitting corrosion. The Quadrat Counts method is studied with two empirical pit patterns. 
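A small sketch of the quadrat-count idea used in record 17 above ("Spatial statistics of pitting corrosion patterning"): points scattered completely at random, as in a homogeneous Poisson pattern, give quadrat counts whose variance-to-mean ratio is close to 1, while clustered patterns push the ratio above 1. The grid size, point counts, and cluster parameters are illustrative only.

```python
# Quadrat counts on a unit square: index of dispersion (variance/mean) ~ 1 for a Poisson pattern.
import numpy as np

rng = np.random.default_rng(2)
n_points, n_side = 2000, 10                      # illustrative choices

def dispersion_index(x, y):
    counts, _, _ = np.histogram2d(x, y, bins=n_side, range=[[0, 1], [0, 1]])
    c = counts.ravel()
    return c.var(ddof=1) / c.mean()

# Complete spatial randomness (homogeneous Poisson pattern)
x, y = rng.random(n_points), rng.random(n_points)
print("CSR pattern:       D =", round(dispersion_index(x, y), 3))   # expect ~1

# A clustered pattern: points concentrated around a few parent locations
parents = rng.random((20, 2))
idx = rng.integers(0, 20, n_points)
xc = np.clip(parents[idx, 0] + 0.02 * rng.standard_normal(n_points), 0, 1)
yc = np.clip(parents[idx, 1] + 0.02 * rng.standard_normal(n_points), 0, 1)
print("clustered pattern: D =", round(dispersion_index(xc, yc), 3))  # expect > 1
```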
The results are dependent on the quadrat size and bias is introduced when empty quadrats are accounted for the analysis. The spatially inhomogeneous Poisson process is used to improve the performance of the Quadrat Counts method. The latter combines Quadrat Counts with distance-based statistics in the analysis of pit patterns. The Inter-Event and the Nearest-Neighbour statistics are here implemented in order to compare their results. Further, the treatment of patterns in irregular domains is discussed 18. Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks Science.gov (United States) Frahm, Klaus M.; Shepelyansky, Dima L. 2014-04-01 We use the methods of quantum chaos and Random Matrix Theory for analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by a logarithm of PageRank probability at a given node. After the standard energy level unfolding procedure we establish that the nearest spacing distribution of PageRank probabilities is described by the Poisson law typical for integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French and German. We argue that due to absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that the nearby PageRank probabilities fluctuate as random independent variables. 19. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties International Nuclear Information System (INIS) Stoneking, M.R.; Den Hartog, D.J. 1996-06-01 The fitting of data by χ 2 -minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ 2 -minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ∼20 counts per measurement). the maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers 20. Non-Poisson counting statistics of a hybrid G-M counter dead time model International Nuclear Information System (INIS) Lee, Sang Hoon; Jae, Moosung; Gardner, Robin P. 2007-01-01 The counting statistics of a G-M counter with a considerable dead time event rate deviates from Poisson statistics. Important characteristics such as observed counting rates as a function true counting rates, variances and interval distributions were analyzed for three dead time models, non-paralyzable, paralyzable and hybrid, with the help of GMSIM, a Monte Carlo dead time effect simulator. The simulation results showed good agreements with the models in observed counting rates and variances. 
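To make the dead-time distortion discussed in record 20 above concrete, here is a small Monte Carlo in the spirit of a dead-time simulator (this is an independent sketch, not the GMSIM code itself): Poisson arrivals are filtered through a non-paralyzable dead time τ and the observed rate is compared with the textbook prediction m = n/(1 + nτ). The rates and τ are illustrative.

```python
# Non-paralyzable dead-time effect on a Poisson pulse train (illustrative rates).
import numpy as np

rng = np.random.default_rng(3)
true_rate = 5.0e4        # true counting rate (1/s), illustrative
tau = 5.0e-6             # dead time (s), illustrative
T = 10.0                 # measurement time (s)

# Generate Poisson arrival times as cumulative exponential inter-arrival times.
n_expected = int(true_rate * T * 1.2)
arrivals = np.cumsum(rng.exponential(1.0 / true_rate, n_expected))
arrivals = arrivals[arrivals < T]

# Non-paralyzable model: events within tau of the last *recorded* event are lost.
recorded, last = 0, -np.inf
for t in arrivals:
    if t - last >= tau:
        recorded += 1
        last = t

observed_rate = recorded / T
predicted_rate = true_rate / (1.0 + true_rate * tau)
print(f"simulated observed rate: {observed_rate:.1f} 1/s")
print(f"analytic  n/(1+n*tau):   {predicted_rate:.1f} 1/s")
```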
It was found through GMSIM simulations that the interval distribution for the hybrid model showed three distinctive regions, a complete cutoff region for the duration of the total dead time, a degraded exponential and an enhanced exponential regions. By measuring the cutoff and the duration of degraded exponential from the pulse interval distribution, it is possible to evaluate the two dead times in the hybrid model 1. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of poisson statistics. Science.gov (United States) De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos 2014-06-01 Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry. 2. Interpolation between Airy and Poisson statistics for unitary chiral non-Hermitian random matrix ensembles International Nuclear Information System (INIS) Akemann, G.; Bender, M. 2010-01-01 We consider a family of chiral non-Hermitian Gaussian random matrices in the unitarily invariant symmetry class. The eigenvalue distribution in this model is expressed in terms of Laguerre polynomials in the complex plane. These are orthogonal with respect to a non-Gaussian weight including a modified Bessel function of the second kind, and we give an elementary proof for this. In the large n limit, the eigenvalue statistics at the spectral edge close to the real axis are described by the same family of kernels interpolating between Airy and Poisson that was recently found by one of the authors for the elliptic Ginibre ensemble. We conclude that this scaling limit is universal, appearing for two different non-Hermitian random matrix ensembles with unitary symmetry. As a second result we give an equivalent form for the interpolating Airy kernel in terms of a single real integral, similar to representations for the asymptotic kernel in the bulk and at the hard edge of the spectrum. This makes its structure as a one-parameter deformation of the Airy kernel more transparent. 3. 
Inverse problems with Poisson data: statistical regularization theory, applications and algorithms International Nuclear Information System (INIS) Hohage, Thorsten; Werner, Frank 2016-01-01 Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As most prominent applications we briefly introduce positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years. (topical review) 4. Universal Poisson Statistics of mRNAs with Complex Decay Pathways. Science.gov (United States) Thattai, Mukund 2016-01-19 5. Linear response in aging glassy systems, intermittency and the Poisson statistics of record fluctuations DEFF Research Database (Denmark) Sibani, Paolo 2007-01-01 in a correlated fashion and through irreversible bursts, 'quakes', which punctuate reversible and equilibrium-like fluctuations of zero average. The temporal distribution of the quakes is a Poisson distribution with an average growing logarithmically with time, indicating that the quakes are triggered by record... 6. [Statistical (Poisson) motor unit number estimation. Methodological aspects and normal results in the extensor digitorum brevis muscle of healthy subjects]. Science.gov (United States) Murga Oporto, L; Menéndez-de León, C; Bauzano Poley, E; Núñez-Castaín, M J Among the different techniques for motor unit number estimation (MUNE) there is the statistical one (Poisson), in which the activation of motor units is carried out by electrical stimulation and the estimation performed by means of a statistical analysis based on the Poisson distribution. The study was undertaken in order to realize an approximation to the MUNE Poisson technique showing a comprehensible view of its methodology and also to obtain normal results in the extensor digitorum brevis muscle (EDB) from a healthy population. One hundred fourteen normal volunteers with age ranging from 10 to 88 years were studied using the MUNE software contained in a Viking IV system. The normal subjects were divided into two age groups (10-59 and 60-88 years). The EDB MUNE for the whole group was 184 ± 49. Both the MUNE and the amplitude of the compound muscle action potential (CMAP) were significantly lower in the older age group; MUNE correlated better with age than CMAP amplitude did (0.5002 and 0.4142, respectively), consistent with the physiology of the motor unit. The value of MUNE correlates better with the neuromuscular aging process than CMAP amplitude does. 7.
Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics Directory of Open Access Journals (Sweden) Dongming Li 2017-04-01 Full Text Available An adaptive optics (AO system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. 8. Statistical properties of a filtered Poisson process with additive random noise: distributions, correlations and moment estimation International Nuclear Information System (INIS) Theodorsen, A; Garcia, O E; Rypdal, M 2017-01-01 Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type. (paper) 9. Elementary Statistical Models for Vector Collision-Sequence Interference Effects with Poisson-Distributed Collision Times International Nuclear Information System (INIS) Lewis, J.C. 2011-01-01 In a recent paper (Lewis, 2008) a class of models suitable for application to collision-sequence interference was introduced. In these models velocities are assumed to be completely randomized in each collision. The distribution of velocities was assumed to be Gaussian. 
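A rough sketch of the filtered Poisson (shot-noise) signal with additive noise analysed in record 8 above ("Statistical properties of a filtered Poisson process with additive random noise"): pulses arrive at Poisson-distributed times with exponentially distributed amplitudes, are shaped by an exponential pulse, and Gaussian noise is added on top. All parameter values are illustrative.

```python
# Filtered Poisson process: sum of exponential pulses at Poisson arrival times, plus noise.
import numpy as np

rng = np.random.default_rng(4)
dt, T = 1e-3, 50.0                 # time step and record length (illustrative)
rate, tau_d = 2.0, 0.1             # pulse rate and pulse duration (illustrative)
noise_sigma = 0.05

t = np.arange(0.0, T, dt)
signal = np.zeros_like(t)

n_pulses = rng.poisson(rate * T)
arrival = rng.uniform(0.0, T, n_pulses)          # arrival times
amp = rng.exponential(1.0, n_pulses)             # pulse amplitudes

for t0, a in zip(arrival, amp):
    mask = t >= t0
    signal[mask] += a * np.exp(-(t[mask] - t0) / tau_d)

signal += noise_sigma * rng.standard_normal(t.size)   # additive Gaussian noise

# Lowest-order moments of the resulting intermittent time series
print("mean     :", round(signal.mean(), 4))
print("variance :", round(signal.var(), 4))
print("skewness :", round(((signal - signal.mean())**3).mean() / signal.std()**3, 4))
```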
The integrated induced dipole moment μk, for vector interference, or the scalar modulation μk, for scalar interference, was assumed to be a function of the impulse (integrated force) fk, or its magnitude fk, experienced by the molecule in a collision. For most of (Lewis, 2008) it was assumed that μk was proportional to fk (and μk to fk in the scalar case), but it proved to be possible to extend the models, so that the magnitude of the induced dipole moment is equal to an arbitrary power or sum of powers of the intermolecular force. This allows estimates of the in-filling of the interference dip by the disproportionality of the induced dipole moment and force. One particular such model, using data from (Herman and Lewis, 2006), leads to the most realistic estimate for the in-filling of the vector interference dip yet obtained. In (Lewis, 2008) the drastic assumption was made that collision times occurred at equal intervals. In the present paper that assumption is removed: the collision times are taken to form a Poisson process. This is much more realistic than the equal-intervals assumption. The interference dip is found to be a Lorentzian in this model 10. Two-state Markov-chain Poisson nature of individual cellphone call statistics Science.gov (United States) Jiang, Zhi-Qiang; Xie, Wen-Jie; Li, Ming-Xia; Zhou, Wei-Xing; Sornette, Didier 2016-07-01 Unfolding the burst patterns in human activities and social interactions is a very important issue especially for understanding the spreading of disease and information and the formation of groups and organizations. Here, we conduct an in-depth study of the temporal patterns of cellphone conversation activities of 73 339 anonymous cellphone users, whose inter-call durations are Weibull distributed. We find that the individual call events exhibit a pattern of bursts, that high activity periods are alternated with low activity periods. In both periods, the number of calls are exponentially distributed for individuals, but power-law distributed for the population. Together with the exponential distributions of inter-call durations within bursts and of the intervals between consecutive bursts, we demonstrate that the individual call activities are driven by two independent Poisson processes, which can be combined within a minimal model in terms of a two-state first-order Markov chain, giving significant fits for nearly half of the individuals. By measuring directly the distributions of call rates across the population, which exhibit power-law tails, we purport the existence of power-law distributions, via the 'superposition of distributions' mechanism. Our findings shed light on the origins of bursty patterns in other human activities. 11. Statistical and heuristic image noise extraction (SHINE): a new method for processing Poisson noise in scintigraphic images International Nuclear Information System (INIS) Hannequin, Pascal; Mas, Jacky 2002-01-01 Poisson noise is one of the factors degrading scintigraphic images, especially at low count level, due to the statistical nature of photon detection. We have developed an original procedure, named statistical and heuristic image noise extraction (SHINE), to reduce the Poisson noise contained in the scintigraphic images, preserving the resolution, the contrast and the texture. The SHINE procedure consists in dividing the image into 4 x 4 blocks and performing a correspondence analysis on these blocks. Each block is then reconstructed using its own significant factors which are selected using an original statistical variance test.
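A minimal sketch of the two-state picture in record 10 above ("Two-state Markov-chain Poisson nature of individual cellphone call statistics"): activity switches between a high-rate and a low-rate state according to a first-order Markov chain, and within each state events occur as a Poisson process. The transition probabilities and rates below are illustrative, not fitted to the cited data.

```python
# Two-state Markov-modulated Poisson counts per time slot (illustrative parameters).
import numpy as np

rng = np.random.default_rng(5)
rates = {0: 0.2, 1: 3.0}            # events per slot in the low- and high-activity states
stay = {0: 0.95, 1: 0.80}           # probability of remaining in the current state

n_slots, state = 10_000, 0
counts, states = [], []
for _ in range(n_slots):
    counts.append(rng.poisson(rates[state]))
    states.append(state)
    if rng.random() > stay[state]:  # switch state with probability 1 - stay
        state = 1 - state

counts, states = np.array(counts), np.array(states)
for s in (0, 1):
    c = counts[states == s]
    print(f"state {s}: fraction of time {c.size / n_slots:.2f}, "
          f"mean count {c.mean():.2f} (rate {rates[s]})")
```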
The SHINE procedure has been validated using a line numerical phantom and a hot spots and cold spots real phantom. The reference images are the noise-free simulated images for the numerical phantom and an extremely high counts image for the real phantom. The SHINE procedure has then been applied to the Jaszczak phantom and clinical data including planar bone scintigraphy, planar Sestamibi scintigraphy and Tl-201 myocardial SPECT. The SHINE procedure reduces the mean normalized error between the noisy images and the corresponding reference images. This reduction is constant and does not change with the count level. The SNR in a SHINE processed image is close to that of the corresponding raw image with twice the number of counts. The visual results with the Jaszczak phantom SPECT have shown that SHINE preserves the contrast and the resolution of the slices well. Clinical examples have shown no visual difference between the SHINE images and the corresponding raw images obtained with twice the acquisition duration. SHINE is an entirely automatic procedure which enables halving the acquisition time or the injected dose in scintigraphic acquisitions. It can be applied to all scintigraphic images, including PET data, and to all low-count photon images 12. Identifying overrepresented concepts in gene lists from literature: a statistical approach based on Poisson mixture model Directory of Open Access Journals (Sweden) Zhai Chengxiang 2010-05-01 Full Text Available Abstract Background Large-scale genomic studies often identify large gene lists, for example, the genes sharing the same expression patterns. The interpretation of these gene lists is generally achieved by extracting concepts overrepresented in the gene lists. This analysis often depends on manual annotation of genes based on controlled vocabularies, in particular, Gene Ontology (GO. However, the annotation of genes is a labor-intensive process; and the vocabularies are generally incomplete, leaving some important biological domains inadequately covered. Results We propose a statistical method that uses the primary literature, i.e. free-text, as the source to perform overrepresentation analysis. The method is based on a statistical framework of mixture model and addresses the methodological flaws in several existing programs. We implemented this method within a literature mining system, BeeSpace, taking advantage of its analysis environment and added features that facilitate the interactive analysis of gene sets. Through experimentation with several datasets, we showed that our program can effectively summarize the important conceptual themes of large gene sets, even when traditional GO-based analysis does not yield informative results. Conclusions We conclude that the current work will provide biologists with a tool that effectively complements the existing ones for overrepresentation analysis from genomic experiments. Our program, Genelist Analyzer, is freely available at: http://workerbee.igb.uiuc.edu:8080/BeeSpace/Search.jsp 13. 
Modelling carcinogenesis after radiotherapy using Poisson statistics: implications for IMRT, protons and ions Energy Technology Data Exchange (ETDEWEB) Jones, Bleddyn [Gray Institute for Radiation Oncology and Biology, University of Oxford, Old Road Campus, Headington, Oxford OX3 7DQ (United Kingdom)], E-mail: [email protected] 2009-06-01 Current technical radiotherapy advances aim to (a) better conform the dose contours to cancers and (b) reduce the integral dose exposure and thereby minimise unnecessary dose exposure to normal tissues unaffected by the cancer. Various types of conformal and intensity modulated radiotherapy (IMRT) using x-rays can achieve (a) while charged particle therapy (CPT)-using proton and ion beams-can achieve both (a) and (b), but at greater financial cost. Not only is the long term risk of radiation related normal tissue complications important, but so is the risk of carcinogenesis. Physical dose distribution plans can be generated to show the differences between the above techniques. IMRT is associated with a dose bath of low to medium dose due to fluence transfer: dose is effectively transferred from designated organs at risk to other areas; thus dose and risk are transferred. Many clinicians are concerned that there may be additional carcinogenesis many years after IMRT. CPT reduces the total energy deposition in the body and offers many potential advantages in terms of the prospects for better quality of life along with cancer cure. With C ions there is a tail of dose beyond the Bragg peaks, due to nuclear fragmentation; this is not found with protons. CPT generally uses higher linear energy transfer (which varies with particle and energy), which carries a higher relative risk of malignant induction, but also of cell death quantified by the relative biological effect concept, so at higher dose levels the frank development of malignancy should be reduced. Standard linear radioprotection models have been used to show a reduction in carcinogenesis risk of between two- and 15-fold depending on the CPT location. But the standard risk models make no allowance for fractionation and some have a dose limit at 4 Gy. Alternatively, tentative application of the linear quadratic model and Poissonian statistics to chromosome breakage and cell kill simultaneously allows estimation of 14. Relationship between the generalized equivalent uniform dose formulation and the Poisson statistics-based tumor control probability model International Nuclear Information System (INIS) Zhou Sumin; Das, Shiva; Wang Zhiheng; Marks, Lawrence B. 2004-01-01 The generalized equivalent uniform dose (GEUD) model uses a power-law formalism, where the outcome is related to the dose via a power law. We herein investigate the mathematical compatibility between this GEUD model and the Poisson statistics based tumor control probability (TCP) model. The GEUD and TCP formulations are combined and subjected to a compatibility constraint equation. This compatibility constraint equates tumor control probability from the original heterogeneous target dose distribution to that from the homogeneous dose from the GEUD formalism. It is shown that this constraint equation possesses a unique, analytical closed-form solution which relates radiation dose to the tumor cell survival fraction. It is further demonstrated that, when there is no positive threshold or finite critical dose in the tumor response to radiation, this relationship is not bounded within the realistic cell survival limits of 0%-100%. 
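For the dose-response quantities discussed in records 13 and 14 above, the following sketch evaluates the two textbook expressions side by side: the generalized equivalent uniform dose gEUD = (Σ v_i d_i^a)^(1/a) and a Poisson-statistics tumor control probability TCP = exp(-N0·SF) with a linear-quadratic survival fraction. The clonogen number, radiosensitivity parameters, and dose-volume values are illustrative assumptions, not taken from the cited papers.

```python
# Poisson-based TCP and generalized EUD for a toy dose-volume distribution (illustrative values).
import numpy as np

dose = np.array([58.0, 60.0, 62.0, 64.0])   # Gy, dose received by each sub-volume
vol  = np.array([0.10, 0.40, 0.40, 0.10])   # relative volumes (sum to 1)

a = -10.0                                    # gEUD parameter (negative for tumours)
geud = (vol * dose**a).sum() ** (1.0 / a)

N0, alpha, beta, n_frac = 1e9, 0.30, 0.03, 30.0   # clonogens, LQ parameters, fractions
def tcp(d_total):
    d_per_frac = d_total / n_frac
    sf = np.exp(-(alpha * d_total + beta * d_total * d_per_frac))   # LQ survival fraction
    return np.exp(-N0 * sf)                                         # Poisson TCP

# Voxel-wise Poisson TCP of the heterogeneous distribution vs. TCP of the single gEUD value;
# tcp(d)**v equals exp(-N0*v*SF(d)), so the product gives exp(-N0 * sum(v_i * SF(d_i))).
tcp_hetero = np.prod(tcp(dose) ** vol)
print(f"gEUD        = {geud:.2f} Gy")
print(f"TCP(gEUD)   = {tcp(geud):.3f}")
print(f"TCP(hetero) = {tcp_hetero:.3f}")
```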
Thus, the GEUD and TCP formalisms are, in general, mathematically inconsistent. However, when a threshold dose or finite critical dose exists in the tumor response to radiation, there is a unique mathematical solution for the tumor cell survival fraction that allows the GEUD and TCP formalisms to coexist, provided that all portions of the tumor are confined within certain specific dose ranges 15. A Poisson Cluster Stochastic Rainfall Generator That Accounts for the Interannual Variability of Rainfall Statistics: Validation at Various Geographic Locations across the United States Directory of Open Access Journals (Sweden) Dongkyun Kim 2014-01-01 Full Text Available A novel approach for a Poisson cluster stochastic rainfall generator was validated in its ability to reproduce important rainfall and watershed response characteristics at 104 locations in the United States. The suggested novel approach, The Hybrid Model (THM, as compared to the traditional Poisson cluster rainfall modeling approaches, has an additional capability to account for the interannual variability of rainfall statistics. THM and a traditional approach of Poisson cluster rainfall model (modified Bartlett-Lewis rectangular pulse model were compared in their ability to reproduce the characteristics of extreme rainfall and watershed response variables such as runoff and peak flow. The results of the comparison indicate that THM generally outperforms the traditional approach in reproducing the distributions of peak rainfall, peak flow, and runoff volume. In addition, THM significantly outperformed the traditional approach in reproducing extreme rainfall by 2.3% to 66% and extreme flow values by 32% to 71%. 16. Poisson distribution NARCIS (Netherlands) Hallin, M.; Piegorsch, W.; El Shaarawi, A. 2012-01-01 The random variable X taking values 0,1,2,…,x,… with probabilities pλ(x) = e−λλx/x!, where λ∈R0+ is called a Poisson variable, and its distribution a Poisson distribution, with parameter λ. The Poisson distribution with parameter λ can be obtained as the limit, as n → ∞ and p → 0 in such a way that 17. Poisson Autoregression DEFF Research Database (Denmark) Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional...... variance, implying an interpretation as an integer valued GARCH process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model for time... 18. Poisson Autoregression DEFF Research Database (Denmark) Fokianos, Konstantinos; Rahbæk, Anders; Tjøstheim, Dag This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional...... variance, making an interpretation as an integer valued GARCH process possible. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model... 19. 
Poisson processes NARCIS (Netherlands) Boxma, O.J.; Yechiali, U.; Ruggeri, F.; Kenett, R.S.; Faltin, F.W. 2007-01-01 The Poisson process is a stochastic counting process that arises naturally in a large variety of daily life situations. We present a few definitions of the Poisson process and discuss several properties as well as relations to some well-known probability distributions. We further briefly discuss the 20. Poisson Autoregression DEFF Research Database (Denmark) Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag 2009-01-01 In this article we consider geometric ergodicity and likelihood-based inference for linear and nonlinear Poisson autoregression. In the linear case, the conditional mean is linked linearly to its past values, as well as to the observed values of the Poisson process. This also applies...... to the conditional variance, making possible interpretation as an integer-valued generalized autoregressive conditional heteroscedasticity process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and past observations. As a particular example, we consider...... an exponential autoregressive Poisson model for time series. Under geometric ergodicity, the maximum likelihood estimators are shown to be asymptotically Gaussian in the linear model. In addition, we provide a consistent estimator of their asymptotic covariance matrix. Our approach to verifying geometric... 1. Poisson Coordinates. Science.gov (United States) Li, Xian-Ying; Hu, Shi-Min 2013-02-01 Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions. 2. Measuring mouse retina response near the detection threshold to direct stimulation of photons with sub-poisson statistics Science.gov (United States) Tavala, Amir; Dovzhik, Krishna; Schicker, Klaus; Koschak, Alexandra; Zeilinger, Anton Probing the visual system of human and animals at very low photon rate regime has recently attracted the quantum optics community. In an experiment on the isolated photoreceptor cells of Xenopus, the cell output signal was measured while stimulating it by pulses with sub-poisson distributed photons. The results showed single photon detection efficiency of 29 +/-4.7% [1]. Another behavioral experiment on human suggests a less detection capability at perception level with the chance of 0.516 +/-0.01 (i.e. slightly better than random guess) [2]. 
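A small simulation sketch of the linear Poisson autoregression (integer-valued GARCH) model described in the Poisson Autoregression records above: the conditional mean is linked linearly to its own past value and to the previous count, λ_t = d + a·λ_{t-1} + b·y_{t-1}. The coefficients are illustrative and chosen so that a + b < 1.

```python
# Linear Poisson autoregression (INGARCH(1,1)) sample path with illustrative coefficients.
import numpy as np

rng = np.random.default_rng(6)
d, a, b = 0.5, 0.45, 0.35          # intercept and feedback coefficients, a + b < 1
n = 20_000

lam = np.empty(n)
y = np.empty(n, dtype=int)
lam[0] = d / (1.0 - a - b)         # start at the stationary mean
y[0] = rng.poisson(lam[0])
for t in range(1, n):
    lam[t] = d + a * lam[t - 1] + b * y[t - 1]
    y[t] = rng.poisson(lam[t])

print("stationary mean d/(1-a-b):", round(d / (1 - a - b), 3))
print("sample mean of counts    :", round(y.mean(), 3))
print("sample variance of counts:", round(y.var(), 3), "(exceeds the mean: overdispersion)")
```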
Although the species are different, both biological models and experimental observations with classical light stimuli expect that a fraction of single photon responses is filtered somewhere within the retina network and/or during the neural processes in the brain. In this ongoing experiment, we look for a quantitative answer to this question by measuring the output signals of the last neural layer of WT mouse retina using microelectrode arrays. We use a heralded downconversion single-photon source. We stimulate the retina directly since the eye lens (responsible for 20-50% of optical loss and scattering [2]) is being removed. Here, we demonstrate our first results that confirms the response to the sub-poisson distributied pulses. This project was supported by Austrian Academy of Sciences, SFB FoQuS F 4007-N23 funded by FWF and ERC QIT4QAD 227844 funded by EU Commission. 3. Understanding poisson regression. Science.gov (United States) Hayat, Matthew J; Higgins, Melinda 2014-04-01 Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated. 4. Extended Poisson Exponential Distribution Directory of Open Access Journals (Sweden) Anum Fatima 2015-09-01 Full Text Available A new mixture of Modified Exponential (ME and Poisson distribution has been introduced in this paper. Taking the Maximum of Modified Exponential random variable when the sample size follows a zero truncated Poisson distribution we have derived the new distribution, named as Extended Poisson Exponential distribution. This distribution possesses increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We have also investigated some mathematical properties of the distribution along with Information entropies and Order statistics of the distribution. The estimation of parameters has been obtained using the Maximum Likelihood Estimation procedure. Finally we have illustrated a real data application of our distribution. 5. On Poisson functions OpenAIRE Terashima, Yuji 2008-01-01 In this paper, defining Poisson functions on super manifolds, we show that the graphs of Poisson functions are Dirac structures, and find Poisson functions which include as special cases both quasi-Poisson structures and twisted Poisson structures. 6. Poisson branching point processes International Nuclear Information System (INIS) Matsuo, K.; Teich, M.C.; Saleh, B.E.A. 1984-01-01 We investigate the statistical properties of a special branching point process. 
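As a rough companion to the Poisson regression overview in record 3 above ("Understanding poisson regression"), here is a bare-bones Newton (iteratively reweighted least squares) fit of a log-linear Poisson model on simulated data. The simulated covariate and coefficients are illustrative, and in practice one would normally rely on an established GLM routine rather than hand-rolled code.

```python
# Hand-rolled Poisson regression (log link) fitted by Newton/IRLS on simulated data.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])         # design matrix with intercept
beta_true = np.array([0.3, 0.7])             # illustrative coefficients
y = rng.poisson(np.exp(X @ beta_true))       # Poisson counts with log link

beta = np.zeros(2)
for _ in range(25):                          # Newton iterations
    mu = np.exp(X @ beta)                    # current fitted means
    grad = X.T @ (y - mu)                    # score vector
    hess = X.T @ (X * mu[:, None])           # Fisher information
    step = np.linalg.solve(hess, grad)
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

print("true beta     :", beta_true)
print("estimated beta:", np.round(beta, 3))
```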
The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule-Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers 7. A Classroom Note on the Binomial and Poisson Distributions: Biomedical Examples for Use in Teaching Introductory Statistics Science.gov (United States) Holland, Bart K. 2006-01-01 A generally-educated individual should have some insight into how decisions are made in the very wide range of fields that employ statistical and probabilistic reasoning. Also, students of introductory probability and statistics are often best motivated by specific applications rather than by theory and mathematical development, because most… 8. Binomial distribution of Poisson statistics and tracks overlapping probability to estimate total tracks count with low uncertainty International Nuclear Information System (INIS) Khayat, Omid; Afarideh, Hossein; Mohammadnia, Meisam 2015-01-01 In the solid state nuclear track detectors of chemically etched type, nuclear tracks with center-to-center neighborhood of distance shorter than two times the radius of tracks will emerge as overlapping tracks. Track overlapping in this type of detectors causes tracks count losses and it becomes rather severe in high track densities. Therefore, tracks counting in this condition should include a correction factor for count losses of different tracks overlapping orders since a number of overlapping tracks may be counted as one track. Another aspect of the problem is the cases where imaging the whole area of the detector and counting all tracks are not possible. In these conditions a statistical generalization method is desired to be applicable in counting a segmented area of the detector and the results can be generalized to the whole surface of the detector. Also there is a challenge in counting the tracks in densely overlapped tracks because sufficient geometrical or contextual information is not available. In this paper we present a statistical counting method which gives the user a relation between the tracks overlapping probabilities on a segmented area of the detector surface and the total number of tracks. To apply the proposed method one can estimate the total number of tracks on a solid state detector of arbitrary shape and dimensions by approximating the tracks averaged area, whole detector surface area and some orders of tracks overlapping probabilities.
It will be shown that this method is applicable in high and ultra high density tracks images and the count loss error can be enervated using a statistical generalization approach. - Highlights: • A correction factor for count losses of different tracks overlapping orders. • For the cases imaging the whole area of the detector is not possible. • Presenting a statistical generalization method for segmented areas. • Giving a relation between the tracks overlapping probabilities and the total tracks 9. Drug Adverse Event Detection in Health Plan Data Using the Gamma Poisson Shrinker and Comparison to the Tree-based Scan Statistic Directory of Open Access Journals (Sweden) David Smith 2013-03-01 Full Text Available Background: Drug adverse event (AE) signal detection using the Gamma Poisson Shrinker (GPS) is commonly applied in spontaneous reporting. AE signal detection using large observational health plan databases can expand medication safety surveillance. Methods: Using data from nine health plans, we conducted a pilot study to evaluate the implementation and findings of the GPS approach for two antifungal drugs, terbinafine and itraconazole, and two diabetes drugs, pioglitazone and rosiglitazone. We evaluated 1676 diagnosis codes grouped into 183 different clinical concepts and four levels of granularity. Several signaling thresholds were assessed. GPS results were compared to findings from a companion study using the identical analytic dataset but an alternative statistical method—the tree-based scan statistic (TreeScan). Results: We identified 71 statistical signals across two signaling thresholds and two methods, including closely-related signals of overlapping diagnosis definitions. Initial review found that most signals represented known adverse drug reactions or confounding. About 31% of signals met the highest signaling threshold. Conclusions: The GPS method was successfully applied to observational health plan data in a distributed data environment as a drug safety data mining method. There was substantial concordance between the GPS and TreeScan approaches. Key method implementation decisions relate to defining exposures and outcomes and informed choice of signaling thresholds. 10. (Quasi-)Poisson enveloping algebras OpenAIRE Yang, Yan-Hong; Yao, Yuan; Ye, Yu 2010-01-01 We introduce the quasi-Poisson enveloping algebra and Poisson enveloping algebra for a non-commutative Poisson algebra. We prove that for a non-commutative Poisson algebra, the category of quasi-Poisson modules is equivalent to the category of left modules over its quasi-Poisson enveloping algebra, and the category of Poisson modules is equivalent to the category of left modules over its Poisson enveloping algebra. 11. Perbandingan Regresi Binomial Negatif dan Regresi Conway-Maxwell-Poisson dalam Mengatasi Overdispersi pada Regresi Poisson [Comparison of Negative Binomial Regression and Conway-Maxwell-Poisson Regression in Handling Overdispersion in Poisson Regression] Directory of Open Access Journals (Sweden) Lusi Eka Afri 2017-03-01 estimation and to downward bias parameter estimation (underestimate). This study aims to compare the Negative Binomial Regression model and Conway-Maxwell-Poisson (COM-Poisson) regression model to overcome over dispersion of Poisson distribution data based on deviance test statistics. The data used in this study are simulation data and applied case data. The simulation data were obtained by generating the Poisson distribution data containing over dispersion using the R programming language based on data characteristics such as λ, the probability (p) of zero value and the sample size (n).
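The over-dispersion issue motivating record 11 above (the negative binomial versus COM-Poisson comparison) can be illustrated with a simple diagnostic: for equidispersed Poisson counts the Pearson dispersion statistic is close to 1, while for a gamma-mixed Poisson (negative binomial) sample it is clearly larger. The simulated mean and mixing parameter are illustrative.

```python
# Pearson dispersion check: Poisson counts vs. overdispersed (gamma-mixed Poisson) counts.
import numpy as np

rng = np.random.default_rng(8)
n, mean = 10_000, 3.0

def dispersion(counts):
    mu = counts.mean()
    return ((counts - mu) ** 2 / mu).sum() / (counts.size - 1)   # ~1 if equidispersed

poisson_counts = rng.poisson(mean, n)
# Negative binomial as a gamma-mixed Poisson: lambda_i ~ Gamma(shape=r, scale=mean/r)
r = 1.5
nb_counts = rng.poisson(rng.gamma(r, mean / r, n))

print("Poisson counts: dispersion =", round(dispersion(poisson_counts), 3))
print("Gamma-Poisson : dispersion =", round(dispersion(nb_counts), 3))
```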
The generated data is used to get the estimated parameter coefficient of the negative binomial regression and COM-Poisson. Keywords: overdispersion, negative binomial regression and Conway-Maxwell-Poisson regression 12. On the fractal characterization of Paretian Poisson processes Science.gov (United States) Eliazar, Iddo I.; Sokolov, Igor M. 2012-06-01 Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that: amongst the realm of Poisson processes which are defined on the positive half-line, and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes-with respect to physical randomness-based measures of statistical heterogeneity-is characterized by exponential Poissonian intensities. 13. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series Science.gov (United States) Wilson, Robert M. 2001-01-01 Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high. 14. Homogeneous Poisson structures International Nuclear Information System (INIS) Shafei Deh Abad, A.; Malek, F. 1993-09-01 We provide an algebraic definition for Schouten product and give a decomposition for any homogenenous Poisson structure in any n-dimensional vector space. A large class of n-homogeneous Poisson structures in R k is also characterized. (author). 4 refs 15. Singular Poisson tensors International Nuclear Information System (INIS) Littlejohn, R.G. 
1982-01-01 The Hamiltonian structures discovered by Morrison and Greene for various fluid equations were obtained by guessing a Hamiltonian and a suitable Poisson bracket formula, expressed in terms of noncanonical (but physical) coordinates. In general, such a procedure for obtaining a Hamiltonian system does not produce a Hamiltonian phase space in the usual sense (a symplectic manifold), but rather a family of symplectic manifolds. To state the matter in terms of a system with a finite number of degrees of freedom, the family of symplectic manifolds is parametrized by a set of Casimir functions, which are characterized by having vanishing Poisson brackets with all other functions. The number of independent Casimir functions is the corank of the Poisson tensor J/sup ij/, the components of which are the Poisson brackets of the coordinates among themselves. Thus, these Casimir functions exist only when the Poisson tensor is singular 16. On poisson-stopped-sums that are mixed poisson OpenAIRE Valero Baya, Jordi; Pérez Casany, Marta; Ginebra Molins, Josep 2013-01-01 Maceda (1948) characterized the mixed Poisson distributions that are Poisson-stopped-sum distributions based on the mixing distribution. In an alternative characterization of the same set of distributions here the Poisson-stopped-sum distributions that are mixed Poisson distributions is proved to be the set of Poisson-stopped-sums of either a mixture of zero-truncated Poisson distributions or a zero-modification of it. Peer Reviewed 17. Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds Science.gov (United States) Martínez-Torres, David; Miranda, Eva 2018-01-01 We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds. 18. Cumulative Poisson Distribution Program Science.gov (United States) Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert 1990-01-01 Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for X (sup2) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C. 19. Modifications to POISSON International Nuclear Information System (INIS) Harwood, L.H. 1981-01-01 At MSU we have used the POISSON family of programs extensively for magnetic field calculations. In the presently super-saturated computer situation, reducing the run time for the program is imperative. Thus, a series of modifications have been made to POISSON to speed up convergence. Two of the modifications aim at having the first guess solution as close as possible to the final solution. The other two aim at increasing the convergence rate. In this discussion, a working knowledge of POISSON is assumed. 
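In the spirit of the CUMPOIS program described in record 18 above, though as an independent sketch rather than a port of that code, the cumulative Poisson distribution can be evaluated with a simple term-by-term recurrence that avoids computing large factorials directly:

```python
# Cumulative Poisson distribution P(X <= k) via a term-by-term recurrence (no factorials).
from math import exp

def poisson_cdf(k, lam):
    """Return P(X <= k) for X ~ Poisson(lam)."""
    term = exp(-lam)       # P(X = 0)
    total = term
    for i in range(1, k + 1):
        term *= lam / i    # P(X = i) = P(X = i-1) * lam / i
        total += term
    return total

# Example values; the record above notes the link to gamma and chi-square cdfs.
print(poisson_cdf(3, 2.5))     # P(X <= 3) for lam = 2.5, about 0.7576
print(poisson_cdf(10, 2.5))    # essentially 1
```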
The amount of new code and expected time saving for each modification is discussed. 20. Poisson Processes in Free Probability OpenAIRE An, Guimei; Gao, Mingchu 2015-01-01 We prove a multidimensional Poisson limit theorem in free probability, and define joint free Poisson distributions in a non-commutative probability space. We define (compound) free Poisson processes explicitly, similar to the definitions of (compound) Poisson processes in classical probability. We prove that the sum of finitely many freely independent compound free Poisson processes is a compound free Poisson process. We give a step by step procedure for constructing a (compound) free Poisso... 1. On Poisson Nonlinear Transformations Directory of Open Access Journals (Sweden) Nasir Ganikhodjaev 2014-01-01 Full Text Available We construct the family of Poisson nonlinear transformations defined on the countable sample space of nonnegative integers and investigate their trajectory behavior. We have proved that these nonlinear transformations are regular. 2. Scaling the Poisson Distribution Science.gov (United States) Farnsworth, David L. 2014-01-01 We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented. 3. Lévy meets poisson: a statistical artifact may lead to erroneous recategorization of Lévy walk as Brownian motion. Science.gov (United States) 2013-03-01 The flow of GPS data on animal space is challenging old paradigms, such as the issue of the scale-free Lévy walk versus scale-specific Brownian motion. Since these movement classes often require different protocols with respect to ecological analyses, further theoretical development in this field is important. I describe central concepts such as scale-specific versus scale-free movement and the difference between mechanistic and statistical-mechanical levels of analysis. Next, I report how a specific sampling scheme may have produced much confusion: a Lévy walk may be wrongly categorized as Brownian motion if the duration of a move, or bout, is used as a proxy for step length and a move is subjectively defined. Hence, the categorization and recategorization of movement class compliance surrounding the Lévy walk controversy may have been based on a statistical artifact. This issue may be avoided by collecting relocations at a fixed rate at a temporal scale that minimizes over- and undersampling. 4. Fractional Poisson process (II) International Nuclear Information System (INIS) Wang Xiaotian; Wen Zhixiong; Zhang Shiying 2006-01-01 In this paper, we propose a stochastic process W_H(t) (H ∈ (1/2, 1)) which we call fractional Poisson process. The process W_H(t) is self-similar in the wide sense, displays long-range dependence, and has a fatter tail than a Gaussian process. In addition, it converges to fractional Brownian motion in distribution. 5. Information content of poisson images International Nuclear Information System (INIS) Cederlund, J. 1979-04-01 One major problem when producing images with the aid of Poisson distributed quanta is how best to compromise between spatial and contrast resolution. Increasing the number of image elements improves spatial resolution, but at the cost of fewer quanta per image element, which reduces contrast resolution. Information theory arguments are used to analyse this problem.
It is argued that information capacity is a useful concept to describe an important property of the imaging device, but that in order to compute the information content of an image produced by this device some statistical properties (such as the a priori probability of the densities) of the object to be depicted must be taken into account. If these statistical properties are not known one cannot make a correct choice between spatial and contrast resolution. (author) 6. High throughput single-cell and multiple-cell micro-encapsulation. Science.gov (United States) Lagus, Todd P; Edd, Jon F 2012-06-15 signals from bioreactor products. Drops also provide the ability to re-merge drops into larger aqueous samples or with other drops for intercellular signaling studies. The reduction in dilution implies stronger detection signals for higher accuracy measurements as well as the ability to reduce potentially costly sample and reagent volumes. Encapsulation of cells in drops has been utilized to improve detection of protein expression, antibodies, enzymes, and metabolic activity for high throughput screening, and could be used to improve high throughput cytometry. Additional studies present applications in bio-electrospraying of cell containing drops for mass spectrometry and targeted surface cell coatings. Some applications, however, have been limited by the lack of ability to control the number of cells encapsulated in drops. Here we present a method of ordered encapsulation which increases the demonstrated encapsulation efficiencies for one and two cells and may be extrapolated for encapsulation of a larger number of cells. To achieve monodisperse drop generation, microfluidic "flow focusing" enables the creation of controllable-size drops of one fluid (an aqueous cell mixture) within another (a continuous oil phase) by using a nozzle at which the streams converge. For a given nozzle geometry, the drop generation frequency f and drop size can be altered by adjusting oil and aqueous flow rates Q(oil) and Q(aq). As the flow rates increase, the flows may transition from drop generation to unstable jetting of aqueous fluid from the nozzle. When the aqueous solution contains suspended particles, particles become encapsulated and isolated from one another at the nozzle. For drop generation using a randomly distributed aqueous cell suspension, the average fraction of drops D(k) containing k cells is dictated by Poisson statistics, where D(k) = λ(k) exp(-λ)/(k!) and λ is the average number of cells per drop. The fraction of cells which end up in the "correctly" encapsulated 7. Formal equivalence of Poisson structures around Poisson submanifolds NARCIS (Netherlands) Marcut, I.T. 2012-01-01 Let (M,π) be a Poisson manifold. A Poisson submanifold P ⊂ M gives rise to a Lie algebroid AP → P. Formal deformations of π around P are controlled by certain cohomology groups associated to AP. Assuming that these groups vanish, we prove that π is formally rigid around P; that is, any other Poisson 8. Poisson brackets of orthogonal polynomials OpenAIRE Cantero, María José; Simon, Barry 2009-01-01 For the standard symplectic forms on Jacobi and CMV matrices, we compute Poisson brackets of OPRL and OPUC, and relate these to other basic Poisson brackets and to Jacobians of basic changes of variable. 9. 
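As a quick, hedged illustration of the drop-loading statistics quoted in the micro-encapsulation entry above (i.e., the fraction of drops containing k cells, D(k) = λ^k e^(−λ)/k!, with λ the mean number of cells per drop), the Python sketch below evaluates these Poisson fractions and the resulting single-cell capture efficiency for a few mean loadings. It is a minimal numerical sketch with illustrative function names, not code from the cited work.

import math

def drop_fraction(k, lam):
    """Fraction of drops containing exactly k cells under random (Poisson) loading:
    D(k) = lam**k * exp(-lam) / k!, where lam is the mean number of cells per drop."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

for lam in (0.1, 0.3, 1.0):
    d0 = drop_fraction(0, lam)        # empty drops
    d1 = drop_fraction(1, lam)        # singly occupied drops
    multi = 1.0 - d0 - d1             # drops holding two or more cells
    cells_alone = d1 / lam            # fraction of *cells* that sit alone in a drop
    print(f"lam={lam:.1f}  empty={d0:.3f}  single={d1:.3f}  multi={multi:.3f}  "
          f"cells singly encapsulated={cells_alone:.3f}")

The output makes the trade-off in the entry concrete: lowering λ reduces multi-cell drops but leaves most drops empty, which is why the ordered-encapsulation approach described there is attractive.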
Nambu–Poisson gauge theory Energy Technology Data Exchange (ETDEWEB) Jurčo, Branislav, E-mail: [email protected] [Charles University in Prague, Faculty of Mathematics and Physics, Mathematical Institute, Prague 186 75 (Czech Republic); Schupp, Peter, E-mail: [email protected] [Jacobs University Bremen, 28759 Bremen (Germany); Vysoký, Jan, E-mail: [email protected] [Jacobs University Bremen, 28759 Bremen (Germany); Czech Technical University in Prague, Faculty of Nuclear Sciences and Physical Engineering, Prague 115 19 (Czech Republic) 2014-06-02 We generalize noncommutative gauge theory using Nambu–Poisson structures to obtain a new type of gauge theory with higher brackets and gauge fields. The approach is based on covariant coordinates and higher versions of the Seiberg–Witten map. We construct a covariant Nambu–Poisson gauge theory action, give its first order expansion in the Nambu–Poisson tensor and relate it to a Nambu–Poisson matrix model. 10. Nambu–Poisson gauge theory International Nuclear Information System (INIS) Jurčo, Branislav; Schupp, Peter; Vysoký, Jan 2014-01-01 We generalize noncommutative gauge theory using Nambu–Poisson structures to obtain a new type of gauge theory with higher brackets and gauge fields. The approach is based on covariant coordinates and higher versions of the Seiberg–Witten map. We construct a covariant Nambu–Poisson gauge theory action, give its first order expansion in the Nambu–Poisson tensor and relate it to a Nambu–Poisson matrix model. 11. Branes in Poisson sigma models International Nuclear Information System (INIS) Falceto, Fernando 2010-01-01 In this review we discuss possible boundary conditions (branes) for the Poisson sigma model. We show how to carry out the perturbative quantization in the presence of a general pre-Poisson brane and how this is related to the deformation quantization of Poisson structures. We conclude with an open problem: the perturbative quantization of the system when the boundary has several connected components and we use a different pre-Poisson brane in every component. 12. Normal forms in Poisson geometry NARCIS (Netherlands) Marcut, I.T. 2013-01-01 The structure of Poisson manifolds is highly nontrivial even locally. The first important result in this direction is Conn's linearization theorem around fixed points. One of the main results of this thesis (Theorem 2) is a normal form theorem in Poisson geometry, which is the Poisson-geometric 13. Poisson filtering of laser ranging data Science.gov (United States) Ricklefs, Randall L.; Shelus, Peter J. 1993-01-01 The filtering of data in a high noise, low signal strength environment is a situation encountered routinely in lunar laser ranging (LLR) and, to a lesser extent, in artificial satellite laser ranging (SLR). The use of Poisson statistics as one of the tools for filtering LLR data is described first in a historical context. The more recent application of this statistical technique to noisy SLR data is also described. 14. Les poissons de Guyane OpenAIRE Ifremer 1992-01-01 Vous trouverez dans ce document les 24 poissons les plus courants de Guyane (sur un nombre d'espèces approchant les 200) avec leurs principales caractéristiques, leurs noms scientifiques, français, anglais et espagnol et leurs photographies. Ils sont classés, de l'acoupa au vivaneau ti yeux, par ordre alphabétique. Si vous ne trouvez pas de chiffres sur la production de telle ou telle espèce, c'est parce qu'ils n'existent pas, mais aussi et surtout parce qu'ils ne signifieraient rien, l... 15. 
Doubly stochastic Poisson processes in artificial neural learning. Science.gov (United States) Card, H C 1998-01-01 This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits. 16. Statistics CERN Document Server Hayslett, H T 1991-01-01 Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the 17. Nonhomogeneous fractional Poisson processes Energy Technology Data Exchange (ETDEWEB) Wang Xiaotian [School of Management, Tianjin University, Tianjin 300072 (China)]. E-mail: [email protected]; Zhang Shiying [School of Management, Tianjin University, Tianjin 300072 (China); Fan Shen [Computer and Information School, Zhejiang Wanli University, Ningbo 315100 (China) 2007-01-15 In this paper, we propose a class of non-Gaussian stationary increment processes, named nonhomogeneous fractional Poisson processes W_H^(j)(t), which permit the study of the effects of long-range dependence in a large number of fields including quantum physics and finance. The processes W_H^(j)(t) are self-similar in a wide sense, exhibit fatter tails than Gaussian processes, and converge to the Gaussian processes in distribution in some cases. In addition, we also show that the intensity function λ(t) strongly influences the existence of the highest finite moment of W_H^(j)(t) and the behaviour of the tail probability of W_H^(j)(t). 18. Nonhomogeneous fractional Poisson processes International Nuclear Information System (INIS) Wang Xiaotian; Zhang Shiying; Fan Shen 2007-01-01 In this paper, we propose a class of non-Gaussian stationary increment processes, named nonhomogeneous fractional Poisson processes W_H^(j)(t), which permit the study of the effects of long-range dependence in a large number of fields including quantum physics and finance. The processes W_H^(j)(t) are self-similar in a wide sense, exhibit fatter tails than Gaussian processes, and converge to the Gaussian processes in distribution in some cases. In addition, we also show that the intensity function λ(t) strongly influences the existence of the highest finite moment of W_H^(j)(t) and the behaviour of the tail probability of W_H^(j)(t). 19. Statistics Science.gov (United States) Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report. 20. Evolutionary inference via the Poisson Indel Process. Science.gov (United States) Bouchard-Côté, Alexandre; Jordan, Michael I 2013-01-22 We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree.
The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. 1. Statistics International Nuclear Information System (INIS) 2005-01-01 For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004.) The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures presents: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossile fuels use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO 2 -emissions, Supplies and total consumption of electricity GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees 2. Poisson Spot with Magnetic Levitation Science.gov (United States) Hoover, Matthew; Everhart, Michael; D'Arruda, Jose 2010-01-01 In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow. 3. Harmonic statistics Energy Technology Data Exchange (ETDEWEB) Eliazar, Iddo, E-mail: [email protected] 2017-05-15 The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. 
Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established. 4. Harmonic statistics International Nuclear Information System (INIS) Eliazar, Iddo 2017-01-01 The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established. 5. Poisson hierarchy of discrete strings International Nuclear Information System (INIS) Ioannidou, Theodora; Niemi, Antti J. 2016-01-01 The Poisson geometry of a discrete string in three dimensional Euclidean space is investigated. For this the Frenet frames are converted into a spinorial representation, the discrete spinor Frenet equation is interpreted in terms of a transfer matrix formalism, and Poisson brackets are introduced in terms of the spinor components. The construction is then generalised, in a self-similar manner, into an infinite hierarchy of Poisson algebras. As an example, the classical Virasoro (Witt) algebra that determines reparametrisation diffeomorphism along a continuous string, is identified as a particular sub-algebra, in the hierarchy of the discrete string Poisson algebra. - Highlights: • Witt (classical Virasoro) algebra is derived in the case of discrete string. • Infinite dimensional hierarchy of Poisson bracket algebras is constructed for discrete strings. • Spinor representation of discrete Frenet equations is developed. 6. 
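The "Harmonic statistics" entries above describe a Poisson process on the positive half-line whose intensity is harmonic (proportional to 1/x). Assuming that reading, the Python sketch below simulates such a process on a finite window [a, b]: the number of points is Poisson with mean c·ln(b/a), and, given the count, points are placed with density proportional to 1/x by the inverse-CDF trick x = a·(b/a)^U. It is an illustrative sketch, not code from the cited papers.

import numpy as np

rng = np.random.default_rng(0)

def harmonic_poisson(a, b, c=1.0):
    """One realization of a Poisson process with intensity c/x on the window [a, b].
    Expected number of points: c*ln(b/a).  Given the count, points are i.i.d. with
    density proportional to 1/x, i.e. x = a*(b/a)**U with U uniform on (0, 1)."""
    n = rng.poisson(c * np.log(b / a))
    u = rng.uniform(size=n)
    return np.sort(a * (b / a) ** u)

# Scale invariance: the count falling in [a, 10a] has the same law for every a.
for a in (0.1, 1.0, 10.0):
    counts = [np.sum(harmonic_poisson(a, 100 * a, c=5.0) < 10 * a) for _ in range(2000)]
    print(f"a={a:5}: mean count in [a, 10a] ~ {np.mean(counts):.2f}  "
          f"(theory 5*ln(10) = {5 * np.log(10):.2f})")

The simulation shows the scale invariance mentioned in those entries: rescaling the window start a does not change the statistics of the counts.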
Poisson hierarchy of discrete strings Energy Technology Data Exchange (ETDEWEB) Ioannidou, Theodora, E-mail: [email protected] [Faculty of Civil Engineering, School of Engineering, Aristotle University of Thessaloniki, 54249, Thessaloniki (Greece); Niemi, Antti J., E-mail: [email protected] [Department of Physics and Astronomy, Uppsala University, P.O. Box 803, S-75108, Uppsala (Sweden); Laboratoire de Mathematiques et Physique Theorique CNRS UMR 6083, Fédération Denis Poisson, Université de Tours, Parc de Grandmont, F37200, Tours (France); Department of Physics, Beijing Institute of Technology, Haidian District, Beijing 100081 (China) 2016-01-28 The Poisson geometry of a discrete string in three dimensional Euclidean space is investigated. For this the Frenet frames are converted into a spinorial representation, the discrete spinor Frenet equation is interpreted in terms of a transfer matrix formalism, and Poisson brackets are introduced in terms of the spinor components. The construction is then generalised, in a self-similar manner, into an infinite hierarchy of Poisson algebras. As an example, the classical Virasoro (Witt) algebra that determines reparametrisation diffeomorphism along a continuous string, is identified as a particular sub-algebra, in the hierarchy of the discrete string Poisson algebra. - Highlights: • Witt (classical Virasoro) algebra is derived in the case of discrete string. • Infinite dimensional hierarchy of Poisson bracket algebras is constructed for discrete strings. • Spinor representation of discrete Frenet equations is developed. 7. Statistics International Nuclear Information System (INIS) 2001-01-01 For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures presents: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products 8. Statistics International Nuclear Information System (INIS) 2000-01-01 For the year 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. 
The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures presents: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products 9. Statistics International Nuclear Information System (INIS) 1999-01-01 For the year 1998 and the year 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures presents: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products 10. POLYETHYLENE ENCAPSULATION International Nuclear Information System (INIS) Kalb, P. 2001-01-01 Polyethylene microencapsulation physically homogenizes and incorporates mixed waste particles within a molten polymer matrix, forming a solidified final waste form upon cooling. Each individual particle of waste is embedded within the polymer block and is surrounded by a durable, leach-resistant coating. 
The process has been successfully applied for the treatment of a broad range of mixed wastes, including evaporator concentrate salts, soil, sludges, incinerator ash, off-gas blowdown solutions, decontamination solutions, molten salt oxidation process residuals, ion exchange resins, granular activated carbon, shredded dry active waste, spill clean-up residuals, depleted uranium powders, and failed grout waste forms. For waste streams containing high concentrations of soluble toxic metal contaminants, additives can be used to further reduce leachability, thus improving waste loadings while meeting or exceeding regulatory disposal criteria. In this configuration, contaminants are both chemically stabilized and physically solidified, making the process a true stabilization/solidification (S/S) technology. Unlike conventional hydraulic cement grouts or thermosetting polymers, thermoplastic polymers such as polyethylene require no chemical. reaction for solidification. Thus, a stable, solid, final waste form product is assured on cooling. Variations in waste chemistry over time do not affect processing parameters and do not require reformulation of the recipe. Incorporation of waste particles within the polymer matrix serves as an aggregate and improves the mechanical strength and integrity of the waste form. The compressive strength of polyethylene microencapsulated waste forms varies based on the type and quantity of waste encapsulated, but is typically between 7 and 17.2 MPa (1000 and 2500 psi), well above the minimum strength of 0.4 MPa (160 psi) recommended by the U.S. Nuclear Regulatory Commission (NRC) for low-level radioactive waste forms in support of 10 CFR 61 11. Statistics International Nuclear Information System (INIS) 2003-01-01 For the year 2002, part of the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also includes historical time-series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures presents: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossile fuels use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO 2 -emissions, Supply and total consumption of electricity GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees on oil pollution fees on energy products 12. Statistics International Nuclear Information System (INIS) 2004-01-01 For the year 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. 
The annual statistics of the Energy Review also includes historical time-series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures presents: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossile fuels use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO 2 -emissions, Supplies and total consumption of electricity GWh, Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees on oil pollution fees 13. Statistics International Nuclear Information System (INIS) 2000-01-01 For the year 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy also includes historical time series over a longer period (see e.g., Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures presents: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products 14. Polynomial Poisson algebras: Gel'fand-Kirillov problem and Poisson spectra OpenAIRE Lecoutre, César 2014-01-01 We study the fields of fractions and the Poisson spectra of polynomial Poisson algebras.\\ud \\ud First we investigate a Poisson birational equivalence problem for polynomial Poisson algebras over a field of arbitrary characteristic. Namely, the quadratic Poisson Gel'fand-Kirillov problem asks whether the field of fractions of a Poisson algebra is isomorphic to the field of fractions of a Poisson affine space, i.e. 
a polynomial algebra such that the Poisson bracket of two generators is equal to... 15. Non-equal-time Poisson brackets OpenAIRE Nikolic, H. 1998-01-01 The standard definition of the Poisson brackets is generalized to the non-equal-time Poisson brackets. Their relationship to the equal-time Poisson brackets, as well as to the equal- and non-equal-time commutators, is discussed. International Nuclear Information System (INIS) Pordes, O.; Plows, J.P. 1980-01-01 A method is described for encapsulating a particular radioactive waste which consists of suspending the waste in a viscous liquid encapsulating material, of synthetic resin monomers or prepolymers, and setting the encapsulating material by addition or condensation polymerization to form a solid material in which the waste is dispersed. (author) 17. Newton/Poisson-Distribution Program Science.gov (United States) Bowerman, Paul N.; Scheuer, Ernest M. 1990-01-01 NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for X(sup2) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C. 18. POISSON SUPERFISH, Poisson Equation Solver for Radio Frequency Cavity International Nuclear Information System (INIS) Colman, J. 2001-01-01 1 - Description of program or function: POISSON, SUPERFISH is a group of (1) codes that solve Poisson's equation and are used to compute field quality for both magnets and fixed electric potentials and (2) RF cavity codes that calculate resonant frequencies and field distributions of the fundamental and higher modes. The group includes: POISSON, PANDIRA, SUPERFISH, AUTOMESH, LATTICE, FORCE, MIRT, PAN-T, TEKPLOT, SF01, and SHY. POISSON solves Poisson's (or Laplace's) equation for the vector (scalar) potential with nonlinear isotropic iron (dielectric) and electric current (charge) distributions for two-dimensional Cartesian or three-dimensional cylindrical symmetry. It calculates the derivatives of the potential, the stored energy, and performs harmonic (multipole) analysis of the potential. PANDIRA is similar to POISSON except it allows anisotropic and permanent magnet materials and uses a different numerical method to obtain the potential. SUPERFISH solves for the accelerating (TM) and deflecting (TE) resonant frequencies and field distributions in an RF cavity with two-dimensional Cartesian or three-dimensional cylindrical symmetry. Only the azimuthally symmetric modes are found for cylindrically symmetric cavities. AUTOMESH prepares input for LATTICE from geometrical data describing the problem, (i.e., it constructs the 'logical' mesh and generates (x,y) coordinate data for straight lines, arcs of circles, and segments of hyperbolas). LATTICE generates an irregular triangular (physical) mesh from the input data, calculates the 'point current' terms at each mesh point in regions with distributed current density, and sets up the mesh point relaxation order needed to write the binary problem file for the equation-solving POISSON, PANDIRA, or SUPERFISH. FORCE calculates forces and torques on coils and iron regions from POISSON or PANDIRA solutions for the potential. 
MIRT optimizes magnet profiles, coil shapes, and current densities from POISSON output based on a 19. Coordination of Conditional Poisson Samples Directory of Open Access Journals (Sweden) Grafström Anton 2015-12-01 Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP sampling is a modification of the classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers. 20. The applicability of the Poisson distribution in radiochemical measurements International Nuclear Information System (INIS) Luthardt, M.; Proesch, U. 1980-01-01 The fact that, on principle, the Poisson distribution describes the statistics of nuclear decay is generally accepted. The applicability of this distribution to nuclear radiation measurements has recently been questioned. Applying the chi-squared test for goodness of fit on the analogy of the moving average, at least 3 cases may be distinguished, which lead to an incorrect rejection of the Poisson distribution for measurements. Examples are given. Distributions, which make allowance for special parameters, should only be used after careful examination of the data with regard to other interfering effects. The Poisson distribution will further on be applicable to many simple measuring operations. Some basic equations for the analysis of poisson-distributed data are given. (author) 1. Modeling laser velocimeter signals as triply stochastic Poisson processes Science.gov (United States) Mayo, W. T., Jr. 1976-01-01 Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals. 2. 
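The Newton/Poisson-Distribution Program and Cumulative Poisson Distribution Program entries in this listing rest on the classical link between cumulative Poisson sums and gamma / chi-square tail probabilities, and on inverting the cumulative probability to recover the Poisson parameter λ. The Python sketch below checks that link numerically and performs the inversion with a generic bracketing root finder rather than the Newton iteration those programs use; it is an independent illustration, not the NPO-17714/NPO-17715 code.

from scipy import optimize, stats

n, lam = 4, 7.3
p = stats.poisson.cdf(n, lam)   # P(X <= n) for X ~ Poisson(lam)

# Identities behind using cumulative Poisson sums for gamma and chi-square cdfs:
#   P(Poisson(lam) <= n) = P(Gamma(n+1, scale=1) > lam) = P(chi-square with 2(n+1) df > 2*lam)
print(p, stats.gamma.sf(lam, a=n + 1), stats.chi2.sf(2 * lam, df=2 * (n + 1)))

# Going the other way (the NEWTPOIS problem): given n and the cumulative
# probability p, recover lam.  A bracketing root finder stands in here for
# the Newton iteration used by the program.
lam_hat = optimize.brentq(lambda m: stats.poisson.cdf(n, m) - p, 1e-9, 1e3)
print(lam_hat)   # ~ 7.3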
Topological Poisson Sigma models on Poisson-Lie groups International Nuclear Information System (INIS) Calvo, Ivan; Falceto, Fernando; Garcia-Alvarez, David 2003-01-01 We solve the topological Poisson Sigma model for a Poisson-Lie group G and its dual G*. We show that the gauge symmetry for each model is given by its dual group that acts by dressing transformations on the target. The resolution of both models in the open geometry reveals that there exists a map from the reduced phase of each model (P and P*) to the main symplectic leaf of the Heisenberg double (D 0 ) such that the symplectic forms on P, P* are obtained as the pull-back by those maps of the symplectic structure on D 0 . This uncovers a duality between P and P* under the exchange of bulk degrees of freedom of one model with boundary degrees of freedom of the other one. We finally solve the Poisson Sigma model for the Poisson structure on G given by a pair of r-matrices that generalizes the Poisson-Lie case. The Hamiltonian analysis of the theory requires the introduction of a deformation of the Heisenberg double. (author) 3. Poisson's spot and Gouy phase Science.gov (United States) da Paz, I. G.; Soldati, Rodolfo; Cabral, L. A.; de Oliveira, J. G. G.; Sampaio, Marcos 2016-12-01 Recently there have been experimental results on Poisson spot matter-wave interferometry followed by theoretical models describing the relative importance of the wave and particle behaviors for the phenomenon. We propose an analytical theoretical model for Poisson's spot with matter waves based on the Babinet principle, in which we use the results for free propagation and single-slit diffraction. We take into account effects of loss of coherence and finite detection area using the propagator for a quantum particle interacting with an environment. We observe that the matter-wave Gouy phase plays a role in the existence of the central peak and thus corroborates the predominantly wavelike character of the Poisson's spot. Our model shows remarkable agreement with the experimental data for deuterium (D2) molecules. 4. Comparison of Poisson structures and Poisson-Lie dynamical r-matrices OpenAIRE Enriquez, B.; Etingof, P.; Marshall, I. 2004-01-01 We construct a Poisson isomorphism between the formal Poisson manifolds g^* and G^*, where g is a finite dimensional quasitriangular Lie bialgebra. Here g^* is equipped with its Lie-Poisson (or Kostant-Kirillov-Souriau) structure, and G^* with its Poisson-Lie structure. We also quantize Poisson-Lie dynamical r-matrices of Balog-Feher-Palla. 5. NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM Science.gov (United States) Bowerman, P. N. 1994-01-01 The cumulative poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is; the mean (or expected) number of events occurring in a given unit of time, area, or space. 
Given that the user already knows the cumulative probability for a specific number of occurrences (n) it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988. 6. Encapsulation by Janus spheroids OpenAIRE Li, Wei; Liu, Ya; Brett, Genevieve; Gunton, James D. 2011-01-01 The micro/nano encapsulation technology has acquired considerable attention in the fields of drug delivery, biomaterial engineering, and materials science. Based on recent advances in chemical particle synthesis, we propose a primitive model of an encapsulation system produced by the self-assembly of Janus oblate spheroids, particles with oblate spheroidal bodies and two hemi-surfaces coded with dissimilar chemical properties. Using Monte Carlo simulation, we investigate the encapsulation sys... 7. Background stratified Poisson regression analysis of cohort data. Science.gov (United States) Richardson, David B; Langholz, Bryan 2012-03-01 Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. 8. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion. 
Science.gov (United States) Feng, Wensen; Qiao, Peng; Chen, Yunjin; Wensen Feng; Peng Qiao; Yunjin Chen; Feng, Wensen; Chen, Yunjin; Qiao, Peng 2018-06-01 The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques mainly concentrate on achieving utmost performance, with little consideration for the computation efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model which has proven an extremely fast image restoration approach with performance surpassing recent state-of-the-arts. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable in this paper. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, meanwhile bearing the properties of simple structure and high efficiency. Furthermore, our proposed model comes along with an additional advantage, that the diffusion process is well-suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance. 9. Background stratified Poisson regression analysis of cohort data International Nuclear Information System (INIS) Richardson, David B.; Langholz, Bryan 2012-01-01 Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.) 10. 
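As a rough illustration of the unconditional form of the background stratified Poisson regression described in the two entries above, the Python sketch below simulates person-time count data across several background strata and fits a log-linear Poisson model with one indicator term per stratum and a log person-time offset (statsmodels). Per those abstracts, the conditional formulation returns the same exposure estimate without estimating the stratum coefficients; this sketch shows only the unconditional fit and uses made-up data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_strata, true_log_rr = 6, 0.4                  # exposure rate ratio exp(0.4)
stratum = np.repeat(np.arange(n_strata), 2)     # one unexposed and one exposed cell per stratum
exposed = np.tile([0, 1], n_strata)
person_time = rng.uniform(2000.0, 10000.0, size=stratum.size)
stratum_log_rate = rng.normal(-6.0, 0.5, size=n_strata)[stratum]
counts = rng.poisson(np.exp(stratum_log_rate + true_log_rr * exposed) * person_time)

# Design matrix: one indicator column per background stratum (no global intercept)
# plus the exposure term of primary interest.
X = np.column_stack([(stratum == s).astype(float) for s in range(n_strata)] + [exposed])
fit = sm.GLM(counts, X, family=sm.families.Poisson(), offset=np.log(person_time)).fit()
print("estimated log rate ratio:", fit.params[-1])   # should be near 0.4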
Perturbation-induced emergence of Poisson-like behavior in non-Poisson systems International Nuclear Information System (INIS) Akin, Osman C; Grigolini, Paolo; Paradisi, Paolo 2009-01-01 The response of a system with ON–OFF intermittency to an external harmonic perturbation is discussed. ON–OFF intermittency is described by means of a sequence of random events, i.e., the transitions from the ON to the OFF state and vice versa. The unperturbed waiting times (WTs) between two events are assumed to satisfy a renewal condition, i.e., the WTs are statistically independent random variables. The response of a renewal model with non-Poisson ON–OFF intermittency, associated with non-exponential WT distribution, is analyzed by looking at the changes induced in the WT statistical distribution by the harmonic perturbation. The scaling properties are also studied by means of diffusion entropy analysis. It is found that, in the range of fast and relatively strong perturbation, the non-Poisson system displays a Poisson-like behavior in both WT distribution and scaling. In particular, the histogram of perturbed WTs becomes a sequence of equally spaced peaks, with intensity decaying exponentially in time. Further, the diffusion entropy detects an ordinary scaling (related to normal diffusion) instead of the expected unperturbed anomalous scaling related to the inverse power-law decay. Thus, an analysis based on the WT histogram and/or on scaling methods has to be considered with some care when dealing with perturbed intermittent systems 11. Analyzing hospitalization data: potential limitations of Poisson regression. Science.gov (United States) Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R 2015-08-01 Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. 
Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved. 12. Markov modulated Poisson process models incorporating covariates for rainfall intensity. Science.gov (United States) Thayakaran, R; Ramesh, N I 2013-01-01 Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied. 13. Encapsulation plant at Forsmark International Nuclear Information System (INIS) Nystroem, Anders 2007-08-01 SKB has already carried out a preliminary study of an encapsulation plant detached from Clab (Central interim storage for spent fuels). This stand-alone encapsulation plant was named FRINK and its assumed siting was the above-ground portion of the final repository, irrespective of the repository's location. The report previously presented was produced in cooperation with BNFL Engineering Ltd in Manchester and the fuel reception technical solution was examined by Gesellschaft fuer Nuklear-Service mbH (GNS) in Hannover and by Societe Generale pour les Techniques Nouvelles (SGN) in Paris. This report is an update of the earlier preliminary study report and is based on the assumption that the encapsulation plant and also the final repository will be sited in the Forsmark area. SKB's main alternative for siting the encapsulation plant is next to Clab. Planning of this facility is ongoing and technical solutions from the planning work have been incorporated in this report. An encapsulation plant placed in proximity to any final repository in Forsmark forms part of the alternative presentation in the application for permission to construct and operate an installation at Clab. The main technical difference between the planned encapsulation plant at Clab and an encapsulation plant at a final repository at Forsmark is how the fuel is managed and prepared before actual encapsulation. Fuel reception at the encapsulation plant in Forsmark would be dry, i.e. there would be no water-filled pools at the facility. Clab is used for verificatory fuel measurements, sorting and drying of the fuel before transport to Forsmark. This means that Clab will require a measure of rebuilding and supplementary equipment. In purely technical terms, the prospects for building an encapsulation plant sited at Forsmark are good. A description of the advantages and drawbacks of siting the encapsulation plant at Clab as opposed to any final repository at Forsmark is presented in a separate report 14. 
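The hospitalization-data entry above reports that excess zeros and overdispersion make plain Poisson regression misleading for such counts. As a minimal, hedged illustration of the kind of diagnostic involved, the Python sketch below simulates zero-heavy, overdispersed counts and compares the sample variance to the mean, along with the Pearson chi-square per degree of freedom from a Poisson fit; values well above 1 are the usual signal to move to negative binomial or zero-inflated models, as that entry recommends. The data and figures here are illustrative, not taken from the cited study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
# Zero-heavy, overdispersed "days in hospital": most patients are never admitted,
# the rest have gamma-mixed (negative-binomial-like) Poisson counts.
never_admitted = rng.uniform(size=n) < 0.55
mu = rng.gamma(shape=1.2, scale=4.0, size=n)
y = np.where(never_admitted, 0, rng.poisson(mu))

print("mean:", y.mean(), "  variance:", y.var())     # variance >> mean signals overdispersion

fit = sm.GLM(y, np.ones((n, 1)), family=sm.families.Poisson()).fit()   # intercept-only Poisson fit
print("Pearson chi2 / df:", fit.pearson_chi2 / fit.df_resid)           # ~1 if Poisson were adequate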
Encapsulation plant at Forsmark Energy Technology Data Exchange (ETDEWEB) Nystroem, Anders 2007-08-15 SKB has already carried out a preliminary study of an encapsulation plant detached from Clab (Central interim storage for spent fuels). This stand-alone encapsulation plant was named FRINK and its assumed siting was the above-ground portion of the final repository, irrespective of the repository's location. The report previously presented was produced in cooperation with BNFL Engineering Ltd in Manchester and the fuel reception technical solution was examined by Gesellschaft fuer Nuklear-Service mbH (GNS) in Hannover and by Societe Generale pour les Techniques Nouvelles (SGN) in Paris. This report is an update of the earlier preliminary study report and is based on the assumption that the encapsulation plant and also the final repository will be sited in the Forsmark area. SKB's main alternative for siting the encapsulation plant is next to Clab. Planning of this facility is ongoing and technical solutions from the planning work have been incorporated in this report. An encapsulation plant placed in proximity to any final repository in Forsmark forms part of the alternative presentation in the application for permission to construct and operate an installation at Clab. The main technical difference between the planned encapsulation plant at Clab and an encapsulation plant at a final repository at Forsmark is how the fuel is managed and prepared before actual encapsulation. Fuel reception at the encapsulation plant in Forsmark would be dry, i.e. there would be no water-filled pools at the facility. Clab is used for verificatory fuel measurements, sorting and drying of the fuel before transport to Forsmark. This means that Clab will require a measure of rebuilding and supplementary equipment. In purely technical terms, the prospects for building an encapsulation plant sited at Forsmark are good. A description of the advantages and drawbacks of siting the encapsulation plant at Clab as opposed to any final repository at Forsmark is presented in a separate 15. Graded geometry and Poisson reduction OpenAIRE Cattaneo, A S; Zambon, M 2009-01-01 The main result of [2] extends the Marsden-Ratiu reduction theorem [4] in Poisson geometry, and is proven by means of graded geometry. In this note we provide the background material about graded geometry necessary for the proof in [2]. Further, we provide an alternative algebraic proof for the main result. ©2009 American Institute of Physics 16. Encapsulation of nuclear wastes International Nuclear Information System (INIS) Arnold, J.L.; Boyle, R.W. 1978-01-01 Toxic waste materials are encapsulated by the method wherein the waste material in liquid or finely divided solid form is uniformly dispersed in a vinyl ester resin or an unsaturated polyester and the resin cured under conditions that the exotherm does not rise above the temperature at which the integrity of the encapsulating material is destroyed 17. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals. Science.gov (United States) Frejlich, Pedro; Mărcuț, Ioan 2018-01-01 Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. 
A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras. 18. Independent production and Poisson distribution International Nuclear Information System (INIS) Golokhvastov, A.I. 1994-01-01 The well-known statement of factorization of inclusive cross-sections in case of independent production of particles (or clusters, jets etc.) and the conclusion of Poisson distribution over their multiplicity arising from it do not follow from the probability theory in any way. Using accurately the theorem of the product of independent probabilities, quite different equations are obtained and no consequences relative to multiplicity distributions are obtained. 11 refs 19. A generalized gyrokinetic Poisson solver International Nuclear Information System (INIS) Lin, Z.; Lee, W.W. 1995-03-01 A generalized gyrokinetic Poisson solver has been developed, which employs local operations in the configuration space to compute the polarization density response. The new technique is based on the actual physical process of gyrophase-averaging. It is useful for nonlocal simulations using general geometry equilibrium. Since it utilizes local operations rather than the global ones such as FFT, the new method is most amenable to massively parallel algorithms 20. Relaxed Poisson cure rate models. Science.gov (United States) Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N 2016-03-01 The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov, ) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin, ). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos, ) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al., ). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. 1. Poisson denoising on the sphere Science.gov (United States) Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M. 2009-08-01 In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on sphere, called Multi-Scale Variance Stabilizing Transform on Sphere (MS-VSTS). 
This method is based on a Variance Stabilizing Transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus, MS-VSTS consists in decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets...), and then applying a VST on the coefficients in order to get quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Then, hypothesis tests are made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data. 2. Singularities of Poisson structures and Hamiltonian bifurcations NARCIS (Netherlands) Meer, van der J.C. 2010-01-01 Consider a Poisson structure on C∞(R³, R) with bracket {, } and suppose that C is a Casimir function. Then {f, g} = ⟨∇C, ∇g × ∇f⟩ is a possible Poisson structure. This confirms earlier observations concerning the Poisson structure for Hamiltonian systems that are reduced to a one degree of freedom 3. A Martingale Characterization of Mixed Poisson Processes. Science.gov (United States) Pfeifer, Dietmar (Technical University Aachen) 1985-10-01 Mixed Poisson processes play an important role in many branches of applied probability, for instance in insurance mathematics and physics (see Albrecht 4. Events in time: Basic analysis of Poisson data International Nuclear Information System (INIS) Engelhardt, M.E. 1994-09-01 The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model when the rate of occurrence varies randomly. Examples and SAS programs are given 5. Events in time: Basic analysis of Poisson data Energy Technology Data Exchange (ETDEWEB) Engelhardt, M.E. 1994-09-01 The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model when the rate of occurrence varies randomly. Examples and SAS programs are given. 6. High-efficiency single cell encapsulation and size selective capture of cells in picoliter droplets based on hydrodynamic micro-vortices. Science.gov (United States) Kamalakshakurup, Gopakumar; Lee, Abraham P 2017-12-05 Single cell analysis has emerged as a paradigm shift in cell biology to understand the heterogeneity of individual cells in a clone for pathological interrogation.
Microfluidic droplet technology is a compelling platform to perform single cell analysis by encapsulating single cells inside picoliter-nanoliter (pL-nL) volume droplets. However, one of the primary challenges for droplet based single cell assays is single cell encapsulation in droplets, currently achieved either randomly, dictated by Poisson statistics, or by hydrodynamic techniques. In this paper, we present an interfacial hydrodynamic technique which initially traps the cells in micro-vortices, and later releases them one-to-one into the droplets, controlled by the width of the outer streamline that separates the vortex from the flow through the streaming passage adjacent to the aqueous-oil interface (d gap ). One-to-one encapsulation is achieved at a d gap equal to the radius of the cell, whereas complete trapping of the cells is realized at a d gap smaller than the radius of the cell. The unique feature of this technique is that it can perform 1. high efficiency single cell encapsulations and 2. size-selective capturing of cells, at low cell loading densities. Here we demonstrate these two capabilities with a 50% single cell encapsulation efficiency and size selective separation of platelets, RBCs and WBCs from a 10× diluted blood sample (WBC capture efficiency at 70%). The results suggest a passive, hydrodynamic micro-vortex based technique capable of performing high-efficiency single cell encapsulation for cell based assays. 7. Encapsulation with structured triglycerides Science.gov (United States) Lipids provide excellent materials to encapsulate bioactive compounds for food and pharmaceutical applications. Lipids are renewable, biodegradable, and easily modified to provide additional chemical functionality. The use of structured lipids that have been modified with photoactive properties are ... 8. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'. Science.gov (United States) de Nijs, Robin 2015-07-21 In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice, when simulating half-count (or less) images from full-count images. It simulates correctly the statistical properties, also in the case of rounding off of the images. 9. 
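As a numerical illustration of the distinction drawn in the resampling comment above (entry 8), the sketch below thins a simulated full-count image with a binomial draw per pixel, which is one standard way to implement Poisson resampling, and contrasts it with redrawing directly from Poisson or rounded Gaussian distributions at half the mean. The simulated image and the 50% fraction are assumptions; the original note used a measured Co-57 flood image.

```python
# Hedged sketch: binomial "thinning" (a common implementation of Poisson
# resampling) versus direct Poisson/Gaussian redrawing at half the mean.
import numpy as np

rng = np.random.default_rng(1)
full = rng.poisson(lam=50.0, size=(256, 256))        # simulated full-count image

half_resampled = rng.binomial(full, 0.5)              # thinning: keeps Poisson statistics exactly
half_poisson = rng.poisson(full / 2.0)                # redrawing from Poisson(mean/2)
half_gaussian = np.round(rng.normal(full / 2.0, np.sqrt(full / 2.0)))  # Gaussian redrawing, rounded

for name, img in [("resampled", half_resampled),
                  ("poisson", half_poisson),
                  ("gaussian", half_gaussian)]:
    print(f"{name:9s} mean = {img.mean():6.2f}  var/mean = {img.var() / img.mean():.3f}")
```

For the thinned image the variance-to-mean ratio stays near 1, while both redrawing schemes inflate it, which is consistent with the comment's conclusion that resampling best preserves the counting statistics.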
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images' DEFF Research Database (Denmark) de Nijs, Robin 2015-01-01 In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed...... by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all...... methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half count image had a severe impact on counting statistics... 10. Neutron fluctuation analysis in a subcritical multiplying system with a stochastically pulsed poisson source International Nuclear Information System (INIS) Kostic, Lj. 2003-01-01 The influence of the stochastically pulsed Poisson source to the statistical properties of the subcritical multiplying system is analyzed in the paper. It is shown a strong dependence on the pulse period and pulse width of the source (author) 11. Ship-Track Models Based on Poisson-Distributed Port-Departure Times National Research Council Canada - National Science Library Heitmeyer, Richard 2006-01-01 ... of those ships, and their nominal speeds. The probability law assumes that the ship departure times are Poisson-distributed with a time-varying departure rate and that the ship speeds and ship routes are statistically independent... 12. Poisson's ratio of fiber-reinforced composites Science.gov (United States) Christiansson, Henrik; Helsing, Johan 1996-05-01 Poisson's ratio flow diagrams, that is, the Poisson's ratio versus the fiber fraction, are obtained numerically for hexagonal arrays of elastic circular fibers in an elastic matrix. High numerical accuracy is achieved through the use of an interface integral equation method. Questions concerning fixed point theorems and the validity of existing asymptotic relations are investigated and partially resolved. Our findings for the transverse effective Poisson's ratio, together with earlier results for random systems by other authors, make it possible to formulate a general statement for Poisson's ratio flow diagrams: For composites with circular fibers and where the phase Poisson's ratios are equal to 1/3, the system with the lowest stiffness ratio has the highest Poisson's ratio. For other choices of the elastic moduli for the phases, no simple statement can be made. 13. Review of encapsulation technologies International Nuclear Information System (INIS) Shaulis, L. 1996-09-01 The use of encapsulation technology to produce a compliant waste form is an outgrowth from existing polymer industry technology and applications. During the past 12 years, the Department of Energy (DOE) has been researching the use of this technology to treat mixed wastes (i.e., containing hazardous and radioactive wastes). The two primary encapsulation techniques are microencapsulation and macroencapsulation. Microencapsulation is the thorough mixing of a binding agent with a powdered waste, such as incinerator ash. 
Macroencapsulation coats the surface of bulk wastes, such as lead debris. Cement, modified cement, and polyethylene are the binding agents which have been researched the most. Cement and modified cement have been the most commonly used binding agents to date. However, recent research conducted by DOE laboratories have shown that polyethylene is more durable and cost effective than cements. The compressive strength, leachability, resistance to chemical degradation, etc., of polyethylene is significantly greater than that of cement and modified cement. Because higher waste loads can be used with polyethylene encapsulant, the total cost of polyethylene encapsulation is significantly less costly than cement treatment. The only research lacking in the assessment of polyethylene encapsulation treatment for mixed wastes is pilot and full-scale testing with actual waste materials. To date, only simulated wastes have been tested. The Rocky Flats Environmental Technology Site had planned to conduct pilot studies using actual wastes during 1996. This experiment should provide similar results to the previous tests that used simulated wastes. If this hypothesis is validated as anticipated, it will be clear that polyethylene encapsulation should be pursued by DOE to produce compliant waste forms 14. Singular reduction of Nambu-Poisson manifolds Science.gov (United States) Das, Apurba The version of Marsden-Ratiu Poisson reduction theorem for Nambu-Poisson manifolds by a regular foliation have been studied by Ibáñez et al. In this paper, we show that this reduction procedure can be extended to the singular case. Under a suitable notion of Hamiltonian flow on the reduced space, we show that a set of Hamiltonians on a Nambu-Poisson manifold can also be reduced. 15. Nonlinear Poisson equation for heterogeneous media. Science.gov (United States) Hu, Langhua; Wei, Guo-Wei 2012-08-22 The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration of hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. Variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for a comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved. 16. 
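The nonlinear Poisson entry above (item 15) builds on the classical linear equation for a heterogeneous dielectric. As a toy illustration of that linear baseline only, and not of the paper's nonlinear or variational machinery, the sketch below solves -d/dx(eps(x) dphi/dx) = rho(x) on [0, 1] with zero boundary values by finite differences; the grid size, permittivity profile and charge density are arbitrary choices.

```python
# Illustrative 1-D finite-difference solve of the linear heterogeneous Poisson
# equation; all inputs are invented for demonstration.
import numpy as np

n = 200                                    # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

eps = np.where(x < 0.5, 2.0, 80.0)         # piecewise "heterogeneous" permittivity
rho = np.exp(-((x - 0.3) ** 2) / 0.005)    # localized charge density

# Harmonic-mean permittivity on cell faces keeps the flux continuous at the jump.
eps_face = np.empty(n + 1)
eps_face[1:-1] = 2.0 * eps[:-1] * eps[1:] / (eps[:-1] + eps[1:])
eps_face[0], eps_face[-1] = eps[0], eps[-1]

A = np.zeros((n, n))
for i in range(n):
    A[i, i] = (eps_face[i] + eps_face[i + 1]) / h**2
    if i > 0:
        A[i, i - 1] = -eps_face[i] / h**2
    if i < n - 1:
        A[i, i + 1] = -eps_face[i + 1] / h**2

phi = np.linalg.solve(A, rho)
print("max potential:", phi.max())
```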
Non-holonomic dynamics and Poisson geometry International Nuclear Information System (INIS) Borisov, A V; Mamaev, I S; Tsiganov, A V 2014-01-01 This is a survey of basic facts presently known about non-linear Poisson structures in the analysis of integrable systems in non-holonomic mechanics. It is shown that by using the theory of Poisson deformations it is possible to reduce various non-holonomic systems to dynamical systems on well-understood phase spaces equipped with linear Lie-Poisson brackets. As a result, not only can different non-holonomic systems be compared, but also fairly advanced methods of Poisson geometry and topology can be used for investigating them. Bibliography: 95 titles 17. Poisson Regression Analysis of Illness and Injury Surveillance Data Energy Technology Data Exchange (ETDEWEB) Frome E.L., Watkins J.P., Ellis E.D. 2012-12-12 The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson 18. Experimental dead-time distortions of Poisson processes International Nuclear Information System (INIS) Faraci, G.; Pennisi, A.R.; Consiglio Nazionale delle Ricerche, Catania 1983-01-01 In order to check the distortions, introduced by a non-extended dead time on the Poisson statistics, accurate experiments have been made in single channel counting. At a given measuring time, the dependence on the choice of the time origin and on the width of the dead time has been verified. An excellent agreement has been found between the theoretical expressions and the experimental curves. (orig.) 19. 
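The surveillance-data entry above (item 17) describes fitting a log-linear Poisson rate model with person-time at risk as an offset and then checking for over-dispersion. A minimal sketch of that workflow, using a fabricated stratified table and statsmodels rather than the report's own software, could look as follows; the column names and numbers are purely illustrative.

```python
# Hedged sketch of a log-linear Poisson rate model with a person-time offset
# and a Pearson chi-square/df over-dispersion check; the table is invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "events":       [12, 30, 9, 22, 40, 15],
    "person_years": [800, 1500, 400, 900, 1700, 500],
    "age_group":    ["<40", "<40", "<40", "40+", "40+", "40+"],
    "occupation":   ["office", "craft", "service"] * 2,
})

model = smf.glm(
    "events ~ age_group + occupation",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()

print(model.summary())
# Over-dispersion diagnostic: Pearson chi-square divided by residual degrees of
# freedom; values well above 1 suggest extra-Poisson variation.
print("Pearson chi2 / df =", model.pearson_chi2 / model.df_resid)
```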
Transport of encapsulated nuclear fuels International Nuclear Information System (INIS) Broman, Ulrika; Dybeck, Peter; Ekendahl, Ann-Mari 2005-12-01 The transport system for encapsulated fuel is described, including a preliminary drawing of a transport container. In the report, the encapsulation plant is assumed to be located to Oskarshamn, and the repository to Oskarshamn or Forsmark 20. Subcutaneous encapsulated fat necrosis DEFF Research Database (Denmark) Aydin, Dogu; Berg, Jais O 2016-01-01 We have described subcutaneous encapsulated fat necrosis, which is benign, usually asymptomatic and underreported. Images have only been published on two earlier occasions, in which the necrotic nodules appear "pearly" than the cloudy yellow surface in present case. The presented image may help... 1. On (co)homology of Frobenius Poisson algebras OpenAIRE Zhu, Can; Van Oystaeyen, Fred; ZHANG, Yinhuo 2014-01-01 In this paper, we study Poisson (co)homology of a Frobenius Poisson algebra. More precisely, we show that there exists a duality between Poisson homology and Poisson cohomology of Frobenius Poisson algebras, similar to that between Hochschild homology and Hochschild cohomology of Frobenius algebras. Then we use the non-degenerate bilinear form on a unimodular Frobenius Poisson algebra to construct a Batalin-Vilkovisky structure on the Poisson cohomology ring making it into a Batalin-Vilkovisk... 2. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model. Science.gov (United States) Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah 2012-01-01 Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis. 3. Monitoring Poisson observations using combined applications of Shewhart and EWMA charts Science.gov (United States) Abujiya, Mu'azu Ramat 2017-11-01 The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures of choice for monitoring Poisson observations in modern industries. 
Individually, the Shewhart and EWMA charts are only sensitive to large and small shifts, respectively. To enhance the detection abilities of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). Consequently, all the new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure. 4. Extension of the application of Conway-Maxwell-Poisson models: analyzing traffic crash data exhibiting underdispersion. Science.gov (United States) Lord, Dominique; Geedipally, Srinivas Reddy; Guikema, Seth D 2010-08-01 The objective of this article is to evaluate the performance of the COM-Poisson GLM for analyzing crash data exhibiting underdispersion (when conditional on the mean). The COM-Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subjected to either over- or underdispersion. Over the last year, the COM-Poisson GLM has been evaluated in the context of crash data analysis and it has been shown that the model performs as well as the Poisson-gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM-Poisson models were estimated using crash data collected at 162 railway-highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crash data to various explanatory variables are estimated. The modeling results were compared to those produced from the Poisson and gamma probability models documented in a previously published study. The results of this research show that the COM-Poisson GLM can handle crash data when the modeling output shows signs of underdispersion. Finally, they also show that the model proposed in this study provides better statistical performance than the gamma probability and the traditional Poisson models, at least for this data set. 5. POISSON, Analysis Solution of Poisson Problems in Probabilistic Risk Assessment International Nuclear Information System (INIS) Froehner, F.H. 1986-01-01 1 - Description of program or function: Purpose of program: Analytic treatment of two-stage Poisson problem in Probabilistic Risk Assessment. Input: estimated a-priori mean failure rate and error factor of system considered (for calculation of stage-1 prior), number of failures and operating times for similar systems (for calculation of stage-2 prior).
Output: a-posteriori probability distributions on linear and logarithmic time scale (on specified time grid) and expectation values of failure rate and error factors are calculated for: - stage-1 a-priori distribution, - stage-1 a-posteriori distribution, - stage-2 a-priori distribution, - stage-2 a-posteriori distribution. 2 - Method of solution: Bayesian approach with conjugate stage-1 prior, improved with experience from similar systems to yield stage-2 prior, and likelihood function from experience with system under study (documentation see below under 10.). 3 - Restrictions on the complexity of the problem: Up to 100 similar systems (including the system considered), arbitrary number of problems (failure types) with same grid 6. Square root approximation to the poisson channel NARCIS (Netherlands) Tsiatmas, A.; Willems, F.M.J.; Baggen, C.P.M.J. 2013-01-01 Starting from the Poisson model we present a channel model for optical communications, called the Square Root (SR) Channel, in which the noise is additive Gaussian with constant variance. Initially, we prove that for large peak or average power, the transmission rate of a Poisson Channel when coding 7. A Seemingly Unrelated Poisson Regression Model OpenAIRE King, Gary 1989-01-01 This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models. 8. Poisson geometry from a Dirac perspective Science.gov (United States) Meinrenken, Eckhard 2018-03-01 We present proofs of classical results in Poisson geometry using techniques from Dirac geometry. This article is based on mini-courses at the Poisson summer school in Geneva, June 2016, and at the workshop Quantum Groups and Gravity at the University of Waterloo, April 2016. 9. Associative and Lie deformations of Poisson algebras OpenAIRE Remm, Elisabeth 2011-01-01 Considering a Poisson algebra as a non associative algebra satisfying the Markl-Remm identity, we study deformations of Poisson algebras as deformations of this non associative algebra. This gives a natural interpretation of deformations which preserves the underlying associative structure and we study deformations which preserve the underlying Lie algebra. 10. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation International Nuclear Information System (INIS) Bardsley, Johnathan M; Goldes, John 2009-01-01 In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness 11. 
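Item 5 above describes a two-stage Bayesian treatment of a failure rate with a conjugate prior. The sketch below illustrates the underlying gamma-Poisson conjugate updating on invented numbers: a stage-1 gamma prior, a stage-2 prior obtained from pooled experience with similar systems, and a posterior from the record of the system under study. The prior parameters, failure counts and operating times are assumptions for illustration; the actual program constructs its stage-1 prior from an estimated mean failure rate and error factor.

```python
# Hedged sketch of gamma-Poisson conjugate updating for a failure rate;
# all numbers are invented and only stand in for the program's inputs.
from scipy import stats

# Stage-1 prior: gamma(shape a0, rate b0); prior mean a0/b0 = 1e-3 per hour.
a0, b0 = 0.5, 500.0

# Stage-2 prior: update with pooled experience from similar systems
# (total failures and total operating hours, both hypothetical).
similar_failures, similar_hours = 4, 2.0e4
a1, b1 = a0 + similar_failures, b0 + similar_hours

# Posterior: update with the record of the system under study.
own_failures, own_hours = 1, 8.0e3
a2, b2 = a1 + own_failures, b1 + own_hours

for label, (a, b) in [("stage-1 prior", (a0, b0)),
                      ("stage-2 prior", (a1, b1)),
                      ("posterior", (a2, b2))]:
    dist = stats.gamma(a, scale=1.0 / b)
    lo, hi = dist.ppf([0.05, 0.95])
    print(f"{label:14s} mean = {a / b:.2e}/h   90% interval = ({lo:.2e}, {hi:.2e})")
```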
A twisted generalization of Novikov-Poisson algebras OpenAIRE Yau, Donald 2010-01-01 Hom-Novikov-Poisson algebras, which are twisted generalizations of Novikov-Poisson algebras, are studied. Hom-Novikov-Poisson algebras are shown to be closed under tensor products and several kinds of twistings. Necessary and sufficient conditions are given under which Hom-Novikov-Poisson algebras give rise to Hom-Poisson algebras. 12. Speech parts as Poisson processes. Science.gov (United States) 2001-09-01 This paper presents evidence that six of the seven parts of speech occur in written text as Poisson processes, simple or recurring. The six major parts are nouns, verbs, adjectives, adverbs, prepositions, and conjunctions, with the interjection occurring too infrequently to support a model. The data consist of more than the first 5000 words of works by four major authors coded to label the parts of speech, as well as periods (sentence terminators). Sentence length is measured via the period and found to be normally distributed with no stochastic model identified for its occurrence. The models for all six speech parts but the noun significantly distinguish some pairs of authors and likewise for the joint use of all words types. Any one author is significantly distinguished from any other by at least one word type and sentence length very significantly distinguishes each from all others. The variety of word type use, measured by Shannon entropy, builds to about 90% of its maximum possible value. The rate constants for nouns are close to the fractions of maximum entropy achieved. This finding together with the stochastic models and the relations among them suggest that the noun may be a primitive organizer of written text. 13. Encapsulation Efficiency, Oscillatory Rheometry Directory of Open Access Journals (Sweden) 2014-01-01 Full Text Available Nanoliposomes are one of the most important polar lipid-based nanocarriers which can be used for encapsulation of both hydrophilic and hydrophobic active compounds. In this research, nanoliposomes based on lecithin-polyethylene glycol-gamma oryzanol were prepared by using a modified thermal method. Only one melting peak in DSC curve of gamma oryzanol bearing liposomes was observed which could be attributed to co-crystallization of both compounds. The addition of gamma oryzanol, caused to reduce the melting point of 5% (w/v lecithin-based liposome from 207°C to 163.2°C. At high level of lecithin, increasing of liposome particle size (storage at 4°C for two months was more obvious and particle size increased from 61 and 113 to 283 and 384 nanometers, respectively. The encapsulation efficiency of gamma oryzanol increased from 60% to 84.3% with increasing lecithin content. The encapsulation stability of oryzanol in liposome was determined at different concentrations of lecithin 3, 5, 10, 20% (w/v and different storage times (1, 7, 30 and 60 days. In all concentrations, the encapsulation stability slightly decreased during 30 days storage. The scanning electron microscopy (SEM images showed relatively spherical to elliptic particles which indicated to low extent of particles coalescence. The oscillatory rheometry showed that the loss modulus of liposomes were higher than storage modulus and more liquid-like behavior than solid-like behavior. 
The samples stored at 25°C for one month showed higher viscoelastic parameters than those stored at 4°C, which were attributed to higher membrane fluidity at 25°C and their final coalescence. 14. Deconvolution under Poisson noise using exact data fidelity and synthesis or analysis sparsity priors OpenAIRE Dupé, François-Xavier; Fadili, Jalal M.; Starck, Jean-Luc 2012-01-01 International audience; In this paper, we propose a Bayesian MAP estimator for solving the deconvolution problems when the observations are corrupted by Poisson noise. Towards this goal, a proper data fidelity term (log-likelihood) is introduced to reflect the Poisson statistics of the noise. On the other hand, as a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms such as wavelets or curvelets. Both analysis and synthesis-type spars... 15. Encapsulating spent nuclear fuel International Nuclear Information System (INIS) Fleischer, L.R.; Gunasekaran, M. 1979-01-01 A system is described for encapsulating spent nuclear fuel discharged from nuclear reactors in the form of rods or multi-rod assemblies. The rods are completely and contiguously enclosed in concrete in which metallic fibres are incorporated to increase thermal conductivity and polymers to decrease fluid permeability. This technique provides the advantage of acceptable long-term stability for storage over the conventional underwater storage method. Examples are given of suitable concrete compositions. (UK) 16. Constructions and classifications of projective Poisson varieties Science.gov (United States) Pym, Brent 2018-03-01 This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds. 17. Constructions and classifications of projective Poisson varieties. Science.gov (United States) Pym, Brent 2018-01-01 This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva.
The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds. 18. Dirichlet forms methods for Poisson point measures and Lévy processes with emphasis on the creation-annihilation techniques CERN Document Server Bouleau, Nicolas 2015-01-01 A simplified approach to Malliavin calculus adapted to Poisson random measures is developed and applied in this book. Called the “lent particle method” it is based on perturbation of the position of particles. Poisson random measures describe phenomena involving random jumps (for instance in mathematical finance) or the random distribution of particles (as in statistical physics). Thanks to the theory of Dirichlet forms, the authors develop a mathematical tool for a quite general class of random Poisson measures and significantly simplify computations of Malliavin matrices of Poisson functionals. The method gives rise to a new explicit calculus that they illustrate on various examples: it consists in adding a particle and then removing it after computing the gradient. Using this method, one can establish absolute continuity of Poisson functionals such as Lévy areas, solutions of SDEs driven by Poisson measure and, by iteration, obtain regularity of laws. The authors also give applications to error calcul... 19. Poisson-Hopf limit of quantum algebras International Nuclear Information System (INIS) Ballesteros, A; Celeghini, E; Olmo, M A del 2009-01-01 The Poisson-Hopf analogue of an arbitrary quantum algebra U z (g) is constructed by introducing a one-parameter family of quantizations U z,ℎ (g) depending explicitly on ℎ and by taking the appropriate ℎ → 0 limit. The q-Poisson analogues of the su(2) algebra are discussed and the novel su q P (3) case is introduced. The q-Serre relations are also extended to the Poisson limit. This approach opens the perspective for possible applications of higher rank q-deformed Hopf algebras in semiclassical contexts 20. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C. Science.gov (United States) Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed 2013-01-01 In recent years, hepatitis C virus (HCV) infection represents a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data was collected from a longitudinal study during 2005-2010. First, mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of estimators were compared. The results obtained from mixed PR showed that genotype 3 and treatment protocol were statistically significant. 
Results of zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on viral load of HCV patients. Of these two models, the estimators of zero-inflated Poisson mixed model had the minimum standard errors. The results showed that a mixed zero-inflated Poisson model was the almost best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in the longitudinal count data. 1. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance. Science.gov (United States) Poplová, Michaela; Sovka, Pavel; Cifra, Michal 2017-01-01 Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal. 2. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution. Science.gov (United States) Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep 2017-01-01 The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. 
Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. 3. The Poisson equation on Klein surfaces Directory of Open Access Journals (Sweden) Monica Rosiu 2016-04-01 Full Text Available We obtain a formula for the solution of the Poisson equation with Dirichlet boundary condition on a region of a Klein surface. This formula reveals the symmetric character of the solution. 4. Poisson point processes imaging, tracking, and sensing CERN Document Server Streit, Roy L 2010-01-01 This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection. 5. Noncommutative gauge theory for Poisson manifolds Energy Technology Data Exchange (ETDEWEB) Jurco, Branislav E-mail: [email protected]; Schupp, Peter E-mail: [email protected]; Wess, Julius E-mail: [email protected] 2000-09-25 A noncommutative gauge theory is associated to every Abelian gauge theory on a Poisson manifold. The semi-classical and full quantum version of the map from the ordinary gauge theory to the noncommutative gauge theory (Seiberg-Witten map) is given explicitly to all orders for any Poisson manifold in the Abelian case. In the quantum case the construction is based on Kontsevich's formality theorem. 6. Noncommutative gauge theory for Poisson manifolds International Nuclear Information System (INIS) Jurco, Branislav; Schupp, Peter; Wess, Julius 2000-01-01 A noncommutative gauge theory is associated to every Abelian gauge theory on a Poisson manifold. The semi-classical and full quantum version of the map from the ordinary gauge theory to the noncommutative gauge theory (Seiberg-Witten map) is given explicitly to all orders for any Poisson manifold in the Abelian case. In the quantum case the construction is based on Kontsevich's formality theorem 7. Principles of applying Poisson units in radiology International Nuclear Information System (INIS) Benyumovich, M.S. 2000-01-01 The probability that radioactive particles hit particular space patterns (e.g. cells in the squares of a count chamber net) and time intervals (e.g. radioactive particles hit a given area per time unit) follows the Poisson distribution. The mean is the only parameter from which all this distribution depends. A metrological base of counting the cells and radioactive particles is a property of the Poisson distribution assuming equality of a standard deviation to a root square of mean (property 1). The application of Poisson units in counting of blood formed elements and cultured cells was proposed by us (Russian Federation Patent No. 2126230). Poisson units relate to the means which make the property 1 valid. In a case of cells counting, the square of these units is equal to 1/10 of one of count chamber net where they count the cells. Thus one finds the means from the single cell count rate divided by 10. 
Finding the Poisson units when counting the radioactive particles should assume determination of a number of these particles sufficient to make equality 1 valid. To this end one should subdivide a time interval used in counting a single particle count rate into different number of equal portions (count numbers). Next one should pick out the count number ensuring the satisfaction of equality 1. Such a portion is taken as a Poisson unit in the radioactive particles count. If the flux of particles is controllable one should set up a count rate sufficient to make equality 1 valid. Operations with means obtained by with the use of Poisson units are performed on the base of approximation of the Poisson distribution by a normal one. (author) 8. Multivariate fractional Poisson processes and compound sums OpenAIRE Beghin, Luisa; Macci, Claudio 2015-01-01 In this paper we present multivariate space-time fractional Poisson processes by considering common random time-changes of a (finite-dimensional) vector of independent classical (non-fractional) Poisson processes. In some cases we also consider compound processes. We obtain some equations in terms of some suitable fractional derivatives and fractional difference operators, which provides the extension of known equations for the univariate processes. 9. Selective encapsulation by Janus particles Energy Technology Data Exchange (ETDEWEB) Li, Wei, E-mail: [email protected] [Materials Research Laboratory, University of California, Santa Barbara, California 93106 (United States); Ruth, Donovan; Gunton, James D. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States); Rickman, Jeffrey M. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States); Department of Materials Science and Engineering, Lehigh University, Bethlehem, Pennsylvania 18015 (United States) 2015-06-28 We employ Monte Carlo simulation to examine encapsulation in a system comprising Janus oblate spheroids and isotropic spheres. More specifically, the impact of variations in temperature, particle size, inter-particle interaction range, and strength is examined for a system in which the spheroids act as the encapsulating agents and the spheres as the encapsulated guests. In this picture, particle interactions are described by a quasi-square-well patch model. This study highlights the environmental adaptation and selectivity of the encapsulation system to changes in temperature and guest particle size, respectively. Moreover, we identify an important range in parameter space where encapsulation is favored, as summarized by an encapsulation map. Finally, we discuss the generalization of our results to systems having a wide range of particle geometries. 10. A dynamic bivariate Poisson model for analysing and forecasting match results in the English Premier League NARCIS (Netherlands) Koopman, S.J.; Lit, R. 2015-01-01 Summary: We develop a statistical model for the analysis and forecasting of football match results which assumes a bivariate Poisson distribution with intensity coefficients that change stochastically over time. The dynamic model is a novelty in the statistical time series analysis of match results 11. Quantization of the Poisson SU(2) and its Poisson homogeneous space - the 2-sphere International Nuclear Information System (INIS) Sheu, A.J.L. 
1991-01-01 We show that deformation quantizations of the Poisson structures on the Poisson Lie group SU(2) and its homogeneous space, the 2-sphere, are compatible with Woronowicz's deformation quantization of SU(2)'s group structure and Podles' deformation quantization of 2-sphere's homogeneous structure, respectively. So in a certain sense the multiplicativity of the Lie Poisson structure on SU(2) at the classical level is preserved under quantization. (orig.) 12. Differential expression analysis for RNAseq using Poisson mixed models. Science.gov (United States) Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang 2017-06-20 Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n <15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three data compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research. 13. A generalized right truncated bivariate Poisson regression model with applications to health data. Science.gov (United States) Islam, M Ataharul; Chowdhury, Rafiqul I 2017-01-01 A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over or under dispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model. 14. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes. 
Science.gov (United States) Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique 2015-05-01 The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis. 15. [Application of detecting and taking overdispersion into account in Poisson regression model]. Science.gov (United States) Bouche, G; Lepage, B; Migeot, V; Ingrand, P 2009-08-01 Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact with an illustration based on the data of a study investigating the relationship between use of the Internet to seek health information and number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were performed to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested overdispersion in the Poisson regression model using the ratio of the sum of Pearson residuals over the number of degrees of freedom (chi(2)/df). We then fitted the three models and compared parameter estimation to the estimations given by Poisson regression model. Variance of the number of primary care consultations (Var[Y]=21.03) was greater than the mean (E[Y]=5.93) and the chi(2)/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. Interpretation of estimates from two variables (using the Internet to seek health information and single parent family) would have changed according to the model retained, with significant levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (use of a robust estimator) and 0.45 and 0.13 (negative binomial) respectively. 
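As an illustrative aside before the entry's concluding remarks: the overdispersion diagnostic described above, the ratio of the Pearson chi-square statistic to the residual degrees of freedom, is easy to reproduce on simulated data. The sketch below assumes Python with numpy and statsmodels; the simulated counts and variable names are hypothetical and are not the consultation data of the cited study.

```python
# Minimal sketch (assumed names, simulated data): detect overdispersion in a
# Poisson regression via the Pearson chi2/df ratio, then refit with a
# negative binomial family as one possible remedy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.3 * x)                      # true mean structure
y = rng.negative_binomial(2, 2.0 / (2.0 + mu))  # over-dispersed counts (Var > mean)

X = sm.add_constant(x)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
print(f"Pearson chi2/df = {dispersion:.2f}")    # values well above 1 signal overdispersion

nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(poisson_fit.bse)                          # Poisson standard errors: too small here
print(nb_fit.bse)                               # negative binomial: larger, more honest
```

A ratio well above 1 reproduces the situation reported in the abstract: the plain Poisson fit understates the standard errors, and an overdispersed-Poisson, robust, or negative binomial fit changes which coefficients appear significant.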
Different methods exist to solve the problem of underestimating variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems to be particularly accurate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages. 16. Update on cellular encapsulation. Science.gov (United States) Smith, Kate E; Johnson, Robert C; Papas, Klearchos K 2018-05-06 There is currently a significant disparity between the number of patients who need lifesaving transplants and the number of donated human organs. Xenotransplantation is a way to address this disparity and attempts to enable the use of xenogeneic tissues have persisted for centuries. While immunologic incompatibilities have presented a persistent impediment to their use, encapsulation may represent a way forward for the use of cell-based xenogeneic therapeutics without the need for immunosuppression. In conjunction with modern innovations such as the use of bioprinting, incorporation of immune modulating molecules into capsule membranes, and genetic engineering, the application of xenogeneic cells to treat disorders ranging from pain to liver failure is becoming increasingly realistic. The present review discusses encapsulation in the context of xenotransplantation, focusing on the current status of clinical trials, persistent issues such as antigen shedding, oxygen availability, and donor selection, and recent developments that may address these limitations. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd. 17. Swedish encapsulation station review International Nuclear Information System (INIS) Andersson, Sven Olof; Brunzell, P.; Heibel, R.; McCarthy, J.; Pennington, C.; Rusch, C.; Varley, G. 1998-06-01 In the Encapsulation Station (ES) Review performed by NAC International, a number of different areas have been studied. The main objectives with the review have been to: Perform an independent review of the cost estimates for the ES presented in SKB's document 'Plan 1996'. This has been made through comparisons between the ES and BNFL's Waste Encapsulation Plant (WEP) at Sellafield as well as with the CLAB facility. Review the location of the ES (at the CLAB site or at the final repository) and its interaction with other parts of the Swedish system for spent fuel management. Review the logistics and plant capacity of the ES. Identify important safety aspects of the ES as a basis for future licensing activities. Based on NAC International's experience of casks for transport and storage of spent fuel, review the basic design of the copper/steel canister and the transport cask. This review includes design, manufacturing, handling and licensing aspects. Perform an overall comparison between the ES project and the CLAB project with the objective to identify major project risks and discuss their mitigation. 18. Swedish encapsulation station review Energy Technology Data Exchange (ETDEWEB) Andersson, Sven Olof; Brunzell, P.; Heibel, R.; McCarthy, J.; Pennington, C.; Rusch, C.; Varley, G. [NAC International, Zuerich (Switzerland)] 1998-06-01 In the Encapsulation Station (ES) Review performed by NAC International, a number of different areas have been studied. The main objectives with the review have been to: Perform an independent review of the cost estimates for the ES presented in SKBs document Plan 1996.
This has been made through comparisons between the ES and BNFLs Waste Encapsulation Plant (WEP) at Sellafield as well as with the CLAB facility. Review the location of the ES (at the CLAB site or at the final repository) and its interaction with other parts of the Swedish system for spent fuel management. Review the logistics and plant capacity of the ES. Identify important safety aspects of the ES as a basis for future licensing activities. Based on NAC Internationals experience of casks for transport and storage of spent fuel, review the basic design of the copper/steel canister and the transport cask. This review insides design, manufacturing, handling and licensing aspects. Perform an overall comparison between the ES project and the CLAB project with the objective to identify major project risks and discuss their mitigation 19 refs, 9 figs, 35 tabs 19. Poisson-event-based analysis of cell proliferation. Science.gov (United States) Summers, Huw D; Wills, John W; Brown, M Rowan; Rees, Paul 2015-05-01 A protocol for the assessment of cell proliferation dynamics is presented. This is based on the measurement of cell division events and their subsequent analysis using Poisson probability statistics. Detailed analysis of proliferation dynamics in heterogeneous populations requires single cell resolution within a time series analysis and so is technically demanding to implement. Here, we show that by focusing on the events during which cells undergo division rather than directly on the cells themselves a simplified image acquisition and analysis protocol can be followed, which maintains single cell resolution and reports on the key metrics of cell proliferation. The technique is demonstrated using a microscope with 1.3 μm spatial resolution to track mitotic events within A549 and BEAS-2B cell lines, over a period of up to 48 h. Automated image processing of the bright field images using standard algorithms within the ImageJ software toolkit yielded 87% accurate recording of the manually identified, temporal, and spatial positions of the mitotic event series. Analysis of the statistics of the interevent times (i.e., times between observed mitoses in a field of view) showed that cell division conformed to a nonhomogeneous Poisson process in which the rate of occurrence of mitotic events, λ exponentially increased over time and provided values of the mean inter mitotic time of 21.1 ± 1.2 hours for the A549 cells and 25.0 ± 1.1 h for the BEAS-2B cells. Comparison of the mitotic event series for the BEAS-2B cell line to that predicted by random Poisson statistics indicated that temporal synchronisation of the cell division process was occurring within 70% of the population and that this could be increased to 85% through serum starvation of the cell culture. © 2015 International Society for Advancement of Cytometry. 20. The Poisson equation at second order in relativistic cosmology International Nuclear Information System (INIS) Hidalgo, J.C.; Christopherson, Adam J.; Malik, Karim A. 2013-01-01 We calculate the relativistic constraint equation which relates the curvature perturbation to the matter density contrast at second order in cosmological perturbation theory. This relativistic ''second order Poisson equation'' is presented in a gauge where the hydrodynamical inhomogeneities coincide with their Newtonian counterparts exactly for a perfect fluid with constant equation of state. 
We use this constraint to introduce primordial non-Gaussianity in the density contrast in the framework of General Relativity. We then derive expressions that can be used as the initial conditions of N-body codes for structure formation which probe the observable signature of primordial non-Gaussianity in the statistics of the evolved matter density field. 1. Improving EWMA Plans for Detecting Unusual Increases in Poisson Counts Directory of Open Access Journals (Sweden) R. S. Sparks 2009-01-01 An adaptive exponentially weighted moving average (EWMA) plan is developed for signalling unusually high incidence when monitoring a time series of nonhomogeneous daily disease counts. A Poisson transitional regression model is used to fit background/expected trend in counts and provides “one-day-ahead” forecasts of the next day's count. Departures of counts from their forecasts are monitored. The paper outlines an approach for improving early outbreak data signals by dynamically adjusting the exponential weights to be efficient at signalling local persistent high side changes. We emphasise outbreak signals in steady-state situations; that is, changes that occur after the EWMA statistic had run through several in-control counts. 2. Micro-Encapsulation of Probiotics Science.gov (United States) Meiners, Jean-Antoine Micro-encapsulation is defined as the technology for packaging with the help of protective membranes particles of finely ground solids, droplets of liquids or gaseous materials in small capsules that release their contents at controlled rates over prolonged periods of time under the influences of specific conditions (Boh, 2007). The material encapsulating the core is referred to as coating or shell. 3. Encapsulation of Clay Platelets inside Latex Particles NARCIS (Netherlands) Voorn, D.J.; Ming, W.; Herk, van A.M.; Fernando, R.H.; Sung, Li-Piin 2009-01-01 We present our recent attempts in encapsulating clay platelets inside latex particles by emulsion polymerization. Face modification of clay platelets by cationic exchange has been shown to be insufficient for clay encapsulation, leading to armored latex particles. Successful encapsulation of 4. Modeling spiking behavior of neurons with time-dependent Poisson processes. Science.gov (United States) Shinomoto, S; Tsubo, Y 2001-10-01 Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys. 5. Directory of Open Access Journals (Sweden) 2015-01-01 Full Text Available Palisaded encapsulated neuroma (PEN) is a benign cutaneous or mucosal neural tumor which, usually, presents as a solitary, firm, asymptomatic papule or nodule showing striking predilection for the face. It occurs commonly in middle age, and there is no sex predilection. Oral PEN are not common, and these lesions must be distinguished from other peripheral nerve sheath tumors such as the neurofibroma, neurilemma (schwannoma), and traumatic neuroma.
The major challenge in dealing with lesions of PEN is to avoid the misdiagnosis of neural tumors that may be associated with systemic syndromes such as neurofibromatosis and multiple endocrine neoplasia syndrome type 2B. Here, we present a case of benign PEN of the gingiva in the left anterior mandibular region, laying importance on immunohistochemical staining in diagnosing such lesions. 6. Encapsulated scintillation detector International Nuclear Information System (INIS) Toepke, I.L. 1982-01-01 A scintillation detector crystal is encapsulated in a hermetically sealed housing having a glass window. The window may be mounted in a ring by a compression seal formed during cooling of the ring and window after heating. The window may be chemically bonded to the ring with or without a compression seal. The ring is welded to the housing along thin weld flanges to reduce the amount of weld heat which must be applied. A thin section is provided to resist the flow of welding heat to the seal between the ring and the window thereby forming a thermal barrier. The thin section may be provided by a groove cut partially through the wall of the ring. A layer of PTFE between the tubular body and the crystal minimizes friction created by thermal expansion. Spring washers urge the crystal towards the window. (author) 7. Liposome-encapsulated chemotherapy DEFF Research Database (Denmark) Børresen, B.; Hansen, A. E.; Kjær, A. 2018-01-01 Cytotoxic drugs encapsulated into liposomes were originally designed to increase the anticancer response, while minimizing off-target adverse effects. The first liposomal chemotherapeutic drug was approved for use in humans more than 20years ago, and the first publication regarding its use...... to inherent issues with the enhanced permeability and retention effect, the tumour phenomenon which liposomal drugs exploit. This effect seems very heterogeneously distributed in the tumour. Also, it is potentially not as ubiquitously occurring as once thought, and it may prove important to select patients...... not resolve the other challenges that liposomal chemotherapy faces, and more work still needs to be done to determine which veterinary patients may benefit the most from liposomal chemotherapy.... 8. New process encapsulates International Nuclear Information System (INIS) Mueller, J.J. 1982-01-01 The results of the various aspects of this study indicate that the encapsulation process is not only capable of reducing the percent of Radon-222 emanation but also reduces the possibility of the leaching of toxic elements. Radon-222 emanation after solidification showed a 93.51% reduction from the slurry. The Gamma Spectral Analyses of short-lived Radon daughters supported the above findings. Leach studies on solidified refinery waste and transformer oils indicate there is a significant reduction in the possibility of toxic substances leaching out of the solidified samples. Further studies are needed to confirm the results of this investigation; however, the present findings indicate that the process could substantially reduce Radon-222 exhalation into the environment from uranium tailings ponds and reduce toxic leachates from hazardous waste materials 9. Counting statistics in radioactivity measurements International Nuclear Information System (INIS) Martin, J. 
1975-01-01 The application of statistical methods to radioactivity measurement problems is analyzed in several chapters devoted successively to: the statistical nature of radioactivity counts; the application to radioactive counting of two theoretical probability distributions, Poisson's distribution law and the Laplace-Gauss law; true counting laws; corrections related to the nature of the apparatus; statistical techniques in gamma spectrometry [fr] 10. Selective Contrast Adjustment by Poisson Equation Directory of Open Access Journals (Sweden) Ana-Belen Petro 2013-09-01 Full Text Available Poisson Image Editing is a new technique that permits modifying the gradient vector field of an image and then recovering an image whose gradient approaches this modified field. This amounts to solving a Poisson equation, an operation which can be efficiently performed by the Fast Fourier Transform (FFT). This paper describes an algorithm applying this technique, with two different variants. The first variant enhances the contrast by increasing the gradient in the dark regions of the image. This method is well adapted to images with back light or strong shadows, and reveals details in the shadows. The second variant of the same Poisson technique enhances all small gradients in the image, thus also sometimes revealing details and texture. 11. High order Poisson Solver for unbounded flows DEFF Research Database (Denmark) Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe 2015-01-01 This paper presents a high order method for solving the unbounded Poisson equation on a regular mesh using a Green’s function solution. The high order convergence was achieved by formulating mollified integration kernels that were derived from a filter regularisation of the solution field....... The method was implemented on a rectangular domain using fast Fourier transforms (FFT) to increase computational efficiency. The Poisson solver was extended to directly solve the derivatives of the solution. This is achieved either by including the differential operator in the integration kernel...... the equations of fluid mechanics as an example, but can be used in many physical problems to solve the Poisson equation on a rectangular unbounded domain. For the two-dimensional case we propose an infinitely smooth test function which allows for arbitrary high order convergence. Using Gaussian smoothing... 12. Poisson-Jacobi reduction of homogeneous tensors International Nuclear Information System (INIS) Grabowski, J; Iglesias, D; Marrero, J C; Padron, E; Urbanski, P 2004-01-01 The notion of homogeneous tensors is discussed. We show that there is a one-to-one correspondence between multivector fields on a manifold M, homogeneous with respect to a vector field Δ on M, and first-order polydifferential operators on a closed submanifold N of codimension 1 such that Δ is transversal to N. This correspondence relates the Schouten-Nijenhuis bracket of multivector fields on M to the Schouten-Jacobi bracket of first-order polydifferential operators on N and generalizes the Poissonization of Jacobi manifolds. Actually, it can be viewed as a super-Poissonization. This procedure of passing from a homogeneous multivector field to a first-order polydifferential operator can also be understood as a sort of reduction; in the standard case, a half of a Poisson reduction. A dual version of the above correspondence yields in particular the correspondence between Δ-homogeneous symplectic structures on M and contact structures on N. 13.
The BRST complex of homological Poisson reduction Science.gov (United States) Müller-Lennert, Martin 2017-02-01 BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure. 14. Estimation of Poisson noise in spatial domain Science.gov (United States) Švihlík, Jan; Fliegel, Karel; Vítek, Stanislav; Kukal, Jaromír.; Krbcová, Zuzana 2017-09-01 This paper deals with modeling of astronomical images in the spatial domain. We consider astronomical light images contaminated by the dark current which is modeled by Poisson random process. Dark frame image maps the thermally generated charge of the CCD sensor. In this paper, we solve the problem of an addition of two Poisson random variables. At first, the noise analysis of images obtained from the astronomical camera is performed. It allows estimating parameters of the Poisson probability mass functions in every pixel of the acquired dark frame. Then the resulting distributions of the light image can be found. If the distributions of the light image pixels are identified, then the denoising algorithm can be applied. The performance of the Bayesian approach in the spatial domain is compared with the direct approach based on the method of moments and the dark frame subtraction. 15. Evaluating the double Poisson generalized linear model. Science.gov (United States) Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique 2013-10-01 The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. 
For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved. 16. Bayesian regression of piecewise homogeneous Poisson processes Directory of Open Access Journals (Sweden) Diego Sevilla 2015-12-01 Full Text Available In this paper, a Bayesian method for piecewise regression is adapted to handle counting processes data distributed as Poisson. A numerical code in Mathematica is developed and tested analyzing simulated data. The resulting method is valuable for detecting breaking points in the count rate of time series for Poisson processes. Received: 2 November 2015, Accepted: 27 November 2015; Edited by: R. Dickman; Reviewed by: M. Hutter, Australian National University, Canberra, Australia.; DOI: http://dx.doi.org/10.4279/PIP.070018 Cite as: D J R Sevilla, Papers in Physics 7, 070018 (2015 17. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising. Science.gov (United States) Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George 2009-08-01 We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques. 18. Image deblurring with Poisson data: from cells to galaxies International Nuclear Information System (INIS) Bertero, M; Boccacci, P; Desiderà, G; Vicidomini, G 2009-01-01 Image deblurring is an important topic in imaging science. In this review, we consider together fluorescence microscopy and optical/infrared astronomy because of two common features: in both cases the imaging system can be described, with a sufficiently good approximation, by a convolution operator, whose kernel is the so-called point-spread function (PSF); moreover, the data are affected by photon noise, described by a Poisson process. This statistical property of the noise, that is common also to emission tomography, is the basis of maximum likelihood and Bayesian approaches introduced in the mid eighties. 
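A brief aside before the review's discussion resumes: the best-known maximum likelihood algorithm for deconvolution under Poisson noise is the Richardson-Lucy (ML-EM) iteration. The following 1-D sketch uses a synthetic signal and a Gaussian blur kernel; it is a generic illustration under assumed parameters, not code from the cited review.

```python
# Minimal sketch: Richardson-Lucy (ML-EM) deconvolution under Poisson noise,
# in 1-D with a known, normalized point-spread function (PSF).
import numpy as np

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    x = np.full_like(y, y.mean())          # flat, non-negative initial estimate
    psf_adj = psf[::-1]                    # adjoint (flipped) kernel
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, eps)
        x = x * np.convolve(ratio, psf_adj, mode="same")
    return x

rng = np.random.default_rng(1)
truth = np.zeros(200)
truth[60] = 80.0                           # a bright point source
truth[120:140] = 20.0                      # an extended feature
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
psf /= psf.sum()
observed = rng.poisson(np.convolve(truth, psf, mode="same")).astype(float)
estimate = richardson_lucy(observed, psf)
```

Each iteration multiplies the current estimate by the back-projected ratio of observed to predicted counts, which keeps the estimate non-negative and, with a normalized PSF, approximately conserves total counts; both properties are natural for photon-limited data.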
From then on, a huge amount of literature has been produced on these topics. This review is a tutorial and a review of a relevant part of this literature, including some of our previous contributions. We discuss the mathematical modeling of the process of image formation and detection, and we introduce the so-called Bayesian paradigm that provides the basis of the statistical treatment of the problem. Next, we describe and discuss the most frequently used algorithms as well as other approaches based on a different description of the Poisson noise. We conclude with a review of other topics related to image deblurring such as boundary effect correction, space-variant PSFs, super-resolution, blind deconvolution and multiple-image deconvolution. (topical review) 19. OSR encapsulation basis -- 100-KW International Nuclear Information System (INIS) Meichle, R.H. 1995-01-01 The purpose of this report is to provide the basis for a change in the Operations Safety Requirement (OSR) encapsulated fuel storage requirements in the 105 KW fuel storage basin which will permit the handling and storing of encapsulated fuel in canisters which no longer have a water-free space in the top of the canister. The scope of this report is limited to providing the change from the perspective of the safety envelope (bases) of the Safety Analysis Report (SAR) and Operations Safety Requirements (OSR). It does not change the encapsulation process itself. 20. A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography. Science.gov (United States) Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi 2016-10-01 Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs on PET projection data prior to reconstruction due to physical effects, measurement errors, correction of deadtime, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite to develop efficient reconstruction and processing methods and to reduce noise. The deviation from Poisson statistics in PET data could be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν < 1) and under-dispersion (ν > 1) of data. A simple and efficient method for λ and ν parameter estimation is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters could detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low counting emission data, as in dynamic PET data, where the method demonstrated the best accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved. 1. Poisson processes and a Bessel function integral NARCIS (Netherlands) Steutel, F.W.
1985-01-01 The probability of winning a simple game of competing Poisson processes turns out to be equal to the well-known Bessel function integral J(x, y) (cf. Y. L. Luke, Integrals of Bessel Functions, McGraw-Hill, New York, 1962). Several properties of J, some of which seem to be new, follow quite easily. 2. Almost Poisson integration of rigid body systems International Nuclear Information System (INIS) Austin, M.A.; Krishnaprasad, P.S.; Li-Sheng Wang 1993-01-01 In this paper we discuss the numerical integration of Lie-Poisson systems using the mid-point rule. Since such systems result from the reduction of Hamiltonian systems with symmetry by Lie group actions, we also present examples of reconstruction rules for the full dynamics. A primary motivation is to preserve, in the integration process, various conserved quantities of the original dynamics. A main result of this paper is an O(h³) error estimate for the Lie-Poisson structure, where h is the integration step-size. We note that Lie-Poisson systems appear naturally in many areas of physical science and engineering, including theoretical mechanics of fluids and plasmas, satellite dynamics, and polarization dynamics. In the present paper we consider a series of progressively complicated examples related to rigid body systems. We also consider a dissipative example associated to a Lie-Poisson system. The behavior of the mid-point rule and an associated reconstruction rule is numerically explored. 24 refs., 9 figs. 3. Measuring Poisson Ratios at Low Temperatures Science.gov (United States) Boozon, R. S.; Shepic, J. A. 1987-01-01 Simple extensometer ring measures bulges of specimens in compression. New method of measuring Poisson's ratio used on brittle ceramic materials at cryogenic temperatures. Extensometer ring encircles cylindrical specimen. Four strain gauges connected in a fully active Wheatstone bridge are self-temperature-compensating. Used at temperatures as low as liquid helium. 4. Affine Poisson Groups and WZW Model Directory of Open Access Journals (Sweden) 2008-01-01 Full Text Available We give a detailed description of a dynamical system which enjoys a Poisson-Lie symmetry with two non-isomorphic dual groups. The system is obtained by taking the q → ∞ limit of the q-deformed WZW model and the understanding of its symmetry structure results in uncovering an interesting duality of its exchange relations. 5. Easy Demonstration of the Poisson Spot Science.gov (United States) Gluck, Paul 2010-01-01 Many physics teachers have a set of slides of single, double and multiple slits to show their students the phenomena of interference and diffraction. Thomas Young's historic experiments with double slits were indeed a milestone in proving the wave nature of light. But another experiment, namely the Poisson spot, was also important historically and… 6. Quantum fields and Poisson processes. Pt. 2 International Nuclear Information System (INIS) Bertrand, J.; Gaveau, B.; Rideau, G. 1985-01-01 Quantum field evolutions are written as expectation values with respect to Poisson processes in two simple models; interaction of two boson fields (with conservation of the number of particles in one field) and interaction of a boson with a fermion field. The introduction of a cut-off ensures that the expectation values are well-defined. (orig.) 7. Natural Poisson structures of nonlinear plasma dynamics International Nuclear Information System (INIS) Kaufman, A.N.
1982-01-01 Hamiltonian field theories, for models of nonlinear plasma dynamics, require a Poisson bracket structure for functionals of the field variables. These are presented, applied, and derived for several sets of field variables: coherent waves, incoherent waves, particle distributions, and multifluid electrodynamics. Parametric coupling of waves and plasma yields concise expressions for ponderomotive effects (in kinetic and fluid models) and for induced scattering. (Auth.) 8. Poisson brackets for fluids and plasmas International Nuclear Information System (INIS) Morrison, P.J. 1982-01-01 Noncanonical yet Hamiltonian descriptions are presented of many of the non-dissipative field equations that govern fluids and plasmas. The dynamical variables are the usually encountered physical variables. These descriptions have the advantage that gauge conditions are absent, but at the expense of introducing peculiar Poisson brackets. Clebsch-like potential descriptions that reverse this situations are also introduced 9. Natural Poisson structures of nonlinear plasma dynamics International Nuclear Information System (INIS) Kaufman, A.N. 1982-06-01 Hamiltonian field theories, for models of nonlinear plasma dynamics, require a Poisson bracket structure for functionals of the field variables. These are presented, applied, and derived for several sets of field variables: coherent waves, incoherent waves, particle distributions, and multifluid electrodynamics. Parametric coupling of waves and plasma yields concise expressions for ponderomotive effects (in kinetic and fluid models) and for induced scattering 10. Coherent transform, quantization, and Poisson geometry CERN Document Server Novikova, E; Itskov, V; Karasev, M V 1998-01-01 This volume contains three extensive articles written by Karasev and his pupils. Topics covered include the following: coherent states and irreducible representations for algebras with non-Lie permutation relations, Hamilton dynamics and quantization over stable isotropic submanifolds, and infinitesimal tensor complexes over degenerate symplectic leaves in Poisson manifolds. The articles contain many examples (including from physics) and complete proofs. 11. Efficient information transfer by Poisson neurons Czech Academy of Sciences Publication Activity Database Košťál, Lubomír; Shinomoto, S. 2016-01-01 Roč. 13, č. 3 (2016), s. 509-520 ISSN 1547-1063 R&D Projects: GA ČR(CZ) GA15-08066S Institutional support: RVO:67985823 Keywords : information capacity * Poisson neuron * metabolic cost * decoding error Subject RIV: BD - Theory of Information Impact factor: 1.035, year: 2016 12. Gravity Probe B Encapsulated Science.gov (United States) 2004-01-01 In this photo, the Gravity Probe B (GP-B) space vehicle is being encapsulated atop the Delta II launch vehicle. The GP-B is the relativity experiment developed at Stanford University to test two extraordinary predictions of Albert Einstein's general theory of relativity. The experiment will measure, very precisely, the expected tiny changes in the direction of the spin axes of four gyroscopes contained in an Earth-orbiting satellite at a 400-mile altitude. So free are the gyroscopes from disturbance that they will provide an almost perfect space-time reference system. They will measure how space and time are very slightly warped by the presence of the Earth, and, more profoundly, how the Earth's rotation very slightly drags space-time around with it. 
These effects, though small for the Earth, have far-reaching implications for the nature of matter and the structure of the Universe. GP-B is among the most thoroughly researched programs ever undertaken by NASA. This is the story of a scientific quest in which physicists and engineers have collaborated closely over many years. Inspired by their quest, they have invented a whole range of technologies that are already enlivening other branches of science and engineering. Launched April 20, 2004, the GP-B program was managed for NASA by the Marshall Space Flight Center. Development of the GP-B is the responsibility of Stanford University along with major subcontractor Lockheed Martin Corporation. (Image credit to Russ Underwood, Lockheed Martin Corporation). 13. Sclerosing Encapsulating Peritonitis; Review Directory of Open Access Journals (Sweden) 2016-05-01 Full Text Available Sclerosing encapsulating peritonitis (SEP) is a rare chronic inflammatory condition of the peritoneum with an unknown aetiology. Also known as abdominal cocoon, the condition occurs when loops of the bowel are encased within the peritoneal cavity by a membrane, leading to intestinal obstruction. Due to its rarity and nonspecific clinical features, it is often misdiagnosed. The condition presents with recurrent episodes of small bowel obstruction and can be idiopathic or secondary; the latter is associated with predisposing factors such as peritoneal dialysis or abdominal tuberculosis. In the early stages, patients can be managed conservatively; however, surgical intervention is necessary for those with advanced stage intestinal obstruction. A literature review revealed 118 cases of SEP; the mean age of these patients was 39 years and 68.0% were male. The predominant presentation was abdominal pain (72.0%), distension (44.9%) or a mass (30.5%). Almost all of the patients underwent surgical excision (99.2%) without postoperative complications (88.1%). 14. Encapsulation process for diffraction gratings. Science.gov (United States) Ratzsch, Stephan; Kley, Ernst-Bernhard; Tünnermann, Andreas; Szeghalmi, Adriana 2015-07-13 Encapsulation of grating structures facilitates an improvement of the optical functionality and/or adds mechanical stability to the fragile structure. Here, we introduce a novel encapsulation process for nanoscale patterns based on atomic layer deposition and microstructuring. The overall size of the encapsulated structured surface area is only restricted by the size of the available microstructuring and coating devices; thus, overcoming inherent limitations of existing bonding processes concerning cleanliness, roughness, and curvature of the components. Finally, the process is demonstrated for a transmission grating. The encapsulated grating has 97.5% transmission efficiency in the -1st diffraction order for TM-polarized light, and is limited by the experimental grating parameters as confirmed by rigorous coupled wave analysis. 15. Encapsulated microsensors for reservoir interrogation Science.gov (United States) Scott, Eddie Elmer; Aines, Roger D.; Spadaccini, Christopher M. 2016-03-08 In one general embodiment, a system includes at least one microsensor configured to detect one or more conditions of a fluidic medium of a reservoir; and a receptacle, wherein the receptacle encapsulates the at least one microsensor.
In another general embodiment, a method include injecting the encapsulated at least one microsensor as recited above into a fluidic medium of a reservoir; and detecting one or more conditions of the fluidic medium of the reservoir. 16. Fundamentals of statistics CERN Document Server Mulholland, Henry 1968-01-01 Fundamentals of Statistics covers topics on the introduction, fundamentals, and science of statistics. The book discusses the collection, organization and representation of numerical data; elementary probability; the binomial Poisson distributions; and the measures of central tendency. The text describes measures of dispersion for measuring the spread of a distribution; continuous distributions for measuring on a continuous scale; the properties and use of normal distribution; and tests involving the normal or student's 't' distributions. The use of control charts for sample means; the ranges 17. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments International Nuclear Information System (INIS) Fisicaro, G.; Goedecker, S.; Genovese, L.; Andreussi, O.; Marzari, N. 2016-01-01 The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann, allowing to solve iteratively the minimization problem with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes 18. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments. Science.gov (United States) Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S 2016-01-07 The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann, allowing to solve iteratively the minimization problem with some ten iterations of the ordinary Poisson equation solver. 
In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. 19. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments Energy Technology Data Exchange (ETDEWEB) Fisicaro, G., E-mail: [email protected]; Goedecker, S. [Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel (Switzerland); Genovese, L. [University of Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France); Andreussi, O. [Institute of Computational Science, Università della Svizzera Italiana, Via Giuseppe Buffi 13, CH-6904 Lugano (Switzerland); Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland); Marzari, N. [Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland) 2016-01-07 The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann, allowing to solve iteratively the minimization problem with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. 20. Analisis Faktor – Faktor yang Mempengaruhi Jumlah Kejahatan Pencurian Kendaraan Bermotor (Curanmor) Menggunakan Model Geographically Weighted Poisson Regression (Gwpr) OpenAIRE Haris, Muhammad; Yasin, Hasbi; Hoyyi, Abdul 2015-01-01 Theft is an act taking someone else's property, partially or entierely, with intention to have it illegally. Motor vehicle theft is one of the most highlighted crime type and disturbing the communities. Regression analysis is a statistical analysis for modeling the relationships between response variable and predictor variable. If the response variable follows a Poisson distribution or categorized as a count data, so the regression model used is Poisson regression. Geographically Weighted Poi... 1. 
Laplace-Laplace analysis of the fractional Poisson process OpenAIRE Gorenflo, Rudolf; Mainardi, Francesco 2013-01-01 We generate the fractional Poisson process by subordinating the standard Poisson process to the inverse stable subordinator. Our analysis is based on application of the Laplace transform with respect to both arguments of the evolving probability densities. 2. Experimental investigation of statistical density function of decaying radioactive sources International Nuclear Information System (INIS) Salma, I.; Zemplen-Papp, E. 1991-01-01 The validity of the Poisson and the λ P(k) modified Poisson statistical density functions of observing k events in a short time interval is investigated experimentally in radioactive decay detection for various measuring times. The experiments to measure radioactive decay were performed with 89m Y, using a multichannel analyzer. According to the results, Poisson statistics adequately describes the counting experiment for short measuring times. (author) 13 refs.; 4 figs 3. Generalized master equations for non-Poisson dynamics on networks. Science.gov (United States) Hoffmann, Till; Porter, Mason A; Lambiotte, Renaud 2012-10-01 The traditional way of studying temporal networks is to aggregate the dynamics of the edges to create a static weighted network. This implicitly assumes that the edges are governed by Poisson processes, which is not typically the case in empirical temporal networks. Accordingly, we examine the effects of non-Poisson inter-event statistics on the dynamics of edges, and we apply the concept of a generalized master equation to the study of continuous-time random walks on networks. We show that this equation reduces to the standard rate equations when the underlying process is Poissonian and that its stationary solution is determined by an effective transition matrix whose leading eigenvector is easy to calculate. We conduct numerical simulations and also derive analytical results for the stationary solution under the assumption that all edges have the same waiting-time distribution. We discuss the implications of our work for dynamical processes on temporal networks and for the construction of network diagnostics that take into account their nontrivial stochastic nature. 4. Poisson equation for weak gravitational lensing International Nuclear Information System (INIS) Kling, Thomas P.; Campbell, Bryan 2008-01-01 Using the Newman and Penrose [E. T. Newman and R. Penrose, J. Math. Phys. (N.Y.) 3, 566 (1962).] spin-coefficient formalism, we examine the full Bianchi identities of general relativity in the context of gravitational lensing, where the matter and space-time curvature are projected into a lens plane perpendicular to the line of sight. From one component of the Bianchi identity, we provide a rigorous, new derivation of a Poisson equation for the projected matter density where the source term involves second derivatives of the observed weak gravitational lensing shear. We also show that the other components of the Bianchi identity reveal no new results. Numerical integration of the Poisson equation in test cases shows an accurate mass map can be constructed from the combination of a ground-based, wide-field image and a Hubble Space Telescope image of the same system 5. 
Poisson-Lie T-plurality International Nuclear Information System (INIS) Unge, Rikard von 2002-01-01 We extend the path-integral formalism for Poisson-Lie T-duality to include the case of Drinfeld doubles which can be decomposed into bi-algebras in more than one way. We give the correct shift of the dilaton, correcting a mistake in the literature. We then use the fact that the six dimensional Drinfeld doubles have been classified to write down all possible conformal Poisson-Lie T-duals of three dimensional space times and we explicitly work out two duals to the constant dilaton and zero anti-symmetric tensor Bianchi type V space time and show that they satisfy the string equations of motion. This space-time was previously thought to have no duals because of the tracefulness of the structure constants. (author) 6. Linear odd Poisson bracket on Grassmann variables International Nuclear Information System (INIS) Soroka, V.A. 1999-01-01 A linear odd Poisson bracket (antibracket) realized solely in terms of Grassmann variables is suggested. It is revealed that the bracket, which corresponds to a semi-simple Lie group, has at once three Grassmann-odd nilpotent Δ-like differential operators of the first, the second and the third orders with respect to Grassmann derivatives, in contrast with the canonical odd Poisson bracket having the only Grassmann-odd nilpotent differential Δ-operator of the second order. It is shown that these Δ-like operators together with a Grassmann-odd nilpotent Casimir function of this bracket form a finite-dimensional Lie superalgebra. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.) 7. The Fractional Poisson Process and the Inverse Stable Subordinator OpenAIRE Meerschaert, Mark; Nane, Erkan; Vellaisamy, P. 2011-01-01 The fractional Poisson process is a renewal process with Mittag-Leffler waiting times. Its distributions solve a time-fractional analogue of the Kolmogorov forward equation for a Poisson process. This paper shows that a traditional Poisson process, with the time variable replaced by an independent inverse stable subordinator, is also a fractional Poisson process. This result unifies the two main approaches in the stochastic theory of time-fractional diffusion equations. The equivalence extend... 8. Reduction of Nambu-Poisson Manifolds by Regular Distributions Science.gov (United States) Das, Apurba 2018-03-01 The version of Marsden-Ratiu reduction theorem for Nambu-Poisson manifolds by a regular distribution has been studied by Ibáñez et al. In this paper we show that the reduction is always ensured unless the distribution is zero. Next we extend the more general Falceto-Zambon Poisson reduction theorem for Nambu-Poisson manifolds. Finally, we define gauge transformations of Nambu-Poisson structures and show that these transformations commute with the reduction procedure. 9. Simple dead-time corrections for discrete time series of non-Poisson data International Nuclear Information System (INIS) Larsen, Michael L; Kostinski, Alexander B 2009-01-01 The problem of dead time (instrumental insensitivity to detectable events due to electronic or mechanical reset time) is considered. Most existing algorithms to correct for event count errors due to dead time implicitly rely on Poisson counting statistics of the underlying phenomena. 
However, when the events to be measured are clustered in time, the Poisson statistics assumption results in underestimating both the true event count and any statistics associated with count variability; the 'busiest' part of the signal is partially missed. Using the formalism associated with the pair-correlation function, we develop first-order correction expressions for the general case of arbitrary counting statistics. The results are verified through simulation of a realistic clustering scenario 10. Comparing two Poisson populations sequentially: an application International Nuclear Information System (INIS) Halteman, E.J. 1986-01-01 Rocky Flats Plant in Golden, Colorado monitors each of its employees for radiation exposure. Excess exposure is detected by comparing the means of two Poisson populations. A sequential probability ratio test (SPRT) is proposed as a replacement for the fixed sample normal approximation test. A uniformly most efficient SPRT exists, however logistics suggest using a truncated SPRT. The truncated SPRT is evaluated in detail and shown to possess large potential savings in average time spent by employees in the monitoring process 11. Irreversible thermodynamics of Poisson processes with reaction. Science.gov (United States) Méndez, V; Fort, J 1999-11-01 A kinetic model is derived to study the successive movements of particles, described by a Poisson process, as well as their generation. The irreversible thermodynamics of this system is also studied from the kinetic model. This makes it possible to evaluate the differences between thermodynamical quantities computed exactly and up to second-order. Such differences determine the range of validity of the second-order approximation to extended irreversible thermodynamics. 12. Degenerate odd Poisson bracket on Grassmann variables International Nuclear Information System (INIS) Soroka, V.A. 2000-01-01 A linear degenerate odd Poisson bracket (antibracket) realized solely on Grassmann variables is proposed. It is revealed that this bracket has at once three Grassmann-odd nilpotent Δ-like differential operators of the first, second and third orders with respect to the Grassmann derivatives. It is shown that these Δ-like operators, together with the Grassmann-odd nilpotent Casimir function of this bracket, form a finite-dimensional Lie superalgebra 13. Poisson/Superfish codes for personal computers International Nuclear Information System (INIS) Humphries, S. 1992-01-01 The Poisson/Superfish codes calculate static E or B fields in two-dimensions and electromagnetic fields in resonant structures. New versions for 386/486 PCs and Macintosh computers have capabilities that exceed the mainframe versions. Notable improvements are interactive graphical post-processors, improved field calculation routines, and a new program for charged particle orbit tracking. (author). 4 refs., 1 tab., figs 14. Computation of solar perturbations with Poisson series Science.gov (United States) Broucke, R. 1974-01-01 Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. 
However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at the zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained. 15. Alternative Forms of Compound Fractional Poisson Processes Directory of Open Access Journals (Sweden) Luisa Beghin 2012-01-01 Full Text Available We study here different fractional versions of the compound Poisson process. The fractionality is introduced in the counting process representing the number of jumps as well as in the density of the jumps themselves. The corresponding distributions are obtained explicitly and proved to be solutions of fractional equations of order less than one. Only in the final case treated in this paper, where the number of jumps is given by the fractional-difference Poisson process defined in Orsingher and Polito (2012), we have a fractional driving equation, with respect to the time argument, with order greater than one. Moreover, in this case, the compound Poisson process is Markovian and this is also true for the corresponding limiting process. All the processes considered here are proved to be compositions of continuous time random walks with stable processes (or inverse stable subordinators). These subordinating relationships hold, not only in the limit, but also in the finite domain. In some cases the densities satisfy master equations which are the fractional analogues of the well-known Kolmogorov one. 16. Exterior differentials in superspace and Poisson brackets International Nuclear Information System (INIS) Soroka, Dmitrij V.; Soroka, Vyacheslav A. 2003-01-01 It is shown that two definitions for an exterior differential in superspace, giving the same exterior calculus, yet lead to different results when applied to the Poisson bracket. A prescription for the transition with the help of these exterior differentials from the given Poisson bracket of definite Grassmann parity to another bracket is introduced. It is also indicated that the resulting bracket leads to a generalization of the Schouten-Nijenhuis bracket for the cases of superspace and brackets of diverse Grassmann parities. It is shown that in the case of the Grassmann-odd exterior differential the resulting bracket is the bracket given on exterior forms. The above-mentioned transition with the use of the odd exterior differential applied to the linear even/odd Poisson brackets, that correspond to semi-simple Lie groups, results, respectively, in also linear odd/even brackets which are naturally connected with the Lie superalgebra. The latter contains the BRST and anti-BRST charges and can be used for calculation of the BRST operator cohomology. (author) 17. Duality and modular class of a Nambu-Poisson structure International Nuclear Information System (INIS) Ibanez, R.; Leon, M. de; Lopez, B.; Marrero, J.C.; Padron, E.
2001-01-01 In this paper we introduce cohomology and homology theories for Nambu-Poisson manifolds. Also we study the relation between the existence of a duality for these theories and the vanishing of a particular Nambu-Poisson cohomology class, the modular class. The case of a regular Nambu-Poisson structure and some singular examples are discussed. (author) 18. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces. Science.gov (United States) Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying 2013-03-08 Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical property in distribution and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily. 19. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data KAUST Repository Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G 2013-01-01 Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation.
When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.
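The Poisson-Gamma hierarchy described in the abstract above is, marginally, a negative binomial model for per-window coverage. As a rough editorial sketch of that idea (not the authors' inference, which is hierarchical and Bayesian), the snippet below fits a negative binomial to simulated window counts by the method of moments and flags windows that fall far into either tail; all counts, window indices and thresholds are invented.

```python
# Gamma-mixed Poisson counts are marginally negative binomial, so windows whose
# coverage is implausibly low or high under a fitted negative binomial can be
# flagged as candidate deletions or amplifications. Purely illustrative data.
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(0)

# Simulated per-window coverage: gamma-distributed rates give overdispersion,
# plus one mock deletion (low coverage) and one mock amplification (high coverage).
rates = rng.gamma(shape=20.0, scale=5.0, size=2000)   # mean about 100
coverage = rng.poisson(rates)
coverage[500:520] //= 4                               # mock deletion
coverage[1500:1520] *= 3                              # mock amplification

# Method-of-moments negative binomial fit (the Poisson-Gamma marginal).
m, v = coverage.mean(), coverage.var()
r = m ** 2 / (v - m)                                  # dispersion ("size") parameter
p = r / (r + m)                                       # success probability

lo, hi = nbinom.ppf([0.001, 0.999], r, p)
flagged = np.where((coverage < lo) | (coverage > hi))[0]
print(f"fitted mean={m:.1f}, variance={v:.1f}, thresholds=({lo:.0f}, {hi:.0f})")
print("flagged windows:", flagged)
```

A real analysis would, as the paper stresses, model coverage hierarchically and control the false positive rate rather than relying on fixed quantile cut-offs.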
2. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes Science.gov (United States) Orsingher, Enzo; Polito, Federico 2012-08-01 In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\alpha(t)$, $N_\beta(t)$, $t>0$, we have that $N_\alpha(N_\beta(t)) \stackrel{d}{=} \sum_{j=1}^{N_\beta(t)} X_j$, where the $X_j$ are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_\alpha(\tau_k^\nu)$, $\nu\in(0,1]$, where $\tau_k^\nu$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers. 3. Nonlocal Poisson-Fermi model for ionic solvent. Science.gov (United States) Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob 2016-07-01 We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution. 4.
Modeling the number of car theft using Poisson regression Science.gov (United States) Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura 2016-10-01 Regression analysis is one of the most popular statistical methods used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. This paper focuses on the number of car thefts that occurred in districts in Peninsular Malaysia. Two groups of factors have been considered, namely district descriptive factors and socio-demographic factors. The results of the study showed that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, number of residents aged between 25 and 64, number of employed persons and number of unemployed persons are the most influential factors affecting car theft cases. This information is very useful for law enforcement departments, insurance companies and car owners in order to reduce and limit car theft cases in Peninsular Malaysia. 5. Radio pulsar glitches as a state-dependent Poisson process Science.gov (United States) Fulgenzi, W.; Melatos, A.; Hughes, B. D. 2017-10-01 Gross-Pitaevskii simulations of vortex avalanches in a neutron star superfluid are limited computationally to ≲10² vortices and ≲10² avalanches, making it hard to study the long-term statistics of radio pulsar glitches in realistically sized systems. Here, an idealized, mean-field model of the observed Gross-Pitaevskii dynamics is presented, in which vortex unpinning is approximated as a state-dependent, compound Poisson process in a single random variable, the spatially averaged crust-superfluid lag. Both the lag-dependent Poisson rate and the conditional distribution of avalanche-driven lag decrements are inputs into the model, which is solved numerically (via Monte Carlo simulations) and analytically (via a master equation). The output statistics are controlled by two dimensionless free parameters: α, the glitch rate at a reference lag, multiplied by the critical lag for unpinning, divided by the spin-down rate; and β, the minimum fraction of the lag that can be restored by a glitch. The system evolves naturally to a self-regulated stationary state, whose properties are determined by α/αc(β), where αc(β) ≈ β^(-1/2) is a transition value. In the regime α ≳ αc(β), one recovers qualitatively the power-law size and exponential waiting-time distributions observed in many radio pulsars and Gross-Pitaevskii simulations. For α ≪ αc(β), the size and waiting-time distributions are both power-law-like, and a correlation emerges between size and waiting time until the next glitch, contrary to what is observed in most pulsars. Comparisons with astrophysical data are restricted by the small sample sizes available at present, with ≤35 events observed per pulsar. 6. Poisson Mixture Regression Models for Heart Disease Prediction. Science.gov (United States) Mufudza, Chipo; Erol, Hamza 2016-01-01 Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.
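As an editorial illustration of the model-based clustering idea behind Poisson mixture regression (stripped of covariates and of the concomitant-variable structure compared in the abstract above), the sketch below fits a two-component Poisson mixture to simulated counts with a plain EM iteration; all rates and sample sizes are invented.

```python
# EM for a two-component Poisson mixture: the E-step assigns each count a
# responsibility for the "high" component, the M-step re-estimates the mixing
# weight and the two Poisson rates from those responsibilities.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.poisson(2.0, 700), rng.poisson(9.0, 300)])  # simulated counts

pi, lam = 0.5, np.array([1.0, 5.0])         # initial mixing weight and rates
for _ in range(200):
    # E-step: unnormalised log-weights (the x! term cancels between components)
    log_w = np.stack([np.log(1 - pi) + x * np.log(lam[0]) - lam[0],
                      np.log(pi) + x * np.log(lam[1]) - lam[1]])
    w = np.exp(log_w - log_w.max(axis=0))
    r1 = w[1] / w.sum(axis=0)               # responsibility of the "high" component
    # M-step
    pi = r1.mean()
    lam = np.array([np.average(x, weights=1 - r1), np.average(x, weights=r1)])

print(f"weights ({1 - pi:.2f}, {pi:.2f}), rates ({lam[0]:.2f}, {lam[1]:.2f})")
```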
7. Error calculations statistics in radioactive measurements International Nuclear Information System (INIS) Verdera, Silvia 1994-01-01 Basic approach and procedures frequently used in the practice of radioactive measurements. Statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error, classification as systematic and random errors. Statistical fundamentals, probability theories, population distributions, Bernoulli, Poisson, Gauss, t-test distribution, χ² test, error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test. 8. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes. Science.gov (United States) Hougaard, P; Lee, M L; Whitmore, G A 1997-12-01 Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients. 9. Statistical properties of earthquakes clustering Directory of Open Access Journals (Sweden) A. Vecchio 2008-04-01 Often in nature the temporal distribution of inhomogeneous stochastic point processes can be modeled as a realization of renewal Poisson processes with a variable rate. Here we investigate one of the classical examples, namely, the temporal distribution of earthquakes. We show that this process strongly departs from Poisson statistics for both catalogue and sequence data sets. This indicates the presence of correlations in the system, probably related to the stressing perturbation characterizing the seismicity in the area under analysis. As shown by this analysis, the catalogues, at variance with sequences, show common statistical properties. 10. The transverse Poisson's ratio of composites. Science.gov (United States) Foye, R. L. 1972-01-01 An expression is developed that makes possible the prediction of Poisson's ratio for unidirectional composites with reference to any pair of orthogonal axes that are normal to the direction of the reinforcing fibers. This prediction appears to be a reasonable one in that it follows the trends of the finite element analysis and the bounding estimates, and has the correct limiting value for zero fiber content.
It can only be expected to apply to composites containing stiff, circular, isotropic fibers bonded to a soft matrix material. 11. Risk Sensitive Filtering with Poisson Process Observations International Nuclear Information System (INIS) Malcolm, W. P.; James, M. R.; Elliott, R. J. 2000-01-01 In this paper we consider risk sensitive filtering for Poisson process observations. Risk sensitive filtering is a type of robust filtering which offers performance benefits in the presence of uncertainties. We derive a risk sensitive filter for a stochastic system where the signal variable has dynamics described by a diffusion equation and determines the rate function for an observation process. The filtering equations are stochastic integral equations. Computer simulations are presented to demonstrate the performance gain for the risk sensitive filter compared with the risk neutral filter 12. Moments analysis of concurrent Poisson processes International Nuclear Information System (INIS) McBeth, G.W.; Cross, P. 1975-01-01 A moments analysis of concurrent Poisson processes has been carried out. Equations are given which relate combinations of distribution moments to sums of products involving the number of counts associated with the processes and the mean rate of the processes. Elimination of background is discussed and equations suitable for processing random radiation, parent-daughter pairs in the presence of background, and triple and double correlations in the presence of background are given. The theory of identification of the four principle radioactive series by moments analysis is discussed. (Auth.) 13. Encapsulated Curcumin for Transdermal Administration African Journals Online (AJOL) Purpose: To develop a proniosomal carrier system of curcumin for transdermal delivery. Methods: Proniosomes of curcumin were prepared by encapsulation of the drug in a mixture of Span 80, cholesterol and diethyl ether by ether injection method, and then investigated as a transdermal drug delivery system (TDDS). 14. Device for encapsulating radioactive materials International Nuclear Information System (INIS) Suthanthiran, K. 1994-01-01 A capsule for encapsulating radioactive material for radiation treatment comprising two or more interfitting sleeves, wherein each sleeve comprises a closed bottom portion having a circumferential wall extending therefrom, and an open end located opposite the bottom portion. The sleeves are constructed to fit over one another to thereby establish an effectively sealed capsule container. 3 figs 15. Encapsulation of polymer photovoltaic prototypes DEFF Research Database (Denmark) Krebs, Frederik C 2006-01-01 A simple and efficient method for the encapsulation of polymer and organic photovoltaic prototypes is presented. The method employs device preparation on glass substrates with subsequent sealing using glass fiber reinforced thermosetting epoxy (prepreg) against a back plate. The method allows... 16. Reactants encapsulation and Maillard Reaction NARCIS (Netherlands) Troise, A.D.; Fogliano, V. 2013-01-01 In the last decades many efforts have been addressed to the control of Maillard Reaction products in different foods with the aim to promote the formation of compounds having the desired color and flavor and to reduce the concentration of several potential toxic molecules. Encapsulation, already 17. 
Technology of mammalian cell encapsulation NARCIS (Netherlands) Uludag, H; De Vos, P; Tresco, PA 2000-01-01 Entrapment of mammalian cells in physical membranes has been practiced since the early 1950s when it was originally introduced as a basic research tool. The method has since been developed based on the promise of its therapeutic usefulness in tissue transplantation. Encapsulation physically isolates 18. Nonhomogeneous Poisson process with nonparametric frailty International Nuclear Information System (INIS) Slimacek, Vaclav; Lindqvist, Bo Henry 2016-01-01 The failure processes of heterogeneous repairable systems are often modeled by non-homogeneous Poisson processes. The common way to describe an unobserved heterogeneity between systems is to multiply the basic rate of occurrence of failures by a random variable (a so-called frailty) having a specified parametric distribution. Since the frailty is unobservable, the choice of its distribution is a problematic part of using these models, as are often the numerical computations needed in the estimation of these models. The main purpose of this paper is to develop a method for estimation of the parameters of a nonhomogeneous Poisson process with unobserved heterogeneity which does not require parametric assumptions about the heterogeneity and which avoids the frequently encountered numerical problems associated with the standard models for unobserved heterogeneity. The introduced method is illustrated on an example involving the power law process, and is compared to the standard gamma frailty model and to the classical model without unobserved heterogeneity. The derived results are confirmed in a simulation study which also reveals several not commonly known properties of the gamma frailty model and the classical model, and on a real life example. - Highlights: • A new method for estimation of a NHPP with frailty is introduced. • Introduced method does not require parametric assumptions about frailty. • The approach is illustrated on an example with the power law process. • The method is compared to the gamma frailty model and to the model without frailty. 19. Renewal characterization of Markov modulated Poisson processes Directory of Open Access Journals (Sweden) Marcel F. Neuts 1989-01-01 Full Text Available A Markov Modulated Poisson Process (MMPP) M(t) defined on a Markov chain J(t) is a pure jump process where jumps of M(t) occur according to a Poisson process with intensity λi whenever the Markov chain J(t) is in state i. M(t) is called strongly renewal (SR) if M(t) is a renewal process for an arbitrary initial probability vector of J(t) with full support on P = {i : λi > 0}. M(t) is called weakly renewal (WR) if there exists an initial probability vector of J(t) such that the resulting MMPP is a renewal process. The purpose of this paper is to develop general characterization theorems for the class SR and some sufficiency theorems for the class WR in terms of the first passage times of the bivariate Markov chain [J(t), M(t)]. Relevance to the lumpability of J(t) is also studied.
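To make the MMPP definition above concrete, here is a minimal two-state simulator, assuming exponential sojourn times for the modulating chain J(t) and state-dependent Poisson rates for M(t); the rates and horizon are invented and nothing renewal-theoretic is checked.

```python
# Two-state Markov modulated Poisson process: the chain J(t) alternates between
# states with exponential sojourns, and events of M(t) arrive at the Poisson
# rate attached to the current state.
import numpy as np

rng = np.random.default_rng(2)
q = np.array([0.5, 1.0])      # rate of leaving state 0 and state 1
lam = np.array([1.0, 10.0])   # Poisson event rate while in state 0 and state 1

t, state, horizon, events = 0.0, 0, 100.0, []
while t < horizon:
    sojourn = rng.exponential(1.0 / q[state])
    s = t + rng.exponential(1.0 / lam[state])
    while s < min(t + sojourn, horizon):      # events during this sojourn
        events.append(s)
        s += rng.exponential(1.0 / lam[state])
    t += sojourn
    state = 1 - state                         # two states, so just flip

print(f"{len(events)} events over {horizon:.0f} time units")
```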
20. Variational Gaussian approximation for Poisson data Science.gov (United States) Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen 2018-02-01 The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that also penalizes the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via the low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms. 1. On a Poisson homogeneous space of bilinear forms with a Poisson-Lie action Science.gov (United States) Chekhov, L. O.; Mazzocco, M. 2017-12-01 Let $\mathscr{A}$ be the space of bilinear forms on $\mathbb{C}^N$ with defining matrices $A$ endowed with a quadratic Poisson structure of reflection equation type. The paper begins with a short description of previous studies of the structure, and then this structure is extended to systems of bilinear forms whose dynamics is governed by the natural action $A \mapsto BAB^{T}$ of the $GL_N$ Poisson-Lie group on $\mathscr{A}$. A classification is given of all possible quadratic brackets on $(B, A) \in GL_N \times \mathscr{A}$ preserving the Poisson property of the action, thus endowing $\mathscr{A}$ with the structure of a Poisson homogeneous space. Besides the product Poisson structure on $GL_N \times \mathscr{A}$, there are two other (mutually dual) structures, which (unlike the product Poisson structure) admit reductions by the Dirac procedure to a space of bilinear forms with block upper triangular defining matrices. Further generalisations of this construction are considered, to triples $(B, C, A) \in GL_N \times GL_N \times \mathscr{A}$ with the Poisson action $A \mapsto BAC^{T}$, and it is shown that $\mathscr{A}$ then acquires the structure of a Poisson symmetric space. Generalisations to chains of transformations and to the quantum and quantum affine algebras are investigated, as well as the relations between constructions of Poisson symmetric spaces and the Poisson groupoid. Bibliography: 30 titles. 2. Application of Negative Binomial Regression to Overcome Overdispersion in Poisson Regression (Penerapan Regresi Binomial Negatif untuk Mengatasi Overdispersi pada Regresi Poisson) Directory of Open Access Journals (Sweden) 2013-09-01 Full Text Available Poisson regression is used to analyze count data that are Poisson distributed. Poisson regression analysis requires equidispersion, in which the mean value of the response variable is equal to the value of the variance. However, there are deviations in which the variance of the response variable is greater than the mean. This is called overdispersion. If overdispersion is present and Poisson regression analysis is used, then underestimated standard errors will be obtained. Negative binomial regression can handle overdispersion because it contains a dispersion parameter. From simulated data exhibiting overdispersion in the Poisson regression model, it was found that negative binomial regression performed better than the Poisson regression model.
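The consequence of overdispersion noted in the abstract above can be seen in a few lines: when counts are gamma-mixed Poisson, the Poisson-based standard error of the estimated mean rate is visibly smaller than the one computed from the observed variance, which is the gap the negative binomial's dispersion parameter absorbs. This intercept-only sketch on simulated data is only meant to show that gap, not to fit a regression.

```python
# Overdispersed (gamma-mixed Poisson) counts: the model-based Poisson standard
# error of the mean rate understates the variance-based one.
import numpy as np

rng = np.random.default_rng(3)
n = 500
counts = rng.poisson(rng.gamma(shape=2.0, scale=2.5, size=n))   # mean 5, variance > 5

mean, var = counts.mean(), counts.var(ddof=1)
se_poisson = np.sqrt(mean / n)   # assumes variance equals the mean
se_robust = np.sqrt(var / n)     # uses the observed, overdispersed variance
print(f"mean={mean:.2f}, variance={var:.2f}")
print(f"Poisson SE: {se_poisson:.3f}, variance-based SE: {se_robust:.3f}")
```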
3. A test of inflated zeros for Poisson regression models. Science.gov (United States) He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan 2017-01-01 Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, rendering serious doubts on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, when compared with the Vuong test, our approach is not only better at controlling the type I error rate, but also yields more power. 4. A Method of Poisson's Ratio Imaging Within a Material Part Science.gov (United States) Roth, Don J. (Inventor) 1994-01-01 The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data. 5. Method of Poisson's ratio imaging within a material part Science.gov (United States) Roth, Don J. (Inventor) 1996-01-01 The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image. 6. Compound Poisson Approximations for Sums of Random Variables OpenAIRE Serfozo, Richard F. 1986-01-01 We show that a sum of dependent random variables is approximately compound Poisson when the variables are rarely nonzero and, given they are nonzero, their conditional distributions are nearly identical. We give several upper bounds on the total-variation distance between the distribution of such a sum and a compound Poisson distribution. Included is an example for Markovian occurrences of a rare event. Our bounds are consistent with those that are known for Poisson approximations for sums of... 7. Surface reconstruction through Poisson disk sampling. Directory of Open Access Journals (Sweden) Wenguang Hou Full Text Available This paper intends to generate the approximate Voronoi diagram in the geodesic metric for some unbiased samples selected from the original points. The mesh model of seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy works around the obstacle that geodesic distances among neighboring points are sensitive to the nearest-neighbor definition. The reconstructed model is thus a level of detail of the original points. Hence, our main motivation is to deal with redundant scattered points. In implementation, Poisson disk sampling is used to select seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy in selecting seeds. Behaviors of this method are investigated and accuracy evaluations are done. Experimental results show the proposed method is reliable and effective.
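For orientation, the snippet below is the plain Euclidean, single-threaded form of Poisson disk sampling by dart throwing in the unit square; the paper above performs the analogous rejection in the geodesic metric on a surface and uses the accepted samples as seeds for the Voronoi diagram. The radius and trial budget are arbitrary.

```python
# Dart-throwing Poisson disk sampling in the unit square: propose uniform points
# and accept one only if it keeps at least the minimum distance to all samples
# accepted so far.
import numpy as np

rng = np.random.default_rng(4)
radius, max_tries = 0.05, 20000

samples = []
for _ in range(max_tries):
    p = rng.random(2)
    if all(np.hypot(*(p - q)) >= radius for q in samples):
        samples.append(p)

print(f"accepted {len(samples)} Poisson disk samples with radius {radius}")
```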
8. Thinning spatial point processes into Poisson processes DEFF Research Database (Denmark) Møller, Jesper; Schoenberg, Frederic Paik 2010-01-01 In this paper we describe methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and, thus, can be used as a graphical exploratory tool for inspecting the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered. 10. Periodic Poisson Solver for Particle Tracking International Nuclear Information System (INIS) Dohlus, M.; Henning, C. 2015-05-01 A method is described to solve the Poisson problem for a three-dimensional source distribution that is periodic in one direction. Perpendicular to the direction of periodicity a free space (or open) boundary is realized. In beam physics, this approach allows one to calculate the space charge field of a continualized charged particle distribution with a periodic pattern. The method is based on a particle mesh approach with an equidistant grid and fast convolution with a Green's function. The periodic approach uses only one period of the source distribution, but a periodic extension of the Green's function. The approach is numerically efficient and allows the investigation of periodic and pseudo-periodic structures with period lengths that are small compared to the source dimensions, for instance of laser modulated beams or of the evolution of micro bunch structures. Applications for laser modulated beams are given.
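The spectral core of such solvers is easiest to see in a fully periodic, one-dimensional toy (the solver described above is far more general: three dimensions, open transverse boundaries, and a Green's-function convolution on a particle mesh). The following sketch solves phi'' = -rho on a periodic grid by dividing in Fourier space; the grid size and source are invented.

```python
# One-dimensional, fully periodic Poisson solve by FFT: phi'' = -rho becomes
# phi_hat = rho_hat / k^2 for the nonzero wavenumbers.
import numpy as np

n, L = 256, 1.0
x = np.arange(n) * L / n
rho = np.exp(-((x - 0.5) / 0.05) ** 2)
rho -= rho.mean()                         # zero net source, required for periodicity

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
rho_hat = np.fft.fft(rho)
phi_hat = np.zeros_like(rho_hat)
phi_hat[1:] = rho_hat[1:] / k[1:] ** 2    # k = 0 mode fixed to zero (gauge choice)
phi = np.fft.ifft(phi_hat).real

print(f"potential range: {phi.min():.4f} to {phi.max():.4f}")
```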
11. Nonlinear Poisson brackets: geometry and quantization CERN Document Server Karasev, M V 2012-01-01 This book deals with two old mathematical problems. The first is the problem of constructing an analog of a Lie group for general nonlinear Poisson brackets. The second is the quantization problem for such brackets in the semiclassical approximation (which is the problem of exact quantization for the simplest classes of brackets). These problems are progressively coming to the fore in the modern theory of differential equations and quantum theory, since the approach based on constructions of algebras and Lie groups seems, in a certain sense, to be exhausted. The authors' main goal is to describe in detail the new objects that appear in the solution of these problems. Many ideas of algebra, modern differential geometry, algebraic topology, and operator theory are synthesized here. The authors prove all statements in detail, thus making the book accessible to graduate students. 12. Encapsulation of polymer photovoltaic prototypes Energy Technology Data Exchange (ETDEWEB) Krebs, Frederik C. [The Danish Polymer Centre, RISOE National Laboratory, P.O. Box 49, DK-4000 Roskilde (Denmark) 2006-12-15 A simple and efficient method for the encapsulation of polymer and organic photovoltaic prototypes is presented. The method employs device preparation on glass substrates with subsequent sealing using glass fiber reinforced thermosetting epoxy (prepreg) against a back plate. The method allows for transporting oxygen and water sensitive devices outside a glove box environment after sealing and enables sharing of devices between research groups such that efficiency and stability can be evaluated in different laboratories. (author) 13. Zeolite encapsulation of H2 International Nuclear Information System (INIS) Cooper, S.; Lakner, J.F. 1982-08-01 Experiments with H2 have shown that it is possible to encapsulate gases in the structure of certain molecular sieves. This method may offer a better means of temporarily storing and disposing of tritium over some others presently in use. The method may also prove safer, and may enable isotope separation, and removal of 3He. Initial experiments were performed with H2 to screen potential candidates for use with tritium. 14. Experimental investigation of statistical models describing distribution of counts International Nuclear Information System (INIS) Salma, I.; Zemplen-Papp, E. 1992-01-01 The binomial, Poisson and modified Poisson models which are used for describing the statistical nature of the distribution of counts are compared theoretically, and conclusions for application are considered. The validity of the Poisson and the modified Poisson statistical distribution for observing k events in a short time interval is investigated experimentally for various measuring times. The experiments to measure the influence of the significant radioactive decay were performed with 89mY (T1/2 = 16.06 s), using a multichannel analyser (4096 channels) in the multiscaling mode. According to the results, Poisson statistics describe the counting experiment for short measuring times (up to T = 0.5 T1/2) and its application is recommended. However, analysis of the data demonstrated, with confidence, that for long measurements (T ≥ T1/2) the Poisson distribution is not valid and the modified Poisson function is preferable. The practical implications in calculating uncertainties and in optimizing the measuring time are discussed.
Differences between the standard deviations evaluated on the basis of the Poisson and binomial models are especially significant for experiments with long measuring time (T/T1/2 ≥ 2) and/or large detection efficiency (ε > 0.30). Optimization of the measuring time for paired observations yields the same solution for either the binomial or the Poisson distribution. (orig.) 15. Microergodicity effects on ebullition of methane modelled by Mixed Poisson process with Pareto mixing variable Czech Academy of Sciences Publication Activity Database Jordanova, P.; Dušek, Jiří; Stehlík, M. 2013-01-01 Roč. 128, OCT 15 (2013), s. 124-134 ISSN 0169-7439 R&D Projects: GA ČR(CZ) GAP504/11/1151; GA MŠk(CZ) ED1.1.00/02.0073 Institutional support: RVO:67179843 Keywords: environmental chemistry * ebullition of methane * mixed Poisson processes * renewal process * Pareto distribution * moving average process * robust statistics * sedge–grass marsh Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013 16. The Concepts of Pseudo Compound Poisson and Partition Representations in Discrete Probability Directory of Open Access Journals (Sweden) Werner Hürlimann 2015-01-01 Full Text Available The mathematical/statistical concepts of pseudo compound Poisson and partition representations in discrete probability are reviewed and clarified. A combinatorial interpretation of the convolution of geometric distributions in terms of a variant of Newton's identities is obtained. The practical use of the twofold convolution leads to an improved goodness-of-fit for a data set from automobile insurance that was up to now not fitted satisfactorily. 17. A new multivariate zero-adjusted Poisson model with applications to biomedicine. Science.gov (United States) Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen 2018-05-25 Recently, although advances were made on modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, ) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure for random components that are all positive or negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows correlations between components with a more flexible dependency structure, that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties, and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. 18. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers. Science.gov (United States) Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew 2014-12-26 Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems.
In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation. 19. Introduction to Bayesian statistics CERN Document Server 2017-01-01 There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for Multivariate Normal mean vector; Bayesian inference for Multiple Linear Regression Model; and Computati... 20. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity. Science.gov (United States) Lin, I-Chun; Xing, Dajun; Shapley, Robert 2012-12-01 One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
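As a rough, editorial illustration of why the Poisson assumption matters, the sketch below compares the spike-count variability (Fano factor) of a Poisson spike train with that of a more regular train with gamma-distributed interspike intervals of the same mean rate, standing in loosely for the less noisy NLIF-like input the study argues for; the rate, window and trial counts are invented.

```python
# Spike-count Fano factor of a Poisson train (shape 1) versus a more regular
# gamma-interval train (shape 4) with the same mean firing rate.
import numpy as np

rng = np.random.default_rng(5)
rate, window, trials = 50.0, 0.2, 2000    # Hz, seconds, number of trials

def spike_counts(shape):
    # gamma interspike intervals with mean 1/rate; shape = 1 recovers Poisson
    isi = rng.gamma(shape, 1.0 / (shape * rate), size=(trials, 200))
    return (np.cumsum(isi, axis=1) < window).sum(axis=1)

for shape, label in [(1.0, "Poisson"), (4.0, "regular (gamma-4)")]:
    c = spike_counts(shape)
    print(f"{label}: mean count {c.mean():.1f}, Fano factor {c.var() / c.mean():.2f}")
```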
1. Estimation of a Non-homogeneous Poisson Model: An Empirical ... African Journals Online (AJOL) This article aims at applying the Nonhomogeneous Poisson process to trends of economic development. For this purpose, a modified Nonhomogeneous Poisson process is derived when the intensity rate is considered as a solution of a stochastic differential equation which satisfies geometric Brownian motion. The mean ... 2. Formulation of Hamiltonian mechanics with even and odd Poisson brackets International Nuclear Information System (INIS) Khudaverdyan, O.M.; Nersesyan, A.P. 1987-01-01 A possibility is studied as to construct the odd Poisson bracket and odd Hamiltonian from the given dynamics in phase superspace - the even Poisson bracket and even Hamiltonian - so that the transition to the new structure does not change the equations of motion. 9 refs 3. Double generalized linear compound Poisson models to insurance claims data DEFF Research Database (Denmark) Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo 2017-01-01 This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed by a degenerate distribution...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances.... 4. Rate-optimal Bayesian intensity smoothing for inhomogeneous Poisson processes NARCIS (Netherlands) Belitser, E.N.; Serra, P.; van Zanten, H. 2015-01-01 We apply nonparametric Bayesian methods to study the problem of estimating the intensity function of an inhomogeneous Poisson process. To motivate our results we start by analyzing count data coming from a call center which we model as a Poisson process. This analysis is carried out using a certain 5. Quantum algebras and Poisson geometry in mathematical physics CERN Document Server Karasev, M V 2005-01-01 This collection presents new and interesting applications of Poisson geometry to some fundamental well-known problems in mathematical physics. The methods used by the authors include, in addition to advanced Poisson geometry, unexpected algebras with non-Lie commutation relations, nontrivial (quantum) Kählerian structures of hypergeometric type, dynamical systems theory, semiclassical asymptotics, etc. 6. Cluster X-varieties, amalgamation, and Poisson-Lie groups DEFF Research Database (Denmark) Fock, V. V.; Goncharov, A. B. 2006-01-01 In this paper, starting from a split semisimple real Lie group G with trivial center, we define a family of varieties with additional structures. We describe them as cluster χ-varieties, as defined in [FG2]. In particular they are Poisson varieties. We define canonical Poisson maps of these varie... 7. Poisson cohomology of scalar multidimensional Dubrovin-Novikov brackets Science.gov (United States) Carlet, Guido; Casati, Matteo; Shadrin, Sergey 2017-04-01 We compute the Poisson cohomology of a scalar Poisson bracket of Dubrovin-Novikov type with D independent variables. We find that the second and third cohomology groups are generically non-vanishing in D > 1. Hence, in contrast with the D = 1 case, the deformation theory in the multivariable case is non-trivial. 8. Avoiding negative populations in explicit Poisson tau-leaping.
Science.gov (United States) Cao, Yang; Gillespie, Daniel T; Petzold, Linda R 2005-08-01 The explicit tau-leaping procedure attempts to speed up the stochastic simulation of a chemically reacting system by approximating the number of firings of each reaction channel during a chosen time increment tau as a Poisson random variable. Since the Poisson random variable can have arbitrarily large sample values, there is always the possibility that this procedure will cause one or more reaction channels to fire so many times during tau that the population of some reactant species will be driven negative. Two recent papers have shown how that unacceptable occurrence can be avoided by replacing the Poisson random variables with binomial random variables, whose values are naturally bounded. This paper describes a modified Poisson tau-leaping procedure that also avoids negative populations, but is easier to implement than the binomial procedure. The new Poisson procedure also introduces a second control parameter, whose value essentially dials the procedure from the original Poisson tau-leaping at one extreme to the exact stochastic simulation algorithm at the other; therefore, the modified Poisson procedure will generally be more accurate than the original Poisson procedure. 9. Unimodularity criteria for Poisson structures on foliated manifolds Science.gov (United States) Pedroza, Andrés; Velasco-Barreras, Eduardo; Vorobiev, Yury 2018-03-01 We study the behavior of the modular class of an orientable Poisson manifold and formulate some unimodularity criteria in the semilocal context, around a (singular) symplectic leaf. Our results generalize some known unimodularity criteria for regular Poisson manifolds related to the notion of the Reeb class. In particular, we show that the unimodularity of the transverse Poisson structure of the leaf is a necessary condition for the semilocal unimodular property. Our main tool is an explicit formula for a bigraded decomposition of modular vector fields of a coupling Poisson structure on a foliated manifold. Moreover, we also exploit the notion of the modular class of a Poisson foliation and its relationship with the Reeb class. 10. Poisson-Boltzmann-Nernst-Planck model International Nuclear Information System (INIS) Zheng Qiong; Wei Guowei 2011-01-01 The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. 
We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external 12. Encapsulation methods for organic electrical devices Science.gov (United States) Blum, Yigal D.; Chu, William Siu-Keung; MacQueen, David Brent; Shi, Yijian 2013-06-18 The disclosure provides methods and materials suitable for use as encapsulation barriers in electronic devices. In one embodiment, for example, there is provided an electroluminescent device or other electronic device encapsulated by alternating layers of a silicon-containing bonding material and a ceramic material.
The encapsulation methods provide, for example, electronic devices with increased stability and shelf-life. The invention is useful, for example, in the field of microelectronic devices. 13. Perspective of metal encapsulation of waste International Nuclear Information System (INIS) Jardine, L.J.; Steindler, M.J. 1978-01-01 A conceptual flow sheet is presented for encapsulating solid, stabilized calcine (e.g., supercalcine) in a solid lead alloy, using existing or developing technologies. Unresolved and potential problem areas of the flow sheet are outlined and suggestions are made as how metal encapsulation might be applied to other solid wastes from the fuel cycle. It is concluded that metal encapsulation is a technique applicable to many forms of solid wastes and is likely to meet future waste isolation criteria and regulations 14. Non-isothermal Smoluchowski-Poisson equation as a singular limit of the Navier-Stokes-Fourier-Poisson system Czech Academy of Sciences Publication Activity Database Feireisl, Eduard; Laurençot, P. 2007-01-01 Roč. 88, - (2007), s. 325-349 ISSN 0021-7824 R&D Projects: GA ČR GA201/05/0164 Institutional research plan: CEZ:AV0Z10190503 Keywords : Navier-Stokes-Fourier- Poisson system * Smoluchowski- Poisson system * singular limit Subject RIV: BA - General Mathematics Impact factor: 1.118, year: 2007 15. METHOD OF FOREST FIRES PROBABILITY ASSESSMENT WITH POISSON LAW Directory of Open Access Journals (Sweden) A. S. Plotnikova 2016-01-01 Full Text Available The article describes the method for the forest fire burn probability estimation on a base of Poisson distribution. The λ parameter is assumed to be a mean daily number of fires detected for each Forest Fire Danger Index class within specific period of time. Thus, λ was calculated for spring, summer and autumn seasons separately. Multi-annual daily Forest Fire Danger Index values together with EO-derived hot spot map were input data for the statistical analysis. The major result of the study is generation of the database on forest fire burn probability. Results were validated against EO daily data on forest fires detected over Irkutsk oblast in 2013. Daily weighted average probability was shown to be linked with the daily number of detected forest fires. Meanwhile, there was found a number of fires which were developed when estimated probability was low. The possible explanation of this phenomenon was provided. 16. Poisson goodness-of-fit tests for radiation-induced chromosome aberrations International Nuclear Information System (INIS) Merkle, W. 1981-01-01 Asymptotic and exact Poisson goodness-to-fit tests have been reviewed with regard to their applicability in analysing distributional properties of data on chromosome aberrations. It has been demonstrated that for typical cytogenetic samples, i.e. when the average number of aberrations per cell is smaller than one, results of asymptotic tests, especially of the most commonly used u-test, differ greatly from results of corresponding exact tests. While the u-statistic can serve as a qualitative index to indicate a tendency towards under- or over-dispersion, exact tests should be used if the assumption of a Poisson distribution is crucial, e.g. in investigating induction mechanisms. If the main interest is to detect a difference between the mean and the variance of a sample it is furthermore important to realize that a much larger sample size is required to detect underdispersion than it is to detect overdispersion. (author) 17. 
Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions Directory of Open Access Journals (Sweden) uma srivastava 2012-01-01 Full Text Available The paper deals with estimating  shift point which occurs in any sequence of independent observations  of Poisson model in statistical process control. This shift point occurs in the sequence when  i.e. m  life data are observed. The Bayes estimator on shift point 'm' and before and after shift process means are derived for symmetric and asymmetric loss functions under informative and non informative priors. The sensitivity analysis of Bayes estimators are carried out by simulation and numerical comparisons with  R-programming. The results shows the effectiveness of shift in sequence of Poisson disribution . 18. Hidden Markov models for zero-inflated Poisson counts with an application to substance use. Science.gov (United States) 2011-06-30 Paradigms for substance abuse cue-reactivity research involve pharmacological or stressful stimulation designed to elicit stress and craving responses in cocaine-dependent subjects. It is unclear as to whether stress induced from participation in such studies increases drug-seeking behavior. We propose a 2-state Hidden Markov model to model the number of cocaine abuses per week before and after participation in a stress-and cue-reactivity study. The hypothesized latent state corresponds to 'high' or 'low' use. To account for a preponderance of zeros, we assume a zero-inflated Poisson model for the count data. Transition probabilities depend on the prior week's state, fixed demographic variables, and time-varying covariates. We adopt a Bayesian approach to model fitting, and use the conditional predictive ordinate statistic to demonstrate that the zero-inflated Poisson hidden Markov model outperforms other models for longitudinal count data. Copyright © 2011 John Wiley & Sons, Ltd. 19. Test of Poisson Process for Earthquakes in and around Korea International Nuclear Information System (INIS) Noh, Myunghyun; Choi, Hoseon 2015-01-01 Since Cornell's work on the probabilistic seismic hazard analysis (hereafter, PSHA), majority of PSHA computer codes are assuming that the earthquake occurrence is Poissonian. To the author's knowledge, it is uncertain who first opened the issue of the Poisson process for the earthquake occurrence. The systematic PSHA in Korea, led by the nuclear industry, were carried out for more than 25 year with the assumption of the Poisson process. However, the assumption of the Poisson process has never been tested. Therefore, the test is of significance. We tested whether the Korean earthquakes follow the Poisson process or not. The Chi-square test with the significance level of 5% was applied. The test turned out that the Poisson process could not be rejected for the earthquakes of magnitude 2.9 or larger. However, it was still observed in the graphical comparison that some portion of the observed distribution significantly deviated from the Poisson distribution. We think this is due to the small earthquake data. The earthquakes of magnitude 2.9 or larger occurred only 376 times during 34 years. Therefore, the judgment on the Poisson process derived in the present study is not conclusive 20. Poisson Mixture Regression Models for Heart Disease Prediction Science.gov (United States) Erol, Hamza 2016-01-01 Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. 
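The chi-square goodness-of-fit check used in the earthquake-catalogue entry above, which asks whether counts per time window are compatible with a Poisson distribution, can be reproduced on simulated data in a few lines. The yearly windows, the 34-year sample size and the lenient cell-merging rule are assumptions made for this sketch, not details taken from that study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Pretend these are earthquake counts per year over 34 years
counts = rng.poisson(lam=11.0, size=34)

lam_hat = counts.mean()                          # ML estimate of the Poisson mean
kmax = counts.max()
observed = np.bincount(counts, minlength=kmax + 2).astype(float)

# Expected frequencies under Poisson(lam_hat); the last cell collects the upper tail
k = np.arange(kmax + 2)
expected = len(counts) * stats.poisson.pmf(k, lam_hat)
expected[-1] = len(counts) * stats.poisson.sf(kmax, lam_hat)

# Merge sparse cells so each group has an expected count of at least ~1 (a lenient choice)
obs_g, exp_g, o_acc, e_acc = [], [], 0.0, 0.0
for o, e in zip(observed, expected):
    o_acc += o
    e_acc += e
    if e_acc >= 1.0:
        obs_g.append(o_acc)
        exp_g.append(e_acc)
        o_acc = e_acc = 0.0
obs_g[-1] += o_acc
exp_g[-1] += e_acc

chi2 = sum((o - e) ** 2 / e for o, e in zip(obs_g, exp_g))
dof = len(obs_g) - 1 - 1                         # one more lost for the estimated mean
p = stats.chi2.sf(chi2, dof)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}  (reject the Poisson hypothesis if p < 0.05)")
```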
This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models as it both clusters individuals into high or low risk category and predicts rate to heart disease componentwise given clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression model. PMID:27999611 1. Understanding Statistics - Cancer Statistics Science.gov (United States) Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types. 2. Quantized Algebras of Functions on Homogeneous Spaces with Poisson Stabilizers Science.gov (United States) Neshveyev, Sergey; Tuset, Lars 2012-05-01 Let G be a simply connected semisimple compact Lie group with standard Poisson structure, K a closed Poisson-Lie subgroup, 0 topology on the spectrum of C( G q / K q ). Next we show that the family of C*-algebras C( G q / K q ), 0 < q ≤ 1, has a canonical structure of a continuous field of C*-algebras and provides a strict deformation quantization of the Poisson algebra {{C}[G/K]} . Finally, extending a result of Nagy, we show that C( G q / K q ) is canonically KK-equivalent to C( G/ K). 3. Network Traffic Monitoring Using Poisson Dynamic Linear Models Energy Technology Data Exchange (ETDEWEB) Merl, D. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2011-05-09 In this article, we discuss an approach for network forensics using a class of nonstationary Poisson processes with embedded dynamic linear models. As a modeling strategy, the Poisson DLM (PoDLM) provides a very flexible framework for specifying structured effects that may influence the evolution of the underlying Poisson rate parameter, including diurnal and weekly usage patterns. We develop a novel particle learning algorithm for online smoothing and prediction for the PoDLM, and demonstrate the suitability of the approach to real-time deployment settings via a new application to computer network traffic monitoring. 4. Poisson solvers for self-consistent multi-particle simulations International Nuclear Information System (INIS) Qiang, J; Paret, S 2014-01-01 Self-consistent multi-particle simulation plays an important role in studying beam-beam effects and space charge effects in high-intensity beams. The Poisson equation has to be solved at each time-step based on the particle density distribution in the multi-particle simulation. In this paper, we review a number of numerical methods that can be used to solve the Poisson equation efficiently. The computational complexity of those numerical methods will be O(N log(N)) or O(N) instead of O(N2), where N is the total number of grid points used to solve the Poisson equation 5. 
Boundary Lax pairs from non-ultra-local Poisson algebras International Nuclear Information System (INIS) Avan, Jean; Doikou, Anastasia 2009-01-01 We consider non-ultra-local linear Poisson algebras on a continuous line. Suitable combinations of representations of these algebras yield representations of novel generalized linear Poisson algebras or 'boundary' extensions. They are parametrized by a boundary scalar matrix and depend, in addition, on the choice of an antiautomorphism. The new algebras are the classical-linear counterparts of the known quadratic quantum boundary algebras. For any choice of parameters, the non-ultra-local contribution of the original Poisson algebra disappears. We also systematically construct the associated classical Lax pair. The classical boundary principal chiral model is examined as a physical example. 6. Numerical methods for realizing nonstationary Poisson processes with piecewise-constant instantaneous-rate functions DEFF Research Database (Denmark) Harrod, Steven; Kelton, W. David 2006-01-01 Nonstationary Poisson processes are appropriate in many applications, including disease studies, transportation, finance, and social policy. The authors review the risks of ignoring nonstationarity in Poisson processes and demonstrate three algorithms for generation of Poisson processes... 7. On the application of nonhomogeneous Poisson process to the reliability analysis of service water pumps of nuclear power plants International Nuclear Information System (INIS) Cruz Saldanha, Pedro Luiz da. 1995-12-01 The purpose of this study is to evaluate the nonhomogeneous Poisson process as a model to rate of occurrence of failures when it is not constant, and the times between failures are not independent nor identically distributed. To this evaluation, an analyse of reliability of service water pumps of a typical nuclear power plant is made considering the model discussed in the last paragraph, as long as the pumps are effectively repairable components. Standard statistical techniques, such as maximum likelihood and linear regression, are applied to estimate parameters of nonhomogeneous Poisson process model. As a conclusion of the study, the nonhomogeneous Poisson process is adequate to model rate of occurrence of failures that are function of time, and can be used where the aging mechanisms are present in operation of repairable systems. (author). 72 refs., 45 figs., 21 tabs 8. Preliminary investigation of cryopreservation by encapsulation ... African Journals Online (AJOL) Protocorm-like bodies (PLBs) of Brassidium Shooting Star, a new commercial ornamental orchid hybrid, were cryopreserved by an encapsulation-dehydration technique. The effects of PLB size, various sucrose concentrations in preculture media and sodium alginate concentration for encapsulation were the main ... 9. Different encapsulation strategies for implanted electronics Directory of Open Access Journals (Sweden) Winkler Sebastian 2017-09-01 Full Text Available Recent advancements in implant technology include increasing application of electronic systems in the human body. Hermetic encapsulation of electronic components is necessary, specific implant functions and body environments must be considered. Additional functions such as wireless communication systems require specialized technical solutions for the encapsulation. 10. 
Encapsulation of nodal segments of lobelia chinensis Directory of Open Access Journals (Sweden) Weng Hing Thong 2015-04-01 Full Text Available Lobelia chinensis served as an important herb in traditional Chinese medicine. It is rare in the field and infected by some pathogens. Therefore, encapsulation of axillary buds has been developed for in vitro propagation of L. chinensis. Nodal explants of L. chinensis were used as inclusion materials for encapsulation. Various combinations of calcium chloride and sodium alginate were tested. Encapsulation beads produced by mixing 50 mM calcium chloride and 3.5% sodium alginate supported the optimal in vitro conversion potential. The number of multiple shoots formed by encapsulated nodal segments was not significantly different from the average of shoots produced by non-encapsulated nodal segments. The encapsulated nodal segments regenerated in vitro on different media. The optimal germination and regeneration medium was Murashige-Skoog medium. Plantlets regenerated from the encapsulated nodal segments were hardened, acclimatized and established well in the field, showing similar morphology with parent plants. This encapsulation technology would serve as an alternative in vitro regeneration system for L. chinensis. 11. Quality control chart for Poisson INAR(1) process Institute of Scientific and Technical Information of China (English) 睢立伟; 宋向东 2017-01-01 In practice, process data do not always satisfy the assumption of mutual independence, so some control charts are no longer suitable for autocorrelated processes and can raise a large number of false alarms during monitoring. This paper studies the first-order autoregressive Poisson counting (INAR(1)) process using a rounding method. The AR(1) structure is first combined with a Poisson counting process and the original model is corrected; on the basis of the new model, the control limits of the c-chart and of the residual chart are reconstructed so that both charts can accommodate the new model. The results show the conditions under which each of the two control charts applies, and the conclusions help advance the study of statistical process control for autocorrelated processes. 12. Limonene encapsulation in freeze dried gellan systems. Science.gov (United States) Evageliou, Vasiliki; Saliari, Dimitra 2017-05-15 The encapsulation of limonene in freeze-dried gellan systems was investigated. Surface and encapsulated limonene content was determined by measurement of the absorbance at 252 nm. Gellan matrices were both gels and solutions. For a standard gellan concentration (0.5 wt%) gelation was induced by potassium or calcium chloride. Furthermore, gellan solutions of varying concentrations (0.25-1 wt%) were also studied. Limonene was added at two different concentrations (1 and 2 mL/100 g sample). Gellan gels encapsulated greater amounts of limonene than solutions. Among all gellan gels, the KCl gels had the greater encapsulated limonene content. However, when the concentration of limonene was doubled in these KCl gels, the encapsulated limonene decreased. The surface limonene content was significant, especially for gellan solutions. The experimental conditions and not the mechanical properties of the matrices were the dominant factor in the interpretation of the observed results. Copyright © 2016 Elsevier Ltd.
All rights reserved. 13. Statistical properties of superimposed stationary spike trains. Science.gov (United States) Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan 2012-06-01 The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non- Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities-like the count variability, inter-spike interval (ISI) variability and ISI correlations-and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive for the statistical effects induced by neuronal refractoriness. 14. Contravariant gravity on Poisson manifolds and Einstein gravity International Nuclear Information System (INIS) Kaneko, Yukio; Watamura, Satoshi; Muraki, Hisayoshi 2017-01-01 A relation between gravity on Poisson manifolds proposed in Asakawa et al (2015 Fortschr. Phys . 63 683–704) and Einstein gravity is investigated. The compatibility of the Poisson and Riemann structures defines a unique connection, the contravariant Levi-Civita connection, and leads to the idea of the contravariant gravity. The Einstein–Hilbert-type action yields an equation of motion which is written in terms of the analog of the Einstein tensor, and it includes couplings between the metric and the Poisson tensor. The study of the Weyl transformation reveals properties of those interactions. It is argued that this theory can have an equivalent description as a system of Einstein gravity coupled to matter. As an example, it is shown that the contravariant gravity on a two-dimensional Poisson manifold can be described by a real scalar field coupled to the metric in a specific manner. (paper) 15. Modified Regression Correlation Coefficient for Poisson Regression Model Science.gov (United States) Kaengthong, Nattacha; Domthong, Uthumporn 2017-09-01 This study gives attention to indicators in predictive power of the Generalized Linear Model (GLM) which are widely used; however, often having some restrictions. We are interested in regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, and defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. 
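The quantity this entry modifies, the regression correlation coefficient, i.e. the correlation between the Poisson response Y and the fitted values E(Y|X), is straightforward to compute once a Poisson GLM has been fitted. The sketch below does this on simulated data with the statsmodels GLM interface; it shows only the plain coefficient, not the authors' modified version.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
mu_true = np.exp(0.5 + 0.4 * x1 - 0.3 * x2)      # log link
y = rng.poisson(mu_true)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = fit.fittedvalues                         # estimate of E(Y | X)

r = np.corrcoef(y, mu_hat)[0, 1]                  # regression correlation coefficient
print("coefficients:", np.round(fit.params, 3), "  (true values 0.5, 0.4, -0.3)")
print("regression correlation coefficient:", round(r, 3))
```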
The purpose of this research was modifying regression correlation coefficient for Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and having multicollinearity in independent variables. The result shows that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient based on Bias and the Root Mean Square Error (RMSE). 16. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise. Science.gov (United States) Zhang, Jiachao; Hirakawa, Keigo 2017-04-01 This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveal that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture of Poisson denoising method to remove the denoising artifacts without affecting image details, such as edge and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique. 17. Transforming spatial point processes into Poisson processes using random superposition DEFF Research Database (Denmark) Møller, Jesper; Berthelsen, Kasper Klitgaaard with a complementary spatial point process Y  to obtain a Poisson process X∪Y  with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt,Yt) which converges towards the distribution of (X,Y). We study the joint distribution of X and Y, and their marginal and conditional distributions....... In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson...... process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking... 18. Optimal linear filtering of Poisson process with dead time International Nuclear Information System (INIS) Glukhova, E.V. 1993-01-01 The paper presents a derivation of an integral equation defining the impulsed transient of optimum linear filtering for evaluation of the intensity of the fluctuating Poisson process with allowance for dead time of transducers 19. A high order solver for the unbounded Poisson equation DEFF Research Database (Denmark) Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe 2013-01-01 . The method is extended to directly solve the derivatives of the solution to Poissonʼs equation. In this way differential operators such as the divergence or curl of the solution field can be solved to the same high order convergence without additional computational effort. 
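The free-space Green's-function strategy in the unbounded-Poisson-solver entry above can be sketched as a single FFT convolution on a doubled, zero-padded grid. The snippet below uses the singular 3D kernel -1/(4*pi*r) with a crude value at r = 0, whereas the point of that paper is precisely to replace this kernel with regularised, high-order ones; the grid resolution and the Gaussian test source are arbitrary choices, and the far-field value is compared against the analytic point-source potential only as a sanity check.

```python
import numpy as np

# Solve  laplacian(phi) = f  in free space, sampled on an N^3 grid of spacing h,
# by discrete convolution of f with the Green's function G(r) = -1/(4*pi*r).
N, h = 32, 1.0 / 32
x = (np.arange(N) - N // 2) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Test source: normalised Gaussian blob, so the far field should look like -1/(4*pi*r)
sigma = 0.08
f = np.exp(-(X**2 + Y**2 + Z**2) / (2 * sigma**2))
f /= f.sum() * h**3

# Domain doubling: sample G on a 2N grid using the symmetric distance, so the
# circular FFT convolution reproduces the aperiodic (free-space) convolution
# on the original N^3 block.
M = 2 * N
d = np.minimum(np.arange(M), M - np.arange(M)) * h
DX, DY, DZ = np.meshgrid(d, d, d, indexing="ij")
R = np.sqrt(DX**2 + DY**2 + DZ**2)
G = np.zeros_like(R)
G[R > 0] = -1.0 / (4.0 * np.pi * R[R > 0])
G[0, 0, 0] = -1.0 / (4.0 * np.pi * (h / 2.0))   # crude choice for the singular sample

f_pad = np.zeros((M, M, M))
f_pad[:N, :N, :N] = f
phi = np.fft.ifftn(np.fft.fftn(G) * np.fft.fftn(f_pad)).real[:N, :N, :N] * h**3

# Sanity check far from the blob (the blob is centred at the grid midpoint)
i = 2
r = abs(x[i])
print("numerical phi:", phi[i, N // 2, N // 2],
      "  analytic point-source far field:", -1.0 / (4.0 * np.pi * r))
```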
The method, is applied......A high order converging Poisson solver is presented, based on the Greenʼs function solution to Poissonʼs equation subject to free-space boundary conditions. The high order convergence is achieved by formulating regularised integration kernels, analogous to a smoothing of the solution field...... and validated, however not restricted, to the equations of fluid mechanics, and can be used in many applications to solve Poissonʼs equation on a rectangular unbounded domain.... 20. Comparison between two bivariate Poisson distributions through the ... African Journals Online (AJOL) These two models express themselves by their probability mass function. ... To remedy this problem, Berkhout and Plug proposed a bivariate Poisson distribution accepting the correlation as well negative, equal to zero, that positive. 1. Eigenfunction statistics for Anderson model with Hölder continuous ... The Institute of Mathematical Sciences, Taramani, Chennai 600 113, India ... Anderson model; Hölder continuous measure; Poisson statistics. ...... [4] Combes J-M, Hislop P D and Klopp F, An optimal Wegner estimate and its application to. 2. Formality theory from Poisson structures to deformation quantization CERN Document Server Esposito, Chiara 2015-01-01 This book is a survey of the theory of formal deformation quantization of Poisson manifolds, in the formalism developed by Kontsevich. It is intended as an educational introduction for mathematical physicists who are dealing with the subject for the first time. The main topics covered are the theory of Poisson manifolds, star products and their classification, deformations of associative algebras and the formality theorem. Readers will also be familiarized with the relevant physical motivations underlying the purely mathematical construction. 3. Poisson structure of the equations of ideal multispecies fluid electrodynamics International Nuclear Information System (INIS) Spencer, R.G. 1984-01-01 The equations of the two- (or multi-) fluid model of plasma physics are recast in Hamiltonian form, following general methods of symplectic geometry. The dynamical variables are the fields of physical interest, but are noncanonical, so that the Poisson bracket in the theory is not the standard one. However, it is a skew-symmetric bilinear form which, from the method of derivation, automatically satisfies the Jacobi identity; therefore, this noncanonical structure has all the essential properties of a canonical Poisson bracket 4. Null canonical formalism 1, Maxwell field. [Poisson brackets, boundary conditions Energy Technology Data Exchange (ETDEWEB) Wodkiewicz, K [Warsaw Univ. (Poland). Inst. Fizyki Teoretycznej 1975-01-01 The purpose of this paper is to formulate the canonical formalism on null hypersurfaces for the Maxwell electrodynamics. The set of the Poisson brackets relations for null variables of the Maxwell field is obtained. The asymptotic properties of the theory are investigated. The Poisson bracket relations for the news-functions of the Maxwell field are computed. The Hamiltonian form of the asymptotic Maxwell equations in terms of these news-functions is obtained. 5. GEPOIS: a two dimensional nonuniform mesh Poisson solver International Nuclear Information System (INIS) Quintenz, J.P.; Freeman, J.R. 1979-06-01 A computer code is described which solves Poisson's equation for the electric potential over a two dimensional cylindrical (r,z) nonuniform mesh which can contain internal electrodes. 
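A GEPOIS-style solve, i.e. Poisson's equation for the electric potential on a rectangular mesh with prescribed boundary values, reduces on a uniform Cartesian grid to one sparse linear system per right-hand side. The sketch below sets up the standard 5-point stencil with homogeneous Dirichlet walls; it is a minimal uniform-grid illustration and does not attempt the nonuniform (r, z) mesh or internal electrodes handled by the code described above.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

# Solve  laplacian(phi) = -rho  on the unit square, phi = 0 on the boundary,
# with the standard 5-point finite-difference stencil.
n = 65                                   # grid points per side, boundary included
h = 1.0 / (n - 1)
rho = np.zeros((n, n))
rho[n // 3, n // 3] = 1.0 / h**2         # localised source (grid delta function)

interior = [(i, j) for i in range(1, n - 1) for j in range(1, n - 1)]
index = {p: k for k, p in enumerate(interior)}

A = lil_matrix((len(interior), len(interior)))
b = np.zeros(len(interior))
for (i, j), k in index.items():
    A[k, k] = -4.0 / h**2
    b[k] = -rho[i, j]
    for (ii, jj) in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
        if (ii, jj) in index:
            A[k, index[(ii, jj)]] = 1.0 / h**2
        # else: Dirichlet neighbour with phi = 0, which contributes nothing to b

phi = np.zeros((n, n))
phi_interior = spsolve(csr_matrix(A), b)
for (i, j), k in index.items():
    phi[i, j] = phi_interior[k]

print("peak potential:", phi.max())
```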
Poisson's equation is solved over a given region subject to a specified charge distribution with either Neumann or Dirichlet perimeter boundary conditions and with Dirichlet boundary conditions on internal surfaces. The static electric field is also computed over the region with special care given to normal electric field components at boundary surfaces 6. A Note On the Estimation of the Poisson Parameter Directory of Open Access Journals (Sweden) S. S. Chitgopekar 1985-01-01 distribution when there are errors in observing the zeros and ones and obtains both the maximum likelihood and moments estimates of the Poisson mean and the error probabilities. It is interesting to note that either method fails to give unique estimates of these parameters unless the error probabilities are functionally related. However, it is equally interesting to observe that the estimate of the Poisson mean does not depend on the functional relationship between the error probabilities. 7. 2D Poisson sigma models with gauged vectorial supersymmetry Energy Technology Data Exchange (ETDEWEB) Bonezzi, Roberto [Dipartimento di Fisica ed Astronomia, Università di Bologna and INFN, Sezione di Bologna,via Irnerio 46, I-40126 Bologna (Italy); Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Sundell, Per [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Torres-Gomez, Alexander [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Instituto de Ciencias Físicas y Matemáticas, Universidad Austral de Chile-UACh,Valdivia (Chile) 2015-08-12 In this note, we gauge the rigid vectorial supersymmetry of the two-dimensional Poisson sigma model presented in arXiv:1503.05625. We show that the consistency of the construction does not impose any further constraints on the differential Poisson algebra geometry than those required for the ungauged model. We conclude by proposing that the gauged model provides a first-quantized framework for higher spin gravity. 8. On the Fedosov deformation quantization beyond the regular Poisson manifolds International Nuclear Information System (INIS) Dolgushev, V.A.; Isaev, A.P.; Lyakhovich, S.L.; Sharapov, A.A. 2002-01-01 A simple iterative procedure is suggested for the deformation quantization of (irregular) Poisson brackets associated to the classical Yang-Baxter equation. The construction is shown to admit a pure algebraic reformulation giving the Universal Deformation Formula (UDF) for any triangular Lie bialgebra. A simple proof of classification theorem for inequivalent UDF's is given. As an example the explicit quantization formula is presented for the quasi-homogeneous Poisson brackets on two-plane 9. Identification d’une Classe de Processus de Poisson Filtres (Identification of a Class of Filtered Poisson Processes). Science.gov (United States) 1983-05-20 Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown how such a model can be identified from experimental data. (Author) 10. Zero inflated Poisson and negative binomial regression models: application in education. Science.gov (United States) Salehi, Masoud; Roudbari, Masoud 2015-01-01 The number of failed courses and semesters in students are indicators of their performance. These amounts have zero inflated (ZI) distributions. 
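The zero-inflated Poisson model used in this entry mixes a point mass at zero (probability pi) with an ordinary Poisson(lambda) component, so P(Y=0) = pi + (1-pi)exp(-lambda) and P(Y=k) = (1-pi)Poisson(k; lambda) for k >= 1. The intercept-only version is easy to fit by direct maximum likelihood, as sketched below on simulated data; the study's regression versions with covariates (and the negative binomial variant) are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(3)

# Simulate ZIP data: with probability pi the count is a structural zero,
# otherwise it is Poisson(lam).
pi_true, lam_true, n = 0.3, 2.5, 2000
is_zero = rng.random(n) < pi_true
y = np.where(is_zero, 0, rng.poisson(lam_true, size=n))

def zip_negloglik(theta, y):
    # Unconstrained parameters: logit(pi) and log(lam)
    pi = 1.0 / (1.0 + np.exp(-theta[0]))
    lam = np.exp(theta[1])
    logpois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(
        y == 0,
        np.log(pi + (1.0 - pi) * np.exp(-lam)),
        np.log(1.0 - pi) + logpois,
    )
    return -ll.sum()

res = minimize(zip_negloglik, x0=np.array([0.0, 0.0]), args=(y,), method="BFGS")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
print(f"pi_hat = {pi_hat:.3f} (true {pi_true}), lam_hat = {lam_hat:.3f} (true {lam_true})")
```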
Using ZI Poisson and negative binomial distributions we can model these count data to find the associated factors and estimate the parameters. This study aims at to investigate the important factors related to the educational performance of students. This cross-sectional study performed in 2008-2009 at Iran University of Medical Sciences (IUMS) with a population of almost 6000 students, 670 students selected using stratified random sampling. The educational and demographical data were collected using the University records. The study design was approved at IUMS and the students' data kept confidential. The descriptive statistics and ZI Poisson and negative binomial regressions were used to analyze the data. The data were analyzed using STATA. In the number of failed semesters, Poisson and negative binomial distributions with ZI, students' total average and quota system had the most roles. For the number of failed courses, total average, and being in undergraduate or master levels had the most effect in both models. In all models the total average have the most effect on the number of failed courses or semesters. The next important factor is quota system in failed semester and undergraduate and master levels in failed courses. Therefore, average has an important inverse effect on the numbers of failed courses and semester. 11. The Advanced Glaucoma Intervention Study (AGIS): 5. Encapsulated bleb after initial trabeculectomy. Science.gov (United States) Schwartz, A L; Van Veldhuisen, P C; Gaasterland, D E; Ederer, F; Sullivan, E K; Cyrlin, M N 1999-01-01 To compare the incidence of encapsulated bleb after trabeculectomy in eyes with and without previous argon laser trabeculoplasty and to assess other risk factors for encapsulated bleb development. After medical treatment failure, eyes enrolled in the Advanced Glaucoma Intervention Study (AGIS) were randomly assigned to sequences of interventions starting with either argon laser trabeculoplasty or trabeculectomy. In the present study we compared the clinical course for 1 year after trabeculectomy in 119 eyes with failed argon laser trabeculoplasty with that of 379 eyes without previous argon laser trabeculoplasty. Data on bleb encapsulation were collected at the time that the encapsulation was diagnosed, and 3 and 6 months later. Of multiple factors examined in the AGIS data for the risk of developing encapsulated bleb, only male gender and high school graduation without further formal education were statistically significant. Encapsulation occurred in 18.5% of eyes with previous argon laser trabeculoplasty failure and 14.5% of eyes without previous argon laser trabeculoplasty (unadjusted relative risk, 1.27; 95% confidence limits = 0.81, 2.00; P = .23). After adjusting for age, gender, educational achievement, prescribed systemic beta-blockers, diabetes, visual field score, and years since glaucoma diagnosis, this difference remains statistically not significant. Four weeks after trabeculectomy, mean intraocular pressure was 7.5 mm Hg higher in eyes with (22.5 mm Hg) than without (15.0 mm Hg) encapsulated bleb; at 1 year after trabeculectomy and the resumption of medical therapy when needed, this excess was reduced to 1.4 mm Hg. This study, as did two previous studies, found male gender to be a risk factor for bleb encapsulation. 
Four studies, including the present study, have reported a higher rate of encapsulation in eyes with previous argon laser trabeculoplasty; in two of the studies, one of which was the present study, the rate was not statistically 12. Log-normal frailty models fitted as Poisson generalized linear mixed models. Science.gov (United States) Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver 2016-12-01 The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved. 13. Exploring factors associated with traumatic dental injuries in preschool children: a Poisson regression analysis. Science.gov (United States) Feldens, Carlos Alberto; Kramer, Paulo Floriani; Ferreira, Simone Helena; Spiguel, Mônica Hermann; Marquezan, Marcela 2010-04-01 This cross-sectional study aimed to investigate the factors associated with dental trauma in preschool children using Poisson regression analysis with robust variance. The study population comprised 888 children aged 3- to 5-year-old attending public nurseries in Canoas, southern Brazil. Questionnaires assessing information related to the independent variables (age, gender, race, mother's educational level and family income) were completed by the parents. Clinical examinations were carried out by five trained examiners in order to assess traumatic dental injuries (TDI) according to Andreasen's classification. One of the five examiners was calibrated to assess orthodontic characteristics (open bite and overjet). Multivariable Poisson regression analysis with robust variance was used to determine the factors associated with dental trauma as well as the strengths of association. Traditional logistic regression was also performed in order to compare the estimates obtained by both methods of statistical analysis. 36.4% (323/888) of the children suffered dental trauma and there was no difference in prevalence rates from 3 to 5 years of age. 
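Poisson regression with robust variance, the method used in the dental-trauma entry above, fits a log-link Poisson model to the binary outcome and then replaces the model-based covariance with a Huber-White sandwich estimator, so that exponentiated coefficients can be read as prevalence ratios with honest standard errors. The sketch below implements this with plain IRLS on simulated data; the prevalences and effect sizes are invented and only loosely inspired by the figures quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 888
overjet = rng.random(n) < 0.35            # overjet > 2 mm (made-up prevalence)
educ = rng.random(n) < 0.40               # mother with > 8 years of education
p = 0.25 * np.where(overjet, 1.6, 1.0) * np.where(educ, 1.3, 1.0)
y = (rng.random(n) < p).astype(float)     # binary outcome: any dental trauma

X = np.column_stack([np.ones(n), overjet.astype(float), educ.astype(float)])

# Poisson regression (log link) fitted by iteratively reweighted least squares
beta = np.zeros(X.shape[1])
for _ in range(100):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu          # working response
    W = mu                                # working weights for the log link
    XtWX = X.T @ (W[:, None] * X)
    beta_new = np.linalg.solve(XtWX, X.T @ (W * z))
    converged = np.max(np.abs(beta_new - beta)) < 1e-10
    beta = beta_new
    if converged:
        break

# Huber-White (sandwich) covariance: robust to the Poisson variance being wrong
mu = np.exp(X @ beta)
bread = np.linalg.inv(X.T @ (mu[:, None] * X))
meat = X.T @ (((y - mu) ** 2)[:, None] * X)
cov_robust = bread @ meat @ bread
se = np.sqrt(np.diag(cov_robust))

for name, b, s in zip(["intercept", "overjet > 2 mm", "mother educ > 8 y"], beta, se):
    pr, lo, hi = np.exp(b), np.exp(b - 1.96 * s), np.exp(b + 1.96 * s)
    print(f"{name:>17}: PR = {pr:.2f}  (95% CI {lo:.2f}-{hi:.2f})")
```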
Poisson regression analysis showed that the probability of the outcome was almost 30% higher for children whose mothers had more than 8 years of education (Prevalence Ratio = 1.28; 95% CI = 1.03-1.60) and 63% higher for children with an overjet greater than 2 mm (Prevalence Ratio = 1.63; 95% CI = 1.31-2.03). Odds ratios clearly overestimated the size of the effect when compared with prevalence ratios. These findings indicate the need for preventive orientation regarding TDI, in order to educate parents and caregivers about supervising infants, particularly those with increased overjet and whose mothers have a higher level of education. Poisson regression with robust variance represents a better alternative than logistic regression to estimate the risk of dental trauma in preschool children. 14. Nonextensive statistical mechanics of ionic solutions International Nuclear Information System (INIS) Varela, L.M.; Carrete, J.; Munoz-Sola, R.; Rodriguez, J.R.; Gallego, J. 2007-01-01 Classical mean-field Poisson-Boltzmann theory of ionic solutions is revisited in the theoretical framework of nonextensive Tsallis statistics. The nonextensive equivalent of Poisson-Boltzmann equation is formulated revisiting the statistical mechanics of liquids and the Debye-Hueckel framework is shown to be valid for highly diluted solutions even under circumstances where nonextensive thermostatistics must be applied. The lowest order corrections associated to nonadditive effects are identified for both symmetric and asymmetric electrolytes and the behavior of the average electrostatic potential in a homogeneous system is analytically and numerically analyzed for various values of the complexity measurement nonextensive parameter q 15. Encapsulation of curcumin in polyelectrolyte nanocapsules and their neuroprotective activity Science.gov (United States) Szczepanowicz, Krzysztof; Jantas, Danuta; Piotrowski, Marek; Staroń, Jakub; Leśkiewicz, Monika; Regulska, Magdalena; Lasoń, Władysław; Warszyński, Piotr 2016-09-01 Poor water solubility and low bioavailability of lipophilic drugs can be potentially improved with the use of delivery systems. In this study, encapsulation of nanoemulsion droplets was utilized to prepare curcumin nanocarriers. Nanosize droplets containing the drug were encapsulated in polyelectrolyte shells formed by the layer-by-layer (LbL) adsorption of biocompatible polyelectrolytes: poly-L-lysine (PLL) and poly-L-glutamic acid (PGA). The size of synthesized nanocapsules was around 100 nm. Their biocompatibility and neuroprotective effects were evaluated on the SH-SY5Y human neuroblastoma cell line using cell viability/toxicity assays (MTT reduction, LDH release). Statistically significant toxic effect was clearly observed for PLL coated nanocapsules (reduction in cell viability about 20%-60%), while nanocapsules with PLL/PGA coating did not evoke any detrimental effects on SH-SY5Y cells. Curcumin encapsulated in PLL/PGA showed similar neuroprotective activity against hydrogen peroxide (H2O2)-induced cell damage, as did 5 μM curcumin pre-dissolved in DMSO (about 16% of protection). Determination of concentration of curcumin in cell lysate confirmed that curcumin in nanocapsules has cell protective effect in lower concentrations (at least 20 times) than when given alone. Intracellular mechanisms of encapsulated curcumin-mediated protection engaged the prevention of the H2O2-induced decrease in mitochondrial membrane potential (MMP) but did not attenuate Reactive Oxygen Species (ROS) formation. 
The obtained results indicate the utility of PLL/PGA shell nanocapsules as a promising, alternative way of curcumin delivery for neuroprotective purposes with improved efficiency and reduced toxicity. 16. Sensitivity study of poisson corruption in tomographic measurements for air-water flows International Nuclear Information System (INIS) Munshi, P.; Vaidya, M.S. 1993-01-01 An application of computerized tomography (CT) for measuring void fraction profiles in two-phase air-water flows was reported earlier. Those attempts involved some special radial methods for tomographic reconstruction and the popular convolution backprojection (CBP) method. The CBP method is capable of reconstructing void profiles for nonsymmetric flows also. In this paper, we investigate the effect of corrupted CT data for gamma-ray sources and aCBP algorithm. The corruption in such a case is due to the statistical (Poisson) nature of the source 17. Nutritional management of encapsulating peritoneal sclerosis with ... African Journals Online (AJOL) Keywords: intradialytic parenteral nutrition, nutritional management, encapsulating peritoneal sclerosis ... reflection of fluid retention and the underlying inflammatory process, ... The patient appeared weak and frail, with severe generalised muscle ... was recommended on diagnosis of EPS to prevent further peritoneal. 18. Poisson sigma model with branes and hyperelliptic Riemann surfaces International Nuclear Information System (INIS) Ferrario, Andrea 2008-01-01 We derive the explicit form of the superpropagators in the presence of general boundary conditions (coisotropic branes) for the Poisson sigma model. This generalizes the results presented by Cattaneo and Felder [''A path integral approach to the Kontsevich quantization formula,'' Commun. Math. Phys. 212, 591 (2000)] and Cattaneo and Felder ['Coisotropic submanifolds in Poisson geometry and branes in the Poisson sigma model', Lett. Math. Phys. 69, 157 (2004)] for Kontsevich's angle function [Kontsevich, M., 'Deformation quantization of Poisson manifolds I', e-print arXiv:hep.th/0101170] used in the deformation quantization program of Poisson manifolds. The relevant superpropagators for n branes are defined as gauge fixed homotopy operators of a complex of differential forms on n sided polygons P n with particular ''alternating'' boundary conditions. In the presence of more than three branes we use first order Riemann theta functions with odd singular characteristics on the Jacobian variety of a hyperelliptic Riemann surface (canonical setting). In genus g the superpropagators present g zero mode contributions 19. Poisson image reconstruction with Hessian Schatten-norm regularization. Science.gov (United States) Lefkimmiatis, Stamatios; Unser, Michael 2013-11-01 Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. 
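For Poisson-noise image restoration of the kind targeted by the sensor-noise and Hessian Schatten-norm entries above, the classical baseline is Richardson-Lucy deconvolution, i.e. the EM iteration for the Poisson likelihood with a known blur kernel. The sketch below runs that baseline on a synthetic scene; it is not one of the regularised methods proposed in those abstracts, and the kernel, test image and iteration count are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(5)

# Synthetic scene, Gaussian blur kernel, Poisson photon noise
truth = np.zeros((64, 64))
truth[20:40, 24:32] = 50.0
truth[10, 50] = 400.0
g = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
kernel = np.outer(g, g)
kernel /= kernel.sum()

blurred = fftconvolve(truth, kernel, mode="same")
data = rng.poisson(blurred).astype(float)

# Richardson-Lucy / Poisson EM iteration:  x <- x * K^T( y / (K x) )
x = np.full_like(data, data.mean())
kernel_flip = kernel[::-1, ::-1]
for _ in range(50):
    Kx = fftconvolve(x, kernel, mode="same")
    ratio = data / np.maximum(Kx, 1e-12)
    x *= fftconvolve(ratio, kernel_flip, mode="same")

print("blurred data vs truth, mean abs error:", round(float(np.abs(data - truth).mean()), 2))
print("RL estimate  vs truth, mean abs error:", round(float(np.abs(x - truth).mean()), 2))
```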
These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework. Science.gov (United States) 2015-12-15 Variations in birth frequencies have an impact on activity planning in maternity wards. Previous studies of this phenomenon have commonly included elective births. A Danish study of spontaneous births found that birth frequencies were well modelled by a Poisson process. Somewhat unexpectedly, there were also weekly variations in the frequency of spontaneous births. Another study claimed that birth frequencies follow the Benford distribution. Our objective was to test these results. We analysed 50,017 spontaneous births at Akershus University Hospital in the period 1999-2014. To investigate the Poisson distribution of these births, we plotted their variance over a sliding average. We specified various Poisson regression models, with the number of births on a given day as the outcome variable. The explanatory variables included various combinations of years, months, days of the week and the digit sum of the date. The relationship between the variance and the average fits well with an underlying Poisson process. A Benford distribution was disproved by a goodness-of-fit test (p Poisson process when monthly and day-of-the-week variation is included. The frequency is highest in summer towards June and July, Friday and Tuesday stand out as particularly busy days, and the activity level is at its lowest during weekends. 1. Sclerosing encapsulating peritonitis: a case report Energy Technology Data Exchange (ETDEWEB) Candido, Paula de Castro Menezes; Werner, Andrea de Freitas; Pereira, Izabela Machado Flores; Matos, Breno Assuncao; Pfeilsticker, Rudolf Moreira; Silva Filho, Raul, E-mail: [email protected] [Hospital Felicio Rocho, Belo Horizonte, MG (Brazil) 2015-01-15 Sclerosing encapsulating peritonitis, a rare cause of bowel obstruction, was described as a complication associated with peritoneal dialysis which is much feared because of its severity. The authors report a case where radiological findings in association with clinical symptoms have allowed for a noninvasive diagnosis of sclerosing encapsulating peritonitis, emphasizing the high sensitivity and specificity of computed tomography to demonstrate the characteristic findings of such a condition. (author) 2. Encapsulated Islet Transplantation: Where Do We Stand? Science.gov (United States) Vaithilingam, Vijayaganapathy; Bal, Sumeet; Tuch, Bernard E 2017-01-01 Transplantation of pancreatic islets encapsulated within immuno-protective microcapsules is a strategy that has the potential to overcome graft rejection without the need for toxic immunosuppressive medication. However, despite promising preclinical studies, clinical trials using encapsulated islets have lacked long-term efficacy, and although generally considered clinically safe, have not been encouraging overall. 
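Returning to the birth-frequency entry earlier in this record: the simplest check that daily counts are compatible with a Poisson process is a dispersion test comparing their variance to their mean, since the two are equal for a Poisson distribution. The sketch below applies the classical chi-square dispersion test to simulated daily counts; the 16-year span and the mean of 9 births per day are assumed values, and the study's sliding-average comparison and its regression with month and weekday terms are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

days = 16 * 365
counts = rng.poisson(lam=9.0, size=days)     # simulated spontaneous births per day

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean = {mean:.2f}, variance = {var:.2f}, ratio = {var / mean:.3f}")

# Classical dispersion test: (n-1) * var / mean ~ chi-square(n-1) under the Poisson hypothesis
d = (days - 1) * var / mean
p = 2 * min(stats.chi2.cdf(d, days - 1), stats.chi2.sf(d, days - 1))
print(f"two-sided dispersion test p-value: {p:.3f}")
```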
One of the major factors limiting the long-term function of encapsulated islets is the host's immunological reaction to the transplanted graft which is often manifested as pericapsular fibrotic overgrowth (PFO). PFO forms a barrier on the capsule surface that prevents the ingress of oxygen and nutrients leading to islet cell starvation, hypoxia and death. The mechanism of PFO formation is still not elucidated fully and studies using a pig model have tried to understand the host immune response to empty alginate microcapsules. In this review, the varied strategies to overcome or reduce PFO are discussed, including alginate purification, altering microcapsule geometry, modifying alginate chemical composition, co-encapsulation with immunomodulatory cells, administration of pharmacological agents, and alternative transplantation sites. Nanoencapsulation technologies, such as conformal and layer-by-layer coating technologies, as well as nanofiber, thin-film nanoporous devices, and silicone based NanoGland devices are also addressed. Finally, this review outlines recent progress in imaging technologies to track encapsulated cells, as well as promising perspectives concerning the production of insulin-producing cells from stem cells for encapsulation. 3. Four-dimensional gravity as an almost-Poisson system Science.gov (United States) Ita, Eyo Eyo 2015-04-01 In this paper, we examine the phase space structure of a noncanonical formulation of four-dimensional gravity referred to as the Instanton representation of Plebanski gravity (IRPG). The typical Hamiltonian (symplectic) approach leads to an obstruction to the definition of a symplectic structure on the full phase space of the IRPG. We circumvent this obstruction, using the Lagrange equations of motion, to find the appropriate generalization of the Poisson bracket. It is shown that the IRPG does not support a Poisson bracket except on the vector constraint surface. Yet there exists a fundamental bilinear operation on its phase space which produces the correct equations of motion and induces the correct transformation properties of the basic fields. This bilinear operation is known as the almost-Poisson bracket, which fails to satisfy the Jacobi identity and in this case also the condition of antisymmetry. We place these results into the overall context of nonsymplectic systems. 4. The coupling of Poisson sigma models to topological backgrounds Energy Technology Data Exchange (ETDEWEB) Rosa, Dario [School of Physics, Korea Institute for Advanced Study,Seoul 02455 (Korea, Republic of) 2016-12-13 We extend the coupling to the topological backgrounds, recently worked out for the 2-dimensional BF-model, to the most general Poisson sigma models. The coupling involves the choice of a Casimir function on the target manifold and modifies the BRST transformations. This in turn induces a change in the BRST cohomology of the resulting theory. The observables of the coupled theory are analyzed and their geometrical interpretation is given. We finally couple the theory to 2-dimensional topological gravity: this is the first step to study a topological string theory in propagation on a Poisson manifold. As an application, we show that the gauge-fixed vectorial supersymmetry of the Poisson sigma models has a natural explanation in terms of the theory coupled to topological gravity. 5. Modified Poisson eigenfunctions for electrostatic Bernstein--Greene--Kruskal equilibria International Nuclear Information System (INIS) Ling, K.; Abraham-Shrauner, B. 
1981-01-01 The stability of an electrostatic Bernstein--Greene--Kruskal equilibrium by Lewis and Symon's general linear stability analysis for spatially inhomogeneous Vlasov equilibria, which employs eigenfunctions and eigenvalues of the equilibrium Liouville operator and the modified Poisson operator, is considered. Analytic expressions for the Liouville eigenfuctions and eigenvalues have already been given; approximate analytic expressions for the dominant eigenfunction and eigenvalue of the modified Poisson operator are given. In the kinetic limit three methods are given: (i) the perturbation method, (ii) the Rayleigh--Ritz method, and (iii) a method based on a Hill's equation. In the fluid limit the Rayleigh--Ritz method is used. The dominant eigenfunction and eigenvalue are then substituted in the dispersion relation and the growth rate calculated. The growth rate agrees very well with previous results found by numerical simulation and by modified Poisson eigenfunctions calculated numerically 6. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator! International Nuclear Information System (INIS) Nutku, Yavuz 2003-01-01 Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems 7. Poisson structures for reduced non-holonomic systems International Nuclear Information System (INIS) Ramos, Arturo 2004-01-01 Borisov, Mamaev and Kilin have recently found certain Poisson structures with respect to which the reduced and rescaled systems of certain non-holonomic problems, involving rolling bodies without slipping, become Hamiltonian, the Hamiltonian function being the reduced energy. We study further the algebraic origin of these Poisson structures, showing that they are of rank 2 and therefore the mentioned rescaling is not necessary. We show that they are determined, up to a non-vanishing factor function, by the existence of a system of first-order differential equations providing two integrals of motion. We generalize the form of the Poisson structures and extend their domain of definition. We apply the theory to the rolling disc, the Routh's sphere, the ball rolling on a surface of revolution, and its special case of a ball rolling inside a cylinder 8. A high order solver for the unbounded Poisson equation DEFF Research Database (Denmark) Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe In mesh-free particle methods a high order solution to the unbounded Poisson equation is usually achieved by constructing regularised integration kernels for the Biot-Savart law. Here the singular, point particles are regularised using smoothed particles to obtain an accurate solution with an order...... of convergence consistent with the moments conserved by the applied smoothing function. 
In the hybrid particle-mesh method of Hockney and Eastwood (HE) the particles are interpolated onto a regular mesh where the unbounded Poisson equation is solved by a discrete non-cyclic convolution of the mesh values...... and the integration kernel. In this work we show an implementation of high order regularised integration kernels in the HE algorithm for the unbounded Poisson equation to formally achieve an arbitrary high order convergence. We further present a quantitative study of the convergence rate to give further insight... 9. Poisson-Fermi Formulation of Nonlocal Electrostatics in Electrolyte Solutions Directory of Open Access Journals (Sweden) Liu Jinn-Liang 2017-10-01 Full Text Available We present a nonlocal electrostatic formulation of nonuniform ions and water molecules with interstitial voids that uses a Fermi-like distribution to account for steric and correlation efects in electrolyte solutions. The formulation is based on the volume exclusion of hard spheres leading to a steric potential and Maxwell’s displacement field with Yukawa-type interactions resulting in a nonlocal electric potential. The classical Poisson-Boltzmann model fails to describe steric and correlation effects important in a variety of chemical and biological systems, especially in high field or large concentration conditions found in and near binding sites, ion channels, and electrodes. Steric effects and correlations are apparent when we compare nonlocal Poisson-Fermi results to Poisson-Boltzmann calculations in electric double layer and to experimental measurements on the selectivity of potassium channels for K+ over Na+. 10. Statistics for experimentalists CERN Document Server Cooper, B E 2014-01-01 Statistics for Experimentalists aims to provide experimental scientists with a working knowledge of statistical methods and search approaches to the analysis of data. The book first elaborates on probability and continuous probability distributions. Discussions focus on properties of continuous random variables and normal variables, independence of two random variables, central moments of a continuous distribution, prediction from a normal distribution, binomial probabilities, and multiplication of probabilities and independence. The text then examines estimation and tests of significance. Topics include estimators and estimates, expected values, minimum variance linear unbiased estimators, sufficient estimators, methods of maximum likelihood and least squares, and the test of significance method. The manuscript ponders on distribution-free tests, Poisson process and counting problems, correlation and function fitting, balanced incomplete randomized block designs and the analysis of covariance, and experiment... 11. Cauchy–Schwarz inequality-based criteria for the non-classicality of sub-Poisson and antibunched light Energy Technology Data Exchange (ETDEWEB) Volovich, Igor V., E-mail: [email protected] 2016-01-08 We discuss non-classicality of photon antibunching and sub-Poisson photon statistics. The difference K between the variance and the mean of the particle number operator as a measure of non-classicality of a state is discussed. The non-classicality of quantum states, discussed here, is different from another non-classicality, related with Bell's inequalities and entanglement though both can be traced to the violation of an inequality implied by an assumption of classicality that utilized the Cauchy–Schwarz inequality in the derivation. 
- Highlights: • Non-classicality of photon antibunching and sub-Poisson statistics are discussed. • The Cauchy–Schwarz inequality provides criteria for the non-classicality. • The vacuum contribution makes the superposition of quantum states more classical. • Experiments to generate non-classical superpositions of the Fock states are suggested. 12. Efficient maximal Poisson-disk sampling and remeshing on surfaces KAUST Repository Guo, Jianwei; Yan, Dongming; Jia, Xiaohong; Zhang, Xiaopeng 2015-01-01 Poisson-disk sampling is one of the fundamental research problems in computer graphics that has many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves the state-of-the-art approach in efficiency, quality and the memory consumption. 13. Robust iterative observer for source localization for Poisson equation KAUST Repository 2017-01-05 The source localization problem for the Poisson equation with available noisy boundary data is well known to be highly sensitive to noise. The problem is ill posed and fails to fulfill Hadamard's stability criteria for well-posedness. In this work, first a robust iterative observer is presented for the boundary estimation problem for the Laplace equation, and then this algorithm, along with the available noisy boundary data from the Poisson problem, is used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design; however, one of the space variables is used as time-like. Numerical implementation along with simulation results is detailed towards the end. 14. Gyrokinetic energy conservation and Poisson-bracket formulation International Nuclear Information System (INIS) Brizard, A. 1989-01-01 An integral expression for the gyrokinetic total energy of a magnetized plasma, with general magnetic field configuration perturbed by fully electromagnetic fields, was recently derived through the use of a gyrocenter Lie transformation. It is shown that the gyrokinetic energy is conserved by the gyrokinetic Hamiltonian flow to all orders in perturbed fields. An explicit demonstration that a gyrokinetic Hamiltonian containing quadratic nonlinearities preserves the gyrokinetic energy up to third order is given. The Poisson-bracket formulation greatly facilitates this demonstration with the help of the Jacobi identity and other properties of the Poisson brackets 15. Dilaton gravity, Poisson sigma models and loop quantum gravity International Nuclear Information System (INIS) Bojowald, Martin; Reyes, Juan D 2009-01-01 Spherically symmetric gravity in Ashtekar variables coupled to Yang-Mills theory in two dimensions and its relation to dilaton gravity and Poisson sigma models are discussed. After introducing its loop quantization, quantum corrections for inverse triad components are shown to provide a consistent deformation without anomalies. The relation to Poisson sigma models provides a covariant action principle of the quantum-corrected theory with effective couplings. Results are also used to provide loop quantizations of spherically symmetric models in arbitrary D spacetime dimensions. 16. H∞
filtering for stochastic systems driven by Poisson processes Science.gov (United States) Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya 2015-01-01 This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising the martingale theory such as the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Then, based on this, this paper designs an H∞ filter such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme. 17. Poisson's theorem and integrals of KdV equation International Nuclear Information System (INIS) Tasso, H. 1978-01-01 Using Poisson's theorem it is proved that if F = ∫_{-∞}^{+∞} T(u, u_x, ..., u_(n), t) dx is an invariant functional of the KdV equation, then ∫_{-∞}^{+∞} (δF/δu) dx = ∫_{-∞}^{+∞} (δT/δu) dx is also an invariant functional. In the case of a polynomial T, one finds in a simple way the known recursion δT_r/δu = T_(r-1). This note gives an example of the usefulness of Poisson's theorem. (author) 18. Robust iterative observer for source localization for Poisson equation KAUST Repository 2017-01-01 The source localization problem for the Poisson equation with available noisy boundary data is well known to be highly sensitive to noise. The problem is ill posed and fails to fulfill Hadamard's stability criteria for well-posedness. In this work, first a robust iterative observer is presented for the boundary estimation problem for the Laplace equation, and then this algorithm, along with the available noisy boundary data from the Poisson problem, is used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design; however, one of the space variables is used as time-like. Numerical implementation along with simulation results is detailed towards the end. 19. Efficient maximal Poisson-disk sampling and remeshing on surfaces KAUST Repository Guo, Jianwei 2015-02-01 Poisson-disk sampling is one of the fundamental research problems in computer graphics that has many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves the state-of-the-art approach in efficiency, quality and the memory consumption. 20. Adaptive maximal poisson-disk sampling on surfaces KAUST Repository Yan, Dongming 2012-01-01 In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which is the key ingredient of the adaptive maximal Poisson-disk sampling framework. Moreover, we adapt the presented sampling framework for remeshing applications. Several novel and efficient operators are developed for improving the sampling/meshing quality over the state-of-the-art. © 2012 ACM. 1.
Efficient triangulation of Poisson-disk sampled point sets KAUST Repository Guo, Jianwei 2014-05-06 In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg. 2. Modifications in the AUTOMESH and other POISSON Group Codes International Nuclear Information System (INIS) Gupta, R.C. 1986-01-01 Improvements in the POISSON Group Codes are discussed. These improvements allow one to compute magnetic field to an accuracy of a few parts in 100,000 in quite complicated geometries with a reduced requirement on computational time and computer memory. This can be accomplished mainly by making the mesh dense at some places and sparse at other places. AUTOMESH has been modified so that one can use variable mesh size conveniently and efficiently at a number of places. We will present an example to illustrate these techniques. Several other improvements in the codes AUTOMESH, LATTICE and POISSON will also be discussed 3. Quadratic Hamiltonians on non-symmetric Poisson structures International Nuclear Information System (INIS) Arribas, M.; Blesa, F.; Elipe, A. 2007-01-01 Many dynamical systems may be represented in a set of non-canonical coordinates that generate an su(2) algebraic structure. The topology of the phase space is the one of the S 2 sphere, the Poisson structure is the one of the rigid body, and the Hamiltonian is a parametric quadratic form in these 'spherical' coordinates. However, there are other problems in which the Poisson structure losses its symmetry. In this paper we analyze this case and, we show how the loss of the spherical symmetry affects the phase flow and parametric bifurcations for the bi-parametric cases 4. Durability of incinerator ash waste encapsulated in modified sulfur cement International Nuclear Information System (INIS) Kalb, P.D.; Heiser, J.H. III; Pietrzak, R.; Colombo, P. 1991-01-01 5. Encapsulation in the food industry: a review. Science.gov (United States) Gibbs, B F; Kermasha, S; Alli, I; Mulligan, C N 1999-05-01 Encapsulation involves the incorporation of food ingredients, enzymes, cells or other materials in small capsules. Applications for this technique have increased in the food industry since the encapsulated materials can be protected from moisture, heat or other extreme conditions, thus enhancing their stability and maintaining viability. Encapsulation in foods is also utilized to mask odours or tastes. Various techniques are employed to form the capsules, including spray drying, spray chilling or spray cooling, extrusion coating, fluidized bed coating, liposome entrapment, coacervation, inclusion complexation, centrifugal extrusion and rotational suspension separation. Each of these techniques is discussed in this review. A wide variety of foods is encapsulated--flavouring agents, acids bases, artificial sweeteners, colourants, preservatives, leavening agents, antioxidants, agents with undesirable flavours, odours and nutrients, among others. 
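The maximal Poisson-disk sampling and triangulation entries above (items 12, 19, 20 and 1) all rest on the same primitive: generating samples no closer than a radius r, with a regular grid (or subdivided mesh) used for conflict checking. A minimal 2D sketch of that primitive follows; it is Bridson-style dart throwing on a flat domain, not the surface-based maximal sampling of those papers, and the domain, radius and candidate count are illustrative.

```python
# Minimal 2D sketch of Poisson-disk sampling by dart throwing with a background
# grid for conflict checking (Bridson-style).  This only illustrates the
# grid-based conflict test mentioned in the sampling/triangulation entries
# above; parameters (domain, radius r, candidates k) are illustrative.
import math
import random

def poisson_disk_2d(width, height, r, k=30, seed=0):
    rng = random.Random(seed)
    cell = r / math.sqrt(2.0)                    # each grid cell holds at most one sample
    nx, ny = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * ny for _ in range(nx)]
    samples, active = [], []

    def grid_idx(p):
        return int(p[0] / cell), int(p[1] / cell)

    def no_conflict(p):
        gx, gy = grid_idx(p)
        for i in range(max(gx - 2, 0), min(gx + 3, nx)):
            for j in range(max(gy - 2, 0), min(gy + 3, ny)):
                q = grid[i][j]
                if q is not None and (p[0] - q[0])**2 + (p[1] - q[1])**2 < r * r:
                    return False
        return True

    def accept(p):
        gx, gy = grid_idx(p)
        grid[gx][gy] = p
        samples.append(p)
        active.append(p)

    accept((rng.uniform(0, width), rng.uniform(0, height)))
    while active:
        base = active[rng.randrange(len(active))]
        for _ in range(k):                        # k candidate darts around an active sample
            rad = rng.uniform(r, 2 * r)
            ang = rng.uniform(0, 2 * math.pi)
            p = (base[0] + rad * math.cos(ang), base[1] + rad * math.sin(ang))
            if 0 <= p[0] < width and 0 <= p[1] < height and no_conflict(p):
                accept(p)
                break
        else:                                     # no candidate fits: retire this sample
            active.remove(base)
    return samples

print(len(poisson_disk_2d(1.0, 1.0, 0.05)))
```

The choice of cell size r/√2 guarantees at most one accepted sample per cell, so the conflict test only needs to look at a fixed 5x5 neighbourhood; the surface papers replace this flat grid with a subdivided mesh.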
The use of encapsulation for sweeteners such as aspartame and flavours in chewing gum is well known. Fats, starches, dextrins, alginates, protein and lipid materials can be employed as encapsulating materials. Various methods exist to release the ingredients from the capsules. Release can be site-specific, stage-specific or signalled by changes in pH, temperature, irradiation or osmotic shock. In the food industry, the most common method is by solvent-activated release. The addition of water to dry beverages or cake mixes is an example. Liposomes have been applied in cheese-making, and its use in the preparation of food emulsions such as spreads, margarine and mayonnaise is a developing area. Most recent developments include the encapsulation of foods in the areas of controlled release, carrier materials, preparation methods and sweetener immobilization. New markets are being developed and current research is underway to reduce the high production costs and lack of food-grade materials. 6. Two-sample discrimination of Poisson means Science.gov (United States) Lampton, M. 1994-01-01 This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry. 7. Cytokine production induced by non-encapsulated and encapsulated Porphyromonas gingivalis strains NARCIS (Netherlands) Kunnen, A.; Dekker, D.C.; van Pampus, M.G.; Harmsen, H.J.; Aarnoudse, J.G.; Abbas, F.; Faas, M.M. Objective: Although the exact reason is not known, encapsulated gram-negative Porphyromonas gingivalis strains are more virulent than non-encapsulated strains. Since difference in virulence properties may be due to difference in cytokine production following recognition of the bacteria or their 8. DL_MG: A Parallel Multigrid Poisson and Poisson-Boltzmann Solver for Electronic Structure Calculations in Vacuum and Solution. Science.gov (United States) Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton 2018-03-13 The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential-a key component of the quantum mechanical Hamiltonian. 
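Returning to the two-sample discrimination of Poisson means (item 6 above): conditioning on the total count N = A + B reduces the problem to a binomial test with success fraction f set by the exposure ratio. A hedged sketch, with made-up counts and exposures, follows.

```python
# Hedged sketch of the two-sample Poisson comparison described in item 6 above:
# conditional on the total N = A + B, the count A is Binomial(N, f) under the
# null hypothesis that both samples share one Poisson rate, with f fixed by the
# exposure ratio.  The exposures and counts below are made-up examples.
from scipy.stats import binomtest

def poisson_two_sample(a, b, exposure_a, exposure_b):
    """Exact test that counts a and b are consistent with a common Poisson rate."""
    f = exposure_a / (exposure_a + exposure_b)   # expected fraction of events in sample A
    return binomtest(a, n=a + b, p=f, alternative="two-sided").pvalue

# e.g. 23 counts in a 1 ks on-source exposure vs 60 counts in a 10 ks background exposure
print(poisson_two_sample(23, 60, exposure_a=1.0, exposure_b=10.0))
```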
In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ∼10 9 unknowns and 100s of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver. 9. Decomposition of almost-Poisson structure of generalised Chaplygin's nonholonomic systems International Nuclear Information System (INIS) Chang, Liu; Peng, Chang; Shi-Xing, Liu; Yong-Xin, Guo 2010-01-01 This paper constructs an almost-Poisson structure for the non-self-adjoint dynamical systems, which can be decomposed into a sum of a Poisson bracket and the other almost-Poisson bracket. The necessary and sufficient condition for the decomposition of the almost-Poisson bracket to be two Poisson ones is obtained. As an application, the almost-Poisson structure for generalised Chaplygin's systems is discussed in the framework of the decomposition theory. It proves that the almost-Poisson bracket for the systems can be decomposed into the sum of a canonical Poisson bracket and another two noncanonical Poisson brackets in some special cases, which is useful for integrating the equations of motion 10. Photopolymerizable liquid encapsulants for microelectronic devices Science.gov (United States) Baikerikar, Kiran K. 2000-10-01 Plastic encapsulated microelectronic devices consist of a silicon chip that is physically attached to a leadframe, electrically interconnected to input-output leads, and molded in a plastic that is in direct contact with the chip, leadframe, and interconnects. The plastic is often referred to as the molding compound, and is used to protect the chip from adverse mechanical, thermal, chemical, and electrical environments. Encapsulation of microelectronic devices is typically accomplished using a transfer molding process in which the molding compound is cured by heat. Most transfer molding processes suffer from significant problems arising from the high operating temperatures and pressures required to fill the mold. These aspects of the current process can lead to thermal stresses, incomplete mold filling, and wire sweep. 
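Referring back to the DL_MG entry (item 8 above): the generalized Poisson equation it targets, div(ε(r) grad φ) = -ρ, differs from the vacuum case by a spatially varying permittivity. The sketch below is only a small finite-difference illustration of that equation with zero-Dirichlet boundaries and a sparse direct solve; it is not DL_MG's multigrid/defect-correction algorithm, and the grid, permittivity map and source term are invented for the example.

```python
# Minimal finite-difference sketch of the generalized Poisson equation
#     div( eps(x) grad phi ) = -rho
# on a 2D grid with zero-Dirichlet boundaries, solved with a sparse direct
# solver.  Illustration only: not DL_MG's multigrid/defect-correction method;
# the grid, eps and rho fields below are made up.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_generalized_poisson(eps, rho, h):
    n = rho.shape[0]                      # interior n x n nodes, phi = 0 outside the grid
    def face(e1, e2):                     # harmonic mean of eps at the face between two cells
        return 2.0 * e1 * e2 / (e1 + e2)
    A = sp.lil_matrix((n * n, n * n))
    b = (rho * h * h).ravel()
    k = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            diag = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                inside = 0 <= ii < n and 0 <= jj < n
                # at the boundary the face permittivity is approximated by the cell value
                e_face = face(eps[i, j], eps[ii, jj]) if inside else eps[i, j]
                diag += e_face
                if inside:
                    A[k(i, j), k(ii, jj)] = -e_face
            A[k(i, j), k(i, j)] = diag
    phi = spla.spsolve(A.tocsr(), b)
    return phi.reshape(n, n)

if __name__ == "__main__":
    n, h = 40, 1.0 / 41
    eps = np.ones((n, n)); eps[:, n // 2:] = 80.0     # vacuum-like / water-like halves
    rho = np.zeros((n, n)); rho[n // 4, n // 4] = 1.0 / h**2
    phi = solve_generalized_poisson(eps, rho, h)
    print(phi.max())
```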
In this research, a new strategy for encapsulating microelectronic devices using photopolymerizable liquid encapsulants (PLEs) has been investigated. The PLEs consist of an epoxy novolac-based vinyl ester resin (˜25 wt.%), fused silica filler (70--74 wt.%), and a photoinitiator, thermal initiator, and silane coupling agent. For these encapsulants, the use of light, rather than heat, to initiate the polymerization allows precise control over when the reaction starts, and therefore completely decouples the mold filling and the cure. The low viscosity of the PLEs allows for low operating pressures and minimizes problems associated with wire sweep. In addition, the in-mold cure time for the PLEs is equivalent to the in-mold cure times of current transfer molding compounds. In this thesis, the thermal and mechanical properties, as well as the viscosity and adhesion of photopolymerizable liquid encapsulants, are reported in order to demonstrate that a UV-curable formulation can have the material properties necessary for microelectronic encapsulation. In addition, the effects of the illumination time, postcure time, fused silica loading, and the inclusion 11. Encapsulation layer design and scalability in encapsulated vertical 3D RRAM International Nuclear Information System (INIS) Yu, Muxi; Fang, Yichen; Wang, Zongwei; Chen, Gong; Pan, Yue; Yang, Xue; Yin, Minghui; Yang, Yuchao; Li, Ming; Cai, Yimao; Huang, Ru 2016-01-01 Here we propose a novel encapsulated vertical 3D RRAM structure with each resistive switching cell encapsulated by dielectric layers, contributing to both the reliability improvement of individual cells and thermal disturbance reduction of adjacent cells due to the effective suppression of unwanted oxygen vacancy diffusion. In contrast to the traditional vertical 3D RRAM, encapsulated bar-electrodes are adopted in the proposed structure substituting the previous plane-electrodes, thus encapsulated resistive switching cells can be naturally formed by simply oxidizing the tip of the metal bar-electrodes. In this work, TaO x -based 3D RRAM devices with SiO 2 and Si 3 N 4 as encapsulation layers are demonstrated, both showing significant advantages over traditional unencapsulated vertical 3D RRAM. Furthermore, it was found thermal conductivity and oxygen blocking ability are two key parameters of the encapsulation layer design influencing the scalability of vertical 3D RRAM. Experimental and simulation data show that oxygen blocking ability is more critical for encapsulation layers in the relatively large scale, while thermal conductivity becomes dominant as the stacking layers scale to the sub-10 nm regime. Finally, based on the notable impacts of the encapsulation layer on 3D RRAM scaling, an encapsulation material with both excellent oxygen blocking ability and high thermal conductivity such as AlN is suggested to be highly desirable to maximize the advantages of the proposed encapsulated structure. The findings in this work could pave the way for reliable ultrahigh-density storage applications in the big data era. (paper) 12. Multi-parameter full waveform inversion using Poisson KAUST Repository Oh, Juwon 2016-07-21 In multi-parameter full waveform inversion (FWI), the success of recovering each parameter is dependent on characteristics of the partial derivative wavefields (or virtual sources), which differ according to parameterisation. 
Elastic FWIs based on the two conventional parameterisations (one uses Lamé constants and density; the other employs P- and S-wave velocities and density) have low resolution of gradients for P-wave velocities (or λ). Limitations occur because the virtual sources for P-wave velocity or λ (one of the Lamé constants) are related only to P-P diffracted waves, and generate isotropic explosions, which reduce the spatial resolution of the FWI for these parameters. To increase the spatial resolution, we propose a new parameterisation using P-wave velocity, Poisson's ratio, and density for frequency-domain multi-parameter FWI for isotropic elastic media. By introducing Poisson's ratio instead of S-wave velocity, the virtual source for the P-wave velocity generates P-S and S-S diffracted waves as well as P-P diffracted waves in the partial derivative wavefields for the P-wave velocity. Numerical examples of the cross-triangle-square (CTS) model indicate that the new parameterisation provides highly resolved descent directions for the P-wave velocity. Numerical examples of noise-free and noisy data synthesised for the elastic Marmousi-II model support the fact that the new parameterisation is more robust for noise than the two conventional parameterisations. 13. Steady state solution of the Poisson-Nernst-Planck equations International Nuclear Information System (INIS) Golovnev, A.; Trimper, S. 2010-01-01 The exact steady state solution of the Poisson-Nernst-Planck equations (PNP) is given in terms of Jacobi elliptic functions. A more tractable approximate solution is derived which can be used to compare the results with experimental observations in binary electrolytes. The breakdown of the PNP for high concentration and high applied voltage is discussed. 14. Coefficient Inverse Problem for Poisson's Equation in a Cylinder NARCIS (Netherlands) Solov'ev, V. V. 2011-01-01 The inverse problem of determining the coefficient on the right-hand side of Poisson's equation in a cylindrical domain is considered. The Dirichlet boundary value problem is studied. Two types of additional information (overdetermination) can be specified: (i) the trace of the solution to the 15. Poisson equation in the Kohn-Sham Coulomb problem OpenAIRE Manby, F. R.; Knowles, Peter James 2001-01-01 We apply the Poisson equation to the quantum mechanical Coulomb problem for many-particle systems. By introducing a suitable basis set, the two-electron Coulomb integrals become simple overlaps. This offers the possibility of very rapid linear-scaling treatment of the Coulomb contribution to Kohn-Sham theory. 16. Poisson and Gaussian approximation of weighted local empirical processes NARCIS (Netherlands) Einmahl, J.H.J. 1995-01-01 We consider the local empirical process indexed by sets, a greatly generalized version of the well-studied uniform tail empirical process. We show that the weak limit of weighted versions of this process is Poisson under certain conditions, whereas it is Gaussian in other situations. Our main 17. An application of the Autoregressive Conditional Poisson (ACP) model CSIR Research Space (South Africa) Holloway, Jennifer P 2010-11-01 Full Text Available When modelling count data that comes in the form of a time series, the static Poisson regression and standard time series models are often not appropriate. A current study therefore involves the evaluation of several observation-driven and parameter... 18.
Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX) DEFF Research Database (Denmark) Agosto, Arianna; Cavaliere, Giuseppe; Kristensen, Dennis We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used... 19. A high order solver for the unbounded Poisson equation DEFF Research Database (Denmark) Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe 2012-01-01 This work improves upon Hockney and Eastwood's Fourier-based algorithm for the unbounded Poisson equation to formally achieve arbitrary high order of convergence without any additional computational cost. We assess the methodology on the kinematic relations between the velocity and vorticity fields... 20. Particle-wave discrimination in Poisson spot experiments International Nuclear Information System (INIS) Reisinger, T; Bracco, G; Holst, B 2011-01-01 Matter-wave interferometry has been used extensively over the last few years to demonstrate the quantum-mechanical wave nature of increasingly larger and more massive particles. We have recently suggested the use of the historical Poisson spot setup to test the diffraction properties of larger objects. In this paper, we present the results of a classical particle van der Waals (vdW) force model for a Poisson spot experimental setup and compare these to Fresnel diffraction calculations with a vdW phase term. We include the effect of disc-edge roughness in both models. Calculations are performed with D2 and with C70 using realistic parameters. We find that the sensitivity of the on-axis interference/focus spot to disc-edge roughness is very different in the two cases. We conclude that by measuring the intensity on the optical axis as a function of disc-edge roughness, it can be determined whether the objects behave as de Broglie waves or classical particles. The scaling of the Poisson spot experiment to larger molecular masses is, however, not as favorable as in the case of near-field light-grating-based interferometers. Instead, we discuss the possibility of studying the Casimir-Polder potential using the Poisson spot setup. 1. Monitoring Poisson time series using multi-process models DEFF Research Database (Denmark) Engebjerg, Malene Dahl Skov; Lundbye-Christensen, Søren; Kjær, Birgitte B. aspects of health resource management may also be addressed. In this paper we center on the detection of outbreaks of infectious diseases. This is achieved by a multi-process Poisson state space model taking autocorrelation and overdispersion into account, which has been applied to a data set concerning... 2. Area-to-Area Poisson Kriging and Spatial Bayesian Analysis Science.gov (United States) 2016-10-01 Background: In many countries gastric cancer has the highest incidence among the gastrointestinal cancers and is the second most common cancer in Iran. The aim of this study was to identify and map high risk gastric cancer regions at the county-level in Iran. Methods: In this study we analyzed gastric cancer data for Iran in the years 2003-2010. Area-to-area Poisson kriging and Besag, York and Mollie (BYM) spatial models were applied to smoothing the standardized incidence ratios of gastric cancer for the 373 counties surveyed in this study.
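The autoregressive conditional Poisson entries above (items 17 and 18) model counts whose conditional mean feeds back on past counts. A minimal simulation of such a process is sketched below; the recursion and the parameter values are illustrative assumptions, and real applications would estimate the parameters by maximum likelihood, with exogenous covariates added in the PARX case.

```python
# Illustrative simulation of an autoregressive conditional Poisson (ACP /
# INGARCH-type) count process of the kind discussed in items 17 and 18 above:
#     y_t | past ~ Poisson(lam_t),  lam_t = omega + alpha * y_{t-1} + beta * lam_{t-1}
# Parameter values are made up; stationarity needs alpha + beta < 1.
import numpy as np

def simulate_acp(T, omega=0.5, alpha=0.3, beta=0.5, seed=1):
    rng = np.random.default_rng(seed)
    lam = np.empty(T)
    y = np.empty(T, dtype=int)
    lam[0] = omega / (1.0 - alpha - beta)        # start at the stationary mean
    y[0] = rng.poisson(lam[0])
    for t in range(1, T):
        lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
        y[t] = rng.poisson(lam[t])
    return y, lam

y, lam = simulate_acp(500)
print("sample mean:", y.mean(), " stationary mean:", 0.5 / (1 - 0.3 - 0.5))
```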
The two methods were compared in terms of accuracy and precision in identifying high risk regions. Result: The highest smoothed standardized incidence rate (SIR) according to area-to-area Poisson kriging was in Meshkinshahr county in Ardabil province in north-western Iran (2.4, SD=0.05), while the highest smoothed standardized incidence rate (SIR) according to the BYM model was in Ardabil, the capital of that province (2.9, SD=0.09). Conclusion: Both methods of mapping, ATA Poisson kriging and BYM, showed the gastric cancer incidence rate to be highest in north and north-west Iran. However, area-to-area Poisson kriging was more precise than the BYM model and required less smoothing. According to the results obtained, preventive measures and treatment programs should be focused on particular counties of Iran. Creative Commons Attribution License 3. Ruin probabilities for a regenerative Poisson gap generated risk process DEFF Research Database (Denmark) Asmussen, Søren; Biard, Romain A risk process with constant premium rate c and Poisson arrivals of claims is considered. A threshold r is defined for claim interarrival times, such that if k consecutive interarrival times are larger than r, then the next claim has distribution G. Otherwise, the claim size distribution is F... 4. Optimality of Poisson Processes Intensity Learning with Gaussian Processes NARCIS (Netherlands) Kirichenko, A.; van Zanten, H. 2015-01-01 In this paper we provide theoretical support for the so-called "Sigmoidal Gaussian Cox Process" approach to learning the intensity of an inhomogeneous Poisson process on a d-dimensional domain. This method was proposed by Adams, Murray and MacKay (ICML, 2009), who developed a tractable computational 5. Rate-optimal Bayesian intensity smoothing for inhomogeneous Poisson processes NARCIS (Netherlands) Belitser, E.; Andrade Serra, De P.J.; Zanten, van J.H. 2013-01-01 We apply nonparametric Bayesian methods to study the problem of estimating the intensity function of an inhomogeneous Poisson process. We exhibit a prior on intensities which both leads to a computationally feasible method and enjoys desirable theoretical optimality properties. The prior we use is 6. Nonparametric Bayesian inference for multidimensional compound Poisson processes NARCIS (Netherlands) Gugushvili, S.; van der Meulen, F.; Spreij, P. 2015-01-01 Given a sample from a discretely observed multidimensional compound Poisson process, we study the problem of nonparametric estimation of its jump size density r0 and intensity λ0. We take a nonparametric Bayesian approach to the problem and determine posterior contraction rates in this context, 7. Poisson processes on groups and Feynman path integrals International Nuclear Information System (INIS) Combe, P.; Rodriguez, R.; Aix-Marseille-2 Univ., 13 - Marseille; Sirugue, M.; Sirugue-Collin, M.; Centre National de la Recherche Scientifique, 13 - Marseille; Hoegh-Krohn, R. 1980-01-01 We give an expression for the perturbed evolution of a free evolution by gentle, possibly velocity dependent, potential, in terms of the expectation with respect to a Poisson process on a group. Various applications are given in particular to usual quantum mechanics but also to Fermi and spin systems. (orig.) 8. Some applications of the fractional Poisson probability distribution International Nuclear Information System (INIS) 2009-01-01 Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented.
As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limit case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above listed developments and implementations turn into the well-known results of the quantum optics and the theory of combinatorial numbers. 9. Poisson processes on groups and Feynman path integrals International Nuclear Information System (INIS) Combe, P.; Rodriguez, R.; Sirugue-Collin, M.; Centre National de la Recherche Scientifique, 13 - Marseille; Sirugue, M. 1979-09-01 An expression is given for the perturbed evolution of a free evolution by gentle, possibly velocity dependent, potential, in terms of the expectation with respect to a Poisson process on a group. Various applications are given in particular to usual quantum mechanics but also to Fermi and spin systems 10. Poisson's equation in de Sitter space-time Energy Technology Data Exchange (ETDEWEB) Pessa, E [Rome Univ. (Italy). Ist. di Matematica 1980-11-01 Based on a suitable generalization of Poisson's equation for de Sitter space-time the form of gravitation's law in 'projective relativity' is examined; it is found that, in the interior case, a small difference with the customary Newtonian law arises. This difference, of a repulsive character, can be very important in cosmological problems. Science.gov (United States) Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas 1998-01-01 The prevailing assumption, that for Poisson sampling the adjusted estimator "Y-hat a" is always substantially more efficient than the unadjusted estimator "Y-hat u" , is shown to be incorrect. Some well known theoretical results are applicable since "Y-hat a" is a ratio-of-means estimator and "Y-hat u" a simple unbiased estimator... 12. Characterization and global analysis of a family of Poisson structures International Nuclear Information System (INIS) Hernandez-Bermejo, Benito 2006-01-01 A three-dimensional family of solutions of the Jacobi equations for Poisson systems is characterized. In spite of its general form it is possible the explicit and global determination of its main features, such as the symplectic structure and the construction of the Darboux canonical form. Examples are given 13. Characterization and global analysis of a family of Poisson structures Energy Technology Data Exchange (ETDEWEB) Hernandez-Bermejo, Benito [Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Universidad Rey Juan Carlos, Calle Tulipan S/N, 28933 (Mostoles), Madrid (Spain)]. E-mail: [email protected] 2006-06-26 A three-dimensional family of solutions of the Jacobi equations for Poisson systems is characterized. 
In spite of its general form it is possible the explicit and global determination of its main features, such as the symplectic structure and the construction of the Darboux canonical form. Examples are given. 14. A Poisson type formula for Hardy classes on Heisenberg's group Directory of Open Access Journals (Sweden) Lopushansky O.V. 2010-06-01 Full Text Available The Hardy type class of complex functions with infinite many variables defined on the Schrodinger irreducible unitary orbit of reduced Heisenberg group, generated by the Gauss density, is investigated. A Poisson integral type formula for their analytic extensions on an open ball is established. Taylor coefficients for analytic extensions are described by the associatedsymmetric Fock space. 15. Boundary singularity of Poisson and harmonic Bergman kernels Czech Academy of Sciences Publication Activity Database Engliš, Miroslav 2015-01-01 Roč. 429, č. 1 (2015), s. 233-272 ISSN 0022-247X R&D Projects: GA AV ČR IAA100190802 Institutional support: RVO:67985840 Keywords : harmonic Bergman kernel * Poisson kernel * pseudodifferential boundary operators Subject RIV: BA - General Mathematics Impact factor: 1.014, year: 2015 http://www.sciencedirect.com/science/article/pii/S0022247X15003170 16. Adaptive maximal poisson-disk sampling on surfaces KAUST Repository Yan, Dongming; Wonka, Peter 2012-01-01 In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which 17. Quadratic Poisson brackets compatible with an algebra structure OpenAIRE Balinsky, A. A.; Burman, Yu. 1994-01-01 Quadratic Poisson brackets on a vector space equipped with a bilinear multiplication are studied. A notion of a bracket compatible with the multiplication is introduced and an effective criterion of such compatibility is given. Among compatible brackets, a subclass of coboundary brackets is described, and such brackets are enumerated in a number of examples. 18. On covariant Poisson brackets in classical field theory International Nuclear Information System (INIS) Forger, Michael; Salles, Mário O. 2015-01-01 How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra 19. On covariant Poisson brackets in classical field theory Energy Technology Data Exchange (ETDEWEB) Forger, Michael [Instituto de Matemática e Estatística, Universidade de São Paulo, Caixa Postal 66281, BR–05315-970 São Paulo, SP (Brazil); Salles, Mário O. 
[Instituto de Matemática e Estatística, Universidade de São Paulo, Caixa Postal 66281, BR–05315-970 São Paulo, SP (Brazil); Centro de Ciências Exatas e da Terra, Universidade Federal do Rio Grande do Norte, Campus Universitário – Lagoa Nova, BR–59078-970 Natal, RN (Brazil) 2015-10-15 How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra. 20. Poisson-generalized gamma empirical Bayes model for disease ... African Journals Online (AJOL) In spatial disease mapping, the use of Bayesian models of estimation technique is becoming popular for smoothing relative risks estimates for disease mapping. The most common Bayesian conjugate model for disease mapping is the Poisson-Gamma Model (PG). To explore further the activity of smoothing of relative risk ... 1. Poisson's Ratio and Auxetic Properties of Natural Rocks Science.gov (United States) Ji, Shaocheng; Li, Le; Motra, Hem Bahadur; Wuttke, Frank; Sun, Shengsi; Michibayashi, Katsuyoshi; Salisbury, Matthew H. 2018-02-01 Here we provide an appraisal of the Poisson's ratios (υ) for natural elements, common oxides, silicate minerals, and rocks with the purpose of searching for naturally auxetic materials. The Poisson's ratios of equivalently isotropic polycrystalline aggregates were calculated from dynamically measured elastic properties. Alpha-cristobalite is currently the only known naturally occurring mineral that has exclusively negative υ values at 20-1,500°C. Quartz and potentially berlinite (AlPO4) display auxetic behavior in the vicinity of their α-β structure transition. None of the crystalline igneous and metamorphic rocks (e.g., amphibolite, gabbro, granite, peridotite, and schist) display auxetic behavior at pressures of >5 MPa and room temperature. Our experimental measurements showed that quartz-rich sedimentary rocks (i.e., sandstone and siltstone) are most likely to be the only rocks with negative Poisson's ratios at low confining pressures (≤200 MPa) because their main constituent mineral, α-quartz, already has extremely low Poisson's ratio (υ = 0.08) and they contain microcracks, micropores, and secondary minerals. This finding may provide a new explanation for formation of dome-and-basin structures in quartz-rich sedimentary rocks in response to a horizontal compressional stress in the upper crust. 2. Hierarchy of Poisson brackets for elements of a scattering matrix International Nuclear Information System (INIS) Konopelchenko, B.G.; Dubrovsky, V.G. 1984-01-01 The infinite family of Poisson brackets [S_{i1k1}(λ1), S_{i2k2}(λ2)]_n (n = 0, 1, 2, ...)
between the elements of a scattering matrix is calculated for the linear matrix spectral problem. (orig.) 3. Nambu-Poisson reformulation of the finite dimensional dynamical systems International Nuclear Information System (INIS) Baleanu, D.; Makhaldiani, N. 1998-01-01 A system of nonlinear ordinary differential equations which in a particular case reduces to Volterra's system is introduced. We found in two simplest cases the complete sets of the integrals of motion using Nambu-Poisson reformulation of the Hamiltonian dynamics. In these cases we have solved the systems by quadratures 4. Sol-gel method for encapsulating molecules Science.gov (United States) Brinker, C. Jeffrey; Ashley, Carol S.; Bhatia, Rimple; Singh, Anup K. 2002-01-01 A method for encapsulating organic molecules, and in particular, biomolecules using sol-gel chemistry. A silica sol is prepared from an aqueous alkali metal silicate solution, such as a mixture of silicon dioxide and sodium or potassium oxide in water. The pH is adjusted to a suitably low value to stabilize the sol by minimizing the rate of siloxane condensation, thereby allowing storage stability of the sol prior to gelation. The organic molecules, generally in solution, are then added, with the organic molecules being encapsulated in the sol matrix. After aging, either a thin film can be prepared or a gel can be formed with the encapsulated molecules. Depending upon the acid used, pH, and other processing conditions, the gelation time can be from one minute up to several days. In the method of the present invention, no alcohols are generated as by-products during the sol-gel and encapsulation steps. The organic molecules can be added at any desired pH value, where the pH value is generally chosen to achieve the desired reactivity of the organic molecules. The method of the present invention thereby presents a sufficiently mild encapsulation method to retain a significant portion of the activity of the biomolecules, compared with the activity of the biomolecules in free solution. 5. Pendugaan Angka Kematian Bayi dengan Menggunakan Model Poisson Bayes Berhirarki Dua-Level [Estimation of Infant Mortality Rates Using a Two-Level Hierarchical Bayes Poisson Model] Directory of Open Access Journals (Sweden) Nusar Hajarisman 2013-06-01 Full Text Available Official national data providers such as BPS-Statistics Indonesia are required to produce and present statistical information, including regional-level figures that support regional development policy and planning. The surveys conducted by BPS still have limited estimation capability, because the resulting direct estimators cannot be relied upon for small areas. In this article we propose hierarchical Bayesian models, especially for count data which are Poisson distributed, for the small area estimation problem. The model was developed by combining concepts from generalized linear models and the Fay-Herriot model. The resulting model is applied to estimate the infant mortality rate in Bojonegoro district, East Java Province. 6. Poisson traces, D-modules, and symplectic resolutions. Science.gov (United States) Etingof, Pavel; Schedler, Travis 2018-01-01 We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations.
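Several of the disease-mapping entries above (the Poisson-generalized gamma empirical Bayes model in item 20 and the hierarchical Bayes Poisson model in item 5) shrink unstable area-level rates toward a prior. The conjugate Poisson-Gamma case admits a closed-form posterior mean, sketched below with a simple moment-based prior fit; the toy counts and the estimator are illustrative and are not the estimators used in those papers.

```python
# Hedged sketch of Poisson-Gamma empirical Bayes smoothing of area-level
# relative risks, the conjugate model mentioned in the disease-mapping entries
# above: y_i ~ Poisson(e_i * theta_i), theta_i ~ Gamma(a, b) with rate b,
# so the posterior mean of theta_i is (y_i + a) / (e_i + b).
# The moment-based prior fit and the toy data are illustrative assumptions.
import numpy as np

def poisson_gamma_smooth(y, e):
    smr = y / e                                   # raw standardized ratios
    m = np.average(smr, weights=e)                # prior mean estimate (= sum(y)/sum(e))
    v = np.average((smr - m) ** 2, weights=e) - m / e.mean()
    v = max(v, 1e-6)                              # guard against a negative moment estimate
    a, b = m * m / v, m / v                       # Gamma(a, b) with mean m and variance v
    return (y + a) / (e + b), (a, b)

y = np.array([0, 3, 7, 1, 12, 5], dtype=float)    # observed counts per area (toy data)
e = np.array([2.1, 4.0, 5.5, 0.9, 9.8, 6.0])      # expected counts per area
smoothed, prior = poisson_gamma_smooth(y, e)
print(np.round(smoothed, 2), prior)
```

Areas with small expected counts are pulled strongly toward the prior mean, while well-observed areas keep estimates close to their raw ratios, which is the qualitative behaviour those papers build on.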
The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require. 7. Poisson structure of dynamical systems with three degrees of freedom Science.gov (United States) Gümral, Hasan; Nutku, Yavuz 1993-12-01 It is shown that the Poisson structure of dynamical systems with three degrees of freedom can be defined in terms of an integrable one-form in three dimensions. Advantage is taken of this fact and the theory of foliations is used in discussing the geometrical structure underlying complete and partial integrability. Techniques for finding Poisson structures are presented and applied to various examples such as the Halphen system which has been studied as the two-monopole problem by Atiyah and Hitchin. It is shown that the Halphen system can be formulated in terms of a flat SL(2,R)-valued connection and belongs to a nontrivial Godbillon-Vey class. On the other hand, for the Euler top and a special case of three-species Lotka-Volterra equations which are contained in the Halphen system as limiting cases, this structure degenerates into the form of globally integrable bi-Hamiltonian structures. The globally integrable bi-Hamiltonian case is a linear and the SL(2,R) structure is a quadratic unfolding of an integrable one-form in 3+1 dimensions. It is shown that the existence of a vector field compatible with the flow is a powerful tool in the investigation of Poisson structure and some new techniques for incorporating arbitrary constants into the Poisson one-form are presented herein. This leads to some extensions, analogous to q extensions, of Poisson structure. The Kermack-McKendrick model and some of its generalizations describing the spread of epidemics, as well as the integrable cases of the Lorenz, Lotka-Volterra, May-Leonard, and Maxwell-Bloch systems admit globally integrable bi-Hamiltonian structure. 8. Poisson traces, D-modules, and symplectic resolutions Science.gov (United States) Etingof, Pavel; Schedler, Travis 2018-03-01 We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. 
The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require. 9. Nondestructive Assay Options for Spent Fuel Encapsulation Energy Technology Data Exchange (ETDEWEB) Tobin, Stephen J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jansson, Peter [Uppsala Univ. (Sweden) 2014-10-02 This report describes the role that nondestructive assay (NDA) techniques and systems of NDA techniques may have in the context of an encapsulation and deep geological repository. The potential NDA needs of an encapsulation and repository facility include safeguards, heat content, and criticality. Some discussion of the facility needs is given, with the majority of the report concentrating on the capability and characteristics of individual NDA instruments and techniques currently available or under development. Particular emphasis is given to how the NDA techniques can be used to determine the heat production of an assembly, as well as meet the dual safeguards needs of 1) determining the declared parameters of initial enrichment, burn-up, and cooling time and 2) detecting defects (total, partial, and bias). The report concludes with the recommendation of three integrated systems that might meet the combined NDA needs of the encapsulation/repository facility. 10. Suppression of intrinsic roughness in encapsulated graphene DEFF Research Database (Denmark) Thomsen, Joachim Dahl; Gunst, Tue; Gregersen, Søren Schou 2017-01-01 Roughness in graphene is known to contribute to scattering effects which lower carrier mobility. Encapsulating graphene in hexagonal boron nitride (hBN) leads to a significant reduction in roughness and has become the de facto standard method for producing high-quality graphene devices. We have...... 
fabricated graphene samples encapsulated by hBN that are suspended over apertures in a substrate and used noncontact electron diffraction measurements in a transmission electron microscope to measure the roughness of encapsulated graphene inside such structures. We furthermore compare the roughness...... of these samples to suspended bare graphene and suspended graphene on hBN. The suspended heterostructures display a root mean square (rms) roughness down to 12 pm, considerably less than that previously reported for both suspended graphene and graphene on any substrate and identical within experimental error... 11. Encapsulation - how it will be achieved International Nuclear Information System (INIS) Barlow, P. 1990-01-01 The work of the new Encapsulation Plant at British Nuclear Fuel Limited's (BNFL) Sellafield site is described in this article. Intermediate-level radioactive materials are encapsulated in a cement matrix in 500 litre stainless steel drums suitable for storage, transport and disposal. The drums will be stored in an above-ground air-cooled store until UK Nirex Limited have built the planned underground disposal facility. The concept of product specification is explored as it applies to the four stages of nuclear waste management, namely, processing, storage, transport and disposal. By following this approach the encapsulation plant will work within government regulations and the public concerns over safety and environmental issues can be met. U.K 12. Use of Red Cactus Pear (Opuntia ficus-indica Encapsulated Powder to Pigment Extruded Cereal Directory of Open Access Journals (Sweden) Martha G. Ruiz-Gutiérrez 2017-01-01 Full Text Available Encapsulated powder of the red cactus pear is a potential natural dye for the food industry and a known antioxidant. Although the use of this powder is possible, it is not clear how it alters food properties, thus ensuing commercial acceptability. The aim of this study was to evaluate the effect of encapsulated powder of the red cactus pear on the physicochemical properties of extruded cereals. The powder was mixed (2.5, 5.0, and 7.5% w/w with maize grits and extruded (mix moisture 22%, temperature 100°C, and screw speed 325 rpm. The physical, chemical, and sensory characteristics of the extruded cereal were evaluated; extruded cereal without encapsulated powder was used as a control. All cereal extrudates pigmented with the encapsulated powder showed statistically significant differences (P<0.05 in expansion, water absorption, color, density, and texture compared to the control. The encapsulated powder had a positive effect on expansion and water absorption indices, as well as color parameters, but a negative effect on density and texture. Extruded cereal properties were significantly (P<0.05 correlated. Sensorially, consumers accepted the extruded cereal with a lower red cactus pear powder content (2.5% w/w, because this presented characteristics similar to extruded cereal lacking pigment. 13. 
Statistical thermodynamics International Nuclear Information System (INIS) Lim, Gyeong Hui 2008-03-01 This book consists of 15 chapters, which are basic conception and meaning of statistical thermodynamics, Maxwell-Boltzmann's statistics, ensemble, thermodynamics function and fluctuation, statistical dynamics with independent particle system, ideal molecular system, chemical equilibrium and chemical reaction rate in ideal gas mixture, classical statistical thermodynamics, ideal lattice model, lattice statistics and nonideal lattice model, imperfect gas theory on liquid, theory on solution, statistical thermodynamics of interface, statistical thermodynamics of a high molecule system and quantum statistics 14. Performance evaluation soil samples utilizing encapsulation technology Science.gov (United States) Dahlgran, James R. 1999-01-01 Performance evaluation soil samples and method of their preparation using encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration. 15. Encapsulated social perception of emotional expressions. Science.gov (United States) Smortchkova, Joulia 2017-01-01 In this paper I argue that the detection of emotional expressions is, in its early stages, informationally encapsulated. I clarify and defend such a view via the appeal to data from social perception on the visual processing of faces, bodies, facial and bodily expressions. Encapsulated social perception might exist alongside processes that are cognitively penetrated, and that have to do with recognition and categorization, and play a central evolutionary function in preparing early and rapid responses to the emotional stimuli. Copyright © 2016 Elsevier Inc. All rights reserved. 16. Suitability of cement encapsulated ILW for transport International Nuclear Information System (INIS) Fitzpatrick, J. 1989-01-01 ILW arising during the reprocessing of nuclear fuel is to be encapsulated in cement in nominal 500-litre drums. It is important that the waste package produced can be safely transported to a deep repository. Preliminary assessments of the performances of waste packages during transport for a number of the ILW streams to be generated at Sellafield have been carried out. The results show that the proposed encapsulation process produces a waste package which can be transported to an acceptable standard of safety and which does not prejudice any aspects of transport. (author) 17. Classifying next-generation sequencing data using a zero-inflated Poisson model. Science.gov (United States) Zhou, Yan; Wan, Xiang; Zhang, Baoxue; Tong, Tiejun 2018-04-15 With the development of high-throughput techniques, RNA-sequencing (RNA-seq) is becoming increasingly popular as an alternative for gene expression analysis, such as RNAs profiling and classification. Identifying which type of diseases a new patient belongs to with RNA-seq data has been recognized as a vital problem in medical research. As RNA-seq data are discrete, statistical methods developed for classifying microarray data cannot be readily applied for RNA-seq data classification. 
Witten proposed a Poisson linear discriminant analysis (PLDA) to classify the RNA-seq data in 2011. Note, however, that the count datasets are frequently characterized by excess zeros in real RNA-seq or microRNA sequence data (i.e. when the sequence depth is not enough or small RNAs with the length of 18-30 nucleotides). Therefore, it is desired to develop a new model to analyze RNA-seq data with an excess of zeros. In this paper, we propose a Zero-Inflated Poisson Logistic Discriminant Analysis (ZIPLDA) for RNA-seq data with an excess of zeros. The new method assumes that the data are from a mixture of two distributions: one is a point mass at zero, and the other follows a Poisson distribution. We then consider a logistic relation between the probability of observing zeros and the mean of the genes and the sequencing depth in the model. Simulation studies show that the proposed method performs better than, or at least as well as, the existing methods in a wide range of settings. Two real datasets including a breast cancer RNA-seq dataset and a microRNA-seq dataset are also analyzed, and they coincide with the simulation results that our proposed method outperforms the existing competitors. The software is available at http://www.math.hkbu.edu.hk/∼tongt. [email protected] or [email protected]. Supplementary data are available at Bioinformatics online. 18. Stability of free and encapsulated Lactobacillus acidophilus ATCC 4356 in yogurt and in an artificial human gastric digestion system. Science.gov (United States) Ortakci, F; Sert, S 2012-12-01 The objective of this study was to determine the effect of encapsulation on survival of probiotic Lactobacillus acidophilus ATCC 4356 (ATCC 4356) in yogurt and during artificial gastric digestion. Strain ATCC 4356 was added to yogurt either encapsulated in calcium alginate or in free form (unencapsulated) at levels of 8.26 and 9.47 log cfu/g, respectively, and the influence of alginate capsules (1.5 to 2.5mm) on the sensorial characteristics of yogurts was investigated. The ATCC 4356 strain was introduced into an artificial gastric solution consisting of 0.08 N HCl (pH 1.5) containing 0.2% NaCl or into artificial bile juice consisting of 1.2% bile salts in de Man, Rogosa, and Sharpe broth to determine the stability of the probiotic bacteria. When incubated for 2h in artificial gastric juice, the free ATCC 4356 did not survive (reduction of >7 log cfu/g). We observed, however, greater survival of encapsulated ATCC 4356, with a reduction of only 3 log cfu/g. Incubation in artificial bile juice (6 h) did not significantly affect the viability of free or encapsulated ATCC 4356. Moreover, statistically significant reductions (~1 log cfu/g) of both free and encapsulated ATCC 4356 were observed during 4-wk refrigerated storage of yogurts. The addition of probiotic cultures in free or alginate-encapsulated form did not significantly affect appearance/color or flavor/odor of the yogurts. However, significant deficiencies were found in body/texture of yogurts containing encapsulated ATCC 4356. We concluded that incorporation of free and encapsulated probiotic bacteria did not substantially change the overall sensory properties of yogurts, and encapsulation in alginate using the extrusion method greatly enhanced the survival of probiotic bacteria against an artificial human gastric digestive system. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved. 19. 
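The zero-inflated Poisson classifier described in the RNA-seq entry above (ZIPLDA) builds on a simple mixture: a point mass at zero combined with a Poisson component. The sketch below illustrates that building block and a toy per-gene likelihood classifier; it is not the published ZIPLDA implementation, and the class labels, gene counts and parameter values are invented for illustration.

```python
# Minimal sketch of a zero-inflated Poisson (ZIP) mixture, the building block
# behind ZIP-based classifiers such as the ZIPLDA entry above. The class
# structure, parameter names and the toy data are illustrative only.
import numpy as np
from scipy.stats import poisson

def zip_logpmf(k, lam, pi):
    """log P(X = k) under a ZIP model: a point mass at zero with weight pi,
    mixed with a Poisson(lam) with weight 1 - pi."""
    k = np.asarray(k)
    logp_pois = np.log1p(-pi) + poisson.logpmf(k, lam)
    # At k = 0 the zero component also contributes.
    logp_zero = np.logaddexp(np.log(pi), logp_pois)
    return np.where(k == 0, logp_zero, logp_pois)

def classify(counts, class_params):
    """Assign a count vector (one entry per gene) to the class with the highest
    total ZIP log-likelihood; class_params maps a label to per-gene (lam, pi) arrays."""
    scores = {c: zip_logpmf(counts, lam, pi).sum()
              for c, (lam, pi) in class_params.items()}
    return max(scores, key=scores.get)

# Toy example with three genes and two classes.
params = {"tumor":  (np.array([8.0, 2.0, 0.5]), np.array([0.1, 0.3, 0.6])),
          "normal": (np.array([2.0, 2.0, 4.0]), np.array([0.4, 0.3, 0.1]))}
print(classify(np.array([9, 1, 0]), params))   # expected: "tumor"
```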
On understanding crosstalk in the face of small, quantized, signals highly smeared by Poisson statistics International Nuclear Information System (INIS) Lincoln, D.; Hsieh, F.; Li, H. 1995-01-01 As detectors become smaller and more densely packed, signals become smaller and crosstalk between adjacent channels generally increases. Since it is often appropriate to use the distribution of signals in adjacent channels to make a useful measurement, it is imperative that inter-channel crosstalk be well understood. In this paper we shall describe the manner in which Poissonian fluctuations can give counter-intuitive results and offer some methods for extracting the desired information from the highly smeared, observed distributions. (orig.) 20. Poisson-type inequalities for growth properties of positive superharmonic functions. Science.gov (United States) Luan, Kuan; Vieira, John 2017-01-01 In this paper, we present new Poisson-type inequalities for Poisson integrals with continuous data on the boundary. The obtained inequalities are used to obtain growth properties at infinity of positive superharmonic functions in a smooth cone. 1. Cumulative sum control charts for monitoring geometrically inflated Poisson processes: An application to infectious disease counts data. Science.gov (United States) Rakitzis, Athanasios C; Castagliola, Philippe; Maravelakis, Petros E 2018-02-01 In this work, we study upper-sided cumulative sum control charts that are suitable for monitoring geometrically inflated Poisson processes. We assume that a process is properly described by a two-parameter extension of the zero-inflated Poisson distribution, which can be used for modeling count data with an excessive number of zero and non-zero values. Two different upper-sided cumulative sum-type schemes are considered, both suitable for the detection of increasing shifts in the average of the process. Aspects of their statistical design are discussed and their performance is compared under various out-of-control situations. Changes in both parameters of the process are considered. Finally, the monitoring of the monthly cases of poliomyelitis in the USA is given as an illustrative example. 2. Modeling an array of encapsulated germanium detectors International Nuclear Information System (INIS) Kshetri, R 2012-01-01 A probability model has been presented for understanding the operation of an array of encapsulated germanium detectors generally known as composite detector. The addback mode of operation of a composite detector has been described considering the absorption and scattering of γ-rays. Considering up to triple detector hit events, we have obtained expressions for peak-to-total and peak-to-background ratios of the cluster detector, which consists of seven hexagonal closely packed encapsulated HPGe detectors. Results have been obtained for the miniball detectors comprising of three and four seven hexagonal closely packed encapsulated HPGe detectors. The formalism has been extended to the SPI spectrometer which is a telescope of the INTEGRAL satellite and consists of nineteen hexagonal closely packed encapsulated HPGe detectors. This spectrometer comprises of twelve detector modules surrounding the cluster detector. For comparison, we have considered a spectrometer comprising of nine detector modules surrounding the three detector configuration of miniball detector. In the present formalism, the operation of these sophisticated detectors could be described in terms of six probability amplitudes only. 
Using experimental data on relative efficiency and fold distribution of cluster detector as input, the fold distribution and the peak-to-total, peak-to-background ratios have been calculated for the SPI spectrometer and other composite detectors at 1332 keV. Remarkable agreement between experimental data and results from the present formalism has been observed for the SPI spectrometer. 3. Encapsulation of thermal energy storage media Science.gov (United States) Dhau, Jaspreet; Goswami, Dharendra; Jotshi, Chand K.; Stefanakos, Elias K. 2017-09-19 In one embodiment, a phase change material is encapsulated by forming a phase change material pellet, coating the pellet with flexible material, heating the coated pellet to melt the phase change material, wherein the phase change material expands and air within the pellet diffuses out through the flexible material, and cooling the coated pellet to solidify the phase change material. 4. Antidiabetic Activity from Gallic Acid Encapsulated Nanochitosan Science.gov (United States) Purbowatiningrum; Ngadiwiyana; Ismiyarto; Fachriyah, E.; Eviana, I.; Eldiana, O.; Amaliyah, N.; Sektianingrum, A. N. 2017-02-01 Diabetes mellitus (DM) has become a health problem in the world because it causes death. One of the phenolic compounds that have antidiabetic activity is gallic acid. However, the use of this compound still provides unsatisfactory results due to its degradation during the absorption process. The solution offered to solve the problem is to encapsulate it within chitosan nanoparticles, which protect the bioactive compound from degradation, increase its solubility, and deliver it to the target site, using a freeze-drying technique. Scanning Electron Microscopy (SEM) of the chitosan nanoparticles showed that their size is uniform and smaller than that of chitosan. The encapsulation efficiency (EE) of gallic acid encapsulated within chitosan nanoparticles is about 50.76%. Inhibition tests showed that gallic acid-chitosan nanoparticles at 50 ppm could inhibit α-glucosidase activity by 28.87%, with an IC50 of 54.94. So it can be concluded that gallic acid can be encapsulated in chitosan nanoparticles and shown to inhibit α-glucosidase. 5. Magic ferritin: A novel chemotherapeutic encapsulation bullet International Nuclear Information System (INIS) Simsek, Ece; Akif Kilic, Mehmet 2005-01-01 The dissociation of apoferritin into subunits at pH 2 followed by its reformation at pH 7.4 in the presence of doxorubicin-HCl gives rise to a solution containing five doxorubicin-HCl molecules trapped within the apoferritin. This is the first report showing that ferritin can encapsulate an anti-cancer drug into its cavity 6. Bioactive Compounds And Encapsulation Of Yanang ( Tiliacora ... African Journals Online (AJOL) Furthermore, this paper reports the design of the experimental method for optimization of Yanang encapsulation using three independent variables: the ratio of core material (Yanang) to wall material (gum Arabic), gum Arabic concentration and inlet temperature of spray drying on bioactive compounds stability. The stability ... 7. Secure Hybrid Encryption from Weakened Key Encapsulation NARCIS (Netherlands) D. Hofheinz (Dennis); E. Kiltz (Eike); A. Menezes 2007-01-01 We put forward a new paradigm for building hybrid encryption schemes from constrained chosen-ciphertext secure (CCCA) key-encapsulation mechanisms (KEMs) plus authenticated symmetric encryption.
Constrained chosen-ciphertext security is a new security notion for KEMs that we propose. It 8. Anisotropic silica mesostructures for DNA encapsulation The encapsulation of biomolecules in inert meso or nanostructures is an important step towards controlling drug delivery agents. Mesoporous silica nanoparticles (MSN) are of immense importance owing to their high surface area, large pore size, uniform particle size and chemical inertness. Reverse micellar method with ... 9. Factors influencing insulin secretion from encapsulated islets NARCIS (Netherlands) de Haan, BJ; Faas, MM; de Vos, P 2003-01-01 Adequate regulation of glucose levels by a microencapsulated pancreatic islet graft requires a minute-to-minute regulation of blood glucose. To design such a transplant, it is mandatory to have sufficient insight in factors influencing the kinetics of insulin secretion by encapsulated islets. The 10. Oxygen Measurements in Liposome Encapsulated Hemoglobin Science.gov (United States) Phiri, Joshua Benjamin Liposome encapsulated hemoglobins (LEH's) are of current interest as blood substitutes. An analytical methodology for rapid non-invasive measurements of oxygen in artificial oxygen carriers is examined. High resolution optical absorption spectra are calculated by means of a one dimensional diffusion approximation. The encapsulated hemoglobin is prepared from fresh defibrinated bovine blood. Liposomes are prepared from hydrogenated soy phosphatidylcholine (HSPC), cholesterol and dicetylphosphate using a bath sonication method. An integrating sphere spectrophotometer is employed for diffuse optics measurements. Data is collected using an automated data acquisition system employing lock-in -amplifiers. The concentrations of hemoglobin derivatives are evaluated from the corresponding extinction coefficients using a numerical technique of singular value decomposition, and verification of the results is done using Monte Carlo simulations. In situ measurements are required for the determination of hemoglobin derivatives because most encapsulation methods invariably lead to the formation of methemoglobin, a nonfunctional form of hemoglobin. The methods employed in this work lead to high resolution absorption spectra of oxyhemoglobin and other derivatives in red blood cells and liposome encapsulated hemoglobin (LEH). The analysis using singular value decomposition method offers a quantitative means of calculating the fractions of oxyhemoglobin and other hemoglobin derivatives in LEH samples. The analytical methods developed in this work will become even more useful when production of LEH as a blood substitute is scaled up to large volumes. 11. Nanoprecipitation process: From encapsulation to drug delivery. Science.gov (United States) Martínez Rivas, Claudia Janeth; Tarhini, Mohamad; Badri, Waisudin; Miladi, Karim; Greige-Gerges, Hélène; Nazari, Qand Agha; Galindo Rodríguez, Sergio Arturo; Román, Rocío Álvarez; Fessi, Hatem; Elaissari, Abdelhamid 2017-10-30 12. Nano spray drying for encapsulation of pharmaceuticals. Science.gov (United States) Arpagaus, Cordin; Collenberg, Andreas; Rütti, David; Assadpour, Elham; Jafari, Seid Mahdi 2018-05-17 Many pharmaceuticals such as pills, capsules, or tablets are prepared in a dried and powdered form. In this field, spray drying plays a critical role to convert liquid pharmaceutical formulations into powders. 
In addition, in many cases it is necessary to encapsulate bioactive drugs into wall materials to protect them against harsh process and environmental conditions, as well as to deliver the drug to the right place and at the correct time within the body. Thus, spray drying is a common process used for encapsulation of pharmaceuticals. In view of the rapid progress of nanoencapsulation techniques in pharmaceutics, nano spray drying is used to improve drug formulation and delivery. The nano spray dryer developed in the recent years provides ultrafine powders at nanoscale and high product yields. In this paper, after explaining the concept of nano spray drying and understanding the key elements of the equipment, the influence of the process parameters on the final powder properties, like particle size, morphology, encapsulation efficiency, drug loading and release, will be discussed. Then, numerous application examples are reviewed for nano spray drying and encapsulation of various drugs in the early stages of product development along with a brief overview of the obtained results and characterization techniques. Copyright © 2018 Elsevier B.V. All rights reserved. 13. Antidiabetic activity from cinnamaldehyde encapsulated by nanochitosan Science.gov (United States) Purbowatingrum; Ngadiwiyana; Fachriyah, E.; Ismiyarto; Ariestiani, B.; Khikmah 2018-04-01 Diabetes mellitus (DM) is a disease characterized by chronic hyperglycemia and metabolic disorders of carbohydrates, proteins, and fats due to reduced function of insulin. Treatment of diabetes can be done by insulin therapy or hypoglycemic drugs. Hypoglycemic drugs usually contain compounds that can inhibit the action of α-glucosidase enzymes that play a role in breaking carbohydrates into blood sugar. Cinnamaldehyde has α-glucosidase inhibitory activity because it has a functional group of alkene that is conjugated with a benzene ring and a carbonyl group. However, the use of this compound still provides unsatisfactory results due to its degradation during the absorption process. The solution offered to solve the problem is to encapsulate it within chitosan nanoparticles, which protect the bioactive compound from degradation, increase its solubility, and deliver it to the target site, using a freeze-drying technique. The encapsulation efficiency (EE) of cinnamaldehyde encapsulated within chitosan nanoparticles is about 74%. Inhibition tests showed that cinnamaldehyde-chitosan nanoparticles at 100 ppm could inhibit α-glucosidase activity by 23.9%, with an IC50 of 134.13. So it can be concluded that cinnamaldehyde can be encapsulated in chitosan nanoparticles and shown to inhibit α-glucosidase. 14. Process for encapsulating active agents in gels NARCIS (Netherlands) Yilmaz, G.; Jongboom, R.O.J.; Oosterhaven, J. 2001-01-01 The present invention relates to a process for encapsulating an active agent in a biopolymer in the form of a gel, comprising the steps of: a) forming a dispersion or solution of the biopolymer in water; and b) adding the active agent to the dispersion or solution obtained in step a); wherein the 15. Treatment of Diabetes with Encapsulated Islets NARCIS (Netherlands) de Vos, Paul; Spasojevic, Milica; Faas, Marijke M.; Pedraz, JL; Orive, G 2010-01-01 Cell encapsulation has been proposed for the treatment of a wide variety of diseases since it allows for transplantation of cells in the absence of undesired immunosuppression.
The technology has been proposed to be a solution for the treatment of diabetes since it potentially allows a mandatory 16. Plastic Encapsulated Microcircuits (PEMs) Reliability Guide Science.gov (United States) Sandor, M. 2000-01-01 It is reported by some users and has been demonstrated by others via testing and qualification that the quality and reliability of plastic-encapsulated microcircuits (PEMs) manufactured today are excellent in commercial applications and closely equivalent, and in some cases superior, to their hermetic counterparts. 17. Data Qualification and Data Summary Report: Intact Rock Properties Data on Poisson's Ratio and Young's Modulus International Nuclear Information System (INIS) Cikanek, E.M.; Safley, L.E.; Grant, T.A. 2003-01-01 This report reviews all potentially available Yucca Mountain Project (YMP) data in the Technical Data Management System and compiles all relevant qualified data, including data qualified by this report, on elastic properties, Poisson's ratio and Young's modulus, into a single summary Data Tracking Number (DTN) MO0304DQRIRPPR.002. Since DTN MO0304DQRIRPPR.002 was compiled from both qualified and unqualified sources, this report qualifies the DTN in accordance with AP-SIII.2Q. This report also summarizes the individual test results in MO0304DQRIRPPR.002 and provides summary values using descriptive statistics for Poisson's ratio and Young's modulus in a Reference Information Base Data Item. This report found that test conditions such as temperature, saturation, and sample size could influence test results. The largest influence, however, is the lithologic variation within the tuffs themselves. Even though the summary DTN divided the results by lithostratigraphic units within each formation, there was still substantial variation in elastic properties within individual units. This variation was attributed primarily to the presence or absence of lithophysae, fractures, alteration, pumice fragments, and other lithic clasts within the test specimens as well as changes in porosity within the units. As a secondary cause, substantial variations can also be attributed to test conditions such as the type of test (static or dynamic), size of the test specimen, degree of saturation, temperature, and strain rate conditions. This variation is characteristic of the tuffs and the testing methods, and should be considered when using the data summarized in this report. 18. WAITING TIME DISTRIBUTION OF SOLAR ENERGETIC PARTICLE EVENTS MODELED WITH A NON-STATIONARY POISSON PROCESS International Nuclear Information System (INIS) Li, C.; Su, W.; Fang, C.; Zhong, S. J.; Wang, L. 2014-01-01 We present a study of the waiting time distributions (WTDs) of solar energetic particle (SEP) events observed with the spacecraft WIND and GOES. The WTDs of both solar electron events (SEEs) and solar proton events (SPEs) display a power-law tail of ∼Δt^(−γ). The SEEs display a broken power-law WTD. The power-law index is γ₁ = 0.99 for the short waiting times (<70 hr) and γ₂ = 1.92 for large waiting times (>100 hr). The break of the WTD of SEEs is probably due to the modulation of the corotating interaction regions. The power-law index, γ ∼ 1.82, is derived for the WTD of the SPEs, which is consistent with the WTD of type II radio bursts, indicating a close relationship between the shock wave and the production of energetic protons.
The WTDs of SEP events can be modeled with a non-stationary Poisson process, which was proposed to understand the waiting time statistics of solar flares. We generalize the method and find that, if the SEP event rate λ = 1/Δt varies in time with an event-rate distribution f(λ) = Aλ^(−α)exp(−βλ), the time-dependent Poisson distribution can produce a power-law tail WTD of ∼Δt^(α−3), where 0 ≤ α < 2. 19. Improved mesh generator for the POISSON Group Codes International Nuclear Information System (INIS) Gupta, R.C. 1987-01-01 This paper describes the improved mesh generator of the POISSON Group Codes. These improvements enable one to have full control over the way the mesh is generated and in particular the way the mesh density is distributed throughout this model. A higher mesh density in certain regions coupled with a successively lower mesh density in others keeps the accuracy of the field computation high and the requirements on the computer time and computer memory low. The mesh is generated with the help of codes AUTOMESH and LATTICE; both have gone through a major upgrade. Modifications have also been made in the POISSON part of these codes. We shall present an example of a superconducting dipole magnet to explain how to use this code. The results of field computations are found to be reliable within a few parts in a hundred thousand even in such complex geometries. 20. Histogram bin width selection for time-dependent Poisson processes International Nuclear Information System (INIS) Koyama, Shinsuke; Shinomoto, Shigeru 2004-01-01 In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method 1. Histogram bin width selection for time-dependent Poisson processes Energy Technology Data Exchange (ETDEWEB) Koyama, Shinsuke; Shinomoto, Shigeru [Department of Physics, Graduate School of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan) 2004-07-23 In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method.
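The solar-energetic-particle entry above states that a non-stationary Poisson process whose rate is distributed as f(λ) = Aλ^(−α)exp(−βλ) yields a waiting-time distribution with a power-law tail ∼Δt^(α−3). The short numerical check below assumes, as in the usual piecewise-constant-rate argument for flare waiting times, that the stationary waiting-time density is the rate-weighted mixture of exponentials; the values of α and β are arbitrary.

```python
# Numerical check of the power-law tail quoted in the SEP waiting-time entry above:
# for a rate distribution f(lam) ∝ lam**(-alpha) * exp(-beta*lam), the rate-weighted
# mixture of exponential waiting times has a tail ~ dt**(alpha - 3). The mixture form
# is the standard piecewise-constant Poisson assumption; alpha and beta are arbitrary.
import numpy as np
from scipy.integrate import quad

alpha, beta = 0.5, 1.0
f = lambda lam: lam**(-alpha) * np.exp(-beta * lam)

def wtd(dt):
    # p(dt) ∝ ∫ lam^2 exp(-lam*dt) f(lam) dlam  (unnormalized waiting-time density)
    val, _ = quad(lambda lam: lam**2 * np.exp(-lam * dt) * f(lam), 0, np.inf)
    return val

dts = np.logspace(1, 3, 20)                      # large waiting times
p = np.array([wtd(dt) for dt in dts])
slope = np.polyfit(np.log(dts), np.log(p), 1)[0]  # agreement improves as dt grows
print(f"fitted tail exponent ~ {slope:.2f}, predicted alpha - 3 = {alpha - 3:.2f}")
```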
2. Nonlocal surface plasmons by Poisson Green's function matching International Nuclear Information System (INIS) Morgenstern Horing, Norman J 2006-01-01 The Poisson Green's function for all space is derived for the case in which an interface divides space into two separate semi-infinite media, using the Green's function matching method. Each of the separate semi-infinite constituent parts has its own dynamic, nonlocal polarizability, which is taken to be unaffected by the presence of the interface and is represented by the corresponding bulk response property. While this eliminates Friedel oscillatory phenomenology near the interface with p ∼ 2p_F, it is nevertheless quite reasonable and useful for a broad range of lower (nonvanishing) wavenumbers, p_F. The resulting full-space Poisson Green's function is dynamic, nonlocal and spatially inhomogeneous, and its frequency pole yields the surface plasmon dispersion relation, replete with dynamic and nonlocal features. It also accommodates an ambient magnetic field. 3. Reference manual for the POISSON/SUPERFISH Group of Codes Energy Technology Data Exchange (ETDEWEB) 1987-01-01 The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx,dy) by finite differences (ΔX,ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane. 4. Invariants and labels for Lie-Poisson Systems International Nuclear Information System (INIS) Thiffeault, J.L.; Morrison, P.J. 1998-04-01 Reduction is a process that uses symmetry to lower the order of a Hamiltonian system. The new variables in the reduced picture are often not canonical: there are no clear variables representing positions and momenta, and the Poisson bracket obtained is not of the canonical type. Specifically, we give two examples that give rise to brackets of the noncanonical Lie-Poisson form: the rigid body and the two-dimensional ideal fluid. From these simple cases, we then use the semidirect product extension of algebras to describe more complex physical systems. The Casimir invariants in these systems are examined, and some are shown to be linked to the recovery of information about the configuration of the system. We discuss a case in which the extension is not a semidirect product, namely compressible reduced MHD, and find for this case that the Casimir invariants lend partial information about the configuration of the system. 5. 2D sigma models and differential Poisson algebras International Nuclear Information System (INIS) Arias, Cesar; Boulanger, Nicolas; Sundell, Per; Torres-Gomez, Alexander 2015-01-01 We construct a two-dimensional topological sigma model whose target space is endowed with a Poisson algebra for differential forms. The model consists of an equal number of bosonic and fermionic fields of worldsheet form degrees zero and one. The action is built using exterior products and derivatives, without any reference to a worldsheet metric, and is of the covariant Hamiltonian form. The equations of motion define a universally Cartan integrable system.
In addition to gauge symmetries, the model has one rigid nilpotent supersymmetry corresponding to the target space de Rham operator. The rigid and local symmetries of the action, respectively, are equivalent to the Poisson bracket being compatible with the de Rham operator and obeying graded Jacobi identities. We propose that perturbative quantization of the model yields a covariantized differential star product algebra of Kontsevich type. We comment on the resemblance to the topological A model. 6. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model Science.gov (United States) Zamzuri, Zamira Hasanah binti 2015-10-01 Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements on fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on simulation studies conducted, we have shown that to use the Univariate Poisson Model (UPM) estimates as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrated the sensitivity of the specific hyperparameter, which if it is not given extra attention, may affect the final estimates. The last issue is regarding the tuning parameters where they depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily given any data set. 7. A physiologically based nonhomogeneous Poisson counter model of visual identification DEFF Research Database (Denmark) Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus 2018-01-01 A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and meant for modeling visual identification of objects that are mutually confusable and hard to see. ... The time courses of correct and erroneous categorizations were well accounted for when the event-rates of competing Poisson counters were allowed to vary independently over time in a way that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model... 8. Statistical methods in nuclear theory International Nuclear Information System (INIS) Shubin, Yu.N. 1974-01-01 The paper outlines statistical methods which are widely used for describing properties of excited states of nuclei and nuclear reactions. It discusses physical assumptions lying at the basis of known distributions between levels (Wigner, Poisson distributions) and of widths of highly excited states (Porter-Thomas distribution), as well as assumptions used in the statistical theory of nuclear reactions and in the fluctuation analysis. The author considers the random matrix method, which consists in replacing the matrix elements of a residual interaction by random variables with a simple statistical distribution. Experimental data are compared with results of calculations using the statistical model.
The superfluid nucleus model is considered with regard to superconducting-type pair correlations. 9. Investigation of Random Switching Driven by a Poisson Point Process DEFF Research Database (Denmark) Simonsen, Maria; Schiøler, Henrik; Leth, John-Josef 2015-01-01 This paper investigates the switching mechanism of a two-dimensional switched system, when the switching events are generated by a Poisson point process. A model, in the shape of a stochastic process, for such a system is derived and the distribution of the trajectory's position is developed ... together with marginal density functions for the coordinate functions. Furthermore, the joint probability distribution is given explicitly. ... 10. Application of Poisson random effect models for highway network screening. Science.gov (United States) Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer 2014-02-01 In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data became popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of the crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of different approaches. These tests include the previously developed site consistence test, method consistence test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in the fitting of crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved. 11. Numerical solution of dynamic equilibrium models under Poisson uncertainty DEFF Research Database (Denmark) Posch, Olaf; Trimborn, Timo 2013-01-01 We propose a simple and powerful numerical algorithm to compute the transition process in continuous-time dynamic equilibrium models with rare events. In this paper we transform the dynamic system of stochastic differential equations into a system of functional differential equations of the retar... solution to Lucas' endogenous growth model under Poisson uncertainty are used to compute the exact numerical error. We show how (potential) catastrophic events such as rare natural disasters substantially affect the economic decisions of households. ...
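Several of the entries above (the random-switching paper in particular) rely on switching or event instants generated by a homogeneous Poisson point process. The sketch below shows the standard construction from i.i.d. exponential inter-arrival times; the two-state toggle and the chosen rate are illustrative assumptions rather than the cited paper's specific two-dimensional system.

```python
# Minimal sketch of switching instants generated by a homogeneous Poisson point
# process, as in the random-switching entry above. The two-state toggle and the
# rate are illustrative; the cited paper studies a specific 2-D switched system.
import numpy as np

rng = np.random.default_rng(0)

def poisson_switch_times(rate, t_end):
    """Event times of a homogeneous Poisson process on [0, t_end]:
    cumulative sums of i.i.d. Exponential(mean = 1/rate) inter-arrival gaps."""
    times = []
    t = rng.exponential(1.0 / rate)
    while t < t_end:
        times.append(t)
        t += rng.exponential(1.0 / rate)
    return np.array(times)

def state_at(t, switch_times, initial_state=0):
    """State of a two-mode system that toggles at every switching event."""
    return (initial_state + np.searchsorted(switch_times, t, side="right")) % 2

switches = poisson_switch_times(rate=2.0, t_end=10.0)
print(f"{len(switches)} switches in [0, 10], first few: {switches[:5].round(3)}")
print("state at t = 5.0:", state_at(5.0, switches))
```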
12. On terminating Poisson processes in some shock models Energy Technology Data Exchange (ETDEWEB) Finkelstein, Maxim, E-mail: [email protected] [Department of Mathematical Statistics, University of the Free State, Bloemfontein (South Africa); Max Planck Institute for Demographic Research, Rostock (Germany); Marais, Francois, E-mail: [email protected] [CSC, Cape Town (South Africa) 2010-08-15 A system subject to a point process of shocks is considered. Shocks occur in accordance with the homogeneous Poisson process. Different criteria of system failure (termination) are discussed and the corresponding probabilities of failure (accident)-free performance are derived. The described analytical approach is based on deriving integral equations for each setting and solving these equations through the Laplace transform. Some approximations are analyzed and further generalizations and applications are discussed. 13. On terminating Poisson processes in some shock models International Nuclear Information System (INIS) Finkelstein, Maxim; Marais, Francois 2010-01-01 A system subject to a point process of shocks is considered. Shocks occur in accordance with the homogeneous Poisson process. Different criteria of system failure (termination) are discussed and the corresponding probabilities of failure (accident)-free performance are derived. The described analytical approach is based on deriving integral equations for each setting and solving these equations through the Laplace transform. Some approximations are analyzed and further generalizations and applications are discussed. 14. Density of states, Poisson's formula of summation and Walfisz's formula International Nuclear Information System (INIS) Fucho, P. 1980-06-01 Using Poisson's formula for summation, we obtain an expression for density of states of d-dimensional scalar Helmholtz's equation under various boundary conditions. Likewise, we also obtain formulas of Walfisz's type. It becomes evident that the formulas obtained by Pathria et al. in connection with ideal bosons in a finite system are exactly the same as those obtained by utilizing the formulas for density of states. (author) 15. Generalized Poisson processes in quantum mechanics and field theory International Nuclear Information System (INIS) Combe, P.; Rodriguez, R.; Centre National de la Recherche Scientifique, 13 - Marseille; Hoegh-Krohn, R.; Centre National de la Recherche Scientifique, 13 - Marseille; Sirugue, M.; Sirugue-Collin, M.; Centre National de la Recherche Scientifique, 13 - Marseille 1981-01-01 In section 2 we describe more carefully the generalized Poisson processes, giving a realization of the underlying probability space, and we characterize these processes by their characteristic functionals. Section 3 is devoted to the proof of the previous formula for quantum mechanical systems, with possibly velocity dependent potentials, and in section 4 we give an application of the previous theory to some relativistic Bose field models. (orig.) 16. Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data Directory of Open Access Journals (Sweden) Xueqin Zhou 2017-01-01 Full Text Available This article considers the estimation of the unknown numerical parameters and the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by maximum likelihood and the density function is estimated by a kernel method.
A set of simulations was conducted, which shows that the estimates perform well. 17. Group-buying inventory policy with demand under Poisson process Directory of Open Access Journals (Sweden) Tammarat Kleebmek 2016-02-01 Full Text Available The group-buying is the modern business of selling in the uncertain market. With an objective to minimize costs for sellers arising from ordering and reordering, we present in this paper the group buying inventory model, with the demand governed by a Poisson process and the product sale distributed as Binomial distribution. The inventory level is under continuous review, while the lead time is fixed. A numerical example is illustrated. 18. Poisson noise removal with pyramidal multi-scale transforms Science.gov (United States) Woiselle, Arnaud; Starck, Jean-Luc; Fadili, Jalal M. 2013-09-01 In this paper, we introduce a method to stabilize the variance of decimated transforms using one or two variance stabilizing transforms (VST). These VSTs are applied to the 3-D Meyer wavelet pyramidal transform which is the core of the first generation 3D curvelets. This allows us to extend these 3-D curvelets to handle Poisson noise, that we apply to the denoising of a simulated cosmological volume. 19. Poisson-Like Spiking in Circuits with Probabilistic Synapses Science.gov (United States) Moreno-Bote, Rubén 2014-01-01 Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705 20. A physiologically based nonhomogeneous Poisson counter model of visual identification. Science.gov (United States) Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus; Kyllingsbæk, Søren 2018-04-30 A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and meant for modeling visual identification of objects that are mutually confusable and hard to see. The model assumes that the visual system's initial sensory response consists in tentative visual categorizations, which are accumulated by leaky integration of both transient and sustained components comparable with those found in spike density patterns of early sensory neurons. The sensory response (tentative categorizations) feeds independent Poisson counters, each of which accumulates tentative object categorizations of a particular type to guide overt identification performance. 
We tested the model's ability to predict the effect of stimulus duration on observed distributions of responses in a nonspeeded (pure accuracy) identification task with eight response alternatives. The time courses of correct and erroneous categorizations were well accounted for when the event-rates of competing Poisson counters were allowed to vary independently over time in a way that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model provided an explanation for Bloch's law. (PsycINFO Database Record (c) 2018 APA, all rights reserved). 1. Modeling environmental noise exceedances using non-homogeneous Poisson processes. Science.gov (United States) Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R 2014-10-01 In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models. 2. Blind beam-hardening correction from Poisson measurements Science.gov (United States) Gu, Renliang; Dogandžić, Aleksandar 2016-02-01 We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass- attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density- map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. 
Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme. 3. A generalized Poisson solver for first-principles device simulations Energy Technology Data Exchange (ETDEWEB) Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: [email protected] [Nanoscale Simulations, ETH Zürich, 8093 Zürich (Switzerland); Brück, Sascha; Luisier, Mathieu [Integrated Systems Laboratory, ETH Zürich, 8092 Zürich (Switzerland) 2016-01-28 Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated. 4. Complete synchronization of the global coupled dynamical network induced by Poisson noises. Science.gov (United States) Guo, Qing; Wan, Fangyi 2017-01-01 The different Poisson noise-induced complete synchronization of the global coupled dynamical network is investigated. Based on the stability theory of stochastic differential equations driven by Poisson process, we can prove that Poisson noises can induce synchronization and sufficient conditions are established to achieve complete synchronization with probability 1. Furthermore, numerical examples are provided to show the agreement between theoretical and numerical analysis. 5. Estimating Bird / Aircraft Collision Probabilities and Risk Utilizing Spatial Poisson Processes Science.gov (United States) 2012-06-10 Graduate research paper presented to the Faculty, Department of Operational Sciences ... Brady J. Vaira, BS, MS, Major, USAF. 6. A comparison of Poisson-one-inflated power series distributions for ... African Journals Online (AJOL) A class of Poisson-one-inflated power series distributions (the binomial, the Poisson, the negative binomial, the geometric, the log-series and the misrecorded Poisson) are proposed for modeling rural out-migration at the household level. The probability mass functions of the mixture distributions are derived and fitted to the ... 7. Action-angle variables and a KAM theorem for b-Poisson manifolds OpenAIRE Kiesenhofer, Anna; Miranda Galcerán, Eva; Scott, Geoffrey 2015-01-01 In this article we prove an action-angle theorem for b-integrable systems on b-Poisson manifolds improving the action-angle theorem contained in [14] for general Poisson manifolds in this setting.
As an application, we prove a KAM-type theorem for b-Poisson manifolds. (C) 2015 Elsevier Masson SAS. All rights reserved. 8. A Raikov-Type Theorem for Radial Poisson Distributions: A Proof of Kingman's Conjecture OpenAIRE Van Nguyen, Thu 2011-01-01 In the present paper we prove the following conjecture in Kingman, J.F.C., Random walks with spherical symmetry, Acta Math.,109, (1963), 11-53. concerning a famous Raikov's theorem of decomposition of Poisson random variables: "If a radial sum of two independent random variables X and Y is radial Poisson, then each of them must be radial Poisson." 9. Statistical properties of several models of fractional random point processes Science.gov (United States) Bendjaballah, C. 2011-08-01 Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations. 10. Cancer Statistics Science.gov (United States) Cancer has a major impact on society in ... success of efforts to control and manage cancer. Statistics at a Glance: The Burden of Cancer in ... 11. Overdispersion in nuclear statistics International Nuclear Information System (INIS) Semkow, Thomas M. 1999-01-01 The modern statistical distribution theory is applied to the development of the overdispersion theory in ionizing-radiation statistics for the first time. The physical nuclear system is treated as a sequence of binomial processes, each depending on a characteristic probability, such as probability of decay, detection, etc. The probabilities fluctuate in the course of a measurement, and the physical reasons for that are discussed. If the average values of the probabilities change from measurement to measurement, which originates from the random Lexis binomial sampling scheme, then the resulting distribution is overdispersed. The generating functions and probability distribution functions are derived, followed by a moment analysis. The Poisson and Gaussian limits are also given. The distribution functions belong to a family of generalized hypergeometric factorial moment distributions by Kemp and Kemp, and can serve as likelihood functions for the statistical estimations. An application to radioactive decay with detection is described and working formulae are given, including a procedure for testing the counting data for overdispersion. More complex experiments in nuclear physics (such as solar neutrino) can be handled by this model, as well as distinguishing between the source and background 12. Acoustically excited encapsulated microbubbles and mitigation of biofouling KAUST Repository 2017-08-31 Provided herein is a universally applicable biofouling mitigation technology using acoustically excited encapsulated microbubbles that disrupt biofilm or biofilm formation. For example, a method of reducing biofilm formation or removing biofilm in a membrane filtration system is provided in which a feed solution comprising encapsulated microbubbles is provided to the membrane under conditions that allow the encapsulated microbubbles to embed in a biofilm. Sonication of the embedded, encapsulated microbubbles disrupts the biofilm.
Thus, provided herein is a membrane filtration system for performing the methods and encapsulated microbubbles specifically selected for binding to extracellular polymeric substances (EPS) in a biofilm. 13. Prediction accident triangle in maintenance of underground mine facilities using Poisson distribution analysis Science.gov (United States) Khuluqi, M. H.; Prapdito, R. R.; Sambodo, F. P. 2018-04-01 In Indonesia, mining is categorized as a hazardous industry. In recent years, a dramatic increase in mining equipment and technological complexity has resulted in higher maintenance expectations, accompanied by changes in working conditions, especially regarding safety. Ensuring safety during maintenance work in an underground mine is important as an integral part of accident prevention programs. The accident triangle provides support for safety practitioners in drawing a road map for preventing accidents. The Poisson distribution is appropriate for the analysis of accidents at a specific site in a given time period. Based on the analysis of accident statistics in the underground mine maintenance of PT. Freeport Indonesia from 2011 through 2016, it is found that there are 12 minor accidents for every major accident and 66 equipment damages for every major accident as the new values of the accident triangle. The result can be used to improve future accident prevention programs. 14. Testing independence between two Poisson-generated multinomial variables in case-series and cohort studies. Science.gov (United States) Hocine, Mounia; Guillemot, Didier; Tubert-Bitter, Pascale; Moreau, Thierry 2005-12-30 In case-series or cohort studies, we propose a test of independence between the occurrences of two types of recurrent events (such as two repeated infections) related to an intermittent exposure (such as an antibiotic treatment). The test relies upon an extension of a recent method for analysing case-series data, in the presence of one type of recurrent event. The test statistic is derived from a bivariate Poisson generated-multinomial distribution. Simulations for checking the validity of the test concerning the type I error and the power properties are presented. The test is illustrated using data from a cohort on antibiotics bacterial resistance in schoolchildren. Copyright 2005 John Wiley & Sons, Ltd. 15. Analytic Bayesian solution of the two-stage poisson-type problem in probabilistic risk analysis International Nuclear Information System (INIS) Frohner, F.H. 1985-01-01 The basic purpose of probabilistic risk analysis is to make inferences about the probabilities of various postulated events, with an account of all relevant information such as prior knowledge and operating experience with the specific system under study, as well as experience with other similar systems. Estimation of the failure rate of a Poisson-type system leads to an especially simple Bayesian solution in closed form if the prior probability implied by the invariance properties of the problem is properly taken into account. This basic simplicity persists if a more realistic prior, representing order of magnitude knowledge of the rate parameter, is employed instead. Moreover, the more realistic prior allows direct incorporation of experience gained from other similar systems, without need to postulate a statistical model for an underlying ensemble. The analytic formalism is applied to actual nuclear reactor data
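The probabilistic-risk entry above notes that estimating the failure rate of a Poisson-type system has a closed-form Bayesian solution. The sketch below shows the generic conjugate gamma-Poisson update behind that observation; the Jeffreys-type prior, the counts and the exposure are illustrative and do not reproduce the paper's invariance-based prior.

```python
# Minimal conjugate gamma-Poisson sketch of Bayesian failure-rate estimation,
# in the spirit of the probabilistic-risk entry above. The paper derives a
# specific invariance-based prior; the Gamma(a0, b0) prior and the counts and
# exposure below are purely illustrative.
from scipy.stats import gamma

a0, b0 = 0.5, 0.0               # Jeffreys-type shape, zero "prior exposure" (improper, but usable once data arrive)
failures, exposure = 3, 120.0   # e.g. 3 events in 120 unit-years of operation (made up)

a_post, b_post = a0 + failures, b0 + exposure   # conjugate update for a Poisson rate
posterior = gamma(a=a_post, scale=1.0 / b_post)

print(f"posterior mean rate  : {posterior.mean():.4f} per unit time")
print(f"90% credible interval: {posterior.ppf(0.05):.4f} to {posterior.ppf(0.95):.4f}")
```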
16. Photon statistics in scintillation crystals Science.gov (United States) Bora, Vaibhav Joga Singh Scintillation based gamma-ray detectors are widely used in medical imaging, high-energy physics, astronomy and national security. Scintillation gamma-ray detectors are field-tested, relatively inexpensive, and have good detection efficiency. Semi-conductor detectors are gaining popularity because of their superior capability to resolve gamma-ray energies. However, they are relatively hard to manufacture and therefore, at this time, not available in as large formats and much more expensive than scintillation gamma-ray detectors. Scintillation gamma-ray detectors consist of: a scintillator, a material that emits optical (scintillation) photons when it interacts with ionizing radiation, and an optical detector that detects the emitted scintillation photons and converts them into an electrical signal. Compared to semiconductor gamma-ray detectors, scintillation gamma-ray detectors have relatively poor capability to resolve gamma-ray energies. This is in large part attributed to the "statistical limit" on the number of scintillation photons. The origin of this statistical limit is the assumption that scintillation photons are either Poisson distributed or super-Poisson distributed. This statistical limit is often defined by the Fano factor. The Fano factor of an integer-valued random process is defined as the ratio of its variance to its mean. Therefore, a Poisson process has a Fano factor of one. The classical theory of light limits the Fano factor of the number of photons to a value greater than or equal to one (Poisson case). However, the quantum theory of light allows for Fano factors to be less than one. We used two methods to look at the correlations between two detectors looking at the same scintillation pulse to estimate the Fano factor of the scintillation photons. The relationship between the Fano factor and the correlation between the integral of the two signals detected was analytically derived, and the Fano factor was estimated using the measurements for SrI2:Eu, YAP 17. Thin film Encapsulations of Flexible Organic Light Emitting Diodes Directory of Open Access Journals (Sweden) Tsai Fa-Ta 2016-01-01 Various encapsulated films for flexible organic light emitting diodes (OLEDs) were studied in this work, where gas barrier layers including inorganic Al2O3 thin films prepared by atomic layer deposition, organic Parylene C thin films prepared by chemical vapor deposition, and their combination were considered. The transmittance and water vapor transmission rate of the various organic and inorganic encapsulated films were tested. The effects of the encapsulated films on the luminance and current density of the OLEDs were discussed, and the lifetime experiments of the OLEDs with these encapsulated films were also conducted. The results showed that the transmittance is acceptable even when the PET substrate is coated with two Al2O3 and Parylene C layers. The results also indicated that the WVTR of the PET substrate is improved by coating the barrier layers. Regarding encapsulation performance, the OLEDs with Al2O3/PET, 1 pair/PET, and 2 pairs/PET encapsulation present similarly higher luminance than the other two cases. Although the 1 pair/PET encapsulation shows slightly better luminance than the 2 pairs/PET encapsulation, the 2 pairs/PET encapsulation has a much better lifetime.
The OLED with 2 pairs/PET encapsulation behaves near double life time to the 1 pair encapsulation, and four times to none encapsulation. 18. Hybrid chip-on-board LED module with patterned encapsulation Science.gov (United States) Soer, Wouter Anthon; Helbing, Rene; Huang, Guan 2018-02-27 Different wavelength conversion materials, or different concentrations of a wavelength conversion material are used to encapsulate the light emitting elements of different colors of a hybrid light emitting module. In an embodiment of this invention, second light emitting elements (170) of a particular color are encapsulated with a transparent second encapsulant (120;420;520), while first light emitting elements (160) of a different color are encapsulated with a wavelength conversion first encapsulant (110;410;510). In another embodiment of this invention, a particular second set of second and third light emitting elements (170,580) of different colors is encapsulated with a different encapsulant than another first set of first light emitting elements (160). 19. New trends in encapsulation of liposoluble vitamins. Science.gov (United States) Gonnet, M; Lethuaut, L; Boury, F 2010-09-15 Liposoluble vitamins (A, D, E, and K) and carotenoids have many benefits on health. They are provided mainly by foods. At pharmacological doses, they can also be used to treat skin diseases, several types of cancer or decrease oxidative stress. These molecules are sensitive to oxidation, thus encapsulation might constitute an appropriate mean to preserve their properties during storage and enhance their physiological potencies. Formulation processes have been adapted for sensitive molecule, limiting their exposure to high temperature, light or oxygen. Each administration pathway, oral, systemic, topical, transdermal and local, requires different particle sizes and release profile. Encapsulation can lead to greater efficiency allowing smaller administration doses thus diminishing potential hypervitaminosis syndrome appearance and side effects. Carrier formulation can be based on vitamin dissolution in lipid media and its stabilization by surfactant mixture, on its entrapment in a matrix or molecular system. Suitability of each type of carrier will be discussed for each pathway. 2010 Elsevier B.V. All rights reserved. 20. Design documentation: Krypton encapsulation preconceptual design International Nuclear Information System (INIS) Knecht, D.A. 1994-10-01 US EPA regulations limit the release of Krypton-85 to the environment from commercial facilities after January 1, 1983. In order to comply with these regulations, Krypton-85, which would be released during reprocessing of commercial nuclear fuel, must be collected and stored. Technology currently exists for separation of krypton from other inert gases, and for its storage as a compressed gas in steel cylinders. The requirements, which would be imposed for 100-year storage of Krypton-85, have led to development of processes for encapsulation of krypton within a stable solid matrix. The objective of this effort was to provide preconceptual engineering designs, technical evaluations, and life cycle costing data for comparison of two alternate candidate processes for encapsulation of Krypton-85. This report has been prepared by The Ralph M. Parsons Company for the US Department of Energy 1. 
Encapsulation of high temperature thermoelectric modules Energy Technology Data Exchange (ETDEWEB) Salvador, James R.; Sakamoto, Jeffrey; Park, Youngsam 2017-07-11 A method of encapsulating a thermoelectric device and its associated thermoelectric elements in an inert atmosphere and a thermoelectric device fabricated by such method are described. These thermoelectric devices may be intended for use under conditions which would otherwise promote oxidation of the thermoelectric elements. The capsule is formed by securing a suitably-sized thin-walled strip of oxidation-resistant metal to the ceramic substrates which support the thermoelectric elements. The thin-walled metal strip is positioned to enclose the edges of the thermoelectric device and is secured to the substrates using gap-filling materials. The strip, substrates and gap-filling materials cooperatively encapsulate the thermoelectric elements and exclude oxygen and water vapor from atmospheric air so that the elements may be maintained in an inert, non-oxidizing environment. 2. Design documentation: Krypton encapsulation preconceptual design Energy Technology Data Exchange (ETDEWEB) Knecht, D.A. [Idaho National Engineering Lab., Idaho Falls, ID (United States) 1994-10-01 US EPA regulations limit the release of Krypton-85 to the environment from commercial facilities after January 1, 1983. In order to comply with these regulations, Krypton-85, which would be released during reprocessing of commercial nuclear fuel, must be collected and stored. Technology currently exists for separation of krypton from other inert gases, and for its storage as a compressed gas in steel cylinders. The requirements, which would be imposed for 100-year storage of Krypton-85, have led to development of processes for encapsulation of krypton within a stable solid matrix. The objective of this effort was to provide preconceptual engineering designs, technical evaluations, and life cycle costing data for comparison of two alternate candidate processes for encapsulation of Krypton-85. This report has been prepared by The Ralph M. Parsons Company for the US Department of Energy. 3. Hanford waste encapsulation: strontium and cesium International Nuclear Information System (INIS) Jackson, R.R. 1976-06-01 The strontium and cesium fractions separated from high radiation level wastes at Hanford are converted to the solid strontium fluoride and cesium chloride salts, doubly encapsulated, and stored underwater in the Waste Encapsulation and Storage Facility (WESF). A capsule contains approximately 70,000 Ci of 137 Cs or 70,000 to 140,000 Ci of 90 Sr. Materials for fabrication of process equipment and capsules must withstand a combination of corrosive chemicals, high radiation dosages and frequently, elevated temperatures. The two metals selected for capsules, Hastelloy C-276 for strontium fluoride and 316-L stainless steel for cesium chloride, are adequate for prolonged containment. Additional materials studies are being done both for licensing strontium fluoride as source material and for second generation process equipment 4. Encapsulated Ball Bearings for Rotary Micro Machines Science.gov (United States) 2007-01-01 occurrence as well as the overall tribological properties of the bearing mechanism. 
Firstly, the number of stainless steel balls influences not only the load...stacks.iop.org/JMM/17/S224 Abstract We report on the first encapsulated rotary ball bearing mechanism using silicon microfabrication and stainless steel balls...The method of capturing stainless steel balls within a silicon race to support a silicon rotor both axially and radially is developed for rotary micro 5. Encapsulating peritonitis: computed tomography and surgical correlation Energy Technology Data Exchange (ETDEWEB) Kadow, Juliana Santos; Fingerhut, Carla Jeronimo Peres; Fernandes, Vinicius de Barros; Coradazzi, Klaus Rizk Stuhr; Silva, Lucas Marciel Soares; Penachim, Thiago Jose, E-mail: [email protected] [Pontificia Universidade Catolica de Campinas (PUC-Campinas), Campinas, SP (Brazil). Hospital e Maternidade Celso Pierro 2014-07-15 Sclerosing encapsulating peritonitis is a rare and frequently severe entity characterized by total or partial involvement of small bowel loops by a membrane of fibrous tissue. The disease presents with nonspecific clinical features of intestinal obstruction, requiring precise imaging diagnosis to guide the treatment. The present report emphasizes the importance of computed tomography in the diagnosis of this condition and its confirmation by surgical correlation. (author) 6. Micelle-encapsulated fullerenes in aqueous electrolytes Energy Technology Data Exchange (ETDEWEB) Ala-Kleme, T., E-mail: [email protected] [Department of Chemistry, University of Turku, 20014 Turku (Finland); Maeki, A.; Maeki, R.; Kopperoinen, A.; Heikkinen, M.; Haapakka, K. [Department of Chemistry, University of Turku, 20014 Turku (Finland) 2013-03-15 Different micellar particles Mi(M{sup +}) (Mi=Triton X-100, Triton N-101 R, Triton CF-10, Brij-35, M{sup +}=Na{sup +}, K{sup +}, Cs{sup +}) have been prepared in different aqueous H{sub 3}BO{sub 3}/MOH background electrolytes. It has been observed that these particles can be used to disperse the highly hydrophobic spherical [60]fullerene (1) and ellipsoidal [70]fullerene (2). This dispersion is realised as either micelle-encapsulated monomers Mi(M{sup +})1{sub m} and Mi(M{sup +})2{sub m} or water-soluble micelle-bound aggregates Mi(M{sup +})1{sub agg} and Mi(M{sup +})2{sub agg}, where especially the hydration degree and polyoxyethylene (POE) thickness of the micellar particle seems to play a role of vital importance. Further, the encapsulation microenvironment of 1{sub m} was found to depend strongly on the selected monovalent electrolyte cation, i.e., the encapsulated 1{sub m} is accommodated in the more hydrophobic microenvironment the higher the cationic solvation number is. - Highlights: Black-Right-Pointing-Pointer Different micellar particles is used to disperse [60]fullerene and [70]fullerene. Black-Right-Pointing-Pointer Fullerene monomers or aggregates are dispersed encaging or bounding by micelles. Black-Right-Pointing-Pointer Effective facts are hydration degree and polyoxyethylene thickness of micelle. 7. Ultrasonographic findings of sclerosing encapsulating peritonitis Energy Technology Data Exchange (ETDEWEB) Han, Jong Kyu; Lee, Hae Kyung; Moon, Chul; Hong, Hyun Sook; Kwon, Kwi Hyang; Choi, Deuk Lin [Soonchunhyangi University College of Medicine, Seoul (Korea, Republic of) 2001-03-15 To evaluate the ultrasonographic findings of the patients with sclerosing encapsulating peritonitis (SEP). Thirteen patients with surgically confirmed sclerosing encapsulating peritonitis were involved in this study. 
Because of intestinal obstruction, all patients had received operations. Among 13 patients, 12 cases had continuous ambulatory peritoneal dialysis (CAPD) for 2 months-12 years and 4 months from (mean; 6 years and 10 months), owing to chronic renal failure and one patient had an operation due to variceal bleeding caused by liver cirrhosis. On ultrasonographic examination, all patients showed loculated ascites which were large (n=7) or small (n=6) in amount with multiple separations. The small bowel loops were tethered posteriorly perisaltic movement and covered with the thick membrane. The ultrasonographic of findings of sclerosing encapsulating peritonitis were posteriorly tethered small bowels covered with a thick membrane and loculated ascites with multiple septa. Ultrasonographic examination can detect the thin membrane covering the small bowel loops in the early phase of the disease, therefore ultrasonography would be a helpful modality to diagnose SEP early. 8. A Generalized QMRA Beta-Poisson Dose-Response Model. Science.gov (United States) Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie 2016-10-01 Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, K min , is not fixed, but a random variable following a geometric distribution with parameter 0Poisson model, PI(d|α,β), is a special case of the generalized model with K min = 1 (which implies r*=1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median r* estimates produced fall short of meeting the required condition of r* = 1 for single-hit assumption. However, three out of four data sets fitted by the generalized models could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support especially if the sample size is small. The three-parameter generalized model provides a possibility to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model. © 2016 Society for Risk Analysis. 9. Collision prediction models using multivariate Poisson-lognormal regression. Science.gov (United States) El-Basyouny, Karim; Sayed, Tarek 2009-07-01 This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. 
The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit than the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models. 10. Bases chimiosensorielles du comportement alimentaire chez les poissons Directory of Open Access Journals (Sweden) SAGLIO Ph. 1981-07-01 Full Text Available Le comportement alimentaire, indispensable à la survie de l'individu et donc de l'espèce, occupe à ce titre une position de première importance dans la hiérarchie des comportements fondamentaux qui tous en dépendent très étroitement. Chez les poissons, cette prééminence se trouve illustrée par l'extrême diversité des supports sensoriels impliqués et des expressions comportementales qui leur sont liées. A la suite d'un certain nombre de mises en évidence neurophysiologiques et éthologiques de l'importance du sens chimique (olfaction, gustation dans le comportement alimentaire des poissons, de très importants secteurs d'études électrophysiologiques et d'analyses physico-chimiques visant à en déterminer la nature exacte (en termes de substances actives se sont développés ces vingt dernières années. De tous ces travaux dont les plus avancés sont présentés ici, il ressort que les acides aminés de série L plus ou moins associés à d'autres composés de poids moléculaires < 1000 constituent des composés chimiques jouant un rôle déterminant dans le comportement alimentaire de nombreuses espèces de poissons carnivores. 11. Micro-Encapsulated Phase Change Materials: A Review of Encapsulation, Safety and Thermal Characteristics Directory of Open Access Journals (Sweden) Ahmed Hassan 2016-10-01 Full Text Available Phase change materials (PCMs have been identified as potential candidates for building energy optimization by increasing the thermal mass of buildings. The increased thermal mass results in a drop in the cooling/heating loads, thus decreasing the energy demand in buildings. 
However, direct incorporation of PCMs into building elements undermines their structural performance, thereby posing a challenge for building integrity. In order to retain/improve building structural performance, as well as improving energy performance, micro-encapsulated PCMs are integrated into building materials. The integration of microencapsulation PCMs into building materials solves the PCM leakage problem and assures a good bond with building materials to achieve better structural performance. The aim of this article is to identify the optimum micro-encapsulation methods and materials for improving the energy, structural and safety performance of buildings. The article reviews the characteristics of micro-encapsulated PCMs relevant to building integration, focusing on safety rating, structural implications, and energy performance. The article uncovers the optimum combinations of the shell (encapsulant and core (PCM materials along with encapsulation methods by evaluating their merits and demerits. 12. Gap processing for adaptive maximal poisson-disk sampling KAUST Repository Yan, Dongming 2013-10-17 In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed.We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM. 13. Identifying traffic accident black spots with Poisson-Tweedie models DEFF Research Database (Denmark) Debrabant, Birgit; Halekoh, Ulrich; Bonat, Wagner Hugo 2018-01-01 This paper aims at the identification of black spots for traffic accidents, i.e. locations with accident counts beyond what is usual for similar locations, using spatially and temporally aggregated hospital records from Funen, Denmark. Specifically, we apply an autoregressive Poisson-Tweedie model...... considered calendar years and calculated by simulations a probability of p=0.03 for these to be chance findings. Altogether, our results recommend these sites for further investigation and suggest that our simple approach could play a role in future area based traffic accident prevention planning.... 14. Gap processing for adaptive maximal poisson-disk sampling KAUST Repository Yan, Dongming; Wonka, Peter 2013-01-01 In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed.We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM. 15. A Poisson process approximation for generalized K-5 confidence regions Science.gov (United States) Arsham, H.; Miller, D. R. 1982-01-01 One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. 
The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This aproximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems. 16. Fission meter and neutron detection using poisson distribution comparison Science.gov (United States) Rowland, Mark S; Snyderman, Neal J 2014-11-18 A neutron detector system and method for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material. 17. Localization of Point Sources for Poisson Equation using State Observers KAUST Repository 2016-08-09 A method based On iterative observer design is presented to solve point source localization problem for Poisson equation with riven boundary data. The procedure involves solution of multiple boundary estimation sub problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of these solution profiles of sub-problems localizes point sources inside the domain. Method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. 18. Theoretical analysis of radiographic images by nonstationary Poisson processes International Nuclear Information System (INIS) Tanaka, Kazuo; Uchida, Suguru; Yamada, Isao. 1980-01-01 This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process. (author) 19. Team behaviour analysis in sports using the poisson equation OpenAIRE Direkoglu, Cem; O'Connor, Noel E. 2012-01-01 We propose a novel physics-based model for analysing team play- ers’ positions and movements on a sports playing field. The goal is to detect for each frame the region with the highest population of a given team’s players and the region towards which the team is moving as they press for territorial advancement, termed the region of intent. 
Given the positions of team players from a plan view of the playing field at any given time, we solve a particular Poisson equation to generate a smooth di... 20. Localization of Point Sources for Poisson Equation using State Observers KAUST Repository 2016-01-01 A method based On iterative observer design is presented to solve point source localization problem for Poisson equation with riven boundary data. The procedure involves solution of multiple boundary estimation sub problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of these solution profiles of sub-problems localizes point sources inside the domain. Method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
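The two KAUST abstracts above (items 17 and 20) describe localizing point sources of the Poisson equation from boundary data, with numerical results obtained by finite differences on a rectangular domain. As a purely illustrative aside — this sketch is an editorial addition, not taken from any of the cited papers, and the grid size, source node, iteration count, and function name are arbitrary choices — the snippet below shows a minimal finite-difference (Jacobi) solve of the Poisson equation with a point-like source and zero Dirichlet boundary values, whose solution peaks at the source location:

```python
import numpy as np

def solve_poisson_dirichlet(f, h, n_iter=5000):
    """Jacobi iteration for -(u_xx + u_yy) = f on a uniform grid,
    with u = 0 on the boundary of the rectangle."""
    u = np.zeros_like(f)
    for _ in range(n_iter):
        # five-point stencil update of the interior nodes only
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] +
                                h * h * f[1:-1, 1:-1])
    return u

n = 51                      # 51 x 51 grid on the unit square (arbitrary)
h = 1.0 / (n - 1)
f = np.zeros((n, n))
f[25, 35] = 1.0 / h**2      # crude discrete "point source" off-centre
u = solve_poisson_dirichlet(f, h)
# the discrete solution is largest at the source node, which is how a
# point source inside the domain betrays its position
print(np.unravel_index(np.argmax(u), u.shape))   # -> (25, 35)
```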
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8200841546058655, "perplexity": 2184.859302308109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159160.59/warc/CC-MAIN-20180923055928-20180923080328-00174.warc.gz"}
https://mathemerize.com/integration-of-trigonometric-function/
# Integration of Trigonometric Function

Here you will learn integration of trigonometric functions with examples. Let's begin –

## Integration of Trigonometric Function

(i)  $$\int sin^mx \, cos^nx \, dx$$

Case-1 : When both m and n $$\in$$ natural numbers.

(a)  If one of them is odd, then substitute for the term of even power.

(b)  If both are odd, substitute either of them.

(c)  If both are even, use trigonometric identities to convert the integrand into cosines of multiple angles.

Case-2 : When m + n is a negative even integer. In this case the best substitution is tanx = t.

Example : Evaluate $$\int sin^3x \, cos^5x \, dx$$

Solution : We have I = $$\int sin^3x \, cos^5x \, dx$$. Put cosx = t, so that -sinx dx = dt and

I = – $$\int (1 - t^2)\,t^5 \, dt$$ = $$\int (t^7 - t^5)\, dt$$ = $$t^8\over 8$$ – $$t^6\over 6$$ = $$cos^8x\over 8$$ – $$cos^6x\over 6$$ + C

(ii)  $$\int \frac{dx}{a + b\,sin^2x}$$  OR  $$\int \frac{dx}{a + b\,cos^2x}$$  OR  $$\int \frac{dx}{a\,sin^2x + b\,sinx\,cosx + c\,cos^2x}$$

Divide the numerator and the denominator by $$cos^2x$$ and put tanx = t.

Example : Evaluate $$\int \frac{dx}{(2sinx + 3cosx)^2}$$

Solution : Dividing the numerator and the denominator by $$cos^2x$$,

$$\therefore$$   I = $$\int \frac{sec^2x \, dx}{(2tanx + 3)^2}$$

Let 2tanx + 3 = t, so that 2$$sec^2x$$ dx = dt. Then

I = $$1\over 2$$ $$\int \frac{dt}{t^2}$$ = -$$1\over 2t$$ + C = -$$1\over {2(2tanx+3)}$$ + C

(iii)  $$\int \frac{dx}{a + b\,sinx}$$  OR  $$\int \frac{dx}{a + b\,cosx}$$  OR  $$\int \frac{dx}{a + b\,sinx + c\,cosx}$$

Convert the sines and cosines into their respective tangents of half the angle and put tan$$x\over 2$$ = t. In this case sinx = $$2t\over {1+t^2}$$, cosx = $${1-t^2}\over {1+t^2}$$, x = 2$$tan^{-1}t$$, and dx = $$2dt\over {1+t^2}$$.

(iv)  $$\int \frac{a\,cosx + b\,sinx + c}{p\,cosx + q\,sinx + r}\, dx$$

Express the numerator as $$\lambda$$(denominator) + $$\mu$$ $$d\over dx$$(denominator) + $$\nu$$, find $$\lambda$$, $$\mu$$, $$\nu$$ by comparing coefficients, and proceed.

Hope you learnt integration of trigonometric functions. Learn more concepts of integration and practice more questions to get ahead in the competition. Good luck!
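The two worked answers above are easy to sanity-check by differentiation. The snippet below is an addition to the original page (it assumes the SymPy library is available): it differentiates each claimed antiderivative and confirms that the difference from the corresponding integrand simplifies to zero.

```python
import sympy as sp

x = sp.symbols('x')

# Example 1:  ∫ sin^3(x) cos^5(x) dx  =  cos^8(x)/8 - cos^6(x)/6 + C
F1 = sp.cos(x)**8/8 - sp.cos(x)**6/6
print(sp.simplify(sp.diff(F1, x) - sp.sin(x)**3 * sp.cos(x)**5))        # should print 0

# Example 2:  ∫ dx/(2 sin x + 3 cos x)^2  =  -1/(2(2 tan x + 3)) + C
F2 = -1/(2*(2*sp.tan(x) + 3))
print(sp.simplify(sp.diff(F2, x) - 1/(2*sp.sin(x) + 3*sp.cos(x))**2))   # should print 0
```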
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9660038352012634, "perplexity": 1243.4330747201839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706285.92/warc/CC-MAIN-20221126080725-20221126110725-00470.warc.gz"}
https://ru.scribd.com/document/321665904/mg-cu-y
J. Chem. Thermodynamics 40 (2008) 1064–1076. Journal homepage: www.elsevier.com/locate/jct

The equilibrium phase diagram of the magnesium–copper–yttrium system

Mohammad Mezbahul-Islam, Dmytro Kevorkov, Mamoun Medraj *

Department of Mechanical Engineering, Concordia University, 1455 de Maisonneuve Boulevard West, Montreal, Canada H3G 1M8

* Corresponding author. Tel.: +1 514 848 2424x3146; fax: +1 514 848 3175. E-mail address: [email protected] (M. Medraj). URL: http://www.me.concordia.ca/~mmedraj (M. Medraj). doi:10.1016/j.jct.2008.03.004

Article history: Received 8 November 2007; Received in revised form 5 March 2008; Accepted 5 March 2008; Available online 15 March 2008.

Keywords: Ternary phase diagram; Thermodynamic modelling; Modified quasichemical model; Mg alloys.

Abstract

Thermodynamic modelling of the Mg–Cu–Y system is carried out as a part of thermodynamic database construction for Mg alloys. This system is being modelled for the first time using the modified quasichemical model, which considers the presence of short range ordering in the liquid. A self-consistent thermodynamic database for the Mg–Cu–Y system was constructed by combining the thermodynamic descriptions of the constituent binaries, Mg–Cu, Cu–Y, and Mg–Y. All three binaries have been re-optimized based on the experimental phase equilibrium and thermodynamic data available in the literature. The constructed database is used to calculate and predict thermodynamic properties, the binary phase diagrams, and liquidus projections of the ternary Mg–Cu–Y system. The current calculation results are in good agreement with the experimental data reported in the literature. 2008 Elsevier Ltd. All rights reserved.

1. Introduction

Magnesium alloys are getting considerable attention for automobile and aerospace applications because they are the lightest among the commercially available structural alloys. They have high specific properties but low corrosion resistance, which has limited their use. Metallic glass, a new class of wonder material, is attracting attention due to its high mechanical strength and good corrosion resistance [1]. The Mg–Cu–Y alloy system is a promising candidate for metallic glass since it has the largest supercooled liquid region among Mg-alloy systems [2–5].

Despite its importance, this system has not yet been modelled thermodynamically. Also, the available descriptions for the binaries are contradictory to each other, and none of the assessments was done considering the presence of short range ordering in the liquid. Hence, the main objective of this work is to construct a reliable thermodynamic database of the Mg–Cu–Y system using sound thermodynamic models. The three constituent binary systems Mg–Cu, Cu–Y, and Mg–Y were re-optimized using the modified quasichemical model for the liquid phase. This model has the ability to take into consideration the presence of short range ordering in the liquid.

2. Analytical descriptions of the thermodynamic models employed

The Gibbs free energy functions used for the pure elements (Mg, Cu, and Y) are taken from the SGTE (Scientific Group Thermodata Europe) compilation of Dinsdale [6].
The Gibbs free energy of a binary stoichiometric phase is given by

G^{\phi} = x_i \, {}^{0}G_i^{\phi_1} + x_j \, {}^{0}G_j^{\phi_2} + \Delta G_f,    (1)

where \phi denotes the phase of interest, x_i and x_j are the mole fractions of elements i and j, which are given by the stoichiometry of the compound, {}^{0}G_i^{\phi_1} and {}^{0}G_j^{\phi_2} are the respective reference states of elements i and j in their standard state, and \Delta G_f = a + b(T/K) represents the Gibbs free energy of formation of the stoichiometric compound. The parameters a and b were obtained by optimization using experimental data.

The Gibbs free energy of the terminal solid solutions is described by the following equation:

G^{\phi} = x_i \, {}^{0}G_i^{\phi} + x_j \, {}^{0}G_j^{\phi} + R(T/K)\,[x_i \ln x_i + x_j \ln x_j] + {}^{ex}G^{\phi}.    (2)

The excess Gibbs free energy {}^{ex}G^{\phi} is described by the Redlich–Kister polynomial model [7].

The modified quasichemical model [8–10] was chosen to describe the liquid phases of the three constituent binaries. From the literature survey, it was found that all three binary systems have a very high negative enthalpy of mixing. Also, the calculated entropy of mixing curves of the Cu–Y and Mg–Y systems assume m-shaped characteristics. All these are indications of the presence of short range ordering [8]. The modified quasichemical model has three distinct characteristics [8–10]: (i) it permits the composition of maximum short range ordering in a binary system to be freely chosen, (ii) it expresses the energy of pair formation as a function of composition, which can be expanded as a polynomial in the pair fraction, and the coordination numbers are permitted to vary with the composition, and (iii) the model can be extended to multi-component systems. The model has been discussed extensively in the literature [8–10] and will be outlined briefly here. The energy of pair formation can be expressed by the following equation:

\Delta g_{AB} = \Delta g_{AB}^{\circ} + \sum_{i \geq 1} g_{AB}^{i} X_{AA}^{i} + \sum_{j \geq 1} g_{AB}^{j} X_{BB}^{j},    (3)

where \Delta g_{AB}^{\circ}, g_{AB}^{i}, and g_{AB}^{j} are the parameters of the model and can be expressed as functions of temperature (\Delta g_{AB}^{\circ} = a + b(T/K)). Also, the atom-to-atom coordination numbers, Z_A and Z_B, can be expressed as functions of composition and can be represented by the following equations:

1/Z_A = (1/Z_{AA}^{A}) \left( \frac{2 n_{AA}}{2 n_{AA} + n_{AB}} \right) + (1/Z_{AB}^{A}) \left( \frac{n_{AB}}{2 n_{AA} + n_{AB}} \right),    (4)

1/Z_B = (1/Z_{BB}^{B}) \left( \frac{2 n_{BB}}{2 n_{BB} + n_{AB}} \right) + (1/Z_{BA}^{B}) \left( \frac{n_{AB}}{2 n_{BB} + n_{AB}} \right),    (5)

where n_{ij} is the number of moles of (i–j) pairs, and Z_{AA}^{A} and Z_{AB}^{A} are the coordination numbers when all nearest neighbours of an A atom are A or B atoms, respectively. The composition at maximum short range ordering is determined by the ratio Z_{BA}^{B}/Z_{AB}^{A}. Values of Z_{AB}^{A} and Z_{BA}^{B} are unique to the A–B binary system and should be carefully determined to fit the thermodynamic experimental data (enthalpy of mixing, activity, etc.). The value of Z_{AA}^{A} is common for all systems containing A as a component. In this work, the values of Z_{MgMg}^{Mg}, Z_{CuCu}^{Cu}, and Z_{YY}^{Y} were chosen to be 6 because this gave the best possible fit for many binary systems and is recommended by Dr. Pelton's group [8–10]. The values of Z_{MgCu}^{Mg}, Z_{CuMg}^{Cu}, Z_{CuY}^{Cu}, Z_{YCu}^{Y}, Z_{MgY}^{Mg}, and Z_{YMg}^{Y} are chosen to permit the composition at maximum short range ordering in each binary system to be consistent with the composition that corresponds to the minimum enthalpy of mixing. These values are given in table 1.
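As a quick numerical illustration of equations (4) and (5) — this sketch is an editorial addition, not part of the paper, and simply plugs in the Mg–Cu values of table 1 (Z_{MgMg}^{Mg} = 6, Z_{MgCu}^{Mg} = 4, Z_{CuCu}^{Cu} = 6, Z_{CuMg}^{Cu} = 2) — the composition-dependent coordination numbers reduce to the like-pair values in the nearly pure liquid and to the unlike-pair values at maximum short range ordering:

```python
def coordination_numbers(n_AA, n_BB, n_AB,
                         Z_A_AA=6.0, Z_A_AB=4.0,
                         Z_B_BB=6.0, Z_B_BA=2.0):
    """Equations (4) and (5): composition-dependent coordination numbers
    Z_A and Z_B of the liquid, computed from the numbers of pairs n_ij."""
    inv_Z_A = ((1.0 / Z_A_AA) * (2.0 * n_AA / (2.0 * n_AA + n_AB))
               + (1.0 / Z_A_AB) * (n_AB / (2.0 * n_AA + n_AB)))
    inv_Z_B = ((1.0 / Z_B_BB) * (2.0 * n_BB / (2.0 * n_BB + n_AB))
               + (1.0 / Z_B_BA) * (n_AB / (2.0 * n_BB + n_AB)))
    return 1.0 / inv_Z_A, 1.0 / inv_Z_B

# A = Mg, B = Cu, with the table 1 values used as the defaults above.
# Nearly pure Mg liquid: essentially only Mg-Mg pairs remain.
print(coordination_numbers(n_AA=1.0, n_BB=0.0, n_AB=1e-9))   # ~ (6.0, 2.0)
# Maximum short range ordering: essentially only Mg-Cu pairs remain.
print(coordination_numbers(n_AA=1e-9, n_BB=1e-9, n_AB=1.0))  # ~ (4.0, 2.0)
```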
The tendency to maximum short range ordering near the composition 40 atomic per cent (at.%) Mg in the Mg–Cu system was obtained by setting Z_{MgCu}^{Mg} = 4 and Z_{CuMg}^{Cu} = 2. For the Cu–Y system, maximum short range ordering near 30 at.% Y was obtained by setting Z_{CuY}^{Cu} = 3 and Z_{YCu}^{Y} = 6. Similarly, for the Mg–Y system, the tendency to maximum short range ordering near the composition 30 at.% Y was obtained by setting Z_{MgY}^{Mg} = 2 and Z_{YMg}^{Y} = 4.

TABLE 1. Atom–atom "coordination numbers" of the liquid

A    B    Z_{AB}^{A}    Z_{BA}^{B}
Mg   Mg   6             6
Cu   Cu   6             6
Y    Y    6             6
Mg   Cu   4             2
Cu   Y    3             6
Mg   Y    2             4

The Gibbs free energy of intermediate solid solutions is described by the compound energy formalism as shown in the following equations:

G = G^{ref} + G^{ideal} + G^{excess},    (6)

G^{ref} = \sum y_i^{l} \, y_j^{m} \cdots y_k^{q} \; {}^{0}G_{(i:j:\cdots:k)},    (7)

G^{ideal} = R(T/K) \sum_{l} f_l \sum_{i} y_i^{l} \ln y_i^{l},    (8)

G^{excess} = \sum y_i^{l} \, y_j^{l} \, y_k^{m} \sum_{c \geq 0} {}^{c}L_{(i,j):k} \, (y_i^{l} - y_j^{l})^{c},    (9)

where i, j, and k represent components or vacancy, l, m, and q represent sub-lattices, y_i^{l} is the site fraction of component i on sub-lattice l, f_l is the fraction of sub-lattice l relative to the total lattice sites, {}^{0}G_{(i:j:\cdots:k)} represents a real or a hypothetical compound (end member) energy, and {}^{c}L_{(i,j):k} represent the interaction parameters which describe the interaction within the sub-lattice.

Modelling of the intermetallic solid solution phases requires information regarding the crystal structure of the phases and their homogeneity range. From the crystallographic data summarized in table 2, the following model is applied to represent the MgCu2 phase:

(Mg%, Cu)_8 : (Cu%, Mg)_16.

Here, the '%' denotes the major constituent of the sub-lattice. This model covers the 0 \leq X_{Mg} \leq 1 composition range and, of course, includes the homogeneity range of 0.31 \leq X_{Mg} \leq 0.353 which was reported by Bagnoud and Feschotte [11].

The crystal structure data of the Cu6Y intermediate solid solution were obtained by Buschow and Goot [12] and are listed in table 3. According to Buschow and Goot [12], some of the yttrium atomic sites are occupied by a pair of Cu atoms, which can be described by the following model with two sub-lattices:

(Y%, Cu_2)(Cu)_5.

This is actually a Wagner–Schottky type model [13]. The same model was used by Fries et al. [14] to represent Cu6Y in their assessment of the Cu–Y system. This type of model can be used only for intermediate phases with a narrow homogeneity range [15]. This model covers the 0.83 \leq X_{Cu} \leq 1 composition range. This range includes the homogeneity range 0.84 \leq X_{Cu} \leq 0.87 which was reported by Fries et al. [14]. The optimized model parameters are listed in table 4.

3. Experimental data evaluation

According to the CALPHAD method, the first step of the thermodynamic optimization is to extract and categorize the available

TABLE 2. Crystal structure and lattice parameters of the MgCu2 phase
Prototype: MgCu2; Pearson symbol: cF24; Space group: Fd-3m (No. 227); Lattice parameter: a = 0.7035 nm [11]; Angles: α = 90°, β = 90°, γ = 90°.

Atom   WP (a)   CN (b)   x       y       z       Reference
Cu     16d      12       0.625   0.625   0.625   [68]
Mg     8a       16       0       0       0

(a) WP, Wyckoff position. (b) CN, coordination number.

TABLE 3. Crystal structure and lattice parameters of the Cu6Y phase (columns: Atom, WP, CN, atomic position x y z, Reference; crystal data: Prototype, Pearson symbol, Space group, Space group No., Lattice parameters, Angles)
TbCu 7 Y 1a 20 0 0 0 [12] hP8 Cu 1 2e 8 0 0 0.306 P6/mmm Cu 2 2c 12 0.333 0.667 0 191 Cu 3 3g 16 0.5 0 0.5 Lattice parameter/nm a = 0.4940 b = 0.4157 Angles: a = 90 , b = 90 , c = 120 a WP, Wyckoff position. b CN, coordination number. TABLE 4 Optimized model parameters for the liquid, Mg-hcp, Cu-fcc, Mg 2 Cu and MgCu 2 phases in the Mg–Cu system Phase Terms a /(J mol 1 ) b /(J mol 1 K 1 ) Liquid D g AB 12975.95 0 i g AB 6153.13 1.26 j g AB 13528.50 0 Mg-hcp 0 L Mg-hcp 8371.60 0 Cu-fcc Mg 2 Cu 0 L Cu-fcc 21923.39 5.37 D G f 28620.00 0.03 MgCu 2 (Mg%, Cu) 8 (Cu%, Mg) 16 0 G MgCu 2 Cu:Cu 16743.20 0 0 G MgCu 2 Mg:Cu 37684.26 0 0 G MgCu 2 Cu:Mg 0 0 0 G MgCu 2 Mg:Mg 6278.7 0 0 L MgCu 2 13011.35 0 0 Mg ; L MgCu Mg ; Cu:Cu 2 Cu:Mg 13011.35 0 0 L MgCu Cu:Mg; Cu 2 6599.45 0 0 L MgCu 2 Mg:Mg; Cu 6599.45 0 experimental information from the literature. Various pieces of information related to the Gibbs free energy can be taken as input data for the optimization that includes crystallographic informa- tion. All relevant data should be critically evaluated to choose the most reliable sets to be used for the optimization [15] . 3.1. The Mg–Cu binary system 3.1.1. Phase diagram The Mg–Cu system was evaluated by Boudouard [16] , Sahmen [17] , and Urazov [18] . They reported the existence of two congru- ently melting compounds, and three eutectic points in the system. The analysis of Boudouard [16] showed one more line compound and one eutectic point which was, however, not accepted by other researchers [17–19]. The most extensive work on the Mg–Cu sys- tem was done by Jones [19] using both thermal and microscopic analyses. His reported data were not fully consistent with the pre- vious authors. Since he used a huge number of samples and took extreme precautions during the sample preparations, his results are more reliable and will be used in this work. No homogeneity range is mentioned for the intermediate phase of Mg 2 Cu, whereas MgCu 2 was reported with a narrow homogene- ity range that extends on both sides of the stoichiometric compo- sition. Grime and Morris-Jones [20] showed a homogeneity range of (2 to 3) at.% on both sides of the stoichiometry. Also, X-ray dif- fraction from Sederman [21] disclosed that the extent of solubility on both sides of MgCu 2 at T = 773 K does not exceed 2.55 at.% (from 64.55 to 67.20) at.% Cu and considerably less at lower tempera- tures. However X-ray diffraction, microscopic, simple, and differ- ential thermal analysis by Bagnoud and Feschotte [11] confirmed that the maximum solid solubility at the eutectic temperatures on both sides of MgCu 2 are (64.7 and 69) at.% Cu. Hansen [22] reported that the solubility of Cu in Mg increases from about 0.1 at.% Cu at room temperature to about (0.4 to 0.5) at.% Cu at T = 758 K. However, Jenkin [23] was doubtful about the accuracy of the above solubility limit and reported that the lim- it should be very much less. The metallography of the high-purity alloys prepared by Jenkin [23] clearly indicated that the solubility of Cu in Mg is less than 0.02 at.% Cu at T = 723 K. Elaborate metal- lographic analysis of Jones [19] also showed that the solubility of Cu in Mg is only 0.007 at.% Cu at room temperature, increasing to about 0.012 at.% Cu near the eutectic temperature. These values are contradictory to those given by Hansen [22] . Later, Stepanov and Kornilov [24] revealed that the solubility is 0.2 at.% Cu at T = 573 K, This is in considerable agreement with the metallographic work of Hansen [22] . 
However considering the accuracy of the analysis by Jones and Jenkin [19,23] , it appears that the solubility limits gi- ven in [22,24] are quite high. Hence in this work, the results of Jones [19] are used. The solubility of Mg in Cu was determined by Grime and Mor- ris-Jones [20] . According to their X-ray powder diffraction results, the maximum solubility is approximately 7.5 at.% Mg. According to Jones [19] , the solubility is about 5.3 at.% Mg at T = 773 K, increasing to about 6.3 at.% Mg at T = 1003 K. Stepanov [25] , how- ever, determined the maximum solid solubility of 10.4 at.% Mg using an electrical resistance method. Bagnoud and Feschotte [11] placed the maximum solubility at 6.94 at.% Mg. Except for the results by Stepanov [25] , most of the data [19,20,11] are in close agreement with each other. In this work, the results of Jones [19] have been used during optimization for their consistency in representing the entire phase diagram. Thermodynamic modelling on Mg–Cu system was done by Nayeb-Hashemi and Clark [26] , Coughanowr et al. [27] , and Zuo and Chang [28] . The homogeneity range of MgCu 2 phase was reproduced by both [27,28] using the Wagner–Schottky [13] type model while, sub-lattice model was used in this work. 0.3 at.% Cu at T = 673 K, and 0.55 at.% Cu at T = 753 K. 3.1.2. Thermodynamic data Garg et al. [29] , Schmahl and Sieben [30] , Juneja et al. [31] , and Hino et al. [32] measured the vapour pressure of Mg over Mg–Cu liquid alloys using different techniques. The activity measured by these four different groups is more or less in good agreement with each other. Enthalpy of mixing for the Mg–Cu liquid was measured by Sommer et al. [33] and Batalin et al. [34] using the calorimetric method. King and Kleppa [35] determined the enthalpies of forma- tion for MgCu 2 and Mg 2 Cu by the calorimetric method. Similar val- ues have been determined by Eremenko et al. [36] using EMF measurements. The vapour pressure measurements of Smith et al. [37] showed discrepancy with the other results. Due to M. Mezbahul-Islam et al. / J. Chem. Thermodynamics 40 (2008) 1064–1076 1067 different measurement techniques, the reported values are mutu- ally contradictory. Since vapour pressure measurements usually do not provide highly reliable results, the values in [35,36] are con- sidered more acceptable and hence were given higher weighing factors during optimization. 3.2. The Cu–Y binary system 3.2.1. Phase diagram Domagala et al. [38] studied the Cu–Y system by metallographic method, X-ray analysis, and critical temperature work to construct the phase diagram. They reported the composition and tempera- ture of four eutectic points, one peritectic point, and four interme- diate compounds CuY, Cu 2 Y, Cu 4 Y, and Cu 6 Y. They also reported the maximum solid solubility of Cu in Y as well as Y in Cu to be less than 1 wt%. It is worth noting that Domagala et al. [38] missed the presence of the Cu 7 Y 2 compound. Massalski et al. [39] pub- lished a phase diagram of the Cu–Y system based on the experi- mental data of Domagala et al. [38] . Buschow and Goot [12] investigated the system by metallography and X-ray diffraction and obtained evidence for the existence of two hexagonal Cu-rich phases. They defined the composition as Cu 5 Y, having a hexagonal CaCu 5 type structure and Cu 7 Y, having a hexagonal TbCu 7 type structure. Chakrabarti and Laughlin [40] proposed a phase diagram for this system using the experimental data of Domagala et al. [38] . 
The information they provided on this system was incomplete especially with regard to the entire liquidus region. Various transi- tion temperatures were also not accurately determined. Guojun et al. [41] measured the heat contents and determined the melting point of the intermetallic compounds using drop calorimeter in a temperature range of (850 to1300) K. Domagala et al. [38] deter- mined the melting point by visual analysis of the samples and re- ported error of measurement by ±15 K. This method is not precise and therefore the data reported by Guojun and Itagaki [41] are pre- ferred. They determined the critical temperature of the intermetal- lic compounds by drop calorimetry by analyzing the deflection points in the J T T curve, where J T is the heat content and T is the absolute temperature. Itagaki et al. [42] optimized the Cu–Y system using the experimental data of Guojun and Itagaki [41] . Unlike Chakrabarti and Laughlin [40] , they considered Cu 4 Y as a stoichi- ometric compound. However, all the experimental and calculated data reported by different authors are contradictory to one an- other. To resolve the controversy, Fries et al. [14] reinvestigated the Cu–Y system by DTA and XRD analysis, with emphasis on the range between (55 and 90) at.% Cu, and they proposed a new phase diagram. Their DTA results provide evidence for the possible exis- tence of a high temperature phase transformation in the Cu 2 Y com- pound {Cu 2 Y(h) M Cu 2 Y(r)}. The invariant points obtained in [14] show fair agreement with the experimental data from Guojun et al. [41] but differ markedly from those of [38,39] along the ( a - Y) liquidus line. Later, Abend et al. [43] made another attempt to investigate the Cu–Y system between (30 and 90) at.% Cu using EMF, DTA, and XRD. Their reported values are in fair agreement with those in [14,41] and differ from those in [38,39]. These results will be compared with the current assessment of the equilibria in the Cu–Y system. The XRD results of Fries et al. [14] confirmed a range of solubil- ity for the Cu 6 Y phase. The limits of Y-rich and Cu-rich sides were determined to be (84.5 ± 0.5) at.% Cu and (87.0 ± 0.5) at.% Cu, respectively, over the temperature range of (973 to 1123) K. This is in reasonable agreement with the reported values (85.7 to 87.5) at.% Cu by Massalski et al. [39] and (84 to 88) at.% Cu by Okamoto [44] . Also, the emf measurement by Abend and Schaller [43] showed a similar range of homogeneity. The Cu 4 Y phase is considered as a stoichiometric compound in this work since no definite homogeneity range was reported. The experimental results available for the Cu–Y system are not in good accord with each other. However after reviewing all the available data for this system, it appears that the results of Guojun et al. [41] and Fries et al. [14] are the most reliable and therefore were given higher weighing factors during optimization. 3.2.2. Thermodynamic properties The amount of thermodynamic data for the Cu–Y system is lim- ited. The yttrium is highly reactive and hence it is very difficult to handle the alloys during high temperature experiments. However, heats of mixing of liquid Cu–Y alloys have been determined calori- metrically by Watanabe et al. [45] at T = 1373 K, Sudavtsova et al. [46] at T = 1415 K and also by Sidorov et al. [47] at T = 1963 K. Ber- ezutskii and Lukashenko [48] measured the vapour pressure and activity coefficients of liquid Cu over the composition range of (19.8 to100) at.% Cu at T = 1623 K. 
Abend and Schaller [43] mea- sured the activity of Y in the solid state using the emf measure- ment technique. Different thermodynamic properties of the Cu–Y liquid were calculated by Ganesan et al. [49] . Guojun et al. [41] and Watanabe et al. [45] determined the enthalpy of formation of CuY, Cu 2 Y, and Cu 4 Y. These values along with the reported val- ues of Cu 6 Y and Cu 7 Y 2 by Itagaki et al. [42] will be compared with the current modelling results. 3.3. Mg–Y binary system 3.3.1. Phase diagram Gibson et al. [50] were the first researchers who reported the Mg–Y phase diagram. They determined the maximum primary so- lid solubility of Y in Mg as 9 wt% Y at the eutectic temperature (840 K). This agrees well with the results of Sviderskaya and Padezhnova [51] who used thermal analysis to study the Mg-rich region of the Mg–Y system. Another investigation by Mizer and Clark [52] on this system using thermal analysis and metallogra- phy showed that the maximum solubility of Y in solid Mg was approximately 12.6 wt% Y at the eutectic temperature. This is, also, in good accord with the results of [50,51] . As reported by Gibson et al. [50], there is one eutectic reaction occurring at 74 wt% Mg and T = 840 K and one eutectoid reaction at 11 wt% Mg and T = 1048 K. The latter reaction was associated with a high temperature allotropic transformation of yttrium. Three intermediate phases were identified as c at 21.5 wt% Mg, d at 41 wt% Mg and e at 60 wt% Mg. But any definite composition for these phases was not mentioned. However, e and c were re- ported to have compositions of Mg 24 Y 5 and MgY, respectively, by Sviderskaya and Padezhnova [51] . The thermodynamic optimization of Ran et al. [53] showed a very good agreement with the measured values of Gibson et al. [50]. Massalski [54] assessed the Mg–Y phase diagram using the available experimental data from the literature. He used the exper- imental data of [51] for the Mg-rich region. Smith et al. [55] inves- tigated the crystallography of MgY ( c ), Mg 2 Y ( d ) and Mg 24 Y 5 ( e ) intermediate phases. The tangible homogeneity ranges of e and c determined by them will be compared with the current analysis. The d -phase was predicted as a stoichiometric compound in [50,54,55]. Their results do not agree with Flandorfer et al. [56] , who employed XRD, optical microscopy, and microprobe analyses to study the Ce–Mg–Y isothermal section at T = 773 K. Based on the experimental work of [56] , the homogeneity range of d -phase was obtained and will be compared with the current results. The X-ray diffraction analysis of Smith et al. [55] showed that c -phase has CsCl type structure, d -phase has MgZn 2 structure, and e -phase has a -Mn structure. Another investigation on the crystal structure of e -phase by Zhang and Kelly [57] using transmission electron microscopy (TEM) showed the same structure as found by Smith et al. [55] but with one difference in the occupying atoms at the 1068 M. Mezbahul-Islam et al. / J. Chem. Thermodynamics 40 (2008) 1064–1076 2a Wyckoff position. However, since the work of Zhang and Kelly [57] used TEM, it is used during modelling of the e -phase, because it is considered more precise than that of Smith et al. [55] where XRD was used. Thermodynamic investigations on the Mg–Y system were carried out by Fabrichnaya et al. [58] and Shakhshir and Me- draj [59] . Both of them reproduced the homogeneity range of the intermetallic phases using the required crystallographic informa- tion but with two different approaches. 
However, for this work the same models reported by Shakhshir and Medraj [59] was used since their modelling approach was the same as the other interme- tallic phases modelled in this work. 3.3.2. Thermodynamic data Agrawal et al. [60] measured calorimetrically the enthalpy of mixing of the liquid Mg–Y alloy near the Mg-rich region (up to 21.8 at.% Y) at different temperatures. Activities of Mg was mea- sured by Gansen et al. [49] using the vapour pressure technique which is in agreement with the results of Isper and Gansen [61] who used the same method for the measurement. The enthalpy of formation of all three compounds was determined calorimetri- cally by Pyagai et al. [62] . Their results are in reasonable agreement with the calorimetric data of Smith et al. [55] except for the c - phase, for which the value of Pyagai et al. [62] is twice more negative than that obtained by Smith et al. [55] . This is due to the difficulties in measuring the enthalpy of formation when the yttrium content increases and hence the reactions become more exothermic. Also, Y has a high melting point compared to Mg and this leads to the sublimation of Mg during fusion of the metals [60] . The experimental results for enthalpy of formation of the compounds in the Mg–Y system will be compared with the current modelling results. 3.4. Mg–Cu–Y ternary system Inoue et al. [3] , Ma et al. [4] , and Busch et al. [63] carried out some experimental investigations on the Mg–Cu–Y system to find the glass-forming ability at different compositions. However, their reported results cannot be used in this work since equilibrium con- ditions cannot be achieved during the preparation of amorphous material. Ganesan et al. [49] measured the enthalpy of mixing of the liquid by a calorimetric method along five different isopleths. Also, the activity of magnesium in the ternary liquid was reported by them. These results will be compared with the calculated values from the current work. Two ternary compounds of composition Y 2 Cu 2 Mg and YCu 9 Mg 2 were identified by Mishra et al. [64] and Solokha et al. [65] . How- ever no thermodynamic properties for these compounds could be found in the literature. For this reason, it was not possible to in- clude them in the present work by the conventional method. But for better understanding of the ternary system, these two com- pounds were included in the optimization by an alternative meth- od which will be discussed later. A thermodynamic calculation of the Mg–Cu–Y system was carried out by Palumbo et al. [66] . They proposed a new modelling approach for the description of the spe- cific heat of the liquid to include the glass transition phenomenon. The ternary compounds were not included in their assessment. A complete thermodynamic optimization for the Mg–Cu–Y ter- nary system is still unknown. Also, the liquid phases of the three constituent binary systems Mg–Cu, Cu–Y, and Mg–Y need to be remodelled in order to consider the presence of short range ordering. 4. Results and discussion 4.1. The Mg–Cu binary system 4.1.1. Phase diagram The calculated Mg–Cu phase diagram is shown in figure 1 which shows reasonable agreement with the experimental data from the literature. All the optimized parameters are listed in table 4 . The excess entropy term ( b ) for the Mg 2 Cu phase is 0.03 J mol 1 K 1 which is very close to zero. 
Although it is desirable to reduce the number of parameters, it is very difficult to get an exactly zero value for this term, because for modelling a stoichiometric phase using the FactSage software [69] one needs to provide some entropy value for the compound, and the change of entropy is calculated using the corresponding reaction. The congruent melting temperature of MgCu2 was calculated as 1061 K. The experimental values of this temperature reported in [17,18,11] are 1070 K, 1072 K, and (1066 ± 4) K, respectively. On the other hand, Jones [19] determined this melting temperature as 1092 K, which is not consistent with the other experimental values. This may be because it is an old measurement (1931). Hence, it was decided to be consistent with the most recent value reported by Bagnoud and Feschotte [11]. Also, the homogeneity range of MgCu2 is consistent with the experimental data of Bagnoud and Feschotte [11]. The remaining parts of the phase diagram are in fair agreement with the experimental data from the literature.

[FIGURE 1: calculated Mg–Cu phase diagram, temperature T/K against mole fraction Mg, with the Liquid, Cu (fcc), MgCu2, CuMg2 and Mg (hcp) phase fields.]

4.1.2. Thermodynamic properties

The calculated enthalpy of mixing, shown in figure 2a, is in good agreement with the experimental data. The calculated activity of Mg in the Mg–Cu liquid at T = 1100 K is shown in figure 2b, which agrees well with the experimental results from the literature [29–32]. The experimental data for the activity of Cu could not be found in the literature. Figure 2c shows good agreement between the calculated heats of formation of MgCu2 and Mg2Cu obtained in this study and the experimental results reported by King and Kleppa [35] and Eremenko et al. [36]. The measured values of Smith and Christian [37] are less negative than those calculated and also inconsistent with other experimental results.

FIGURE 2. Plot of calculated thermodynamic properties against mole fraction Mg with the literature values: (a) enthalpy of mixing in kJ mol^-1 for the Mg–Cu liquid at T = 1100 K: [29] at 1100 K, [31] at 1100 K, [33] at 1120 K and 1125 K, [34] at 1100 K. (b) Activity of Mg in the Mg–Cu liquid at T = 1123 K: [29] at 1100 K, [30] at 1123 K, [31] at 1100 K, [32] at 1100 K. (c) Enthalpy of formation in kJ mol^-1 for the stoichiometric compounds: [this work], [35], [36], [37].

[FIGURE 3: calculated Cu–Y phase diagram, temperature T/K against mole fraction Y, with the Cu6Y, Cu7Y2, Cu4Y, Cu2Y(h), Cu2Y(r), CuY and bcc-Y phases.]

TABLE 5. Optimized model parameters for the liquid, CuY, Cu2Y(h), Cu2Y(r), Cu4Y, Cu7Y2, and Cu6Y phases. Columns: Phase, Terms, a/(J mol^-1), b/(J mol^-1 K^-1).
Liquid: Δg_AB: 28718.77, 6.28.
Cu6Y, (Y%, Cu)2(Cu)5 sublattice model: g^i_AB: 6278.70, 0.84; g^j_AB: 6906.57, 2.09; 0G(Cu:Cu): 0, 0; 0G(Cu:Y): 65.8, 0; 0L(Y,Cu:Cu): 4794.8, 0.45.
CuY, Cu2Y(h), Cu2Y(r), Cu4Y, Cu7Y2: ΔG_f coefficients (a, b) = (22517.5, 3.311), (17416.2, 1.63), (21997.9, 2.44), (17888, 1.65), (18775.5, 1.73), respectively.
This is probably due to the inaccuracy involved in the vapour pressure measurement carried out by Smith and Christian [37].

4.2. The Cu–Y binary system

4.2.1. Phase diagram

The calculated Cu–Y phase diagram with the available experimental data from the literature is shown in figure 3. The solid solubilities of Y in Cu and Cu in Y are negligible and hence were not included in this work. The optimized parameters are shown in table 5. Except for a few discrepancies with the results of Domagala [38], the phase diagram shows reasonable agreement with all the other experimental points. The composition of the eutectic point near the Cu-rich side shows a small deviation from the experimental data, but the thermodynamic properties, especially the enthalpy of mixing near the Cu-rich side, show strong agreement with the experimental data; hence this amount of error is acceptable. It is worth noting that trying to be consistent with the experimental eutectic composition resulted in deviation from the experimental thermodynamic properties of the Cu–Y liquid. The solid-phase transformation Cu2Y(h) ⇌ Cu2Y(r) was included in the current assessment of this system, and the temperature and composition of the two eutectic points around this compound remained within the limits of the experimental errors.

FIGURE 4. Plot of calculated thermodynamic properties against mole fraction with literature values: (a) enthalpy of mixing in kJ mol^-1 for the Cu–Y liquid at T = 1410 K: [45] at 1373 K, [46] at 1410 K, [47] at 1963 K. (b) Activity of Cu in the Cu–Y liquid at T = 1623 K: [48] at 1623 K. (c) Enthalpy of formation in kJ mol^-1 for the stoichiometric compounds: [this work], [14], [42], [45].

4.2.2. Thermodynamic properties

The calculated enthalpy of mixing of the Cu–Y liquid at T = 1410 K in relation to the experimental results from the literature is shown in figure 4a. The calculated activity of Cu at T = 1623 K is shown in figure 4b, which is in good agreement with the experimental results of Berezutskii and Lukashenko [48] near the Y-rich corner. The curve shows some deviation from the experimental values between (20 and 35) at.% Y. However, the calculation of Ganesan et al. [49] showed very similar results to the present calculation. A comparison between the calculated enthalpy of formation for the stoichiometric compounds and other works is shown in figure 4c. Discrepancy can be seen between the different experimental works, which is not unexpected since both Cu and Y are highly reactive elements and it is very difficult to perform any kind of experimental investigation on this system. Nevertheless, the results obtained in this work lie within the error limits of the experimental measurements.

4.3. The Mg–Y binary system

4.3.1. Phase diagram

The calculated phase diagram is shown in figure 5 with the experimental data from the literature. The optimized parameters are listed in table 6. The homogeneity ranges of the ε, δ, and γ phases were reproduced using the same models reported by Shakhshir and Medraj [59] with fewer excess Gibbs free energy parameters.
The models reported by Shakhshir and Medraj [59] are acceptable since both crystallographic information and homogeneity range data were taken into consideration during the modelling process.

4.3.2. Thermodynamic properties

The calculated enthalpy of mixing at T = 984 K is shown in figure 6a with the available experimental data in the literature. The activity of Mg in liquid Mg–Y at T = 1173 K is shown in figure 6b, which shows very good agreement with the experimental works in [49,61]. A better fit with the experimental data than the calculations of [58,59] was achieved in this work. The calculated partial Gibbs free energy of mixing of Mg and Y in the Mg–Y liquid at T = 1173 K shows good agreement with the experimental results of [63], as shown in figure 6c. Figure 6d shows the calculated enthalpy of formation of the intermediate compounds in the Mg–Y system in relation to the experimental results from the literature. A good agreement between the calculated and the experimental data of Smith et al. [55] and Pyagai et al. [62] can be seen. However, the enthalpy of formation for the γ phase measured by Pyagai et al. [62] is not consistent with the experimental value of Smith et al. [55] and the calculated value in this work.

TABLE 6. Optimized model parameters for the liquid, hcp-Mg, β-Y, ε, δ, and γ phases in the Mg–Y system. Columns: Phase, Terms, A/(J mol^-1), b/(J mol^-1 K^-1).
Liquid: Δg_MgY: 13059.70, 6.45; g^i_MgY: 13394.56, 7.20; g^j_MgY: 6529.85, 1.26.
hcp-Mg: 0L, 1L, 2L: 12476.78, 7.49; 2724.56, 2.4; 8788.22, 2.
β-Y: 0L, 1L: 29760.18, 13.49; 2005.86, 1.5.
ε, (Mg%,Y)29(Y%,Mg)10(Mg)19: 0G(Mg:Mg:Mg): 1585, 0; 0G(Mg:Y:Mg): 5891.23, 0; 0G(Y:Y:Mg): 6000, 0; 0G(Y:Mg:Mg): 0, 0.
δ, (Mg%,Y)6(Y%,Mg)4(Mg)2: 0G(Mg:Mg:Mg): 2148.82, 0; 0G(Mg:Y:Mg): 8902.47, 0; 0G(Y:Y:Mg): 0, 0; 0G(Y:Mg:Mg): 0, 0; 0L(Mg,Y:Mg:Mg): 9006.45, 88.50; 0L(Mg,Y:Y:Mg): 641.82, 11.86; 0L(Mg:Mg,Y:Mg): 3910.07, 3.46; 0L(Y:Mg,Y:Mg): 9006.45, 88.50.
γ, (Mg%,Y)(Y%,Va): 0G(Mg:Y): 10727.25, 1.26; 0G(Mg:Va): 10464.50, 0.0; 0G(Y:Y): 13432.86, 0; 0G(Y:Va): 13483.55, 0; 0L(Mg,Y:Y): 15006.45, 16; 0L(Mg,Y:Va): 15006.45, 15; 0L(Mg:Y,Va): 5000, 7; 0L(Y:Y,Va): 5000, 7.

FIGURE 5. Plot of temperature against mole fraction yttrium to illustrate the calculated Mg–Y phase diagram: [50], one-phase region [50], two-phase region [50], [51], [55], [56].

FIGURE 6. Plot of calculated thermodynamic properties against mole fraction yttrium with literature values: (a) enthalpy of mixing in kJ mol^-1 for the Mg–Y liquid at T = 984 K: [60] at 1020 K, 1057 K, 984 K, 975 K and 955 K. (b) Activity of Mg in the Mg–Y liquid at T = 1173 K: [49] at 1173 K, [61] at 1173 K. (c) Partial Gibbs free energy of mixing of Mg and Y at T = 1173 K: [49]. (d) Enthalpy of formation in kJ mol^-1 for the stoichiometric compounds: [this work], [55], [62].
However, the results of Smith et al. [55] are more reliable since they used both the calorimetric and vapour pressure techniques in their investigation.

4.4. The Mg–Cu–Y system

A self-consistent thermodynamic database for the Mg–Cu–Y system has been constructed by combining the thermodynamic descriptions of the three constituent binaries Mg–Cu, Cu–Y, and Mg–Y. For the extrapolation of the ternary system, the Kohler geometric model [70] was used, since none of the three binary systems showed much dissimilarity in their thermodynamic characteristics.

4.4.1. Phase diagram

The liquidus projection shown in figure 7 was calculated using the FactSage software [69] with the optimized parameters of the three constituent binary systems, without introducing any ternary interaction parameters. The univariant valleys are shown by the heavier lines, and the arrows on these lines indicate the directions of decreasing temperature. There are four ternary eutectic points (E1 to E4), eight ternary quasi-peritectic points (U1 to U8) and three maximum points (m1 to m3) present in this system. A summary of the reactions at these invariant points is given in table 7.

4.4.2. Thermodynamic properties

Ganesan et al. [49] measured the enthalpy of mixing of the ternary Mg–Cu–Y liquid alloys by a calorimetric method along five different isopleths. Their results were compared with the present calculated enthalpy of mixing for three different sections, as shown in figures 8a to c. The initial discrepancy with the experimental results in figure 8c reflects the fact that the results of Ganesan et al. [49] for the enthalpy of mixing of the ternary alloys are not consistent with the experimental binary enthalpy of mixing for the Cu–Y liquid. The calculated activity of Mg in the ternary liquid alloy at T = 1173 K is shown in figure 8d, with the experimental data of Ganesan et al. [49]. The calculated values showed negative deviation from Raoult's law, unlike the measured activity, which showed positive deviation. The reason for this is not known. Nevertheless, the present calculated activity of Mg showed a similar trend to the one calculated by Ganesan et al. [49], who also could not explain this inconsistency.

FIGURE 7. Liquidus projection of the Mg–Cu–Y ternary system.

TABLE 7. Calculated equilibrium points and their reactions in the Mg–Cu–Y system (No.; reaction; T/K; type; composition in at.% Y / at.% Mg / at.% Cu):
1. Liquid ⇌ hcp-Mg + ε + CuMg2; 709.8; E1; 7.7 / 79.2 / 13.1
2. Liquid ⇌ δ + CuMg2 + CuY; 662.6; E2; 17.8 / 59.6 / 22.6
3. Liquid ⇌ Cu6Y + MgCu2 + Cu; 956.3; E3; 5.1 / 15.9 / 79
4. Liquid ⇌ γ + hcp-Y + CuY; 910.1; E4; 54.6 / 22.4 / 23
5. Liquid + ε ⇌ δ + Mg2Cu; 680.9; U1; 14.7 / 67.9 / 17.3
6. Liquid + Cu2Y ⇌ CuY + CuMg2; 672.89; U2; 17.4 / 58.3 / 24.3
7. Liquid + MgCu2 ⇌ CuMg2 + Cu2Y; 761.22; U3; 8.8 / 51.4 / 39.8
8. Liquid + Cu7Y2 ⇌ MgCu2 + Cu2Y; 849.4; U4; 9.8 / 40.6 / 49.6
9. Liquid + Cu4Y ⇌ MgCu2 + Cu7Y2; 957.4; U5; 10.6 / 25.2 / 64.2
10. Liquid + Cu4Y ⇌ Cu6Y + MgCu2; 961.4; U6; 5.8 / 15.9 / 78.3
11. Liquid + γ ⇌ δ + CuY; 794.23; U7; 27.3 / 52.2 / 20.5
12. Liquid + β-Y ⇌ hcp-Y + γ; 1038; U8; 54 / 32.1 / 13.9
13. Liquid ⇌ ε + Mg2Cu; 710.5; m1; 8.8 / 77.4 / 13.8
14. Liquid ⇌ MgCu2 + Cu4Y; 995.5; m2; 8.0 / 19.9 / 72.1
15. Liquid ⇌ γ + CuY; 918.3; m3; 49.6 / 26.1 / 24.3

4.5. Approximation of the Mg–Cu–Y system with ternary intermetallics

Two ternary compounds, Y2Cu2Mg and YCu9Mg2, were reported in the literature by Mishra et al. [64] and Solokha et al.
[65], but information on melting temperatures or the enthalpies of formation of these compounds could not be found in the literature. The only available information is the annealing temperatures of these two compounds. Annealing was used by Mishra et al. [64] and Solokha et al. [65] to grow single crystals in the samples. The reported annealing temperatures of Y2Cu2Mg and YCu9Mg2 are 900 K and 673 K, respectively. This means that these ternary compounds are stable at the annealing temperatures, and most probably they exist at low temperatures. In order to reflect this important information in our work, we have created an approximate thermodynamic model of the Mg–Cu–Y system with ternary intermetallics. The absence of experimental data on the melting temperatures and the enthalpies of formation of these compounds limited our thermodynamic model, but it could be useful for readers who are interested in the ternary phase equilibrium below T = 600 K. The liquidus surface in this model may have substantial deviations from reality because only indirect experimental data were available to approximate the melting temperatures of the ternary compounds. That forced us to make some assumptions based on the available experiments on amorphous Mg–Cu–Y alloys and the binary phase diagrams. Inoue et al. [3] and Ma et al. [4] studied the Mg–Cu–Y system to find suitable compositions for metallic glass. They used XRD and DSC analyses to examine different amorphous samples. During their experiments, proper equilibrium conditions did not prevail; hence their experimental results cannot be used directly in this work. But after reviewing their works [3,4], some information regarding the system can be obtained.

FIGURE 8. Plot of thermodynamic properties against mole fraction for the Mg–Cu–Y liquid: integral enthalpy of mixing in kJ mol^-1 for (a) the (Mg0.92Y0.08)1-xCux ternary liquid at T = 1023 K: [49] at 1023 K; (b) the (Cu0.1Mg0.9)1-xYx ternary liquid at T = 1023 K: [49] at 1023 K; (c) the (Cu0.33Y0.67)1-xMgx ternary liquid at T = 1107 K: [49] at 1107 K; (d) activity of Mg at T = 1173 K, dashed line: ideal mixing; [49].

FIGURE 9. Liquidus projection of the Mg–Cu–Y system with the ternary compounds (tentative).

TABLE 8. Calculated equilibrium points and their reactions in the Mg–Cu–Y system after including the ternary compounds (tentative) (No.; reaction; T/K; type; composition in at.% Y / at.% Mg / at.% Cu):
1. Liquid ⇌ hcp-Mg + ε + CuMg2; 709.8; E1; 7.7 / 79.2 / 13.1
2. Liquid ⇌ δ + CuMg2 + Y2Cu2Mg; 682.0; E2; 14.6 / 68.1 / 17.3
3. Liquid ⇌ CuMg2 + Cu2Y + Y2Cu2Mg; 760.8; E3; 9.0 / 51.6 / 39.4
4. Liquid ⇌ Cu6Y + MgCu2 + Cu; 956.3; E4; 5.1 / 15.9 / 79.0
5. Liquid ⇌ Cu2Y + CuY + Y2Cu2Mg; 1059.9; E5; 37.1 / 5.5 / 57.4
6. Liquid ⇌ hcp-Y + CuY + Y2Cu2Mg; 993.5; E6; 60.8 / 11.1 / 28.1
7. Liquid + δ ⇌ ε + CuMg2; 684.1; U1; 14.8 / 68.1 / 17.1
8. Liquid + Cu2Mg ⇌ Cu2Y + CuMg2; 761.2; U2; 8.8 / 51.7 / 39.5
9. Liquid + Cu7Y2 ⇌ MgCu2 + Cu2Y; 868.0; U3; 9.7 / 39.3 / 51.0
10. Liquid + Cu4Y ⇌ MgCu2 + Cu7Y2; 957.2; U4; 10.6 / 25.4 / 64.1
11. Liquid + Cu4Y ⇌ Cu6Y + MgCu2; 961.4; U5; 6.0 / 15.7 / 78.3
12. Liquid + β-Y ⇌ hcp-Y + γ; 1038; U6; 54.8 / 32.1 / 13.1
13. Liquid + γ ⇌ δ + Y2Cu2Mg; 900.2; U7; 26.6 / 60.7 / 12.7
14. Liquid ⇌ ε + CuMg2; 710.54; m1; 8.7 / 77.5 / 13.8
15. Liquid ⇌ CuMg2 + Y2Cu2Mg; 766.1; m2; 9.6 / 55.5 / 34.9
16. Liquid ⇌ MgCu2 + Cu4Y; 995.5; m3; 8.0 / 19.9 / 72.1
17. Liquid ⇌ Cu2Y + Y2Cu2Mg; 1062.4; m4; 35.5 / 6.4 / 58.1
18. Liquid ⇌ CuY + Y2Cu2Mg; 1136.3; m5; 46.6 / 6.8 / 46.6
19. Liquid ⇌ hcp-Y + Y2Cu2Mg; 1010.3; m6; 58.9 / 17.1 / 24.0
20. Liquid ⇌ γ + Y2Cu2Mg; 1013.4; m7; 45.1 / 38.2 / 16.7

FIGURE 10. Plot of temperature against mole fraction Mg to show isopleths (constant-composition sections) of the Mg–Cu–Y system at (a) 40 at.% Y and (b) 75 at.% Cu, showing the tentative melting temperatures of the Y2Cu2Mg and YCu9Mg2 compounds, respectively (tentative).

The DSC analysis of Ma et al. [4] shows that near the Mg65Cu25Y11 composition a ternary eutectic point exists at a temperature around 730 K. The same composition for the eutectic was reported earlier by Inoue et al. [3]. Consequently, it can be assumed that the actual eutectic point would be found near this composition at a similar temperature if proper equilibrium conditions are maintained. Based on this information, the heat and entropy of formation of the ternary compounds were varied until the eutectic composition and temperature were close to those reported by Inoue et al. [3], and the melting temperatures for the ternary compounds were estimated from the resulting liquidus surface. Based on the abovementioned assumptions, the liquidus projection was recalculated as shown in figure 9. The eutectic point (E2) is observed at a composition of Mg68Cu17.3Y14.6 and T = 682 K. The composition deviates from that of Inoue et al. [3] and Ma et al. [4] by about 8 at.% Cu and the temperature by around 50 K. A more accurate result could be obtained by introducing some ternary interaction parameters, but since the experimental data were not for equilibrium conditions, it was decided to accept the current calculation without using any interaction parameter. All the invariant points calculated after incorporating the ternary compounds are listed in table 8. The melting temperature of the Y2Cu2Mg compound was adjusted to be 1256 K by a trial-and-error method, so that the eutectic composition and temperature lie in the desired range.
The compound melts congruently at this temperature, as can be seen in figure 10a. The melting temperature of YCu9Mg2 should be lower than that of Y2Cu2Mg, since the overall Y content is lower (8 versus 40 at.% Y). The reported annealing temperature (673 K), which is lower than that of Y2Cu2Mg, also supports this assumption. The XRD analysis of Ma et al. [4] on the alloy composition Mg58.5Cu30.5Y11 shows the existence of Mg2Cu, Mg24Y5 and one unidentified phase. It may be assumed that this unidentified phase is in fact the Y2Cu2Mg compound. To be consistent with this information, the melting temperature of YCu9Mg2 was adjusted to be 852 K. A vertical section for 75 at.% Cu in figure 10b shows that this compound melts incongruently. The presence of the incongruently melting binary compound (Cu6Y) near YCu9Mg2 also justifies this. The above discussion shows the legitimacy of the current work. The analysis may not be totally accurate, but at least it will give a closer approximation of the actual equilibrium in the Mg–Cu–Y system. Some key experiments on this system may resolve this uncertainty. This should be attempted during further studies on this system.

5. Concluding remarks

A comprehensive thermodynamic assessment of the ternary Mg–Cu–Y system was conducted using available experimental data. Based on the current assessment, the following conclusions can be drawn:

- The modified quasichemical model was used to describe the liquid phase in the Mg–Cu–Y system. The calculated phase diagrams of the binary systems as well as thermodynamic properties such as activity, enthalpy of mixing, and enthalpy of formation of the compounds show good agreement with the experimental data available in the literature.
- A self-consistent database for the Mg–Cu–Y system was constructed by combining the optimized parameters of the three constituent binary systems. No ternary interaction parameters were used for the extrapolation. The presence of two ternary compounds was included in the optimization considering the limited experimental information available in the literature for these compounds.
- More experimental investigation is required to obtain detailed information regarding the two ternary compounds (Y2Cu2Mg and YCu9Mg2). The melting temperatures of these two compounds should be determined experimentally, which is very important to establish a more accurate assessment of the Mg–Cu–Y system. Also, all the predicted invariant points in the Mg–Cu–Y ternary system are to be verified experimentally. The present work can be used to design key experiments for further verification of this system.

References

[1] J.M. Kim, K. Shin, K.T. Kim, W.J. Jung, Scripta Mater. 49 (2003) 687–691.
[2] R.H. Tendler, M.R. Soriano, M.E. Pepe, J.A. Kovacs, E.E. Vicente, J.A. Alonso, Intermetallics 14 (2006) 297–307.
[3] A. Inoue, A. Kato, T. Zhang, S.G. Kim, T. Masumoto, Mater. Trans. 32 (1991) 609–616.
[4] H. Ma, Q. Zheng, J. Xu, Y. Li, E. Ma, J. Mater. Res. 20 (2005) 2225–2252.
[5] H. Men, W.T. Kim, D.H. Kim, J. Non-Cryst. Solids 337 (2004) 29–35.
[6] A.T. Dinsdale, CALPHAD 15 (1991) 317–425.
[7] O. Redlich, A.T. Kister, J. Ind. Eng. Chem. 40 (1948) 341–345.
[8] A.D. Pelton, S.A. Degterov, G. Eriksson, C. Robelin, Y. Dessureault, Metall. Mater. Trans. B 31 (2000) 651–659.
[9] P. Chartrand, A.D. Pelton, Metall. Mater. Trans. A 32 (2001) 1397–1407.
[10] A.D. Pelton, P. Chartrand, G. Eriksson, Metall. Mater.
Trans. A 32 (2001) 1409–1416.
[11] P. Bagnoud, P. Feschotte, Z. Metallkd. 69 (1978) 114–120.
[12] K.H.J. Buschow, A.S. Goot, Acta Crystallogr. B 27 (1971) 1085–1088.
[13] C. Wagner, W. Schottky, J. Phys. Chem. B 11 (1930) 163–210.
[14] S.G. Fries, H.L. Lukas, R. Konetzki, R. Schmid-Fetzer, J. Phase Equilibr. 15 (1994) 606–614.
[15] K.C. Kumar, P. Wollants, J. Alloys Compd. 320 (2001) 189–198.
[16] O. Boudouard, Comptes Rendus 135 (1902) 794–796.
[17] R. Sahmen, Z. Anorg. Allg. Chem. 57 (1908) 1–
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8738157153129578, "perplexity": 4208.669358122779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141191511.46/warc/CC-MAIN-20201127073750-20201127103750-00208.warc.gz"}
https://authorea.com/doi/full/10.22541/au.159618101.13945018
Almost sharp global well-posedness for the defocusing Hartree equation with radial data in $\mathbb{R}^5$

Xingjian Mu (Beijing Normal University), Xingdong Tang (Nanjing University of Information Science and Technology), Guixiang Xu (Beijing Normal University), Jianwei Yang (Beijing Institute of Technology)

## Abstract

We show global well-posedness and scattering for the defocusing, energy-subcritical Hartree equation \begin{equation*} iu_t + \Delta u = F(u), \qquad (t,x)\in\mathbb{R}\times\mathbb{R}^5, \end{equation*} where $F(u)= \big( V* |u|^2 \big) u$, $V(x)=|x|^{-\gamma}$, $3< \gamma< 4$, and the initial data $u_0(x)$ is radial and lies in the almost sharp Sobolev space $H^{s}\left(\mathbb{R}^5\right)$ for $s>s_c=\gamma/2-1$. The main difficulty is the lack of the conservation law. The main strategy is to use the I-method together with the radial Sobolev inequality, the interaction Morawetz estimate, the long-time Strichartz estimate and the local smoothing effect to control the energy transfer of the solution and obtain the increment estimate of the modified energy $E(Iu)(t)$.
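For orientation, the critical exponent $s_c=\gamma/2-1$ quoted above is the one dictated by the scaling symmetry of the equation; a standard computation (sketched here, not taken from the abstract) goes as follows. If $u$ solves the equation on $\mathbb{R}^d$, then so does
\begin{equation*}
u_\lambda(t,x)=\lambda^{\frac{d+2-\gamma}{2}}\,u(\lambda^{2}t,\lambda x),
\end{equation*}
and the rescaled datum satisfies
\begin{equation*}
\|u_\lambda(0)\|_{\dot H^{s}}=\lambda^{\,s-(\gamma/2-1)}\,\|u(0)\|_{\dot H^{s}},
\end{equation*}
so the $\dot H^{s}$ norm is scale-invariant exactly at $s=s_c=\gamma/2-1$, which lies in $(1/2,1)$ for $3<\gamma<4$ (in particular, below the energy regularity $s=1$).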
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8677259087562561, "perplexity": 3335.5198549054267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401632671.79/warc/CC-MAIN-20200929060555-20200929090555-00507.warc.gz"}
https://www.physicsforums.com/threads/a-spring-is-compressed-by-90-cm.583786/
# A spring is compressed by 90 cm

1. Mar 4, 2012

### ScienceGeek24

1. The problem statement, all variables and given/known data

A spring is compressed by 90 cm at the bottom of a frictionless hill of height 18 m. The spring is released and propels a 2 kg block over the hill. At the top of the hill, the block's speed is 6 m/s. Find the force constant (spring constant) of the spring.

2. Relevant equations

1/2 kx^2, 1/2 mv^2 and mgy

3. The attempt at a solution

mgy = 1/2 kx^2 + 1/2 mv^2. I solved for k and it did not give me the right answer. What am I doing wrong? Is this the right set-up?

2. Mar 4, 2012

### spf

At the beginning, the whole energy of the system is in the elastic energy of the spring. When the block is at the top of the hill, the energy is in the potential energy of the block and the kinetic energy of the block. Thus, it's NOT the potential energy which equals the sum of the elastic energy and the kinetic energy as you have written, BUT the elastic energy which equals the sum of the potential energy and the kinetic energy.

3. Mar 4, 2012

### ScienceGeek24

Thanks man, it worked!

4. Mar 4, 2012

5. Mar 4, 2012

### ScienceGeek24

But I also forgot to mention the second part of the question, which asks: what is the smallest compression of the spring that will allow the block to reach the top of the hill and not slide down? I tried Fs = -k Δx and it did not give me the right answer. Then I tried the same thing from part 1, solving for x, and nope.

6. Mar 4, 2012

### spf

The only difference to part 1 of the question is that now the speed of the block at the top of the hill is not 6 m/s but... how much?

7. Mar 4, 2012

### ScienceGeek24

0 m/s, yes!! Thanks man!
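As a quick check with the numbers given in the thread (taking g ≈ 9.8 m/s², m = 2 kg, h = 18 m, v = 6 m/s, x = 0.90 m), the energy balance from post #2 gives the spring constant directly:

$$\tfrac12 kx^2 = mgh + \tfrac12 mv^2 \;\Longrightarrow\; k=\frac{2mgh+mv^2}{x^2}=\frac{2(2)(9.8)(18)+(2)(6)^2}{(0.90)^2}=\frac{777.6}{0.81}\approx 9.6\times 10^{2}\ \text{N/m},$$

and for the second part, with the speed at the top set to 0 m/s,

$$x_{\min}=\sqrt{\frac{2mgh}{k}}=\sqrt{\frac{705.6}{960}}\approx 0.86\ \text{m}.$$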
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9579818844795227, "perplexity": 566.4977681355898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686983.8/warc/CC-MAIN-20170920085844-20170920105844-00371.warc.gz"}
http://mathhelpforum.com/advanced-algebra/106780-closed-subsets.html
# Math Help - Closed Subsets

1. ## Closed Subsets

Give and prove an example of a subset A $\subseteq$ $\mathbb{R} ^{2}$ which is closed under scalar multiplication, but not under vector addition. Give and prove an example of a subset A $\subseteq$ $\mathbb{R} ^{2}$ which is closed under vector addition but not under scalar multiplication.

2. Originally Posted by studentmath92
Give and prove an example of a subset A $\subseteq$ $\mathbb{R} ^{2}$ which is closed under scalar multiplication, but not under vector addition.

A = { (x,y) ; |x| - |y| = 0 }. Clearly closed under scalar multiplication, but (1,-1) and (1,1) belong to A while (1,-1) + (1,1) = (2,0) doesn't.

Give and prove an example of a subset A $\subseteq$ $\mathbb{R} ^{2}$ which is closed under vector addition but not under scalar multiplication.

Try this one by yourself. It's easier than the above one.

Tonio

3. Originally Posted by studentmath92
Give and prove an example of a subset A $\subseteq$ $\mathbb{R} ^{2}$ which is closed under scalar multiplication, but not under vector addition. Give and prove an example of a subset A $\subseteq$ $\mathbb{R} ^{2}$ which is closed under vector addition but not under scalar multiplication.

1) $A=\{(x,0)|x\in \mathbb{R}\} \cup \{(0,y)|y\in \mathbb{R}\}$ See why it's closed under scalar multiplication? But, for instance, $(0,1)+(1,0)=(1,1)$ is not in there.

2) Let A be the set of vectors with integer entries. So (1,1) is in there, and it is clear that it's closed under addition since the integers are. But take 0.5(1,1) and it isn't in there anymore.
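As a quick verification of the "clearly closed under scalar multiplication" claims above (a short added check, following the same notation):

$$(x,y)\in\{(x,y)\,:\,|x|=|y|\},\ c\in\mathbb{R}\;\Longrightarrow\;|cx|=|c|\,|x|=|c|\,|y|=|cy|,$$

so $c(x,y)=(cx,cy)$ again lies in the set; likewise $c(x,0)=(cx,0)$ and $c(0,y)=(0,cy)$ stay on the union of the two coordinate axes.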
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9565885066986084, "perplexity": 275.6799099629257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928869.6/warc/CC-MAIN-20150521113208-00211-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-6-systems-of-equations-and-inequalities-6-2-solving-systems-using-substitution-practice-and-problem-solving-exercises-page-376/26
## Algebra 1: Common Core (15th Edition)

$y=0.5x+3\\ 2y-x=6\\ 2(0.5x+3)-x=6\\ x+6-x=6\\ 6=6$

Substituting the first equation into the second reduces it to the identity $6=6$, which holds for every value of $x$; hence every point on the line $y=0.5x+3$ satisfies both equations, and the system has infinitely many solutions.
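Equivalently, the identity appears because the two equations describe the same line:

$$2y-x=6 \iff y=\frac{x+6}{2}=0.5x+3.$$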
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8822259306907654, "perplexity": 327.7556839173813}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825123.5/warc/CC-MAIN-20181214001053-20181214022553-00527.warc.gz"}
https://www.physicsforums.com/threads/find-the-n-value-of-a-function.834213/
# Find the n value of a function

1. Sep 24, 2015

### chwala

1. Given a curve y = x^n, where n is an integer. If the curve passes between the points (2,200) and (2,2000), determine the values of n.

2. Relevant equations

3. The attempt at a solution: the n value by trial and error can be 2^8 = 256, 2^9 = 512 and 2^10 = 1024. The correct answer to this problem is n = 9. How did they arrive at the answer?

2. Sep 24, 2015

### SteamKing

Staff Emeritus
The two points (2, 200) and (2, 2000) represent a vertical line. Have you copied these coordinates correctly?

3. Sep 24, 2015

### chwala

Yes, the question has been copied correctly from Pure Mathematics 1, Hugh Neill and Douglas Quadling, Cambridge, page 49, number 3.

4. Sep 24, 2015

### chwala

The two points don't represent a line.

5. Sep 24, 2015

### andrewkirk

Not necessarily. Say rather 'The author thought the correct answer was n=9'. 29 times out of 30 book answers are correct. The rest of the time they're not. Physicists are woefully underpaid and overworked and don't have time to check their drafts as meticulously as they'd like. This may be the 30th case, or it may be one of the 29. What were the exact words of the question?

6. Sep 24, 2015

### chwala

Note the term 'between' and not 'through'; I may refer you to check the problem online.

7. Sep 24, 2015

### Ray Vickson

There is no curve in the world of the form $y = x^n$ that goes through the two points $(x,y) = (2,200)$ and $(x,y) = (2,2000)$. Both of these points have the same $x$-value, so how could they possibly have different values of $y$? The short answer is: they can't.

8. Sep 24, 2015

### chwala

Sorry, don't forget that it is 2 raised to a power n and not 2 raised to 0. The x values are not 2 and 2 respectively, but the y values are 200 and 2000 respectively.

9. Sep 24, 2015

### HallsofIvy

Staff Emeritus
Frankly, it looks like you don't know what you are talking about. $y= x^n$, for any n, is a function. It cannot give two different values of y for x = 2. And now you say "The x values are not 2 and 2 respectively". What?? You said the graph passes through (2, 200) and (2, 2000). Both those points have x = 2. You say "the correct answer is 9". That obviously is NOT correct, $2^9= 512$, not 200 or 2000.

10. Sep 24, 2015

### chwala

The curve passes between (2,200) and (2,2000), whereby these two points lie on the curve y = x raised to n; obviously the first x value should be 2 raised to 7..., where y = 200, for the first point, and similarly for the second point.

11. Sep 24, 2015

### chwala

n is an integer value, sorry...

12. Sep 24, 2015

### chwala

Am using a smartphone.

13. Sep 24, 2015

### Ray Vickson

I did not forget anything! YOU were the one that wrote (2,200), not me; and in Mathematics, the notation (2,200) stands for a point in the xy-plane where x = 2 and y = 200. If you mean $2^{200}$ then you must write it correctly, as 2^200. Anyway, assuming you meant $2^{200}$ and $2^{2000}$, there is still not enough information to solve the problem. Is $2^{200}$ a value of $y$, and if it is, what is the corresponding value of $x$? Is $2^{200}$ a value of $x$, and if so, what is the corresponding value of $y$? Or, do you mean that the curve goes through the point $(2^{200},2^{2000})$? If that is what you meant, then that should be what you write; in plain text it can be typed as (2^200,2^2000).

14. Sep 24, 2015

### chwala

Thanks to all of you. I agree that there's a problem with the question, it can't be solved. I copied it accurately from the textbook. Regards
15. Sep 24, 2015

### HallsofIvy

Staff Emeritus
What are you talking about? $2^7= 128$, NOT 200, so you cannot mean that "$y= x^7$", which would be saying $200= 2^7$, which is NOT TRUE! Again, if $y= 2^x$ then 2 is raised to the same value, not two different values. Perhaps you are stating the problem incorrectly.

16. Sep 24, 2015

### andrewkirk

I think you misread the question. He wrote 'the curve passes between the points (2,200) and (2,2000)' not 'the curve passes through the points (2,200) and (2,2000)'. In other words, if we denote the curve by f(x), he is saying that 200<f(2)<2000. The question makes sense (although it should say 'determine the possible values of n', not just 'determine the values of n'), as does chwala's concern about the answer in the book. The possible values of n are 8, 9 and 10, but the book suggests (I suspect erroneously) that only 9 is possible. Communication seems to be hampered by the fact that chwala's smartphone doesn't seem to interact reliably with the PF site.

17. Sep 24, 2015

### chwala

I have managed to solve the problem as follows: y = x^n, 200 = 2^n, giving us log 200 = n log 2, n = 7.6, implying that since n is an integer, and we are looking for an n value passing between (2,200) and (2,2000), then n cannot be equal to 8. Again 2000 = 2^n, log 2000 = n log 2, n = 10.9, implying that the n value we are looking for cannot be equal to 10; thus n = 9.

18. Sep 24, 2015

### andrewkirk

That doesn't follow. It is not $n$ that is required to pass between those two points, but the curve $f_n(x)=x^n$. The curve $f_8(x)=x^8$ does that, as does $f_9(x)=x^9$ and $f_{10}(x)=x^{10}$. Again I ask: What were the exact words of the question in the book?

19. Sep 25, 2015

### chwala

Those were the exact words. I have quoted the textbook and author and the page in my previous posts. Regards

20. Sep 25, 2015

### HallsofIvy

Staff Emeritus
This is what you wrote initially: "Given a curve y = x^n, where n is an integer. If the curve passes between the points (2,200) and (2,2000), determine the values of n." Perhaps we have been misinterpreting this. I thought you were asking for a curve that passed through the points (2, 200) and (2, 2000). Apparently you just want an integer value of n such that 2^n is somewhere between 200 and 2000. Yes, the "values (plural) of n" that satisfy that are 8, 9, and 10. 2^7 = 128, which is less than 200, and 2^11 = 2048, which is larger than 2000. If that is what the question is asking, then the answers are 8, 9, and 10, not just "9".
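In summary, the arithmetic behind posts #16 and #20: the condition is $200 < 2^n < 2000$ with $n$ an integer, and

$$2^7=128<200<2^8=256 \qquad\text{while}\qquad 2^{10}=1024<2000<2^{11}=2048,$$

so the admissible integers are exactly $n\in\{8,9,10\}$.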
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8252367377281189, "perplexity": 965.1609491811433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109803.8/warc/CC-MAIN-20170822011838-20170822031838-00213.warc.gz"}
http://clay6.com/qa/26589/find-the-angle-between-the-lines-whose-slope-are-large-frac-and-7-
# Find the angle between the lines whose slopes are $\large\frac{3}{2}$ and 7?

$\begin{array}{1 1}(a)\;\theta=\tan^{-1}\large\frac{11}{24}&(b)\;\theta=\tan^{-1}\large\frac{11}{23}\\(c)\;\theta=\tan^{-1}\large\frac{12}{23}&(d)\;\theta=\tan^{-1}\large\frac{10}{23}\end{array}$

$\tan \theta=\bigg| \large\frac{m_1-m_2}{1+m_1m_2}\bigg|$

$\qquad=\bigg|\large\frac{7-\Large\frac{3}{2}}{1+\Large\frac{7\times 3}{2}}\bigg|=\bigg|\large\frac{11/2}{23/2}\bigg|$

$\tan \theta=\large\frac{11}{23}$

$\theta=\tan^{-1}\large\frac{11}{23}$

Hence (b) is the correct option.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9582383632659912, "perplexity": 321.6197103336974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807660.32/warc/CC-MAIN-20180217185905-20180217205905-00012.warc.gz"}
https://physics.stackexchange.com/questions/277479/accelerating-a-magnetic-dipole-through-a-uniform-magnetic-field
# Accelerating a Magnetic dipole through a uniform magnetic field

If an (uncharged) magnetic dipole travels through a uniform magnetic field with uniform velocity perpendicular to the direction of the B field (lines), it experiences no force. But if I somehow ACCELERATE the magnetic dipole through the same uniform B field, will it experience in its frame a magnetic field gradient and therefore experience a force? IOW, under acceleration does it see a spatial gradient dB/dr... and thus experience a force in the direction of the negative gradient? And if so, can someone derive or supply the equation of motion please. Thanks.

I could give you an answer only for such elementary particles as the neutron and the electron.

Lorentz force for electrons

From your question it seems clear that you are familiar with this phenomenon. To go over to the neutron's behaviour, first I have to explain the mechanism behind this deflection. Perhaps you know that electrons have an intrinsic magnetic dipole moment and, parallel to this moment, an intrinsic spin. If an electron is under the influence of an external magnetic field, the electron's magnetic dipole moment gets aligned. If this happens with a moving electron (not parallel to the external field), the gyroscopic effect takes place and the electron gets deflected sideways. But any deflection is, in physics, an acceleration, and as has been observed many times, an acceleration is accompanied by the emission of photons. Photons have momentum, and the emission from the electron misaligns the electron's magnetic dipole moment again. The game starts again until the electron comes to rest with respect to the external magnetic field.

Force on a neutron from an external magnetic field

What is the difference between a neutron and an electron? The neutron has no electrical charge, and the values of the intrinsic spin and the magnetic dipole moment are different (but they exist!). Experiments with neutrons are more difficult because of their production, their high velocity, and their mass, roughly 1800 times that of the electron. But due to the neutron's intrinsic properties and kinetic energy, it will be deflected in an external magnetic field.
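For reference (a standard relation, not part of the answer above): the textbook force on a point dipole with moment $\mathbf m$ in a magnetostatic field is

$$\mathbf F=\nabla\left(\mathbf m\cdot\mathbf B\right),$$

which vanishes when $\mathbf B$ is strictly uniform; whether an accelerated dipole effectively sees a gradient in its instantaneous rest frame is exactly the open point of the question.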
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896902203559875, "perplexity": 283.5724010369788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488870.33/warc/CC-MAIN-20191206145958-20191206173958-00275.warc.gz"}
https://tex.stackexchange.com/questions/245153/split-within-a-matrix-environment
# Split within a matrix environment

I have an array that looks like this:

I'm not sure how I can add the curly brackets and text there. I formatted the array like this:

\begin{equation*}
  \begin{array}{r*{10}{l}}
    \trom{1} & 2 + x_1 -2 = x_1 & \\
    \trom{2} & 1 + 2s + 0t = x_2 & \\
             & 1 + 2s = x_2 \\
    \trom{3} & 0 + 2s + 3t = x_3 \\
             & \text{Some more calculations}
  \end{array}
\end{equation*}

I tried to use the split environment within the array environment:

\begin{equation*}
  \begin{array}{r*{10}{l}}
    \left . \begin{split}
      \trom{1} & 2 + x_1 -2 = x_1 & \\
      \trom{2} & 1 + 2s + 0t = x_2 & \\
               & 1 + 2s = x_2 \\
    \end{split} \right \} \textit{Test....} \\
    \trom{3} & 0 + 2s + 3t = x_3 \\
             & \text{Some more calculations}
  \end{array}
\end{equation*}

But that messes up the "grid":

How can I use the split environment within an array environment? If that's not possible (or if there's a more elegant way), please post it instead.

split is not intended to be used in that way, you really want aligned instead.

\documentclass{article}
\usepackage{mathtools}
\newcommand{\trom}[1]{\makebox[0pt][r]{\MakeUppercase{\romannumeral #1}}}
\begin{document}
\begin{equation*}
\begin{split}
&\!\left . \!\begin{aligned}
\trom{1}\quad & 2 + x_1 -2 = x_1 \\
\trom{2}\quad & 1 + 2s + 0t = x_2 \\
& 1 + 2s = x_2 \\
\end{aligned} \right \} \textit{Test....} \\
&\trom{3}\quad 0 + 2s + 3t = x_3 \\
\end{split}
\end{equation*}
\end{document}

• Thanks. That's what I've been looking for. I still have a question, though. Why did you add \!? A google search for Latex exclamation point|mark didn't turn up anything useful. – Matt3o12 May 15 '15 at 17:54
• Never mind. I just found it in another answer. – Matt3o12 May 15 '15 at 18:20

Here is another option using arrays:

\documentclass{article}
\usepackage{amsmath}
\newcommand{\toRoman}[1]{\text{\MakeUppercase{\romannumeral #1}}}
\begin{document}
\begin{equation*}
\renewcommand{\arraystretch}{1.2}% http://tex.stackexchange.com/a/31704/5764
\begin{array}{r l}
\multicolumn{2}{l}{\left.\kern-\nulldelimiterspace\begin{array}{@{}r l}
\hphantom{\toRoman{3}}\makebox[0pt][r]{\toRoman{1}} & 2 + x_1 -2 = x_1 \\
\toRoman{2} & 1 + 2s + 0t = x_2 \\
& 1 + 2s = x_2
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.903127908706665, "perplexity": 1094.2281828927707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488870.33/warc/CC-MAIN-20191206145958-20191206173958-00409.warc.gz"}
https://2019.ieee-isit.org/Papers/ViewPaper.asp?PaperNum=1822
# Technical Program

## Paper Detail

Paper Title: On Coded Caching with Correlated Files (TU1.R1.4)
Authors: Kai Wan, Technische Universität Berlin, Germany; Daniela Tuninetti, University of Illinois at Chicago, United States; Mingyue Ji, University of Utah, United States; Giuseppe Caire, Technische Universität Berlin, Germany
Session: Coded Caching II
Room: Le Théatre (Parterre), Level -1
Tuesday, 09 July, 09:50 - 11:10
Tuesday, 09 July, 10:50 - 11:10

This paper studies the fundamental limits of the shared-link caching problem with correlated files, where a server with a library of $N$ files communicates with $K$ users who can store $M$ files. Given an integer $r \in [N]$, correlation is modelled as follows: each $r$-subset of files contains one and only one common block. The tradeoff between the cache size and the average transmitted load is considered. First, a converse bound under the constraint of uncoded cache placement (i.e., each user directly caches a subset of the library bits) is derived. Then, an interference alignment scheme is proposed. The proposed scheme achieves the optimal average load under uncoded cache placement to within a factor of 2 in general, and it is exactly optimal for (i) users demanding distinct files, (ii) large or small cache size, namely $KrM/N \leq 2$ or $KrM/N \geq K-1$, and (iii) large or small correlation, namely $r \in \{1,2,N-1,N\}$. As a by-product, the proposed scheme reduces the (worst-case or average) load of existing schemes for the caching problem with multi-requests.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.891128659248352, "perplexity": 1335.8334995270957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347401004.26/warc/CC-MAIN-20200528232803-20200529022803-00437.warc.gz"}
http://www-m3.ma.tum.de/Allgemeines/JoernBehrensProjInvTrans
# Adaptive Grid Generation in Inverse Atmospheric Transport Problems

Author: Dr. Jörn Behrens

## Introduction

Inverse atmospheric modeling (often referred to as adjoint modeling) plays a crucial role in atmospheric sciences. In complex modeling environments, adjoint models allow for accurate data assimilation. In contaminant dispersion modeling, inverse methods allow for back-tracking of measured constituents to their sources.

## Idea

Adaptive mesh refinement allows for high local resolution. Therefore, we anticipate a good representation of point sources or fine-scale phenomena (like filamentation or mixing/stirring phenomena) when combining adaptive mesh refinement with inverse modeling techniques.

## Example

In order to give a first simple example, we consider the (linear) advection equation and derive the adjoint. It turns out that the adjoint of the advection operator is the advection operator with inverted wind component:

• start from the advection operator: $\mathcal{D}(\cdot):= \partial_t(\cdot)+\nabla\cdot(\mathbf{v}(\cdot)),$
• now, the adjoint formulation: $\langle \mathcal{D}\phi, \psi \rangle = \langle \phi, \mathcal{D}^\ast\psi \rangle,$
• finally, the resulting adjoint operator (the short derivation is sketched below): $\mathcal{D}^\ast(\cdot)= -\partial_t(\cdot)- \mathbf{v}\cdot\nabla(\cdot).$

Thus, starting from a given distribution of an atmospheric tracer, we can now compute the source of its emission. The pictures below show the initial distribution and the corresponding source (the TUM logo), together with the corresponding adaptively refined meshes.

## Figures

Figure 1 - The wind field
Figure 2 - The initial situation (measured tracer distribution)
Figure 3 - The final situation (source of tracer emission)
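For completeness, the integration by parts behind the adjoint operator quoted in the Example section (a sketch; it assumes that the boundary and initial/final-time terms vanish):

$$\langle \mathcal{D}\phi,\psi\rangle=\int\!\!\int \big(\partial_t\phi+\nabla\cdot(\mathbf{v}\phi)\big)\,\psi \;dx\,dt =\int\!\!\int \phi\,\big(-\partial_t\psi-\mathbf{v}\cdot\nabla\psi\big)\;dx\,dt =\langle \phi,\mathcal{D}^\ast\psi\rangle,$$

which is exactly the statement that the adjoint problem is advection with the wind (and time direction) reversed.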
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9213099479675293, "perplexity": 2702.403474163251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658844.27/warc/CC-MAIN-20190117062012-20190117084012-00141.warc.gz"}
https://www.math.ias.edu/seminars/abstract?event=26388
Lower Bounds for Circuits with MOD_m Gates COMPUTER SCIENCE/DISCRETE MATH II Topic: Lower Bounds for Circuits with MOD_m Gates Speaker: Arkadev Chattopadhyay Affiliation: Member, School of Mathematics Date: Tuesday, October 14 Time/Room: 10:30am - 12:30pm/S-101 A celebrated theorem of Razborov/Smolensky says that constant depth circuits comprising AND/OR/MOD_{p^k} gates of unbounded fan-in, require exponential size to compute the MAJORITY function if p is a fixed prime and k is a fixed integer. Extending this result to the case when p is a number having more than one distinct prime factor (like 6) remains a major open problem. In particular, it remains consistent with our knowledge that every problem in NP has linear size depth-three circuits comprising only MOD_6 gates. We go through some approaches for making progress on this problem. In the process, a diverse set of techniques are introduced that are interesting in their own right.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8385869264602661, "perplexity": 748.5718347571975}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156857.16/warc/CC-MAIN-20180921053049-20180921073449-00513.warc.gz"}
http://www.lampos.net/latex-presentation
# LaTeX beamer how-to Why not produce a presentation using a WYSIWYG editor (like MS PowerPoint)? For the same reasons you won't write your publications or books in MS Word. Coding your presentation in LaTeX with the 'help' of the beamer class makes the task easy, especially if you already have some experience with TeX scripting. Beamer is a LaTeX class which provides a lot of easy-to-deploy commands, tricks, templates and libraries for producing presentations. I was searching the web for an easy-to-follow, easy-to-use application of it, but this process wasn't very straightforward, so when I finally compiled a LaTeX presentation, I thought it might be useful to make the source code publicly available. Therefore, the attached documents are only good for beamer beginners (since they are the result of my first LaTeX presentation tryout). If you are using MiKTeX in WinEdt, my source code should be plug and play! There is no need to download the beamer class; it will be fetched for you automatically at compilation time. My sense is that in any modern LaTeX setup scenario the .tex source file provided (see below) should compile OK. Try it for yourself and let me know if there's a problem. All the files for this sample presentation (a .tex file and 2 figures) are included in this archive: Presentation using beamer class in LaTeX The final outcome, in case you want to check first what you can get out of LaTeX, is this pdf file: LaTeX presentation pdf
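Since the linked archive may not be to hand, here is a minimal beamer skeleton for orientation — an illustrative sketch only, not the author's attached source; the theme name Madrid is just one of the stock choices, and any compilation route (pdflatex under MiKTeX/WinEdt, TeX Live, etc.) should handle it:

    \documentclass{beamer}
    \usetheme{Madrid}        % any stock theme works; try 'default' or 'Berlin' too

    \title{A Minimal Beamer Example}
    \author{Your Name}
    \date{\today}

    \begin{document}

    % title slide
    \begin{frame}
      \titlepage
    \end{frame}

    % an ordinary content slide
    \begin{frame}{First slide}
      \begin{itemize}
        \item Bullet points work as in any LaTeX list.
        \item Inline math works too: $e^{i\pi}+1=0$.
      \end{itemize}
    \end{frame}

    \end{document}

Compiling this produces a two-slide PDF; figures and overlays can then be added frame by frame.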
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.940464973449707, "perplexity": 1603.1926027873292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720468.71/warc/CC-MAIN-20161020183840-00368-ip-10-171-6-4.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/858641/squeeze-theorem-with-factorial/858649
# Squeeze Theorem with Factorial Is it correct to squeeze $\frac{3n}{(n+1)!}$ between $0$ and $\frac{1}{n!}$? The proposed left side makes logical sense to me; however, bounding the right-hand side to prove the limit goes to $0$ is giving me a bit more trouble. - Write $3n=3(n+1)-3$. –  Adam Hughes Jul 7 '14 at 5:22 Note that since $n<n+1$, one has that $$0\le \frac{3n}{(n+1)!} \le \frac{3}{n!}$$ Using the Squeeze Theorem, it can be easily seen that the limit is zero.
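To spell out the bounding step (an added remark, not part of the original thread):

$$\frac{3n}{(n+1)!}=\frac{3n}{(n+1)\,n!}\le\frac{3}{n!}\qquad\text{since }\frac{n}{n+1}\le 1,$$

and $0\le \frac{3}{n!}\le \frac{3}{n}\to 0$ because $n!\ge n$ for $n\ge 1$; the squeeze theorem then forces the original sequence to $0$ as well.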
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9867441654205322, "perplexity": 221.52802657904792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097475.46/warc/CC-MAIN-20150627031817-00166-ip-10-179-60-89.ec2.internal.warc.gz"}
https://puzzling.stackexchange.com/questions/54136/tiling-a-hexagonal-chessboard-with-tribones
# Tiling a hexagonal chessboard with “tribones” A tribone is a tile made of three hexagons in a line. A hexagonal chessboard is a hexagonal grid of 91 cells in the shape of a larger hexagon. When 30 tribones are placed on a hexagonal chessboard without overlap, what are the possible locations of the uncovered space? • are we gonna ignore symmetry? or the answer would be 6x? – Oray Aug 9 '17 at 20:45 Colour in all cells in the top horizontal row in Red, the second in Blue, then Green, Red, Blue and so on.

         R R R R R R
        B B B B B B B
       G G G G G G G G
      R R R R R R R R R
     B B B B B B B B B B
    G G G G G G G G G G G
     R R R R R R R R R R
      B B B B B B B B B
       G G G G G G G G
        R R R R R R R
         B B B B B B

Thanks to Mike Earnest for the diagram! As you place tiles on the board, denote the number of each colour remaining uncovered at any stage as R, G, B. Each piece must either cover 3 cells of a single colour, or one of each. Either way, the relative numbers of each colour modulo 3 remain unaltered, i.e. (R-G)%3, (G-B)%3 and (B-R)%3 are constant. Initially there are 6+9+10+7=32 reds (=2 mod 3), and the same number of blue. There are 8+11+8=27 greens (=0 mod 3). So R=B mod 3 and G=R+1 mod 3. Once there is only one cell left uncovered, that cell must therefore be Green. So it must be on row 3, 6 or 9. By rotating through 60 degrees in either direction and applying the same logic, the uncovered cell must be one of the seven highlighted in the other answers - either the central cell or 3 rows in from any two adjoining edges. • Thanks for the image - I can't see the edit history on this device so don't know who to thank :) – IanF1 Aug 11 '17 at 14:39 • That was me, nice job on the solution! – Mike Earnest Aug 12 '17 at 2:06 Adding to @Oray's answer, here are constructions: (rotate for blue dots) I've reduced it to these hexagons: Those labelled 2, because each 'tribone' must cover a 1, a 2 and a 3 and there are 30 1s, 30 3s but 31 2s. • I reduced it to the same ones - I'm trying to find something to rule out the other 2s. – Deusovi Aug 9 '17 at 23:45 There are 7 uncovered place possibilities as shown in the figure below: I am working on how to show that these are the only possible places by programming or by some logical deduction, and will post it soon!
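A small added check of the colour counts used above: reds occupy rows 1, 4, 7, 10 with 6+9+10+7=32 cells; blues occupy rows 2, 5, 8, 11 with 7+10+9+6=32 cells; greens occupy rows 3, 6, 9 with 8+11+8=27 cells; and 32+32+27=91, the whole board, so the 30 tribones cover 90 cells and exactly one cell remains uncovered.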
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8231808543205261, "perplexity": 799.3245345912361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258120.87/warc/CC-MAIN-20190525144906-20190525170906-00484.warc.gz"}
https://math.stackexchange.com/questions/220242/another-probability-distribution-question
Another probability distribution question: So apparently I'm having quite a difficult time finding probability distributions. My first question is how to develop a strategy of finding these probability distributions. My next question comes from a book I'm using to study for the CAS Exam P and it goes like this: Let $p(x)=(\frac34)(\frac14)^x, x=0,1,2,3...$ be the probability mass function of a random variable $X$. Find $F$, the distribution of $X$. • This is a geometric distribution. – user31280 Oct 29 '12 at 0:35 Hagen answered a specific part of your question, but I see that your question is how to understand how to do this in general. So, that is why I am answering the question. Please let me know if there are parts that are not clear. In general, the cumulative distribution function is defined by $$F(x) = P(X \leq x).$$ You already have $p(x) = P(X = x)$. So, if you want to find $F(n) = P(X \leq n)$, you just add up all of the values of $p(x)$ for all $x \leq n$. That is \begin{align*} F(n) &= P(X \leq n) \\ &= P(X = 0) + P(X = 1) + P(X = 2) + \cdots + P(X = n) \\ &= p(0) + p(1) + p(2) + \cdots + p(n). \end{align*} The above formula assumes that $n$ is an integer. But, if $n$ were not an integer, you would just add up all values of $p(x)$ for $x \leq n$, just as I said. For example, $$F(3.14159) = p(0) + p(1) + p(2) + p(3).$$ Now, if you want to find a nice formula for $F(x)$ for this specific problem, see Hagen's answer. In general, nice formulas may or may not be possible, and the techniques for finding them would vary hugely from problem to problem. For this specific problem, note that for any $x$ in $[0, 1)$, $F(x)$ will have the same value, $p(0)$. And, for any $x$ in $[1, 2)$, $F(x) = p(0) + p(1)$. And, for any $x$ in $[2, 3)$, $F(x) = p(0) + p(1) + p(2)$. And so on. So, $F(x)$, as a function, has an infinite number of jump discontinuities. The main thing here is, for any $x$ value, hopefully you now understand how to find $F(x)$. $F(x)$ is the probability of $X\le x$. Thus $F(x)=0$ for $x<0$, $F(x)=\frac34$ for $0\le x<1$, $F(x)=\frac{15}{16}$ for $1\le x<2$. You should find that $$F(x)=1-\frac1{4^{\lfloor x\rfloor+1}}$$ if $x\ge0$.
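An added remark connecting the two answers: for integer $n\ge 0$ the sum above is a finite geometric series, which gives the closed form directly:

$$F(n)=\sum_{x=0}^{n}\frac34\left(\frac14\right)^{x}=\frac34\cdot\frac{1-(1/4)^{n+1}}{1-1/4}=1-\frac{1}{4^{\,n+1}},$$

and replacing $n$ by $\lfloor x\rfloor$ extends this to all real $x\ge 0$, matching the stated formula.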
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.997934103012085, "perplexity": 78.79880936589004}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315865.44/warc/CC-MAIN-20190821085942-20190821111942-00343.warc.gz"}
https://math.libretexts.org/TextMaps/Analysis_Textmaps/Map%3A_Partial_Differential_Equations_(Miersemann)/6%3A_Parabolic_Equations/6.4.0%3A_Initial-Boundary_Value_Problems
# 6.4: Initial-Boundary Value Problems Consider the initial-boundary value problem for $$c=c(x,t)$$ \begin{eqnarray} \label{sol1} c_t&=&D\triangle c\ \ \mbox{in}\ \Omega\times (0,\infty)\\ \label{sol2} c(x,0)&=&c_0(x)\ \ x\in\overline{\Omega}\\ \label{sol3} \frac{\partial c}{\partial n}&=& 0\ \ \mbox{on}\ \partial\Omega\times (0,\infty). \end{eqnarray} Here $$\Omega\subset\mathbb{R}^n$$ is a domain, $$n$$ denotes the exterior unit normal at the smooth parts of $$\partial\Omega$$, $$D$$ is a positive constant and $$c_0(x)$$ a given function. Remark. In application to diffusion problems, $$c(x,t)$$ is the concentration of a substance in a solution, $$c_0(x)$$ its initial concentration and $$D$$ the coefficient of diffusion. Fick's first law says that $$w=D\partial c/\partial n$$, where $$w$$ is the flow of the substance through the boundary $$\partial\Omega$$. Thus, according to the Neumann boundary condition (\ref{sol3}), we assume that there is no flow through the boundary. ### Contributors • Integrated by Justin Marshall.
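One immediate consequence worth noting (an added observation, not in the original text): integrating the equation over $$\Omega$$ and using the divergence theorem together with the Neumann condition shows that the total amount of substance is conserved,

$$\frac{d}{dt}\int_\Omega c\,dx=\int_\Omega c_t\,dx=D\int_\Omega \triangle c\,dx=D\int_{\partial\Omega}\frac{\partial c}{\partial n}\,ds=0,$$

which is the precise sense in which "no flow through the boundary" keeps $$\int_\Omega c\,dx$$ fixed at its initial value $$\int_\Omega c_0\,dx$$.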
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919984340667725, "perplexity": 683.0954794463024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188891.62/warc/CC-MAIN-20170322212948-00374-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/inverse-integral-numerically.193373/
# Inverse integral numerically 1. Oct 23, 2007 ### jimbo_durham I have a complicated integral which I need to compute numerically. I can do this in C++ using a version of Simpson's rule. I also need to compute the inverse of this integral (presumably this is what it is called), i.e. I have d=integral(f(x)dx) and I need to be able to compute x given a value of d. The function f(x) is too complicated to simply rearrange, so I need an iterative way of guessing x, running it in the program, seeing what value of d is given, and then improving my guess of x until I reach my starting d value. This seems on the surface to be a simple iteration problem, however I cannot find an efficient way of doing this. Can anyone tell me of a nice way of solving this problem (and even if it is called an 'inverse integration' or not?) and if there is a name given to the numerical method of solving it. It has been mentioned that there is a 'press' method however I can find no mention of this. Perhaps it involves finding the roots of an equation which takes the value of the integral evaluated at some guessed x, and the roots give the point where the guessed x corresponds to the known d, thus solving the problem? does this ring any bells? Last edited: Oct 23, 2007 2. Oct 23, 2007 ### arildno Well, one way would be: Let: $$y(x)=y(0)+\int_{x_{0}}^{x}f(t)dt$$ Or, that is: $$y=y_{0}+\int_{x(0)}^{x(y)}f(t)dt$$ Differentiating the latter expression with respect to y yields: $$1=f(x(y))\frac{dx}{dy}$$ That is, you may solve the following differential equation numerically: $$\frac{dx}{dy}=\frac{1}{f(x)}, \quad x(y_{0})=x_{0}$$ 3. Oct 23, 2007 ### jimbo_durham I am not sure how to apply that, I think it is worth me giving you the integral; $$d_{M}=c_{1}\cdot \sinh\left[c_{2}\int^{b}_{a}\left[\left(1+z\right)^{2}\cdot\left(1+c_{3}\cdot z\right)-z\cdot\left(2+z\right)\cdot c_{4}\right]^{-\frac{1}{2}}dz\right]$$ I am trying to find z given a value of $$d_{M}$$. 4. Oct 23, 2007 ### jimbo_durham note in formula, $$a, b, c$$ are known constants
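An added remark (not from the original thread): because the unknown appears only as the upper limit of the integral, the derivative needed for Newton's method comes for free from the fundamental theorem of calculus, so no rearrangement of the integrand is required. Writing $g(z)$ for the integrand above and $I(b)=\int_a^b g(z)\,dz$ with unknown upper limit $b$,

$$d_M(b)=c_1\sinh\!\big(c_2\,I(b)\big),\qquad d_M'(b)=c_1 c_2\cosh\!\big(c_2\,I(b)\big)\,g(b),$$

and the Newton iteration

$$b_{k+1}=b_k-\frac{d_M(b_k)-d_M^{\text{target}}}{d_M'(b_k)}$$

converges quickly from a reasonable starting guess; each $I(b_k)$ can be evaluated with the same Simpson's-rule routine already in use, and bisection on $d_M(b)-d_M^{\text{target}}$ is a slower but more robust fallback.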
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9887873530387878, "perplexity": 471.60200509593324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887067.27/warc/CC-MAIN-20180118051833-20180118071833-00270.warc.gz"}
https://studyadda.com/solved-papers/rajasthan-pet/rajasthan-pet-solved-paper-2009/464
# Solved papers for RAJASTHAN ­ PET Rajasthan PET Solved Paper-2009 ### done Rajasthan PET Solved Paper-2009 • question_answer1) Which of the following equation represents a progressive wave? A) $y=\,A\cos \,ax\,\sin bt$ B) $y=\,A\,\sin \,bt$ C) $y=\,A\,\cos (ax+bt)$ D) $y=\,A\,\tan (ax+bt)$ • question_answer2) The frequency and velocity of sound wave are 600 Hz and 360 m/s respectively. Phase difference between two particles of medium are$60{}^\circ ,$the minimum distance between these two particles will be A) 10 cm B) 15 cm C) 20 cm D) 50 cm • question_answer3) The heat required to increase the temperature of 4 moles of a monoatomic ideal gas from 273 K to 473 K at constant volume is A) 200 H B) 400 R C) 800 J D) 1200 R • question_answer4) A solid sphere rolls without slipping on the roof. The ratio of its rotational kinetic energy and its total kinetic energy is A) 2/5 B) 4/5 C) 2/7 D) 3/7 • question_answer5) A black body at a temperature of 2600 K has the wavelength corresponding to maximum emission 1200$\overset{o}{\mathop{\text{A}}}\,$. Assuming the moon to be perfectly black body the temperature of the moon, if the wavelength corresponding to maximum emission is 5000$\overset{o}{\mathop{\text{A}}}\,$ is A) 7800 K B) 6240 K C) 5240 K D) 3640 K • question_answer6) When the light enters from air to glass, for which colour the angle of deviation is maximum? A) Red B) Yellow C) Blue D) Violet • question_answer7) Two waves are represent by ${{y}_{1}}=A\sin (kx-\omega t)\,and\,{{y}_{2}}=A\cos \,(kx-\omega t)$ The amplitude of resultant wave is A) 4A B) 2A C) $\sqrt{2}A$ D) A • question_answer8) Which of the following event, support the quantum nature of light? A) Diffraction B) Polarization C) Interference D) Photoelectric effect • question_answer9) The linear momentum of photon is p. The wavelength of photon is $\lambda$, then ($h$is Planck?s constant) A) $\lambda =hp$ B) $\lambda =\frac{h}{p}$ C) $\lambda =\frac{p}{h}$ D) $\lambda =\frac{{{p}^{2}}}{h}$ • question_answer10) When two coherent monochromatic beams of intensity I and 9l interfere, the possible maximum and minimum intensities of the resulting beam are A) $9I$ and$I$ B) $9I$and$4I$ C) $16I$and$4I$ D) $16I$and$I$ • question_answer11) An atom of mass M which is in the state of rest emits a photon of wavelength $\lambda$ . As a result, the atom will deflect with the kinetic energy equal to (A is Planck's constant) A) $\frac{{{h}^{2}}}{M{{\lambda }^{2}}}$ B) $\frac{1}{2}\frac{{{h}^{2}}}{M{{\lambda }^{2}}}$ C) $\frac{h}{M\lambda }$ D) $\frac{1}{2}\frac{h}{M\lambda }$ • question_answer12) Two long straight wires are set parallel to each other Each carries a current i in the opposite direction and the separation between them is 2R. The intensity of the magnetic field midway between them is A) zero B) $\frac{{{\mu }_{0}}i}{4\pi R}$ C) $\frac{{{\mu }_{0}}i}{2\pi R}$ D) $\frac{{{\mu }_{0}}i}{\pi R}$ • question_answer13) Charge Q is placed at the diagonal faced comers of a square and charge q is placed at another two corners of square. The condition for net electric force on Q to be zero will be A) $Q=(-2\sqrt{2)}\,q$ B) $Q=-\frac{q}{2}$ C) $Q=(2\sqrt{2)}\,q$ D) $Q=-\frac{q}{2}$ • question_answer14) The electric potential at centre of metallic conducting sphere is A) zero B) half from potential at surface of sphere C) equal from potential at surface of sphere D) twice from potential at surface of sphere • question_answer15) A capacitor of capacity 10 $\mu$F is charged to a potential of 400 Volt. 
When its both plates are connected by a conducting wire, then heat generated will be A) $80\text{ }J$ B) $0.8\text{ }J$ C) $8\times {{10}^{-3}}$ D) $8\times {{10}^{-6}}J$ • question_answer16) Magnet moment of bar magnet is M. The work done to turn the magnet by$90{}^\circ$of magnet in direction of magnetic field B will be A) zero B) $\frac{1}{2}$MB C) 2 MB D) MB • question_answer17) Voltage V and current$i$in AC circuit are given by \begin{align} & V=50\sin (50\,t)\,volt \\ & i=50\,\sin \,\left( 50t+\frac{\pi }{3} \right)mA \\ \end{align} The power dissipated in circuit is A) 5.0 W B) 2.5 W C) 1.25 W D) zero • question_answer18) The maximum voltage in DC circuit is 282 Volt. The effective voltage in AC circuit will be A) 200 V B) 300 V C) 400 V D) 564 V • question_answer19) If L, C and R denote inductance, capacitance and resistance respectively, then which of the following combination has the dimension of time? A) $\frac{C}{L}$ B) $\frac{1}{RC}$ C) $\frac{L}{R}$ D) $\frac{RL}{C}$ • question_answer20) The ratio of momenta of an electron and photon which are accelerated from rest by a potential difference 50 Volt is A) $\frac{{{m}_{e}}}{{{m}_{p}}}$ B) $\sqrt{\frac{{{m}_{e}}}{{{m}_{p}}}}$ C) $\frac{{{m}_{p}}}{{{m}_{e}}}$ D) $\sqrt{\frac{{{m}_{p}}}{{{m}_{e}}}}$ • question_answer21) A wire of resistance$18\,\Omega$is divided into three equal parts. These parts are connected in side of triangle, the equivalent resistance of any two corners of triangle will be A) 18$\Omega$ B) 9$\Omega$ C) 6$\Omega$ D) 4$\Omega$ • question_answer22) Two capacitors of capacity 6 $\mu$F and 12 $\mu$V in series are connected by potential of 150 V. The potential of capacitor of capacity 12 $\mu$F will be A) 25 V B) 50 V C) 100 V D) 150 V • question_answer23) A charged particle enters in a magnetic field whose direction is parallel to velocity of the particle, then the speed of this particle A) in straight line B) in coiled path C) in circular path D) in ellipse path • question_answer24) Magnetic intensity for an axial point due to a short bar magnet of magnetic moment M is given by A) $\frac{{{\mu }_{0}}}{4\pi }\frac{m}{{{d}^{3}}}$ B) $\frac{{{\mu }_{0}}M}{4\pi {{d}^{2}}}$ C) $\frac{{{\mu }_{0}}}{2\pi }\frac{\mu }{{{d}^{3}}}$ D) $\frac{{{\mu }_{0}}M}{2\pi {{d}^{2}}}$ • question_answer25) Which of the following transition gives the photon of minimum frequency? A) n = 2 to n = 1 B) n = 3 to n = 1 C) n = 3 to n = 2 D) n = 4 to n = 3 • question_answer26) lonization energy of He+ ion at minimum position is A) 13.6 eV B) 27.2 eV C) 54.4 eV D) 68.0 eV • question_answer27) The half-life period of a radioactive substance is 3 days. Three fourth of substance decays in A) 3 days B) 6 days C) 9 days D) 12 days • question_answer28) If the decay constant of a radioactive substance is $\lambda$, then its half-life is A) $\frac{1}{\lambda }{{\log }_{e}}2$ B) $\frac{1}{\lambda }$ C) $\lambda {{\log }_{e}}2$ D) $\frac{\lambda }{{{\log }_{e}}2}$ • question_answer29) A galvanometer can be converted into a voltmeter by connecting A) low resistance in parallel B) low resistance in series C) high resistance in parallel D) high resistance in series • question_answer30) A heater coil cut into two equal parts and one part is connected with heater. Now heat generated in heater will be A) twice B) half C) one- fourth D) four times • question_answer31) A particle moves along with x-axis. The position$x$of particle with respect to time t from origin given by $x={{b}_{0}}+{{b}_{1}}t+{{b}_{2}}{{t}_{2}}$. 
The acceleration of particle is A) ${{b}_{0}}$ B) ${{b}_{1}}$ C) ${{b}_{2}}$ D) $2{{b}_{2}}$ • question_answer32) One end of string of length$l$is connected to a particle of mass m and the other to a small peg on a smooth horizontal table. If the particle moves in a circle with speed v, the net force on the particle (directed towards the centre) is A) zero B) $T-\frac{m{{v}^{2}}}{I}$ C) $T$ D) $T+\frac{m{{v}^{2}}}{I}$ • question_answer33) If the kinetic energy of a body is increased 2 times, its momentum will A) half B) remain unchanged C) be doubled D) increase $\sqrt{2}$times • question_answer34) The moment of inertia of a solid sphere of mass M and radius R about the tangent on its surface is A) $\frac{7}{5}M{{R}^{2}}$ B) $\frac{4}{5}M{{R}^{2}}$ C) $\frac{2}{5}M{{R}^{2}}$ D) $\frac{1}{2}M{{R}^{2}}$ • question_answer35) Satellites A and B are revolving around the orbit of earth. The mass of A is 10 times to mass of B. The ratio of time period $\left( \frac{{{T}_{A}}}{{{T}_{B}}} \right)$ is A) $10$ B) $1$ C) $\frac{1}{5}$ D) $\frac{1}{10}$ • question_answer36) A particle of amplitude A is executing simple harmonic motion. When the potential energy of particle is half of its maximum potential energy, then displacement from its equilibrium position is A) $\frac{A}{4}$ B) $\frac{A}{3}$ C) $\frac{A}{2}$ D) $\frac{A}{\sqrt{2}}$ • question_answer37) A simple wave motion represents by$y=5(\sin 4\pi t+\sqrt{3}\cos 4\pi t)$. Its amplitude is A) 5 B) 5$\sqrt{3}$ C) $10\sqrt{3}$ D) 10 • question_answer38) A liquid does not wet the sides of a solid, if the angle of contact is A) obtuse B) $90{}^\circ$ C) acute D) zero • question_answer39) Root means square speed of the molecules of ideal gas is$v$. If pressure is increased two times at constant temperature, then the rms speed will become A) $\frac{v}{2}$ B) $v$ C) $2v$ D) $4v$ • question_answer40) 1 mole of gas occupies a volume of 200 ml at 100 mm pressure. What is the volume occupied by two moles of gas at 400 mm pressure and at same temperature? A) 50 mL B) 100 mL C) 200 mL D) 400 mL • question_answer41) The$s{{p}^{3}}$hybridisation is present on the central atom of A) $HCHO$ B) $BC{{l}_{3}}$ C) $PC{{l}_{3}}$ D) $S{{O}_{3}}$ • question_answer42) The number of unit cells in the 5.85 g crystals of$NaCl$are A) $1.5\times {{10}^{23}}$ B) $1.5\times {{10}^{22}}$ C) $3.0\times {{10}^{22}}$ D) $3.0\times {{10}^{23}}$ • question_answer43) The oxidation number of sulphur is -1 in A) ${{H}_{2}}S$ B) $Fe{{S}_{2}}$ C) $C{{S}_{2}}$ D) $C{{u}_{2}}S$ • question_answer44) For which of the following metals, the electronic configuration of valence shell is not ${{d}^{6}}$? A) Fe (II) B) $Mn$(II) C) Co (III) D) Ni (IV) • question_answer45) 68 g sugar$({{C}_{12}}{{H}_{22}}{{O}_{11}})$is dissolved in 1 kg of water. What is the mole fraction of sugar? A) 0.018 B) 0.036 C) 0.0018 D) 0.0036 • question_answer46) The pH of 0.1 M aqueous ammonia $({{K}_{b}}=1.8\times {{10}^{-5}})$is. A) 9.13 B) 10.13 C) 11.13 D) 12.13 • question_answer47) From the following mixtures, which is not a buffer (concentration level 0.5 M)? A) $C{{H}_{3}}COOH+NaOH(2:1)$ B) $HCl+N{{H}_{3}}(aq)(1:2)$ C) $C{{H}_{3}}COOH+NaOH(1:2)$ D) $HCl+N{{H}_{3}}(2:3)$ • question_answer48) How much volume (in litre) of$3M\,NaOH$is obtained from 80 g$NaOH$? (Atomic mass of $Na=23u$) A) 2.67 B) 1.34 C) 0.67 D) 0.33 • question_answer49) The initial concentration of sugar solution is 0.12 M. On doing fermentation the concentration of sugar decreases to 0.06 M in 10 h and to 0.045 M in 15 h. 
The order of the reaction is A) 0.5 B) 1.0 C) 1.5 D) 2.0 • question_answer50) For the decomposition of${{N}_{2}}O$and${{O}_{2}}$in presence of at, the velocity constant, k is $k=5\times {{10}^{11}}{{e}^{-30,000/T}}$ For this, the activation energy is (in kJ$mo{{l}^{-1}}$) A) 2.494 B) 24.94 C) 249.4 D) 2494 • question_answer51) The following equilibrium establishes on heating 0.2 mole of${{H}_{2}}$and 1.0 mole of sulphur in 1 L vessel at$90{}^\circ C$. ${{H}_{2}}(g)+S(s){{H}_{2}}S(g);$ $K=6.8\times {{10}^{-2}}$ The partial pressure of${{H}_{2}}S$in equilibrium state is A) 4.20 B) 0.42 C) 0.21 D) 0.042 • question_answer52) An aqueous solution boils at$100.2{}^\circ C$. At which temperature this will freeze. $({{K}_{b}}=0.5{}^\circ C/m,{{K}_{f}}=1.9{}^\circ C/m)$ A) $+0.76$ B) $-0.76$ C) $-0.38$ D) $+0.38$ • question_answer53) The lowest$p{{K}_{a}}$value is for A) phenol B) $m-$cresol C) o-cresol D) p-cresol • question_answer54) At room temperature, the least stable compound is A) $C{{H}_{3}}COCl$ B) $HCOCl$ C) $C{{H}_{3}}COOH$ D) ${{(C{{H}_{3}}CO)}_{2}}O$ • question_answer55) For the conversion of$C{{H}_{2}}=C{{H}_{2}}$into$HOOC$. $C{{H}_{2}}C{{H}_{2}}COOH,$the minimum number of steps required are A) 2 B) 3 C) 4 D) 5 • question_answer56) Which of the following compounds does not give two isomer compounds on reaction with$N{{H}_{2}}OH$? A) $C{{H}_{3}}CO{{C}_{2}}{{H}_{5}}$ B) $C{{H}_{3}}COC{{H}_{3}}$ C) $C{{H}_{3}}CHO$ D) $PhCOC{{H}_{3}}$ • question_answer57) From the following which is not a reactant, reagent or product in Hoffmann reaction? A) $RCON{{H}_{2}}$ B) $RN{{H}_{2}}$ C) $B{{r}_{2}},O{{H}^{-}}$ D) ${{H}_{2}}S{{O}_{4}}$ • question_answer58) The total number of isomers for cyclic alcohol ${{C}_{4}}{{H}_{7}}OH$is A) 2 B) 3 C) 4 D) 5 • question_answer59) The least stable free radical is A) $\overset{\bullet }{\mathop{C}}\,{{H}_{2}}C{{H}_{2}}CH{{(C{{H}_{3}})}_{2}}$ B) $C{{H}_{3}}\underset{\bullet }{\mathop{C}}\,HCH{{(C{{H}_{3}})}_{2}}$ C) $C{{H}_{3}}C{{H}_{2}}\underset{\bullet }{\mathop{C}}\,{{(C{{H}_{3}})}_{2}}$ D) $\overset{\bullet }{\mathop{C}}\,{{H}_{3}}$ • question_answer60) The suitable reagent for the reduction of ${{C}_{2}}{{H}_{5}}COOH$into${{C}_{3}}{{H}_{7}}OH$is A) $B{{H}_{3}}/THF$and${{H}_{3}}{{O}^{+}}$ B) $NaB{{H}_{4}}$ C) $Na/EtOH$ D) ${{H}_{2}}/$catalyst • question_answer61) The blue colour of acidic solution of$C{{r}_{2}}O_{7}^{2-}$is not changed into green by A) ${{C}_{6}}{{H}_{5}}C{{H}_{2}}OH$ B) ${{(C{{H}_{3}})}_{2}}CHOH$ C) $C{{H}_{3}}C{{H}_{2}}C{{H}_{2}}C{{H}_{2}}OH$ D) ${{(C{{H}_{3}})}_{3}}COH$ • question_answer62) The reaction of${{C}_{2}}{{H}_{5}}Cl$with$Li$and$CuI$gives mainly A) 2-butene C) n-butane D) n-butyl chloride • question_answer63) The reagent which does not convert n-butyl chloride into zi-butane is A) $Zn,\text{ }HCl$ B) $LIAl{{H}_{4}}$ C) $Mg,$Anhydrous ether,${{H}_{2}}O$ D) ${{B}_{2}}{{H}_{6}}$in THF • question_answer64) The correct order of stability is A) Pentane < iso-pentane < neo-pentane B) iso-pentane < neo-pentane < pentane C) neo-pentane < iso-pentane < pentane D) Pentane < iso-pentane < iso-pentane • question_answer65) The correct order of boiling point of ethyl dimethyl amine , n-butyl amine and diethyl amine is A) $B>C>A$ B) $B>A>C$ C) $A>B>C$ D) $C>B>A$ • question_answer66) Which of the following compounds does not give iodoform test? 
A) $C{{H}_{3}}COC{{H}_{2}}COO{{C}_{2}}{{H}_{5}}$ B) $PhC{{H}_{2}}COC{{H}_{3}}$ C) $M{{e}_{3}}C.COC{{H}_{3}}$ D) $C{{H}_{3}}COC{{H}_{3}}$ • question_answer67) The species which acts as both nucleophile and electrophile is A) $C{{H}_{3}}CN$ B) $N{{H}_{3}}$ C) $P{{(C{{H}_{3}})}_{2}}$ D) ${{H}_{2}}$ • question_answer68) The most basic from the following is A) $N{{H}_{3}}$ B) $C{{H}_{3}}N{{H}_{2}}$ C) $N{{F}_{3}}$ D) $N{{(Si{{H}_{3}})}_{3}}$ • question_answer69) The main product of the reaction of benzene with lithium in liquid ammonia and$EtOH$is A) B) C) D) • question_answer70) The rate of free radical chlorination of$C{{H}_{4}}$is A) equal to$C{{D}_{4}}$ B) double the rate of $C{{D}_{4}}$ C) 12 times the rate of$C{{D}_{4}}$ D) less than the rate of$C{{D}_{4}}$ • question_answer71) The first ionisation energy difference is maximum for which pair? A) $Na,Mg$ B) $K,Ca$ C) $Rb,Sr$ D) $Cs,Ba$ • question_answer72) The ratio of third Bohr orbit radius and second Bohr orbit radius for the hydrogen atom is A) 0.5 B) 1.5 C) 0.75 D) 2.25 • question_answer73) The principal quantum number of Mn for those valence shell orbits in which electrons are filled A) 4, 3 B) 4, 4 C) 3, 3 D) 5, 4 • question_answer74) From the generally known oxidation states, the oxidation number is maximum for A) $Mn$ B) $Cu$ C) $Sn$ D) $Sc$ • question_answer75) The number of molecules in 180 g of heavy water are A) $6.02\times {{10}^{24}}$ B) $6.02\times {{10}^{22}}$ C) $5.42\times {{10}^{24}}$ D) $5.42\times {{10}^{23}}$ • question_answer76) Which of the following is reduced by${{H}_{2}}{{O}_{2}}$? A) $C{{l}_{2}}$ B) ${{[Fe{{(CN)}_{6}}]}^{4-}}$ C) $N{{H}_{2}}OH$ D) $SO_{3}^{2-}$ • question_answer77) The maximum energy molecular orbit filled by electron in nitrogen molecule is/are A) $\sigma 2{{p}_{z}}$ B) $\pi 2{{p}_{x}}\approx \pi 2{{p}_{y}}$ C) ${{\pi }^{*}}2{{p}_{x}}\approx {{\pi }^{*}}2{{p}_{y}}$ D) ${{\sigma }^{*}}2{{p}_{z}}$ • question_answer78) Which of the following is correct order for density? A) $Cs>Rb>K>Na$ B) $Cs>Rb>Na>K$ C) $Rb>Cs>K>Na$ D) $Rb>Cs>Na>K$ • question_answer79) The correct order of dipole moment is A) $B{{F}_{3}}<{{H}_{2}}S<{{H}_{2}}O$ B) ${{H}_{2}}S<B{{F}_{3}}<{{H}_{2}}O$ C) ${{H}_{2}}O<{{H}_{2}}S<B{{F}_{3}}$ D) ${{H}_{2}}O<B{{F}_{3}}<{{H}_{2}}S$ • question_answer80) Which of the following bond has minimum bond energy? 
A) $C-H$ B) $N-H$ C) $O-H$ D) $F-H$ • question_answer81) The intersection point of the normals drawn at the end points of latusrectum of the parabola${{x}^{2}}=-2y$is A) $\left( -\frac{1}{2},-\frac{3}{2} \right)$ B) $\left( \frac{1}{2},-\frac{3}{2} \right)$ C) $(0,-1)$ D) $\left( 0,-\frac{3}{2} \right)$ • question_answer82) The equation whose roots are reciprocal of the roots of the equation$a{{x}^{2}}+bx+c=0,$is A) $b{{x}^{2}}+cx+a=0$ B) $b{{x}^{2}}+ax+c=0$ C) $c{{x}^{2}}+ax+b=0$ D) $c{{x}^{2}}+bx+a=0$ • question_answer83) If in the expansion of${{\left( 3x-\frac{2}{{{x}^{2}}} \right)}^{15}},$rth term is independent of$x,$then value of r is A) 11 B) 10 C) 9 D) 12 • question_answer84) If$f(x)=\left| \begin{matrix} 1+a & 1-ax & 1+a{{x}^{2}} \\ 1+b & 1+bx & 1+b{{x}^{2}} \\ 1+c & 1+cx & 1+c{{x}^{2}} \\ \end{matrix} \right|,$where$a,b,c$ are non-zero Constants, then value of$f(10)$ is A) $10(b-a)(c-a)$ B) $100(b-a)(c-b)(a-c)$ C) $100\,abc$ D) 0 • question_answer85) If$^{n}{{C}_{12}}{{=}^{n}}{{C}_{8}},$then value of$^{n}{{C}_{19}}$is A) 1 B) 20 C) 210 D) 1540 • question_answer86) If$\omega$is a complex cube root of unity and$A=\left[ \begin{matrix} \omega & 0 \\ 0 & \omega \\ \end{matrix} \right],$then${{A}^{50}}$is A) ${{\omega }^{2}}A$ B) $\omega A$ C) $A$ D) $0$ • question_answer87) If sum of$n$terms of an AP is$2n+3{{n}^{2}},$then rth term is A) $2r+3{{r}^{2}}$ B) $3{{r}^{2}}-4r+1$ C) $6r-1$ D) $4r+1$ • question_answer88) If A and B are square matrices of order$3\times 3,$ then which of the following is true? A) $AB=0\Rightarrow A=O$or$B=O$ B) $det(2AB)=8\text{ (}detA)\text{ (}detB)$ C) ${{A}^{2}}-{{E}^{2}}=(A+B)(A-B)$ D) $det(A+B)=det(A)+det(B)$ • question_answer89) If geometric mean and harmonic mean between two different numbers are 12 and $\frac{48}{5}$respectively, then one number is A) 4 B) 6 C) 8 D) 10 • question_answer90) Let a, b, c are in GP and 4a,5b,4c are in AP such that$a+b+c=70,$then value of$b$is A) 5 B) 10 C) 15 D) 20 • question_answer91) The value of integral$\int_{0}^{4}{|x-1|}dx$ is A) 4 B) 5 C) 7 D) 9 • question_answer92) If$\frac{x-1}{1+i}+\frac{y-1}{1-i}=i,$where$x$and y are real numbers and$i=\sqrt{-1},$then A) $x=0$ B) $x<0$ C) $x>0$ D) None of these • question_answer93) If a pair of two fair dice is thrown, then the probability of getting the sum 5 on both dice is A) $\frac{5}{36}$ B) $\frac{1}{12}$ C) $\frac{1}{18}$ D) $\frac{1}{9}$ • question_answer94) If a fair coin is tossed 20 times and let we get head n times, then probability that 21 is odd, is A) $\frac{1}{2}$ B) $\frac{1}{6}$ C) $\frac{5}{8}$ D) $\frac{7}{8}$ • question_answer95) If${{x}_{r}}=\cos \left( \frac{\pi }{{{2}^{r}}} \right)+i\sin \left( \frac{\pi }{{{2}^{r}}} \right),$then value of${{x}_{1}}\,{{x}_{2}}\,\,{{x}_{3}}....$is A) $i$ B) $1$ C) $-1$ D) $-i$ • question_answer96) If${{\left( \frac{1+\cos \phi +i\sin \phi }{1+\cos \phi -i\sin \phi } \right)}^{n}}=u+iv,$where u and v are real numbers, then u is A) $n\cos \phi$ B) $\cos n\phi$ C) $\cos \left( \frac{n\phi }{2} \right)$ D) $\sin \left( \frac{n\phi }{2} \right)$ • question_answer97) If square root of $-7+24i$ is $x+iy,$ then $x$ is A) ?1 B) ? 2 C) ?3 D) ? 
4 • question_answer98) If ${{\tanh }^{-1}}(x-iy)=\frac{1}{2}{{\tanh }^{-1}}\left( \frac{2x}{1+{{x}^{2}}+{{y}^{2}}} \right)$ $+\frac{i}{2}{{\tan }^{-1}}\left( \frac{2y}{1-{{x}^{2}}-{{y}^{2}}} \right)x,y\in R,$ then ${{\tanh }^{-1}}(iy)$ is A) $2\text{ }tan{{h}^{-1}}(y)$ B) $-2\text{ }tan{{h}^{-1}}(y)$ C) $\text{i }ta{{n}^{-1}}y$ D) $\text{-i }ta{{n}^{-1}}(y)$ • question_answer99) If $f:R\to R$ is defined as $f(x)={{(1-x)}^{1/3}}$ then ${{f}^{-1}}(x)$ is A) ${{(1-x)}^{-1/3}}$ B) ${{(1-x)}^{3}}$ C) $1-{{x}^{3}}$ D) $1-{{x}^{1/3}}$ • question_answer100) If ${{x}^{y}}={{e}^{x-y}},x>0,$ then value of $\frac{dy}{dx}$ at (1,1) is A) 0 B) $\frac{1}{2}$ C) 1 D) 2 • question_answer101) The value of $\underset{x\to 0}{\mathop{\lim }}\,\frac{(1-\cos 2x)}{{{x}^{2}}}$is A) doesn't exist B) infinite C) 0 D) 2 • question_answer102) The value of differentiation of${{e}^{{{x}^{2}}}}$w.r.t. to ${{e}^{2x-1}}$at$x=1$is A) e B) 0 C) ${{e}^{-1}}$ D) 1 • question_answer103) If function$f(x),x\in R$is differentiable and $f(1)=1,$then value of$\underset{x\to 0}{\mathop{\lim }}\,\frac{(1-\cos 2x)}{{{x}^{2}}}$is A) 0 B) 1 C) $f'(1)$ D) $\infty$ • question_answer104) The value of integral $\int_{0}^{2a}{\frac{f(x)}{f(x)+f(2a-x)}}dx,$where$f(x)$is a continuous function, is A) 0 B) 1 C) $a$ D) $2a$ • question_answer105) The area of the region bounded by the curves $y=ex,\text{ y}=lo{{g}_{e}}x$and lines$x=1,\text{ }x=2$is A) ${{(e-1)}^{2}}$ B) ${{e}^{2}}-e+1$ C) ${{e}^{2}}-e+1-2lo{{g}_{e}}2$ D) ${{e}^{2}}+e-2lo{{g}_{e}}2$ • question_answer106) If$f(x)=sin\text{ }x+cos\text{ }x+1$and $g(x)={{x}^{2}}+x,\text{ }x\in R,$then value of$fog(x)$at $x=0$is A) 0 B) 1 C) 2 D) 3 • question_answer107) The maximum value of function $f(x)=\sin x(1+\cos x),x\in R$is A) $\frac{{{3}^{3/2}}}{4}$ B) $\frac{{{3}^{5/3}}}{4}$ C) $\frac{3}{2}$ D) $\frac{{{3}^{7/5}}}{4}$ • question_answer108) The normal at point (1,1) to the curve${{y}^{2}}={{x}^{3}}$is parallel to the line A) $3x-y-2=0$ B) $2x+3y-7=0$ C) $2x-3y+1=0$ D) $2y-3x+1=0$ • question_answer109) Function$f(x)=|x-1|+|x-2|,x\in R$is A) differentiable everywhere in R B) except$x=1$and$x=2$differentiable everywhere in R C) not continuous at$x=1$and$x=2$ D) increasing in$R$ • question_answer110) If$f(1)=2$and$f'(1)=1,$then the value of$\underset{x\to 1}{\mathop{\lim }}\,\frac{2x-f(x)}{x-1}$is A) $-1$ B) 0 C) $1$ D) 2 • question_answer111) The distance of the point P (1,2,3) from the line which passes through the point A (4,2,2) and parallel to the vector$2\hat{i}+3\hat{j}+6\hat{k},$is A) $\sqrt{10}$ B) $\sqrt{7}$ C) $\sqrt{5}$ D) $1$ • question_answer112) Let$\overrightarrow{a}=2\hat{i}+\hat{k},\overrightarrow{b}=\hat{i}+\hat{i}+\hat{k}$and$\overrightarrow{c}=4\hat{i}-3\hat{j}+7\hat{k}$. 
If r is a vector such that$\overrightarrow{r}\times \overrightarrow{b}=\overrightarrow{c}\times \overrightarrow{b}$ and$\overrightarrow{r}.\overrightarrow{a}=0,$then value of$\overrightarrow{r}.\overrightarrow{b}$is A) 7 B) $-7$ C) $-5$ D) 5 • question_answer113) A unit vector which is coplanar with $\hat{i}+\hat{j}+2\hat{k}$and$\hat{i}+2\hat{j}+\hat{k}$and perpendicular to $\hat{i}+\hat{j}+\hat{k},$ is A) $\frac{(-\hat{j}+\hat{k})}{\sqrt{2}}$ B) $\frac{(\hat{k}-\hat{i})}{\sqrt{2}}$ C) $\frac{(\hat{i}-\hat{j})}{\sqrt{2}}$ D) $\frac{(\hat{i}-\hat{k})}{\sqrt{2}}$ • question_answer114) If a line is inclined at$45{}^\circ$to both x-axis and$y-$axis, then the angle at which it is inclined to z-axis is A) $45{}^\circ$ B) $60{}^\circ$ C) $30{}^\circ$ D) $90{}^\circ$ • question_answer115) The equation of plane which passes through the point$(1,2,-1)$and parallel to the plane $x-y+2z=0$is A) $x-y+2z-3=0$ B) $x-y+2z+3=0$ C) $x-y+2z+6=0$ D) $x-y+2z-7=0$ • question_answer116) Let a plane passes through the point P(-1, -1,1) and also passes through a line joining the points Q (0,1,1) and R(Q,0,2). Then, the distance of plane from the point (0,0,0) is A) 3 B) 0 C) $1/\sqrt{6}$ D) $2/\sqrt{6}$ • question_answer117) If$f(x)$and$g(x),x\in R$are continuous functions, then value of integral $\int_{-\pi /2}^{\pi /2}{[\{f(x)+f(-x)\}\{g(x)-g(-x)\}]}\,dx$is A) $\pi$ B) $\pi /2$ C) $1$ D) 0 • question_answer118) The equation of circle which touches the$x-$axis and 7-axis at points (1, 0) and (0, 1) respectively, is A) ${{x}^{2}}+{{y}^{2}}-4y+3=0$ B) ${{x}^{2}}+{{y}^{2}}-2y-2=0$ C) ${{x}^{2}}+{{y}^{2}}-2x-27+2=0$ D) ${{x}^{2}}+{{y}^{2}}-2x-2y+1=0$ • question_answer119) If lines$4x+3y=1,\text{ }y-x=5$and$kx+5y=1$are concurrent, then value of k is A) 0 B) 1 C) 3 D) 7 • question_answer120) The equation${{y}^{2}}-8y-x+19=0$represents A) a parabola whose focus is$\left( \frac{1}{4},0 \right)$and directrix is$x=-\frac{1}{4}$ B) a parabola whose vertex is (3, 4) and directrix is$x=\frac{11}{4}$ C) a parabola whose focus is$\left( \frac{13}{4},4 \right)$and vertex is (0, 0) D) a curve which is not a parabola
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9319471716880798, "perplexity": 3014.40988865838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204461.23/warc/CC-MAIN-20190325214331-20190326000331-00228.warc.gz"}
https://math.stackexchange.com/questions/2577907/golden-ratio-almost-integer
# Golden ratio - almost integer Let $a_1=1, b_1=3, c_1=3$ and $d_1=1$, where $a_2=a_1+c_1,b_2=b_1+d_1, c_2=a_1+b_1+c_1$ and $d_2=a_1+d_1$, and we define $$a_n=a_{n-1}+c_{n-1}$$ $$b_n=b_{n-1}+d_{n-1}$$ $$c_n=a_{n-1}+b_{n-1}+c_{n-1}$$ $$d_n=a_{n-1}+d_{n-1}$$ then we have this approximate relation involving $\phi$ $$a_n\phi^{3/2}+b_n\phi^{1/2}-c_n\phi\approx d_n$$ where $\phi={\sqrt{5}+1\over2}$, which is known as the golden ratio, related to the well-known Fibonacci numbers. Apparently the left-hand side is almost exactly the integer $d_n$ for any value of $n$, and as $n\to \infty$ it tends to an integer. My question is: Can anyone please explain why this part $a_n\phi^{3/2}+b_n\phi^{1/2}-c_n\phi$ always results in an almost-integer value for any n? $\phi$ is an irrational number and it is interesting to see $\phi$ turn the above equation into an almost-integer result. Examples of the above approximate equation give $$\phi^{3/2}+3\phi^{1/2}-3\phi \approx 1$$ $$4\phi^{3/2}+4\phi^{1/2}-7\phi \approx 2$$ $$11\phi^{3/2}+6\phi^{1/2}-15\phi \approx 6$$ $$26\phi^{3/2}+12\phi^{1/2}-32\phi \approx 17$$ $$58\phi^{3/2}+29\phi^{1/2}-70\phi \approx 43$$ $$\left( a_n\phi^{3/2}+b_n\phi^{1/2}-c_n\phi - d_n \right) = -(1 - \phi^{1/2})^{n+2}$$ Since $1 - \varphi^{1/2} \approx -0.272$, its powers rapidly converge to zero. I found this formula from: • The knowledge that these sorts of phenomena usually boil down to exponential functions with a small base • Recognizing that $1, \phi^{1/2}, \phi, \phi^{3/2}$ form a basis for a number field • Running the recursion backwards to find the $-2$ and $-1$ terms are $(0,0,0,1)$ and $(0,1,0,1)$, corresponding to the numbers $-1$ and $\varphi^{1/2} - 1$ respectively. • Numerically checking the formula To rigorously prove the identity, one might simply show that stepping the recursion has the same effect as multiplying by $(1 - \phi^{1/2})$. There was some good fortune that the formula was particularly simple; I expected to have to do more work to find it. A more systematic approach to obtain the closed form would be to apply the theory of linear difference equations. • nice of you to come up with the formula @Hurkyl – user515439 Dec 23 '17 at 16:45
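An added verification along the lines suggested at the end of the answer: write $x_n = a_n\phi^{3/2}+b_n\phi^{1/2}-c_n\phi-d_n$. Substituting the recursion and collecting coefficients,

$$x_n=a_{n-1}\big(\phi^{3/2}-\phi-1\big)+b_{n-1}\big(\phi^{1/2}-\phi\big)+c_{n-1}\big(\phi^{3/2}-\phi\big)+d_{n-1}\big(\phi^{1/2}-1\big),$$

and since $\phi^2=\phi+1$ the first bracket equals $\phi^{3/2}-\phi^2$, so every bracket matches the corresponding coefficient of $(1-\phi^{1/2})\,x_{n-1}$ term by term. Hence $x_n=(1-\phi^{1/2})\,x_{n-1}$, the error decays geometrically with ratio $|1-\phi^{1/2}|\approx 0.272$, and iterating gives the stated closed form.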
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.969420313835144, "perplexity": 204.8285725292242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514570740.10/warc/CC-MAIN-20190915052433-20190915074433-00490.warc.gz"}
http://umj.imath.kiev.ua/article/?lang=en&article=10013
# Investigation of a class of methods of summation of interpolational processes Pogodicheva N. A. Abstract For the class of all continuous $2\pi$-periodic functions, two processes are considered for the approximation of the functions of this class by trigonometric polynomials. Citation Example: Pogodicheva N. A. Investigation of a class of methods of summation of interpolational processes // Ukr. Mat. Zh. - 1964. - 16, № 2. - pp. 164-184.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.805789589881897, "perplexity": 1543.8796501352704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886133449.19/warc/CC-MAIN-20170824101532-20170824121532-00502.warc.gz"}
http://physics.stackexchange.com/questions/7003/physics-of-tsunami-the-relationship-between-wavelength-sea-depth-and-the-heigh
# Physics of tsunami: the relationship between wavelength, sea depth and the height of the water If I understand correctly, when an earthquake occurs, energy will be transferred to the water, resulting in water waves. As the waves reach the seashore, because the sea depth is getting shallower and the wavelength is getting shorter, the height of the wave gets pushed up, resulting in a tsunami. In other words, in the deep sea the water won't get pushed up as high as the water near a shallow seashore. Is my understanding correct? Is there a quantitative way to express the physics behind all this? - Here is something on Tao's blog. terrytao.wordpress.com/2011/03/13/… – MBN Mar 16 '11 at 4:04 @MBN, I would appreciate if you could post this as an answer; I think this comment is good enough to be an answer. – Graviton Mar 16 '11 at 4:05 @Graviton It's a comment, of course. Usually people post a link as a comment if they don't have a lot of time. If they want to go in more depth, they post an answer with their link and then summarize its contents. – Mark Eichenlaub Mar 16 '11 at 4:25 @Mark: Should I delete the answer? – MBN Mar 16 '11 at 4:29 @MBN It's your answer; I don't know. I don't think this is some official thing - just my observations. You could start a meta thread if you want to clarify the difference between comments and answers. – Mark Eichenlaub Mar 16 '11 at 4:45 A tsunami is basically a shallow-water wave, even in deep seas. This means its velocity is $v=\sqrt{gH}$, where $H$ is the water depth and $g$ is the gravitational acceleration. The energy of the tsunami scales as the square of its amplitude $A$, and thus the energy flux $S$ goes as $S\sim A^2 \sqrt{H}$. Conservation of energy then implies that the wave amplitude depends on the sea depth as $$A \sim H^{-1/4}$$ a result known as Green's law. For example, Green's law predicts that a tsunami with amplitude $A=1$m at $H=5000$m will run up to $A=4$m if the depth becomes $H=20$m. - I think it is just the gravity wave solution. We have a periodic function (sine wave) in the horizontal direction, and an exponential in the vertical direction. I think the wave number and the exponential decay rate are the same number, but hopefully someone in the know can fill in the details. In any case, for short waves and deep water you need only consider the exponential term, which decays with depth. But for tsunamis the wavelength is greater than the depth, so you have to use both types. Not sure what the boundary conditions are (at the water surface and at the sea bottom), but satisfying them would give you the allowable form for the wave at a given wavelength and depth. But in any case, for the tsunami, considerable motion is seen throughout the water column. If the wave is not reflected going into shallower water (I think this means the depth doesn't change much within a horizontal wavelength) then conservation of energy & momentum means the wave amplitude grows. You can get a hydraulic jump (moving wall of water), because the wave speed is higher in deeper water, so the higher portion of the wave can catch up with the slower moving portions ahead of it. If that happens, instead of a gradually increasing sea level, you can get one or several step-function-type waves coming in. -
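Spelling out the quoted example (an added check, not part of the original answer): Green's law gives

$$\frac{A_2}{A_1}=\left(\frac{H_1}{H_2}\right)^{1/4}=\left(\frac{5000}{20}\right)^{1/4}=250^{1/4}\approx 3.98,$$

so a 1 m amplitude wave over 5000 m of water indeed steepens to roughly 4 m by the time the depth has dropped to 20 m, while its speed $\sqrt{gH}$ falls from about 220 m/s to about 14 m/s.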
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9408610463142395, "perplexity": 516.2253936954714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823802.12/warc/CC-MAIN-20160723071023-00097-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/breit-wigner-formula.80935/
# Breit-Wigner formula 1. Jul 2, 2005 ### kuengb Hello The Breit-Wigner formula says that the cross section for a resonance is proportional to $$\frac{1}{(2S_a + 1)(2 S_b + 1)},$$ where Sa and Sb are the spins of the incident particles. What's the reason for this? I don't understand it because for a general cross section there appears just the inverse of that because that's what enters the density-of-states. Thanks, Bruno 2. Jul 3, 2005 ### CarlB I'll take a stab. Think of it in terms of lifetimes. A high cross section means a short life time because that means that the reaction has a high probability of happening. But to get a sharp (and therefore tall) resonance, you want a very long life time. So some of the terms are inverted. Resonances are sort of reversed from usual cross sections like that. You should be aware that I answer questions like this in order to make myself think, not necessarily because I am a world expert. Carl 3. Jul 3, 2005 ### marlon A Could you please tell me how you get this formula ? Look at : http://www.astro.uwo.ca/~jlandstr/p467/lec9-nucl_reacts/ to see the actual Breit Wigner formula. How do you convert this formula into the one you gave ? And what do you mean by 'that is what enters the DOS ?' regards marlon 4. Jul 3, 2005 ### kuengb In Fermi's golden rule for the reaction rate, there is the DOS as a factor, and this is proportional to the number of spin states. I got it from Perkin's book "I. to High Energy Physics". I think I'll quote the questionable paragraph: "So far, we have not considered particle spin. The appropriate spin multiplicity factors were given in the previous section [what I mentioned above, golden rule]. Putting all these things together, we finally get the complete Breit-Wigner formula, $$\sigma = \frac{\lambda^2 (2J+1)}{\pi (2S_a+1)(2S_b+1)} \frac{\Gamma^2 / 4}{(E-E_R)^2 + \Gamma^2/4}$$ where Sa and Sb are the spins of the incident and target particles and J is the spin of the resonant states." That's the same as in your reference except for this prefactor. Now, I understand the factor (2J+1) because that comes from the fact that the cross section for a given partial wave is proportional to this number. But I don't get why the other spin stuff appears in the denominator. If there are more spin states, the reaction rate is higher and therefore the cross section is larger...or isn't it? Last edited by a moderator: May 2, 2017 5. Jul 3, 2005 ### Meir Achuz The initial spin factor arises because the cross-section is determined by the outgoing flux divided by the incident flux. The factor (2Sa+1)(2Sb+1) comes from the number of spin states in the incident beam, and therefore is in the denominator. 6. Jul 3, 2005 ### kuengb I see. Thank you! 7. Oct 23, 2007 ### nucular1 statistical factor in B-W formula Hi, The statistical factor in the Breit-Wigner cross section formula arises from the coupling of angular momenta. For a given reaction A + a -> B + b, or A(a,b)B in nuclear physics parlance, the entrance channel A + a has an associated channel spin S, which is the sum of the individual particle spin quantum numbers. If the incident particle 'a' has spin s =1/2, for example, and the target particle 'A' is some nucleus in its ground state with spin I = 5/2, then the channel spin S is the vector sum of s + I, which by addition rules for angular momentum can add to either S = 3 (5/2 + 1/2) or 2 (5/2 - 1/2). The total angular momentum J is the sum of s, I, and the orbital angular momentum between them L: J = (s + I) + L (all are in units of h_bar). 
If L = 1, for example, we can now have J = 2, 3, or 4. For a compound nucleus mechanism, which is what the Breit-Wigner formula describes, the intermediate state is the compound nucleus (CN), which is just A + a -> CN -> B + b. For example, in the reaction 20F(p,n)20Ne, the compound nucleus is 21Ne. The reaction can proceed through excited states in 21Ne of a given energy and total spin J. (I am leaving out parity, which also must be conserved, for brevity.) The spin of that particular resonance is the one that goes into the B-W formula. Now each value of the channel spin S can have 2S + 1 spatial orientations from the magnetic quantum number m_S. Recall m_S = S, S - 1, ...., -S. So including all possible configurations of both particles in the entrance channel, there are (2s + 1)(2I + 1) total states for a channel spin S = s + I. If you follow this logic through, the cross section for one particular state must be divided by this factor. The cross section is also proportional to the spin of the CN, giving the factor (2J + 1) in the numerator, so the total statistical factor for the cross section is therefore:

         (2J + 1)
    ---------------- = g(S).
    (2s + 1)(2I + 1)

Hence, the expression given in the Breit-Wigner formula. For more details see Blatt & Weisskopf, Krane or your favorite Nuclear Physics text. Note that for the special case of spinless particles in the entrance channel, this factor is simply (2L + 1), as given in the limit for the geometrical cross section. I hope this helps, rather than adding to your confusion. Isn't physics PHUN?!? :)
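A quick numerical illustration (added, not in the original post): with the example values above — incident spin s = 1/2, target spin I = 5/2 — a resonance of total spin J = 3 carries the statistical factor

$$g=\frac{2J+1}{(2s+1)(2I+1)}=\frac{7}{2\times 6}=\frac{7}{12},$$

reflecting the 2J + 1 = 7 magnetic substates of the resonance averaged over the (2s + 1)(2I + 1) = 12 equally likely entrance-channel spin states.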
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9402030110359192, "perplexity": 642.2291389772455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746398.20/warc/CC-MAIN-20181120130743-20181120152743-00190.warc.gz"}