https://link.springer.com/article/10.1007/s10474-015-0529-2?wt_mc=email.event.1.SEM.ArticleAuthorOnlineFirst
Acta Mathematica Hungarica, Volume 146, Issue 2, pp 306–331

# Exact Kronecker constants of three element sets

Hare, K.E. & Ramsey, L.T., Acta Math. Hungar. (2015) 146: 306. DOI: 10.1007/s10474-015-0529-2

## Abstract

For any three element set of positive integers $\{a,b,n\}$, with $a < b < n$, $n$ sufficiently large and $\gcd(a, b)=1$, we find the least $\alpha$ such that given any real numbers $t_1, t_2, t_3$ there is a real number $x$ such that

$$\max \{ \langle ax-t_{1}\rangle , \langle bx-t_{2}\rangle , \langle nx-t_{3}\rangle \} \leq \alpha,$$

where $\langle \cdot \rangle$ denotes the distance to the nearest integer. The number $\alpha$ is known as the angular Kronecker constant of $\{a,b,n\}$. We also find the least $\beta$ such that the same inequality holds with upper bound $\beta$ when we consider only approximating $t_1, t_2, t_3 \in \{0, 1/2\}$, the so-called binary Kronecker constant. The answers are complicated and depend on the congruence of $n$ mod $(a + b)$. Surprisingly, the angular and binary Kronecker constants agree except if $n \equiv a^{2}$ mod $(a + b)$.

### Mathematics Subject Classification

primary 42A10; secondary 43A46, 11J71

### Key words and phrases

Kronecker constant, trigonometric approximation

## Authors and Affiliations

1. Department of Pure Mathematics, University of Waterloo, Waterloo, Canada
2. Department of Mathematics, University of Hawaii at Manoa, Honolulu, USA
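The definitions above can be made concrete with a brute-force numerical sketch (not from the paper): estimate a Kronecker constant of a small set by grid search over $x$ in one period. The set $\{2,3,7\}$, the grid resolutions, and the restriction to binary targets $\{0, 1/2\}$ are illustrative assumptions, not values from the article.

```python
from itertools import product

def dist_to_int(x):
    # <x>: distance from x to the nearest integer
    return abs(x - round(x))

def kronecker_constant(freqs, targets_per_coord, x_steps=5000):
    # Worst case over target vectors t of the best uniform approximation:
    #   max over t of  min over x in [0,1) of  max_j <f_j * x - t_j>.
    # Since the f_j are integers, x can be restricted to one period [0, 1).
    worst = 0.0
    for t in product(targets_per_coord, repeat=len(freqs)):
        best = min(
            max(dist_to_int(f * (i / x_steps) - tj)
                for f, tj in zip(freqs, t))
            for i in range(x_steps)
        )
        worst = max(worst, best)
    return worst

# Binary Kronecker constant estimate for {2, 3, 7}: targets only in {0, 1/2}
beta = kronecker_constant([2, 3, 7], [0.0, 0.5])
```

Grid search is exponential in the number of target coordinates, so this only illustrates the definition for tiny sets; the paper's point is precisely that closed-form answers replace such searches.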
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9786056876182556, "perplexity": 1111.2551779166993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463615105.83/warc/CC-MAIN-20170530124450-20170530144450-00175.warc.gz"}
http://atrium.math.wisc.edu/historical/
# Home

Welcome. In 1997 the UW-Madison Mathematics Department celebrated the 100th anniversary of the granting of the first PhD in Mathematics by the University of Wisconsin. The first Wisconsin Mathematics PhD was awarded to Henry Freeman Stecker in 1897, with a thesis titled "On the roots of equations, particularly the imaginary roots of numerical equations." Stecker, a native of Sheboygan, Wisconsin, went on to have a successful career at Penn State University. The peak years of PhD production were 1970 and 1971: thirty-nine people received their PhDs in each of those years. It is interesting to note that two of the first five PhDs from this department were granted to women. The University of Wisconsin is one of the leading PhD-granting institutions in the U.S., and the Mathematics Department contributes to this ranking.

Historical Faculty Information · Historical Resources · Math Genealogy from AMS
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8037595152854919, "perplexity": 3029.324104336027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709224828/warc/CC-MAIN-20130516130024-00027-ip-10-60-113-184.ec2.internal.warc.gz"}
https://zerobone.net/blog/cs/2020-02-21-non-recursive-extended-euklidian-algorithm/
# Extended Euclidean algorithm without stack or recursion

Typical implementations of the Euclidean algorithm on the internet just iteratively calculate remainders until 0 is reached. However, sometimes you also need to calculate the linear combination coefficients for the greatest common divisor.

## Extended Euclidean algorithm

The extended Euclidean algorithm allows us not only to calculate the gcd (greatest common divisor) of 2 numbers, but also gives us a representation of the result in the form of a linear combination:

$$\gcd(a, b) = u \cdot a + v \cdot b \quad u,v \in \mathbb{Z}$$

The gcd of more than 2 numbers can always be obtained by iteratively calculating the gcd of 2 numbers. For example, let us calculate $\gcd(14, 5)$:

$$\begin{aligned} 14 &= 5 \cdot 2 + 4 \\ 5 &= 4 \cdot 1 + 1 \\ 4 &= 1 \cdot 4 + 0 \end{aligned}$$

So the greatest common divisor of $14$ and $5$ is $1$. We can find the linear combination coefficients by writing $1$ in terms of $14$ and $5$:

$$\begin{aligned} 1 &= 5 - 4 \cdot 1 \\ &= 5 - (14 - 5 \cdot 2) \cdot 1 \\ &= 5 - 14 + 5 \cdot 2 \\ &= 3 \cdot 5 + (-1) \cdot 14 \end{aligned}$$

So in this case $u = -1$ and $v = 3$:

$$\gcd(14, 5) = (-1) \cdot 14 + 3 \cdot 5 = 1$$

We can calculate the linear combination coefficients by doing back substitution. But it is not so easy to implement this without recursion, because the back substitution is done while we are climbing out of the recursive calls. We will implement the algorithm recursively first.

## Recursive implementation

The formula

$$\gcd(a, b) = \begin{cases} b, & \text{if}\ a = 0 \\ \gcd(b \bmod a, a), & \text{otherwise} \end{cases}$$

allows us to describe the algorithm in a functional way:

- If $a = 0$, then the greatest common divisor is $b$, with coefficients $u = 0$ and $v = 1$ (since $b = 0 \cdot a + 1 \cdot b$).
- Otherwise, we make the problem simpler by calculating $\gcd(b \bmod a, a)$. We can calculate the new coefficients based on the coefficients of the simpler problem.
So, how can we calculate $u$ and $v$ so that

$$\gcd(a, b) = u \cdot a + v \cdot b$$

by knowing $u'$ and $v'$ with

$$\gcd(b \bmod a, a) = u' \cdot (b \bmod a) + v' \cdot a \; ?$$

In order to do that we can write $b \bmod a$ in terms of the initial $a$ and $b$:

$$\begin{aligned} \gcd(b \bmod a, a) &= u' \cdot (b \bmod a) + v' \cdot a \\ &= u' \cdot (b - \left\lfloor \frac{b}{a} \right\rfloor \cdot a) + v' \cdot a \\ &= u' \cdot b - u' \cdot \left\lfloor \frac{b}{a} \right\rfloor \cdot a + v' \cdot a \\ &= (v' - u' \cdot \left\lfloor \frac{b}{a} \right\rfloor) \cdot a + u' \cdot b \end{aligned}$$

So the new linear combination coefficients are:

$$\begin{aligned} u &= v' - u' \cdot \left\lfloor \frac{b}{a} \right\rfloor \\ v &= u' \end{aligned}$$

With this formula we are now ready to implement the algorithm:

```python
class GCD_Result:
    # Representation of the result
    def __init__(self, gcd, u, v):
        self.gcd = gcd
        self.u = u
        self.v = v

def extended_gcd(a, b):
    if a == 0:
        return GCD_Result(b, 0, 1)
    result = extended_gcd(b % a, a)
    u = result.u                                # save u'
    result.u = result.v - (b // a) * result.u   # u = v' - u' * (b // a)
    result.v = u                                # v = u'
    return result
```

## Non-recursive implementation

The recursion in the algorithm above cannot be easily eliminated because the function is not tail-recursive. In order to implement the algorithm with a loop, we need to define a sequence of division remainders and then update the corresponding coefficients as we calculate the remainders. Formally, we can define a finite sequence $r_n$:

$$\begin{aligned} r_1 &= a \\ r_2 &= b \\ r_{n+2} &= r_n \bmod r_{n+1} \end{aligned}$$

If $r_{n+1} = 0$, $r_{n+2}$ is not defined. We can write each $r_n$ as a linear combination of $a$ and $b$. Now we are interested in how the coefficients $u$ and $v$ change as we calculate remainders.
To do this formally, we will need to define two new finite sequences $u_n$ and $v_n$ which will represent the linear combination coefficients:

$$r_n = u_n \cdot a + v_n \cdot b$$

By definition, $r_1 = a$ and $r_2 = b$, so we can directly write the linear combination coefficients for $r_1$ and $r_2$:

$$\begin{aligned} u_1 &= 1 \\ v_1 &= 0 \\ u_2 &= 0 \\ v_2 &= 1 \end{aligned}$$

Let $q_n$ be the finite sequence of integer quotients in $r_n$:

$$r_n = r_{n+1} \cdot q_{n+2} + r_{n+2}$$

Now we can write $u_n$ and $v_n$ in terms of $q_n$:

$$\begin{aligned} r_{n+2} &= r_n - r_{n+1} \cdot q_{n+2} \\ &= u_n \cdot a + v_n \cdot b - r_{n+1} \cdot q_{n+2} \\ &= u_n \cdot a + v_n \cdot b - (u_{n+1} \cdot a + v_{n+1} \cdot b) \cdot q_{n+2} \\ &= u_n \cdot a + v_n \cdot b - u_{n+1} \cdot a \cdot q_{n+2} - v_{n+1} \cdot b \cdot q_{n+2} \\ &= (u_n - u_{n+1} \cdot q_{n+2}) \cdot a + (v_n - v_{n+1} \cdot q_{n+2}) \cdot b \end{aligned}$$

To get the formula for $u_n$ and $v_n$ we can just substitute $n$ for $n + 2$:

$$\begin{aligned} u_n &= u_{n-2} - q_n \cdot u_{n-1} \\ v_n &= v_{n-2} - q_n \cdot v_{n-1} \end{aligned}$$

With this formula and the initial values of the $u_n$ and $v_n$ sequences we can now implement the extended Euclidean algorithm without recursion:

```python
def extended_gcd(a, b):
    if a == 0:
        # The algorithm will work correctly without this check,
        # but it would take one extra iteration of the loop
        return GCD_Result(b, 0, 1)
    unPrev = 1
    vnPrev = 0
    unCur = 0
    vnCur = 1
    while b != 0:
        # Calculate the new element of the qn sequence
        qn = a // b
        # Calculate the new element of the rn sequence
        newRemainder = a % b
        a = b
        b = newRemainder
        # Calculate new coefficients with the formula above
        unNew = unPrev - qn * unCur
        vnNew = vnPrev - qn * vnCur
        # Shift coefficients
        unPrev = unCur
        vnPrev = vnCur
        unCur = unNew
        vnCur = vnNew
    return GCD_Result(a, unPrev, vnPrev)
```

## Example

We can visualize the finite sequences we defined and see how the algorithm works with a table. We will calculate $\gcd(104, 47)$ and its linear combination coefficients:

$$\gcd(104, 47) = u \cdot 104 + v \cdot 47$$

| $r_n$ | $q_n$ | $u_n$ | $v_n$ |
|------:|------:|------:|------:|
| 104 | - | 1 | 0 |
| 47 | - | 0 | 1 |
| 10 | 2 | 1 | -2 |
| 7 | 4 | -4 | 9 |
| 3 | 1 | 5 | -11 |
| 1 | 2 | -14 | 31 |
| 0 | 3 | 47 | -104 |

At each step we first calculate the next element of the $q_n$ sequence and then use it to calculate the new linear combination coefficients $u_n$ and $v_n$. The result of the algorithm:

$$\gcd(104, 47) = -14 \cdot 104 + 31 \cdot 47 = 1$$

## Improvement of the non-recursive solution

As we see in the example above, we don't need to calculate the last row of the table, because we aren't interested in the linear combination that forms zero. We can terminate the algorithm directly after calculating the new element of the $r_n$ sequence:

```python
def extended_gcd(a, b):
    if a == 0:
        # Optional check
        return GCD_Result(b, 0, 1)
    if b == 0:
        # Without this check the first iteration would divide by zero
        return GCD_Result(a, 1, 0)
    unPrev = 1
    vnPrev = 0
    unCur = 0
    vnCur = 1
    while True:
        qn = a // b
        newR = a % b
        a = b
        b = newR
        if b == 0:
            return GCD_Result(a, unCur, vnCur)
        # Update coefficients
        unNew = unPrev - qn * unCur
        vnNew = vnPrev - qn * vnCur
        # Shift coefficients
        unPrev = unCur
        vnPrev = vnCur
        unCur = unNew
        vnCur = vnNew
```
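As a quick sanity check, here is a standalone usage sketch: it repeats the result class and the final iterative function (in a compact tuple-assignment style, behaviorally the same), verifies the worked example, and applies the coefficients to the classic use case of computing a modular inverse. The `mod_inverse` helper is an illustration added here, not part of the original article.

```python
class GCD_Result:
    def __init__(self, gcd, u, v):
        self.gcd, self.u, self.v = gcd, u, v

def extended_gcd(a, b):
    # Final, improved iterative version from the article
    if a == 0:
        return GCD_Result(b, 0, 1)
    if b == 0:
        return GCD_Result(a, 1, 0)
    unPrev, vnPrev, unCur, vnCur = 1, 0, 0, 1
    while True:
        qn, newR = a // b, a % b
        a, b = b, newR
        if b == 0:
            return GCD_Result(a, unCur, vnCur)
        # u_n = u_{n-2} - q_n * u_{n-1}, shifting as we go
        unPrev, unCur = unCur, unPrev - qn * unCur
        vnPrev, vnCur = vnCur, vnPrev - qn * vnCur

r = extended_gcd(104, 47)
# Worked example from the table: gcd = 1, u = -14, v = 31

def mod_inverse(x, m):
    # Hypothetical helper: the inverse of x mod m exists iff gcd(x, m) == 1;
    # then u from u*x + v*m = 1 gives x^-1 mod m.
    res = extended_gcd(x, m)
    if res.gcd != 1:
        raise ValueError("no inverse: gcd(x, m) != 1")
    return res.u % m

inv = mod_inverse(47, 104)  # (47 * inv) % 104 == 1
```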
{"extraction_info": {"found_math": true, "script_math_tex": 55, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999675750732422, "perplexity": 1935.7198719660466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348500712.83/warc/CC-MAIN-20200605111910-20200605141910-00502.warc.gz"}
http://math.stackexchange.com/questions/128925/use-cauchy-riemann-to-prove-this-function-is-differentiable-at-all-points
# Use Cauchy-Riemann to prove this function is differentiable at all points

I expanded it out and got $e^{4z+1} = e^{4x+1}\cos{4y} + ie^{4x+1}\sin{4y}$

Then my CR equations were:

$U_x = (4x+1)e^{4x+1}(4)(\cos 4y)$

$U_y = -e^{4x+1}(\sin4y)$

$V_x = (4x+1)e^{4x+1}(4)(\sin 4y)$

$V_y = e^{4x+1}(\cos 4y)$

Taking $U_x = V_y$ I get

$(4x+1)e^{4x+1}(4)(\cos 4y) = e^{4x+1}(\cos 4y)$

$(4)(4x+1) = 1$

But that can't be right, as then it would mean the CR equations are only satisfied for certain values of x. So what am I doing wrong?

---

If you have $f(z) = f(x + iy)$, then a necessary and sufficient condition for it to be complex differentiable is that it satisfies the Cauchy-Riemann equations (together with continuity of the partial derivatives, which holds here). This means that if you set $u(x,y) := \Re f$ and $v(x,y) := \Im f$, then you need to show that the following holds: $$u_x = v_y$$ and $$u_y = - v_x$$ In your case you have $$u(x,y) = \cos (4y) e^{4x + 1}$$ and $$v(x,y) = \sin (4y) e^{4x + 1}$$ This yields $$u_x = 4 \cos (4y) e^{4x + 1} = v_y$$ and $$u_y = -4 \sin (4y) e^{4x + 1} = - v_x$$ So you see that your $f$ is differentiable. (Your error was in the $x$-derivatives: the chain rule gives $\partial_x e^{4x+1} = 4e^{4x+1}$, with no extra factor of $(4x+1)$.) Hope this helps.
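The answer's computation can be spot-checked numerically (a sketch added here, not part of the original exchange): approximate the four partials of $u = \Re f$ and $v = \Im f$ for $f(z) = e^{4z+1}$ by central differences and check that both Cauchy-Riemann residuals vanish. The sample point and step size are arbitrary illustrative choices.

```python
import cmath

def f(z):
    return cmath.exp(4 * z + 1)

def cr_residuals(x, y, h=1e-6):
    # u = Re f, v = Im f; central finite differences for the four partials
    u = lambda a, b: f(complex(a, b)).real
    v = lambda a, b: f(complex(a, b)).imag
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    # Cauchy-Riemann equations: u_x = v_y and u_y = -v_x
    return ux - vy, uy + vx

r1, r2 = cr_residuals(0.3, -0.7)
# both residuals should be numerically close to zero
```

Had the asker's $U_x = (4x+1)e^{4x+1}(4)\cos 4y$ been correct, the first residual would be far from zero at generic points, which is a quick way to detect the differentiation slip.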
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523218274116516, "perplexity": 57.08808813462287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010451932/warc/CC-MAIN-20140305090731-00027-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/recursive-sequence-problem-proofs-by-mathematical-induction.44488/
# Recursive sequence problem: proofs by mathematical induction

1. Sep 24, 2004

Guys, I'm trying to prove by induction that the sequence given by $$a_{n+1}=3-\frac{1}{a_n} \qquad a_1=1$$ is increasing and $$a_n < 3 \qquad \forall n .$$ Is the following correct? Thank you.

$$n = 1 \Longrightarrow a_2=2>a_1$$ is true. We assume the claim holds for $$n = k$$, i.e. $$a_{k+1} > a_k .$$ Since every term is positive (indeed $$a_n \geq 1$$ for all n, by an easy induction from $$a_1 = 1$$), taking reciprocals reverses the inequality, so $$\frac{1}{a_{k+1}} < \frac{1}{a_k}$$ and hence $$3-\frac{1}{a_{k+1}} > 3-\frac{1}{a_k}$$ $$a_{k+2} > a_{k+1} ,$$ which is the claim for $$n=k+1$$. This shows, by mathematical induction, that $$a_{n+1} > a_{n} \qquad \forall n .$$

We already know that $$a_1 < 3$$ is true. We assume the claim holds for $$n=k$$. Then, using again that $$a_k \geq 1 > 0$$, $$a_k < 3$$ $$\frac{1}{a_k} > \frac{1}{3}$$ $$-\frac{1}{a_k} < -\frac{1}{3}$$ $$3-\frac{1}{a_k} < 3-\frac{1}{3}$$ $$a_{k+1} < \frac{8}{3} < 3$$ so $$a_{k+1} < 3$$ holds for $$n = k+1$$. Thus, $$a_{n} < 3 \qquad \forall n .$$
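A quick numerical check of both claims (not from the thread): iterate the recursion and confirm the terms increase and stay below 3. The limit candidate $(3+\sqrt 5)/2 \approx 2.618$ is the positive root of $L = 3 - 1/L$, added here for illustration.

```python
from math import sqrt

# Iterate a_{n+1} = 3 - 1/a_n with a_1 = 1 and check the two claims:
# the sequence is (strictly) increasing and bounded above by 3.
terms = [1.0]
for _ in range(15):
    terms.append(3 - 1 / terms[-1])

increasing = all(b > a for a, b in zip(terms, terms[1:]))
bounded = all(t < 3 for t in terms)

# A monotone bounded sequence converges; the limit solves L = 3 - 1/L,
# i.e. L^2 - 3L + 1 = 0, whose relevant root is (3 + sqrt(5)) / 2.
limit = (3 + sqrt(5)) / 2
```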
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9947373270988464, "perplexity": 645.0736987232389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742253.21/warc/CC-MAIN-20181114170648-20181114191754-00038.warc.gz"}
http://math.stackexchange.com/users/23875/frank-science?tab=activity&sort=revisions
Frank Science · Reputation 3,515 · Impact ~47k people reached

# 281 Revisions

- Jul 17 · revised "Approximation of irrational numbers?" · added 1 character in body
- Jul 17 · revised "Approximation of irrational numbers?" · added 238 characters in body
- Jul 17 · revised "Approximation of irrational numbers?" · added 24 characters in body
- Jul 17 · revised "Approximation of irrational numbers?" · added 24 characters in body
- Jun 7 · revised "Quotient manifold theorem provides a fibration?" · added 36 characters in body
- May 9 · revised "The relation between homotopy equivalence and contractible mapping cone?" · added 52 characters in body
- Apr 30 · revised "What is the motivation for the 'Covering Homotopy Property' in a fibration?" · edited body
- Apr 22 · revised "Should isometries be linear?" · added 1 character in body
- Apr 17 · revised "Decompose a vector space into invariant subspaces?" · added 70 characters in body
- Mar 28 · revised "Prove that $Ker(g \otimes k)= Im(f \otimes 1_{N}) + Im (1_{M} \otimes h)$" · added 409 characters in body
- Mar 28 · revised "Trouble finding the limits of integration for polar coordinates" · LaTeX
- Mar 28 · revised "Can anyone check if this correct?" · LaTeX
- Mar 28 · revised "Proving that the tensor product is right exact" · deleted 20 characters in body
- Mar 21 · revised "Homomorphism of local rings" · added 12 characters in body
- Mar 13 · revised "The hyper-derived functors $\mathbb L_\bullet F$ are just derived functors of $H_0F$?" · added 248 characters in body
- Mar 8 · revised "The hyper-derived functors $\mathbb L_\bullet F$ are just derived functors of $H_0F$?" · added 2 characters in body
- Mar 7 · revised "A quasi-isomorphism between the total complex of a Cartan-Eilenberg resolution and the complex per se." · added 32 characters in body
- Feb 24 · revised "Convergence in measure and convergence of norm implies convergence in L^p" · added 30 characters in body
- Feb 23 · revised "A short exact sequence of chain complexes with null-homotopic chain maps" · added 17 characters in body
- Feb 23 · revised "A short exact sequence of chain complexes with null-homotopic chain maps" · edited title
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9645105004310608, "perplexity": 3311.2306371607324}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989897.84/warc/CC-MAIN-20150728002309-00321-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/car-acceleration-kinematics-give-average-power.870456/
# Car acceleration kinematics give average power

• #1 late347

## Homework Statement

The performance of a car is being tested. The distance driven is 400 m. The car accelerates from a standing start (v0 = 0). (Reader's note: it can plausibly be assumed that car performance tests are run on level ground.) The car's mass is 1150 kg. The time taken in the acceleration is 16.1 seconds. The car's final velocity is 143 km/h. Calculate the average power of the total force (net force?) which accelerates the car.

## Homework Equations

delta E = W
P = W / t
F * s = W
Ekin = (1/2) * m * v^2

## The Attempt at a Solution

We assume level ground at the car testing event. Likewise, a standing start means the car begins at rest. I had difficulty with this problem because I got the wrong result; evidently there was a mistake or oversight in the assumptions on which I based my calculation.

Potential energy does not change in this situation with respect to the road: the car stays on the road and does not elevate itself relative to it, at least not appreciably. We need to convert km/h into m/s: 143 km/h = 39.72 m/s, so v0 = 0 and v1 = 39.72 m/s.

The work done is calculated using kinetic energies, delta Ekin = W. The motionless car at v0 in fact has Ekin0 = 0 J, and Ekin1 = 0.5 * 1150 kg * (39.72 m/s)^2 = 907165.08 J. This is the maximum kinetic energy of the car, I think, and equals the work done.

Then I calculated the force from F * s = W, i.e. F = W / s = 907165.08 J / 400 m = 2267.91 N. I then calculated P = (F * s) / t = (400 m * 2267.91 N) / 16.1 s = 56345.59 W.

I got a nagging feeling that I looked at the question wrongly at this point, even though the result looks roughly OK... Maybe I was overthinking it, and it could simply be calculated as P = W / t: average power = (work done by the force) / (time used to do that work). Is that the correct interpretation?
I looked back at a book example, and that was indeed the formula to be used for average power. Anyhow, this gives 907165.08 J / 16.1 s = 56345.65 W, roughly 56 kW of power.

Why am I calculating only the average power such that I divide the work done by the time taken during acceleration? I guess that's where I got confused. Doesn't the accelerating force affect the car all the time? Why focus only on the first 16.1 seconds?

In other words, when we focus on the first 16.1 seconds, we are focusing on the acceleration portion of the car's entire journey (the entire journey was 400 m). So we would no longer be answering the question about the power of the accelerating force if we were to examine all portions of the car's journey, i.e. the entire 400 m?

It does appear possible that the car reaches its Vmax already before the 400 m mark. This would mean that the car drives the accelerating portion, and the final portion of the journey is traveled at essentially constant speed Vmax (Vmax = 39.72 m/s). In real life I think we could not know for sure unless we knew exactly what the acceleration of the car was: was it constant or not?

The average acceleration can be calculated as a = (delta v) / (delta t) = (39.72 m/s) / (16.1 s) = 2.4670 m/s^2. Using that value, it looks like the car reaches Vmax already at a distance of 319.74 m. One can use s = v0*t + 0.5*a*t^2.

• #2 dean barry

I sneaked a look at the answer; it looks as though air drag has been ignored, and the average power has been calculated using the KE change and the elapsed time.

• #3 late347

Do you think I made an erroneous assumption regarding F * s = W? I assumed F * 400 m = 907165 J. And indeed it does look like air drag was ignored, along with other things like friction between wheel and ground. It does look, upon closer examination, like the car would reach Vmax already before 400 m.
This could be shown by assuming the acceleration is constant and using s = v0*t + 0.5*a*t^2. However, it is doubtful whether this models real-life car acceleration from a standstill.

• #4 Homework Helper Gold Member

"Do you think I made an erroneous assumption regarding F * s = W? I assumed F * 400 m = 907165 J. And indeed it does look like air drag was ignored, along with other things like friction between wheel and ground."

Yes, your calculation of the force was wrong. Even if the power were constant (which we are not told), the force would vary. You found the average force, but that does not help in finding average power. Dividing the work done by the time is the valid way to find that.

With regard to friction, the problem certainly does not ignore friction between tyre and ground; with no friction, the car would not move. You perhaps mean rolling resistance. In fact, if you read the question carefully, you will find that it does not ignore drag or rolling resistance: it asks for the average power of the net force, i.e. the propulsive force remaining after subtracting drag etc.

• #5 late347

No values were given for any frictions or air drag, though. I suppose one has to be content that the average power was calculated. It looks like the average power was to be calculated as the work done divided by time.
I checked it from the textbook again to be sure.

I suppose that in real life the friction would vary at the beginning of the car's journey, because the car starts from a standstill and should be accelerated in an optimal manner to its max velocity. Although 143 km/h is quite a fast speed indeed, especially for a regular car. The performance test driver ought to avoid "skidding the tyres" when accelerating from a standstill, etc. Indeed, this comes back to the idea that possibly less power is used in the beginning portion of the acceleration, when starting from a standstill.

• #6 Homework Helper Gold Member

You don't seem to have comprehended the last paragraph in my post. The question asks for the average power provided by the net force, that is, the force remaining after allowing for all losses due to drag, rolling resistance, skidding, whatever. The net force is what leads directly to the acceleration (ΣF = ma). Thus the answer is the gain in KE divided by the elapsed time. No assumptions, no approximations.
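The accepted approach in the thread (average power of the net force = gain in KE / elapsed time) and the poster's side calculation (where a constant-acceleration car would hit top speed) can be written out as a short computation; this is a sketch of the thread's arithmetic, not part of the original posts.

```python
# Average power of the net force: gain in kinetic energy / elapsed time
m = 1150.0        # car mass, kg
v = 143 / 3.6     # final speed, m/s (143 km/h, about 39.72 m/s)
t = 16.1          # acceleration time, s
s_total = 400.0   # total distance, m

ke = 0.5 * m * v**2   # gain in kinetic energy, about 9.07e5 J
p_avg = ke / t        # average power, about 56.3 kW

# Side check under the constant-acceleration assumption discussed above:
a = v / t                  # average acceleration, about 2.467 m/s^2
s_accel = v**2 / (2 * a)   # distance needed to reach v: about 320 m < 400 m
```

The side check confirms the poster's observation: under constant acceleration the car would reach Vmax well before the 400 m mark, so dividing the work by the *total* trip time would mix in a constant-speed segment and understate the power of the accelerating phase.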
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8520570993423462, "perplexity": 1374.0588262033175}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710813.48/warc/CC-MAIN-20221201121601-20221201151601-00481.warc.gz"}
http://jeffrouder.blogspot.com/2015/05/simulating-bayes-factors-and-p-values.html
## Sunday, May 31, 2015

### Simulating Bayes Factors and p-Values

I see people critiquing Bayes factors based on simulations these days; examples include recent blog posts by Uri Simonsohn and Dr. R. These authors assume some truth, say that the true effect size is .4, and then simulate what the distribution of Bayes factors looks like across many replicate samples. The resulting claim is that Bayes factors are biased and don't control long-run error rates. I think the use of such simulations is not helpful. With tongue in cheek, I consider them frequocentrist. Yeah, I just made up that word. Let's pronounce it as "freak-quo-centrist." It refers to using frequentist criteria and standards to evaluate Bayesian arguments.

To show that frequocentric arguments are lacking, I am going to do the reverse here: I am going to evaluate p-values with a Bayescentric simulation.

I created a set of 40,000 replicate experiments of 10 observations each. Half of these sets were from the null model; half were from an alternative model with a true effect size of .4. Let's suppose you picked one of these 40,000 and asked if it were from the null model or from the effect model. If you ignore the observations entirely, then you would rightly think it is a 50-50 proposition. The question is how much you gain from looking at the data.

Figure 1A shows the histograms of observed effect sizes for each model. The top histogram (salmon) is for the effect model; the bottom, downward-going histogram (blue) is for the null model. I drew it downward to reduce clutter. The arrows highlight the bin between .5 and .6. Suppose we had observed an effect size there. According to the simulation, 2,221 of the 20,000 replicates under the alternative model are in this bin, and 599 of the 20,000 replicates under the null model are in this bin. If we had observed an effect size in this bin, then the proportion of times it comes from the null model is 599/(2,221+599) = .21.
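The setup above can be sketched in a few lines. This is my reconstruction, not the author's code: I assume normal data with unit standard deviation and the observed effect size computed as the sample mean over the sample standard deviation; the seed and bin are illustrative.

```python
import random
import statistics

random.seed(1)
n, reps = 10, 20000  # 10 observations per replicate, 20,000 per model

def effect_size(mu):
    # One replicate: observed effect size d = mean / sd
    xs = [random.gauss(mu, 1) for _ in range(n)]
    return statistics.mean(xs) / statistics.stdev(xs)

null_d = [effect_size(0.0) for _ in range(reps)]  # null model
alt_d  = [effect_size(0.4) for _ in range(reps)]  # true effect size .4

# Replicates whose observed effect size lands in the bin [.5, .6)
null_hits = sum(0.5 <= d < 0.6 for d in null_d)
alt_hits  = sum(0.5 <= d < 0.6 for d in alt_d)

# Proportion of in-bin replicates that came from the null model;
# the post reports about .21 for this bin.
p_null = null_hits / (null_hits + alt_hits)
```

Under these assumptions the in-bin counts come out near the post's 599 and 2,221, and the proportion lands near .21, though the exact numbers vary with the seed.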
So, with this observed effect size, the probability goes from 50-50 to 20-80.  Figure 1B shows the proportion of replicates from the null model, and the dark point is for the highlighted bin.  As a rule, the proportion of replicates from the null decreases with effect size. We can see how well p-values match these probabilities.  The dark red solid line is the one-tail p-values, and these are miscalibrated.  They clearly overstate the evidence against the null and for an effect.  Bayes factors, in contrast, get this problem exactly right---it is the problem they are designed to solve.  The dashed lines show the probabilities derived from the Bayes factors, and they are spot on.  Of course, we didn't need simulations to show this concordance.  It falls directly from the law of conditional probability. Some of you might find this demonstration unhelpful because it misses the point of what a p-value is what it does.  I get it.  It's exactly how I feel about others' simulations of Bayes factors. This blog post is based on my recent PBR paper: Optional Stopping: No Problem for Bayesians.  It shows that Bayes factors solves the problem they are designed to solve even in the presence of optional stopping. Dr. R said... Dear Jeff, I am happy that we are engaging in a constructive dialogue that may not result in agreement but at least better understanding of the sources of disagreement. I am also happy that this exchange is happening online in real time without months of delay and hidden peer-reviews. My first question is why you used a one-tailed p-value. My second question is what your prior distribution is (I assume a half Cauchy with scaling factor .707). My third question is how you would interpret an observed effect size of d = -1, with a posteriori probabilty of ~ 1 in favor of the null-hypothesis. Finally, I am not sure how posteriori-probabilities are related to Bayes-Factors. Sincerely, Dr. R (Uli) Jeff Rouder said... Hi Uli, 1. 
I used a 1-sided p-value because I knew the direction of the effect was positive. The goal was to see how much more info there was when conditioning on the data than when not. 2. The BF was for the models at hand. They were given. So in this case it was two points: one at zero and one at .4. 3. Yes, d = -1 is evidence for the null vs. .4. 4. post-p = BF/(1+BF) for equal prior odds. If the prior odds are pi, then pi*BF/(1+pi*BF). Best, Jeff

Winthrop Harvey said...

Firstly, it's not true that Bayesian methods in principle are biased against small effect sizes. This depends on your priors. But I don't think the "default" prior being biased against small effect sizes is a weakness. I think it's a strength! To paraphrase Cohen, "The Null is Always False (Even When It is True)." Oh sure, the theoretical null hypothesis can be really, truly TRUE - but if you take a large enough sample size you will find an effect "showing otherwise" at whichever p-value you want. This is not because the null hypothesis you stated is false - but rather because your experiment is imperfect, and is not actually testing the theoretical null hypothesis but an approximation, the experimental null hypothesis. We use controls to minimize confounds, and good study design does a good job of making sure that any confounds remaining are very small. But it is practically impossible to ELIMINATE confounds. You can't control conditions perfectly. There's no such thing as a perfect experiment; even in simulation studies you could have minute imperfections in random number generation or the physical activity of your computer producing some extraordinarily slight confound. You can't control everything, and when it comes to chaotic real-world systems everything has SOME effect. Maybe it's .00001, but there's going to be an effect, and if you have enough n, and your power actually increases with n, then you'll eventually detect it.
If you model the level of confounds in your experiment as a random variable, what is the probability that you just happen to hit exactly 0? It doesn't even matter what the probability distribution is, the chance of hitting EXACTLY 0 to perfect precision is, in fact, EXACTLY 0. The only thing you're sure about is that your experiment isn't perfect. The point being... if you get p=.0000001, on a difference of .5%, and then you say you reject the null hypothesis because it's just so UNLIKELY... you're in for some pain. Because what you've detected isn't that the null isn't true, what you've detected is the imperfection in your ability to create an experimental setup that actually tests the theoretical null. The experimental null you're testing is an APPROXIMATION of the theoretical null. You cannot reasonably expect to ever create an experiment with NO confounds of any arbitrarily small magnitude. The theoretical null may or may not be true. The experimental null is ALWAYS false, in the limit of large n. You cannot control for every confound - you cannot even conceive of every confound! But the problem is when people ignore the fact that experimental or systematic error can only be reduced, not eliminated, and then go on to think that p=.000000001 at a miniscule effect size is strong evidence against the null. But what a Bayesian says is, "I expect (have a prior) that even if the theoretical null is true, there's going to be some tiny confound I couldn't control, so if I see a very small effect, it's most likely a confound." Unless you SPECIFICALLY hypothesized (had a prior for!) a very small effect size, finding a small effect is strong evidence FOR the null regardless of the p value! 
If you were looking for an effect d=.4, but find an effect d=.05 with very low p-value, it’s really tempting to say, “Well, the effect was (much) smaller than we thought it was, but it’s real, here it is, look at this tiny p-value!” Bayes keeps us honest because it forces us to reveal via our priors EXACTLY how large an effect has to be before we will be able to think we are gathering evidence for it over the null. This is not a weakness, but a strength! If you don’t do this, you find ESP is real because that darn null hypothesis is so improbable. Winthrop Harvey said... Continued off last post: On Dr. R’s post, he says, “The main difference [between Bayes factors and p-values] is that p-values have a constant meaning for different sample sizes. That is, p = .04 has the same meaning in studies with N = 10, 100, or 1000 participants. However, the interpretation of Bayes-Factors changes with sample size.” He clarifies later, “In contrast, p-values have a consistent meaning. They quantify how probable it is that random sampling error alone could have produced a deviation between an observed sample parameter and a postulated population parameter.” This is true. The p-value is the probability that sampling error alone could have produced the result. But sampling error is never alone. Our experiments aren’t perfect. The null is always false (even when it's true). This means that the INTERPRETATION of a p-value VERY MUCH depends on sample size, and if you don’t acknowledge this you’re in for a world of trouble! P=.04 for 10 participants is very, very different from p=.04 for 1000 because the observed effect size to get p=.04 for 10 participants is very large, whereas the observed effect size needed to get p=.04 for 1000 participants is rather low. 
Not only is this of practical concern even if you think that the evidence is somehow equally good for both effects because the p-value is the same, but the evidence is NOT equally good for both because the small effect in the n=1000 sample could much more easily be a confound! If an effect is of a real, fixed size, then you expect increased sample size to result in decreased p. What this means is that for a given effect, increased sample size actually requires SMALLER p-values to provide the same level of evidence! If you keep getting the same p-value as n increases, that means your effect size is decreasing with n (which is pretty odd, if it's a real effect!). Wagenmakers's 2007 article "A practical solution to the pervasive problem of p-values" demonstrates this quite well (especially figure 6). (If you think it's unfair to use Bayesian methods to assess p-values, keep in mind also that Bayesian methods are actually, and provably, correct). This result is quite counterintuitive, which makes it extraordinarily important. Most people would say that a 500 person p=.04 study provides stronger evidence for an effect than a 50 person p=.04 study. Most people are wrong, and this is BEFORE even considering confounds which an overpowered study might become ensnared in! At best, you can say they provide equal evidence for their effects, but that the 500 person study has a much smaller one. But to even say that is to assume that your study is perfect. Smaller effect sizes are inherently less reliable even with equal statistical evidence. So if Dr. R thinks that "the interpretation of Bayes-Factors changes with sample size" is a unique weakness of Bayes factors, he's simply not reasoning correctly. P-values have exactly the same problem, only it's far, far worse because people aren't aware of it.
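The claim above, that the same p-value at a larger n carries less evidence for a fixed effect, can be checked with a simple two-point normal model comparison. This sketch is my own, not from the comment; the models (mu = 0 versus mu = .4, unit variance) and the fixed z-score of about 2.05 (roughly p = .04, two-sided) are illustrative choices:

```python
import math

def lr_effect_vs_null(n, z=2.05, d=0.4):
    # Likelihood ratio for H1: mu = d versus H0: mu = 0, for n unit-variance
    # observations whose sample mean sits at a *fixed* z-score (i.e., the
    # same p-value at every n): xbar = z / sqrt(n).
    # The normal likelihood ratio works out to exp(n * (xbar*d - d^2/2)).
    xbar = z / math.sqrt(n)
    return math.exp(n * (xbar * d - d * d / 2))

for n in (10, 100, 1000):
    print(n, lr_effect_vs_null(n))
# The same z-score (same p-value) favors the d = .4 model at n = 10 but
# overwhelmingly favors the null at n = 1000, because the observed mean
# z/sqrt(n) has shrunk far below the effect the alternative predicts.
```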
Again and again, Bayesian methods get attacked for some problem (subjectivity, interpretability) that p-values supposedly lack, when the case is really that Bayes keeps us honest and forces us to reveal analytical lability front and center, while p-values let us hide it.

Jeff Rouder said...
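The 40,000-replicate simulation from the post can be reproduced in outline. The sketch below is my own minimal reconstruction: it assumes normally distributed observations and takes the observed effect size to be the sample mean divided by the sample standard deviation, so the bin counts will differ slightly from the post's 2,221 and 599 because of simulation noise:

```python
import random
from statistics import mean, stdev

random.seed(2015)
n, reps = 10, 20000                      # 10 observations, 20,000 replicates per model

def observed_effect_sizes(true_d):
    # Observed effect size for each replicate: sample mean / sample SD
    out = []
    for _ in range(reps):
        x = [random.gauss(true_d, 1.0) for _ in range(n)]
        out.append(mean(x) / stdev(x))
    return out

d_null = observed_effect_sizes(0.0)      # null model
d_alt = observed_effect_sizes(0.4)       # effect model, true d = .4

# Replicates landing in the highlighted bin [.5, .6)
null_in_bin = sum(0.5 <= d < 0.6 for d in d_null)
alt_in_bin = sum(0.5 <= d < 0.6 for d in d_alt)

# Proportion of in-bin replicates that came from the null model
p_null = null_in_bin / (null_in_bin + alt_in_bin)
print(round(p_null, 2))                  # close to the .21 reported in the post
```

The printed proportion should land near the .21 computed from the post's counts of 599 and 2,221, up to simulation noise.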
https://math.berkeley.edu/about/events
# Department Events

For current offerings of the department's weekly and annual events, as well as conferences and workshops held in the department from time to time, please consult the Events Calendar. A summary of today's events is also shown on the Math Department home page.
https://www.mathphysicsbook.com/mathematics/clifford-groups/classification-of-clifford-algebras/isomorphisms/
# Isomorphisms

The Clifford algebra can be viewed as a $${\mathbb{Z}_{2}}$$-graded algebra, in that it can be decomposed into a direct sum of two vector subspaces generated by $${k}$$-vectors with $${k}$$ either even or odd. The even subspace, denoted $${C_{0}(r,s)}$$, is also a subalgebra, since the Clifford multiplication of two even $${k}$$-vectors remains even. By choosing $${\hat{e}_{0}^{2}=-1\in C(r,s)}$$ and considering the algebra generated by the orthonormal basis $${\hat{e}_{0}\hat{e}_{i}\:(i\neq0)}$$, it is not hard to show that $$\displaystyle C_{0}(r,s)\cong C(r,s-1)$$. Then the relationship $${C_{0}(r,s)\cong C_{0}(s,r)}$$ leads to the isomorphism $$\displaystyle C(r,s-1)\cong C(s,r-1)$$. One can also show that:

• $${C(r,s)\otimes C(2,0)\cong C(r,s)\otimes\mathbb{R}(2)\cong C(s+2,r)\cong C(r+1,s+1)}$$
• $${C(r,s)\otimes C(0,2)\cong C(r,s)\otimes\mathbb{H}\cong C(s,r+2)}$$
• $${C(r,s)\otimes C(0,4)\cong C(r,s)\otimes\mathbb{H}(2)\cong C\left(r,s+4\right)}$$
• $${C\left(r-4,s+4\right)\cong C\left(r,s\right)}$$
• The periodicity theorem (related to and sometimes referred to as Bott periodicity): $${C\left(r+8,s\right)\cong C\left(r,s+8\right)\cong C\left(r,s\right)\otimes\mathbb{R}\left(16\right)}$$

The first isomorphism $${C(r+1,s+1)\cong C(r,s)\otimes\mathbb{R}(2)}$$ means that we need only consider classifying Clifford algebras based on the values of $${r-s}$$, and the periodicity theorem means that we can focus on values of $${r-s}$$ mod 8. In physics, the most important signatures are Euclidean and Lorentzian; specific isomorphisms for some of these Clifford algebras are listed in the following table. Note that since the first column covers all values of $${r-s}$$ mod 8, it can be used to easily determine any other Clifford algebra.
| $${n}$$ | $${C(n,0)\cong C(1,n-1)\cong C_{0}(n,1)\cong C_{0}(1,n)}$$ | $${C(0,n)\cong C_{0}(n+1,0)\cong C_{0}(0,n+1)}$$ | $${C(n-1,1)}$$ |
|---|---|---|---|
| 1 | $${\mathbb{R}\oplus\mathbb{R}}$$ | $${\mathbb{C}}$$ | $${\mathbb{C}}$$ |
| 2 | $${\mathbb{R}\left(2\right)}$$ | $${\mathbb{H}}$$ | $${\mathbb{R}\left(2\right)}$$ |
| 3 | $${\mathbb{C}\left(2\right)}$$ | $${\mathbb{H}\oplus\mathbb{H}}$$ | $${\mathbb{R}\left(2\right)\oplus\mathbb{R}\left(2\right)}$$ |
| 4 | $${\mathbb{H}\left(2\right)}$$ | $${\mathbb{H}\left(2\right)}$$ | $${\mathbb{R}\left(4\right)}$$ |
| 5 | $${\mathbb{H}\left(2\right)\oplus\mathbb{H}\left(2\right)}$$ | $${\mathbb{C}\left(4\right)}$$ | $${\mathbb{C}\left(4\right)}$$ |
| 6 | $${\mathbb{H}\left(4\right)}$$ | $${\mathbb{R}\left(8\right)}$$ | $${\mathbb{H}\left(4\right)}$$ |
| 7 | $${\mathbb{C}\left(8\right)}$$ | $${\mathbb{R}\left(8\right)\oplus\mathbb{R}\left(8\right)}$$ | $${\mathbb{H}\left(4\right)\oplus\mathbb{H}\left(4\right)}$$ |
| 8 | $${\mathbb{R}\left(16\right)}$$ | $${\mathbb{R}\left(16\right)}$$ | $${\mathbb{H}\left(8\right)}$$ |

Note: Clifford multiplication corresponds to matrix multiplication in the isomorphic matrix algebra. Recall that our notation denotes e.g. the algebra of $${2\times2}$$ matrices of quaternions as $${\mathbb{H}(2)}$$. We can also form the complexified version of the Clifford algebra $${C(r,s)}$$, which is equivalent to considering the Clifford algebra generated by an inner product space $${\mathbb{C}^{n}}$$. Since the signature is irrelevant in this case, we simply write $${C\mathbb{^{C}}(n)}$$ where $${r+s=n}$$.
The complex Clifford algebras can be completely described by the following isomorphisms:

• $${C\mathbb{^{C}}(2n)\cong\mathbb{C}(2^{n})}$$
• $${C\mathbb{^{C}}(2n+1)\cong\mathbb{C}(2^{n})\oplus\mathbb{C}(2^{n})}$$

Note that this yields an isomorphism $${C\mathbb{^{C}}(2n)\cong C(n,n+1)\cong C(n+2,n-1)}$$; in contrast, $${C\mathbb{^{C}}(2n+1)}$$ is not isomorphic to any real Clifford algebra. Also note that $${C_{0}\mathbb{^{C}}(n)\cong C\mathbb{^{C}}(n-1)}$$. Δ Although the above are all valid algebra isomorphisms, the original formulation of a Clifford algebra includes an extra structure: the generating vector space $${\mathbb{R}^{n}}$$ that is explicitly embedded in $${C(r,s)}$$. This extra structure is lost in these isomorphisms, since the choice of such an embedding is not in general unique.
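As a concrete illustration, the classification by $${r-s}$$ mod 8 together with the dimension count $${\dim C(r,s)=2^{n}}$$ can be packaged into a small lookup. This is an illustrative sketch of my own (the function and its output format are not from the text), using the convention above that $${r}$$ generators square to $${+1}$$ and $${s}$$ to $${-1}$$; here R(1), C(1), H(1) stand for $${\mathbb{R}}$$, $${\mathbb{C}}$$, $${\mathbb{H}}$$ themselves:

```python
def real_clifford(r, s):
    # Classify the real Clifford algebra C(r, s) up to algebra isomorphism.
    # Only r - s mod 8 matters (Bott periodicity); the matrix size is fixed
    # by the total real dimension 2**n with n = r + s.
    n = r + s
    k = (r - s) % 8
    if k in (0, 2):                    # real matrix algebra R(2^(n/2))
        return f"R({2**(n//2)})"
    if k == 1:                         # R(m) + R(m) with m = 2^((n-1)/2)
        m = 2**((n - 1)//2)
        return f"R({m})+R({m})"
    if k in (3, 7):                    # complex matrix algebra C(2^((n-1)/2))
        return f"C({2**((n - 1)//2)})"
    if k in (4, 6):                    # quaternionic matrix algebra H(2^((n-2)/2))
        return f"H({2**((n - 2)//2)})"
    m = 2**((n - 3)//2)                # k == 5: H(m) + H(m)
    return f"H({m})+H({m})"

print(real_clifford(8, 0))   # R(16), matching the table's n = 8 row
print(real_clifford(3, 1))   # R(4), the Lorentzian entry C(n-1,1) at n = 4
print(real_clifford(1, 3))   # H(2), the opposite-signature convention
```

Spot-checking against the table (e.g. C(5,0) gives H(2)+H(2), C(0,6) gives R(8)) confirms the mod-8 pattern.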
https://www.solidot.org/translate/?nid=149778
## Dimensional estimates and rectifiability for measures satisfying linear PDE constraints. (arXiv:1811.01847v2 [math.AP] UPDATED)

We establish the rectifiability of measures satisfying a linear PDE constraint. The obtained rectifiability dimensions are optimal for many usual PDE operators, including all first-order systems and all second-order scalar operators. In particular, our general theorem provides a new proof of the rectifiability results for functions of bounded variation (BV) and functions of bounded deformation (BD). For divergence-free tensors we obtain refinements and new proofs of several known results on the rectifiability of varifolds and defect measures.
https://www.physicsforums.com/threads/questions-dealing-with-force.89092/
# Homework Help: Questions dealing with force

1. Sep 14, 2005

### Meteo

A couple of questions in this problem. A 20,000 kg rocket has a rocket motor that generates $$3.0 \times 10^5\:\mathrm{N}$$ of thrust.

1. What is the rocket's initial upward acceleration? I used the formula F=MA and got 15 but apparently that's not the right answer. So I'm stumped.

2. At an altitude of 5000 m the rocket's acceleration has increased to 6.0 m/s^2. What mass of fuel has it burned? I'm assuming I need the answer to the first part and that the 5000 m is irrelevant. $$300000=m_1*6$$ Then I should be able to get the answer $$20000-m_1$$ Thanks.

2. Sep 15, 2005

### M.Hamilton

I think you are on the right track. With problems like these the first step is to draw a free body diagram then sum the forces - perhaps the problem states that the rocket is taking off from Earth? After summing the forces you should have the answer for the total a. Let me know if that helps. Merle

3. Sep 15, 2005

### Meteo

Ah ok I see why my answer is wrong. I needed to subtract weight from the thrust.
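To make the thread's conclusion concrete, here is the arithmetic with the weight term included for both parts. This is a sketch assuming g = 9.8 m/s² (the value the answer of 15 vs. 5.2 suggests the textbook uses); part 2 applies the same force balance the reply recommends for part 1:

```python
g = 9.8          # m/s^2, assumed standard gravity
m0 = 20_000.0    # kg, initial rocket mass
F = 3.0e5        # N, thrust

# Part 1: Newton's second law including weight: F - m*g = m*a
a0 = (F - m0 * g) / m0
print(a0)        # 5.2 m/s^2 upward, not the 15 m/s^2 from F = m*a alone

# Part 2: solve the same balance F - m1*g = m1*a for the mass m1
# at the moment the acceleration has risen to 6.0 m/s^2
a1 = 6.0
m1 = F / (a1 + g)
print(round(m1), round(m0 - m1))   # remaining mass, fuel burned (kg)
```

The fuel burned comes out to roughly 1,000 kg, so the 5000 m altitude is indeed irrelevant here.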
https://www.bartleby.com/questions-and-answers/find-dydx.-do-not-simplify-with-exception-of-multiplication-of-numbers-yx1x21/0607ac7f-6319-4a40-ad9b-e1baf85716f1
# find dy/dx. do not simplify with exception of multiplication of numbers

y = (x+1)(x^2+1)
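One way to write the unsimplified product-rule answer and sanity-check it numerically (a sketch; the check point x = 1.7 is an arbitrary choice of mine):

```python
# Product rule, left unsimplified as the problem requests:
#   dy/dx = (1)(x^2 + 1) + (x + 1)(2x)
def y(x):
    return (x + 1) * (x**2 + 1)

def dydx(x):
    return 1 * (x**2 + 1) + (x + 1) * (2 * x)

# Compare against a central finite difference at an arbitrary point
x0, h = 1.7, 1e-6
approx = (y(x0 + h) - y(x0 - h)) / (2 * h)
print(abs(dydx(x0) - approx) < 1e-5)   # True
```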
https://math.stackexchange.com/questions/3142007/group-of-proper-symmetries-of-painted-cube
# Group of Proper Symmetries of Painted Cube What is the proper symmetry group of a cube in which three faces, coming together at one vertex, are painted green and the other faces are red? I know that the axis of rotation for which the rotations of $$0$$, $$2\pi/3$$, and $$4\pi/3$$ produce a proper rotational group is $$x=y=z$$ with the intersection points being the intersection of the similarly painted faces. But I'm not sure how to formulate this into a proper symmetry group. Any help would be great, thank you in advance! • "proper symmetry group" means reflections are excluded? – Parcly Taxel Mar 10 at 5:46 • I think you want the dihedral group of the triangle. – Angela Richardson Mar 10 at 6:03 • Yes, @ParclyTaxel. – James Done Mar 10 at 7:19 • "Proper" has already many meanings, not to be used for another one. "Orientation-preserving", "direct", "group of motions" etc already mean what you want. – YCor Mar 10 at 13:51
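A brute-force check of the question (my own sketch, not from the thread): enumerate the 24 proper rotations of the cube as permutations of the six face normals and keep those that map the three green faces, which meet at the vertex (1,1,1), to themselves. Exactly three survive, the identity and the rotations by ±2π/3 about x=y=z, so the orientation-preserving symmetry group is cyclic of order 3 rather than the full dihedral group of the triangle:

```python
# 90-degree proper rotations about the z- and x-axes (both determinant +1)
def rot_z(v):
    x, y, z = v
    return (-y, x, z)

def rot_x(v):
    x, y, z = v
    return (x, -z, y)

faces = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

# Record each rotation as the tuple of images of the six face normals,
# and generate the whole rotation group by closing under the generators.
identity = tuple(faces)
group = {identity}
frontier = [identity]
while frontier:
    g = frontier.pop()
    for rot in (rot_z, rot_x):
        h = tuple(rot(v) for v in g)   # compose: rot after g
        if h not in group:
            group.add(h)
            frontier.append(h)

green = {(1,0,0), (0,1,0), (0,0,1)}    # faces meeting at the vertex (1,1,1)
preserving = [g for g in group
              if {img for img, f in zip(g, faces) if f in green} == green]

print(len(group), len(preserving))     # 24 proper rotations, 3 preserve the coloring
```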
http://mathhelpforum.com/discrete-math/149199-prove-equivalence-relation.html
1. ## Prove equivalence relation

Please critique the following proof and also let me know if I have defined the equivalence classes correctly. Let $f:A\longrightarrow B$. Define a relation $\equiv$ on A by $a_{1}\equiv a_{2}$ iff $f(a_{1})=f(a_{2})$. Give a quick proof that this is an equivalence relation. What are the equivalence classes? Explain intuitively.

Proof: Test reflexive for $\equiv$. If $a\in A$ then $(a,a)\in A$: Given f is a function then $f(a_{1})=f(a_{1})$ implies $a_{1}\equiv a_{1}$ because two different values in the domain cannot be mapped to the same value in the range. Therefore $(a_{1},a_{1})\in A$.

Test symmetric: Let $a_{1}\equiv a_{2}$ where $(a_{1},a_{2})\in A$. Then we know $f(a_{1})=f(a_{2})$. But because = is symmetric we know $f(a_{2})=f(a_{1})$ and since $\equiv$ is defined with iff, we know $a_{2}\equiv a_{1}$ which implies $(a_{2},a_{1})\in A$.

Test transitive: Let $a_{1}\equiv a_{2}$ and $a_{2}\equiv a_{3}$ where $(a_{1},a_{2})(a_{2},a_{3})\in A$. Then we know $f(a_{1})=f(a_{2})$ and $f(a_{2})=f(a_{3})$. Because = is transitive we know $f(a_{1})=f(a_{3})$, but because $\equiv$ is defined with iff, we can conclude that $a_{1}\equiv a_{3}$. So $(a_{1},a_{3})\in A$. QED

The equivalence classes in A would be the sets $\{(a_{i},a_{j})\in A$ such that $a_{i}=a_{j}\}$.

2. I agree with everything except your intuitive characterization of the equivalence classes.

3. Originally Posted by oldguynewstudent

Proof: Test reflexive for $\equiv$. If $a\in A$ then $(a,a)\in A$

I think you mean '$\equiv$', not 'A', as the last symbol there? I.e., this is what we want to prove.

Originally Posted by oldguynewstudent

Given f is a function then $f(a_{1})=f(a_{1})$ implies $a_{1}\equiv a_{1}$ because two different values in the domain cannot be mapped to the same value in the range.

No, that is backwards (as well as, though it is irrelevant to the actual proof, you can't assume f is an injection as you have).
All you need to say is that if a=a then f(a)=f(a) so a $\equiv$ a. Your symmetry and transitivity arguments looked okay, as I glanced over them.

Originally Posted by oldguynewstudent

The equivalence classes in A would be the sets $\{(a_{i},a_{j})\in A$ such that $a_{i}=a_{j}\}$.

Wrong. The equivalence classes are not sets of ordered pairs of members of A. Rather, the equivalence classes are certain subsets of A. E is an equivalence class (per $\equiv$) iff there exists an a in A such that E = {x | x $\equiv$ a}. I.e., E is an equivalence class (per $\equiv$) iff there exists an a in A such that E = {x | f(x)=f(a)}. I.e., the equivalence classes (per $\equiv$) are the non-empty (assuming A is nonempty) sets each made up of, for some given a in A, all and only those members of A that map, under the function f, to f(a).

4. Yes, thank you very very much. I have trouble with proofs and have requested a meeting with my professor. The professor I had for Discrete knew his stuff but didn't know how to impart that knowledge very well. This professor for Combinatorics has been great so far. This is a great help. Thanks again!

5. In my view, very likely you're not at fault, but rather standard curricula are at fault. In my view, it should be standard for math (and science, and social sciences, and even history and other humanities) students to take a course that teaches them how to work in the predicate calculus (both strictly symbolically and intuitively). Then, for math students, before even calculus, about the first half of a set theory course (through the basics, basic axiom of choice and Zorn's lemma, the naturals, and construction of the reals as a complete ordered field; but don't need the second half that gets into more about transfinite cardinalities, etc.).
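The characterization in reply #3, that each equivalence class is the set of all elements mapping to a common value f(a) (a fiber of f), is easy to see with a toy example. The example below is my own, not from the thread:

```python
# Group A = {-3, ..., 3} by the value of f(a) = a^2: each group is one
# equivalence class of the relation a1 ~ a2 iff f(a1) = f(a2).
A = range(-3, 4)
f = lambda a: a * a          # a non-injective example function

classes = {}
for a in A:
    classes.setdefault(f(a), []).append(a)   # put a into the fiber of f(a)

print(sorted(classes.values()))
# [[-3, 3], [-2, 2], [-1, 1], [0]]
```

The classes partition A: every element appears in exactly one fiber, and the singleton {0} shows a class need not have more than one member.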
https://stats.stackexchange.com/questions/6111/the-probability-for-two-people-to-provide-identical-answers-on-survey-questions
# The probability for two people to provide identical answers on survey questions

Here is the problem: A survey contains 7 binary questions (Yes/No responses). If two people are answering the survey, what is the probability for their answers on 4 or more of the questions to match? In other words, if we have four or more matching answers, we can consider the overall survey response to be similar for both people.

• added r tag, so the R code in the comment is highlighted – mpiktas Jan 10 '11 at 7:17
• @mpiktas This question is quite independent of R, IMHO. Moreover, code highlighting is handled through Google Prettify on SE sites, so there's no need to add extra tag to enable syntax highlight. – chl Jan 10 '11 at 9:10
• @chl, I asked the question on meta concerning adding r tags, since I felt too, that adding r tag just for syntax highlighting is not appropriate. Google prettify did not work for me. – mpiktas Jan 10 '11 at 9:22

I assume that the survey will be answered independently by the participants. First, you need estimates for the baseline probabilities $p_{i}$ that question $i$ will be answered "yes". The probability of two persons answering "yes" for question $i$ is then $p_{i}^{2}$. Likewise, the probability of two persons answering "no" for question $i$ is $(1-p_{i})^{2}$, hence the probability of agreement is $p_{i}^{2} + (1-p_{i})^{2}$. If you assume that all $p_{i} = 0.5$, then you get the answer given by carlosdc since $0.5^{2} + (1-0.5)^{2} = 0.5$. If you allow the $p_{i}$ to vary, an answer can probably be given in closed form as well, but with only 7 questions, it's easy to simply enumerate all possibilities to get 4 or more agreements, and calculate the probability for each case.
> n <- 7 # number of questions > p <- rep(0.5, n) # probabilities p_i, here: set all to 0.5 # p <- c(0.4, 0.4, 0.4, 0.4, 0.1, 0.1, 0.1) # alternative: let p_i vary > k <- 4:7 # number of agreements to check # k <- 0:7 # check: result (total probability) should be 1 # vector to hold probability for each number of agreements > res <- numeric(length(k)) # function to calculate the probability for an event with agreement on the # questions x and disagreement on the remaining questions > getP <- function(x) { + tf <- 1:n %in% x # convert numerical to logical index vector + pp <- p[tf]^2 + (1-p[tf])^2 # probabilities of agreeing on questions x + + # probabilities of disagreeing on remaining questions + qq <- 1 - (p[!tf]^2 + (1-p[!tf])^2) + prod(pp) * prod(qq) # total probability + } # for each number of agreements: calculate probability > for(i in seq(along=res)) { + # all choose(n, k) possibilities to have k agreements + poss <- combn(1:n, k[i]) + + # probability for each of those possibilities, edit: take 0-length into account + if (length(poss) > 0) { + res[i] <- sum(apply(poss, 2, getP)) + } else { + res[i] <- getP(numeric(0)) + } + } > res # probability for 4, 5, 6, 7 agreements [1] 0.2734375 0.1640625 0.0546875 0.0078125 > dbinom(k, n, 0.5) # check: all p_i = 0.5 -> binomial distribution [1] 0.2734375 0.1640625 0.0546875 0.0078125 > sum(res) # probability for 4 or more agreements [1] 0.5 The R code could certainly be simplified, also prod() might be worse in terms of error propagation with small numbers than exp(sum(log())), although I'm not sure on that one. • The "closed formula", expressed as a polynomial in the original $p_i$, has 2187 terms--more than the number of values you would have to enumerate! – whuber Jan 10 '11 at 2:37 • Yes and No answers have a probability of 0.5. This is correct: 0.2734375 0.1640625 0.0546875 0.0078125. 
There are 16 possible combinations of Yes-answer counts for two users: {4,4} {4,5} {4,6} {4,7} {5,4} {5,5} {5,6} {5,7} {6,4} {6,5} {6,6} {6,7} {7,4} {7,5} {7,6} {7,7}, and we can estimate the probability of each of these 16 pairs occurring. However, once a pair occurs, the probability of a match is different. In other words, for {4,4} the probability is 1/35, as there are 35 combinations of 4 out of 7. For {4,5} the probability of a match increases. How do we take those into account? – user2715 Jan 10 '11 at 16:20
• I'm not sure I fully understand your question, but in the for() loop each number of pairwise agreements is considered separately (4, 5, 6, 7 in your case): for each of these numbers, all possible answer patterns leading to that agreement are enumerated (poss), and their respective probabilities calculated. One way to check that the result is correct is to set k (all numbers of agreement) to 0:7; the sum of all probabilities should then be 1. – caracal Jan 10 '11 at 19:11
• @Rado It looks like you are conditioning on the numbers of yes answers; e.g., {6,4} means respondent 1 had 6 yeses and respondent 2 had 4 yeses. This is a valid approach but is more complex than necessary. @caracal is modeling it question by question: on question $i$, the chance of agreement is like flipping a coin with probability $p_i^2 + (1-p_i)^2$. By looking at all possible outcomes of these 7 coin flips--there are only 128 of them--we can compute the chance of any pattern of agreement. – whuber Jan 10 '11 at 20:37
• I might be missing something - (res) is the probability of selecting 4, 5, 6 or 7 "yes", e.g. for 4 it is 35/128. sum(res) - are you suggesting that this is the probability of the two picking the same answers? To me sum(res) is the probability of anyone selecting 4 or more Yes out of all possible combinations.
– user2715 Jan 10 '11 at 20:40

If for each question the probability of selecting the same answer is equal to 0.5, the answer is the following:
$$\sum_{i=4}^7{\binom{7}{i}p^i(1-p)^{7-i}}$$
where $p=0.5$. In this case the number of agreements follows a binomial distribution.

• It's simpler to note that the probability of 4 or more agreements equals the probability of 3 or fewer and, since that exhausts all possibilities, both numbers must equal 1/2 exactly. – whuber Jan 10 '11 at 2:30
• You're right. I just wanted to give an answer based on undergraduate probability and show where it comes from. Having the formula also gives you insight into how the value changes for different $i$'s and $p$'s. – carlosdc Jan 14 '11 at 5:28
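The two answers above can be cross-checked numerically. The sketch below (written in Python rather than the R used in the accepted answer, and not part of the original thread) enumerates agreement patterns question by question, as caracal's answer does, and compares the result with carlosdc's binomial formula:

```python
from itertools import combinations
from math import comb, prod

n = 7
p = [0.5] * n  # per-question probability of a "yes"; all 0.5 here

# probability that two independent respondents agree on question i
agree = [pi**2 + (1 - pi)**2 for pi in p]

def prob_k_agreements(k):
    """Sum over all ways to agree on exactly k of the n questions."""
    total = 0.0
    for idx in combinations(range(n), k):
        chosen = set(idx)
        total += prod(agree[i] if i in chosen else 1 - agree[i]
                      for i in range(n))
    return total

# enumeration approach: P(4 or more agreements)
enum_answer = sum(prob_k_agreements(k) for k in range(4, n + 1))

# closed-form binomial approach with p = 0.5
binom_answer = sum(comb(n, k) * 0.5**n for k in range(4, n + 1))

print(enum_answer, binom_answer)  # both 0.5 when all p_i = 0.5
```

With all $p_i = 0.5$ both computations give exactly 0.5; letting the `p` vector vary changes only the enumeration result, since the binomial shortcut assumes a common agreement probability of 0.5.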
http://math.stackexchange.com/questions/117135/points-in-the-plane-at-integer-distances
# Points in the plane at integer distances

Does there exist a set of $n$ points $p_1,p_2,...,p_n$ in the plane, all at mutually integer distances from each other, and an $e>0$, such that the following statement holds: for all $a,b$ with $a^2+b^2<e$, there exists an integer $i$, with $1\leq i\leq n$, such that the distance from $(a,b)$ to $p_i$ is irrational. What is the least such $n$?

- Just curiosity: where did this problem come up? – user2468 Mar 6 '12 at 16:44
- @J.D. In my mind, out of curiosity as well. – user1708 Mar 6 '12 at 16:52
- I'm thinking of the very special case of $n = 3$ and $p_1, p_2, p_3$ forming an equilateral triangle of unit side length. Let $(a,b)$ be the center of the circle passing through $p_1, p_2, p_3$. Then for $1\le i\le 3$, the distance from $(a,b)$ to $p_i$ is $\dfrac{\sqrt{3}}{3}$. But again, very special. – user2468 Mar 6 '12 at 17:15
- Oops. Just noticed "for all $a,b$...". Never mind my comment above. – user2468 Mar 6 '12 at 17:18
- Something like this is discussed in Problem D19 of Guy, Unsolved Problems in Number Theory. Almering proved that the points at rational distances from the vertices of any triangle with rational edges are dense in the plane; I believe this shows $n\ge4$. It may be in MR0147447 (26 #4963) Almering, J. H. J., Rational quadrilaterals. Nederl. Akad. Wetensch. Proc. Ser. A 66 = Indag. Math. 25 (1963) 192–199. – Gerry Myerson Mar 7 '12 at 6:09
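The equilateral-triangle observation in the comments is easy to verify numerically. The sketch below (not part of the original thread) checks that the circumcenter of a unit-side equilateral triangle lies at distance $\sqrt{3}/3 \approx 0.5774$, an irrational number, from all three vertices:

```python
import math

# vertices of a unit-side equilateral triangle
p1 = (0.0, 0.0)
p2 = (1.0, 0.0)
p3 = (0.5, math.sqrt(3) / 2)

# for an equilateral triangle, the circumcenter coincides with the centroid
c = ((p1[0] + p2[0] + p3[0]) / 3, (p1[1] + p2[1] + p3[1]) / 3)

dists = [math.dist(c, v) for v in (p1, p2, p3)]
print(dists)  # each distance equals sqrt(3)/3
```

This only illustrates the special case mentioned in the comments; it says nothing about the general question, which concerns every point $(a,b)$ in a neighborhood of the origin.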
http://numerica.pt/2017/06/26/there-are-a-ton-of-engineering-software-packages-with-numerical-simulation-algorithms/
# There are a ton of engineering software packages with numerical simulation algorithms

Many packages exist, but the main idea is always the same: to model the physical behaviour of a set of experiments. Let me list the software packages that I know of. In the next post I will try to go a bit deeper into each of these software packages.

Ansys: www.ansys.com
Ansys Fluent: www.ansys.com/Products/Fluids/ANSYS-Fluent
Ansys Maxwell: www.ansys.com/products/electronics/ansys-maxwel
Ansys Spaceclaim: www.spaceclaim.com/en/default.aspx
COMSOL: www.comsol.com
LS-DYNA: www.lstc.com
Simul8 (Process Simulation): www.simul8.com
Femap (Siemens): www.plm.automation.siemens.com/en/products/femap
Abaqus/Simulia: www.3ds.com/products-services/simulia/
MSC Nastran: www.mscsoftware.com/product/msc-nastran
MSC Patran: www.mscsoftware.com/product/patran
Simulation Mechanical: www.autodesk.com/education/free-software/simulation-mechanical
HyperWorks: www.altairhyperworks.com/
Matlab: www.matlab.com
https://arxiv.org/abs/1910.13468v1
# Title: On the Count Probability of Many Correlated Symmetric Events

Abstract: We consider $N$ events defined on a common probability space. These events shall have a common probability function that is symmetric with respect to interchanging the events. We ask for the probability distribution of the number of events that occur. If the probability of a single event is proportional to $1/N$, the resulting count probability is Poisson distributed in the limit $N\rightarrow \infty$ for independent events. In this paper we calculate the characteristic function of the limiting count probability distribution for events that are correlated up to an arbitrary but finite order.

Comments: 10 pages, 1 figure
Subjects: Probability (math.PR)
MSC classes: 60C05, 60E10
Cite as: arXiv:1910.13468 [math.PR] (or arXiv:1910.13468v1 [math.PR] for this version)

## Submission history

From: Rüdiger Kürsten
[v1] Tue, 29 Oct 2019 18:20:03 UTC (132 KB)
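The Poisson limit for independent events mentioned in the abstract can be illustrated numerically: if each of $N$ events occurs independently with probability $\lambda/N$, the count follows a Binomial$(N, \lambda/N)$ distribution, which approaches Poisson$(\lambda)$ as $N\rightarrow\infty$. A quick sketch (not taken from the paper, which treats the correlated case):

```python
from math import comb, exp, factorial

lam = 2.0   # expected number of occurring events
N = 10_000  # number of independent events, each with probability lam/N

def binom_pmf(k, n, p):
    """Probability that exactly k of n independent events occur."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson limit of the count probability."""
    return exp(-lam) * lam**k / factorial(k)

for k in range(6):
    print(k, binom_pmf(k, N, lam / N), poisson_pmf(k, lam))
```

For $N = 10{,}000$ the two columns agree to roughly $1/N$; the paper's contribution is the characteristic function of the analogous limit when the events are correlated up to finite order.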
http://www.researchgate.net/researcher/18359088_N_H_Brooks
# N. H. Brooks

General Atomics, San Diego, California, United States

## Publications (349) · 372.46 Total Impact

##### Article: Net versus gross erosion of high-Z materials in the divertor of DIII-D

ABSTRACT: A substantial reduction of net compared to gross erosion of molybdenum and tungsten was observed in experiments conducted in the lower divertor of DIII-D using the divertor material evaluation system. Post-exposure net erosion of molybdenum and tungsten films was measured by Rutherford backscattering (RBS), yielding net erosion rates of 0.4–0.7 nm s^-1 for Mo and ~0.14 nm s^-1 for W. Gross erosion was estimated using RBS on a 1 mm diameter sample, where re-deposition is negligible. Net erosion on a 1 cm diameter sample was reduced compared to gross erosion by factors of ~2 for Mo and ~3 for W. The experiment was modeled with the REDEP/WBC erosion/re-deposition code package coupled to the Ion Transport in Materials and Compounds–DYNamics mixed-material code, with plasma conditions supplied by the Onion skin modeling + Eirene + Divimp for edGE modeling code with input from divertor Langmuir probes. The code-calculated net/gross erosion rate ratios of 0.46 for Mo and 0.33 for W are in agreement with the experiment.

Physica Scripta 04/2014; 2014(T159):014030. · 1.03 Impact Factor

##### Article: Simulation of localized fast-ion heat loads in test blanket module simulation experiments on DIII-D

ABSTRACT: Infrared imaging of hot spots induced by localized magnetic perturbations using the test blanket module (TBM) mock-up on DIII-D is in good agreement with beam-ion loss simulations. The hot spots were seen on the carbon protective tiles surrounding the TBM as they reached temperatures over 1000 °C.
The localization of the hot spots on the protective tiles is in fair agreement with fast-ion loss simulations using a range of codes (ASCOT, SPIRAL and OFMC), while the codes predicted peak heat loads that are within 30% of the measured ones. The orbit calculations take into account the birth profile of the beam ions as well as the scattering and slowing down of the ions as they interact with the localized TBM field. The close agreement between orbit calculations and measurements validates the analysis of beam-ion loss calculations for ITER, where ferritic material inside the tritium breeding TBMs is expected to produce localized hot spots on the first wall.

Nuclear Fusion 12/2013; 53(12):3018. · 2.73 Impact Factor

##### Article: Reduction of edge localized mode intensity on DIII-D by on-demand triggering with high frequency pellet injection and implications for ITER

ABSTRACT: The injection of small deuterium pellets at high repetition rates, up to 12× the natural edge localized mode (ELM) frequency, has been used to trigger high-frequency ELMs in otherwise low natural ELM frequency H-mode deuterium discharges in the DIII-D tokamak [J. L. Luxon and L. G. Davis, Fusion Technol. 8, 441 (1985)]. The resulting pellet-triggered ELMs produce up to 12× lower energy and particle fluxes to the divertor than the natural ELMs. The plasma global energy confinement and density are not strongly affected by the pellet perturbations. The plasma core impurity density is strongly reduced with the application of the pellets. These experiments were performed with pellets injected from the low field side in plasmas designed to match the ITER baseline configuration in shape and normalized β, with input heating power just above the H-mode power threshold. Nonlinear MHD simulations of the injected pellets show that destabilization of ballooning modes by a local pressure perturbation is responsible for the pellet ELM triggering.
This strongly reduced ELM intensity shows promise for exploitation in ITER to control ELM size while maintaining high plasma purity and performance.

Physics of Plasmas 08/2013; 20(8). · 2.38 Impact Factor

##### Article: Control and dissipation of runaway electron beams created during rapid shutdown experiments in DIII-D

ABSTRACT: DIII-D experiments on rapid shutdown runaway electron (RE) beams have improved the understanding of the processes involved in RE beam control and dissipation. Improvements in RE beam feedback control have enabled stable confinement of RE beams out to the volt-second limit of the ohmic coil, as well as enabling a ramp down to zero current. Spectroscopic studies of the RE beam have shown that neutrals tend to be excluded from the RE beam centre. Measurements of the RE energy distribution function indicate a broad distribution with mean energy of order several MeV and peak energies of order 30–40 MeV. The distribution function appears more skewed towards low energies than expected from avalanche theory. The RE pitch angle appears fairly directed (θ ~ 0.2) at high energies and more isotropic at lower energies (ε < 100 keV). Collisional dissipation of RE beam current has been studied by massive gas injection of different impurities into RE beams; the equilibrium assimilation of these injected impurities appears to be reasonably well described by radial pressure balance between neutrals and ions. RE current dissipation following massive impurity injection is shown to be more rapid than expected from avalanche theory; this anomalous dissipation may be linked to enhanced radial diffusion caused by the significant quantity of high-Z impurities (typically argon) in the plasma. The final loss of RE beams to the wall has been studied: it was found that conversion of magnetic to kinetic energy is small for RE loss times smaller than the background plasma ohmic decay time of order 1–2 ms.

Nuclear Fusion 07/2013; 53(8):083004.
· 2.73 Impact Factor

##### Article: An experimental comparison of gross and net erosion of Mo in the DIII-D divertor

ABSTRACT: Experimental observation of net erosion of molybdenum being significantly reduced compared to gross erosion in the divertor of DIII-D is reported for well-controlled plasma conditions. For the first time, gross erosion rates were measured by both spectroscopic and non-spectroscopic methods. In one experiment a net erosion rate of 0.73 ± 0.03 nm/s was measured using ion beam analysis (IBA) of a 1 cm diameter Mo-coated sample. For a 1 mm diameter Mo sample exposed at the same time, the net erosion rate was higher at 1.31 nm/s. For the small sample, redeposition is expected to be negligible in comparison with the larger sample, yielding a net-to-gross erosion estimate of 0.56 ± 12%. The gross rate was also measured spectroscopically (386 nm MoI line), giving 2.45 nm/s ± a factor of 2. The experiment was modeled with the REDEP/WBC erosion/redeposition code package coupled to the ITMC–DYN mixed-material code, with plasma conditions supplied by the OEDGE code using Langmuir probe data input. The code-calculated net/gross ratio is 0.46, in good agreement with experiment.

Journal of Nuclear Materials 07/2013; 438:S309–S312. · 2.02 Impact Factor

##### Article: Reduction of Edge-Localized Mode Intensity Using High-Repetition-Rate Pellet Injection in Tokamak H-Mode Plasmas

ABSTRACT: High repetition rate injection of deuterium pellets from the low-field side (LFS) of the DIII-D tokamak is shown to trigger high-frequency edge-localized modes (ELMs) at up to 12× the low natural ELM frequency in H-mode deuterium plasmas designed to match the ITER baseline configuration in shape, normalized beta, and input power just above the H-mode threshold. The pellet size, velocity, and injection location were chosen to limit penetration to the outer 10% of the plasma.
The resulting perturbations to the plasma density and energy confinement time are thus minimal (<10%). The triggered ELMs occur at much lower normalized pedestal pressure than the natural ELMs, suggesting that the pellet injection excites a localized high-n instability. Triggered ELMs produce up to 12× lower energy and particle fluxes to the divertor, and result in a strong decrease in plasma core impurity density. These results show for the first time that shallow, LFS pellet injection can dramatically accelerate the ELM cycle and reduce ELM energy fluxes on plasma facing components, and is a viable technique for real-time control of ELMs in ITER.

Physical Review Letters 06/2013; 110(24). · 7.73 Impact Factor

##### Article: The effect of thermo-oxidation on plasma performance and in-vessel components in DIII-D

ABSTRACT: In April 2010, two thermo-oxidation experiments ('O-bakes') were performed in the DIII-D tokamak. Internal surfaces of the tokamak, as well as a number of specimens inserted into the torus, were exposed to a mixture of 20% O2/80% He at a nominal pressure of 9.5 Torr (1.27 kPa) at a temperature of 350–360 °C for a duration of 2 h. Three primary conclusions have been drawn from these experiments: (1) laboratory measurements on the release of deuterium from tokamak codeposits by oxidation have been duplicated in a tokamak environment, (2) no internal tokamak components or systems were adversely affected by the oxidation, and (3) the recovery of plasma performance following oxidation was similar to that following regular torus openings.

Nuclear Fusion 05/2013; 53(7):073008. · 2.73 Impact Factor

##### Conference Paper: Plasma Surface Interaction (PSI) studies at DIII-D

ABSTRACT: Understanding of Plasma Surface Interactions (PSI) and the selection of suitable plasma facing materials are critical areas for current tokamak experiments and future D-T burning facilities including ITER and FNSF.
In support of PSI studies, DIII-D uses the Divertor Materials Evaluation System (DiMES), which contains a removable probe where material samples can be exposed to as few as a single well-characterized plasma shot. Experiments, consisting of a carbon DiMES probe surface with metal coatings of Be, W, V, Mo or Al, have been exposed to the DIII-D lower divertor strike point plasma for cumulative discharge times of 4-20 s. Extensive DIII-D divertor diagnostics provided well-characterized plasmas for modeling efforts. Experimental results were benchmarked with modeling codes to validate and extend the predictive capability of the codes. Reported in this paper are two recent experiments and results. The first is on the net and gross erosion of Mo coatings and the extension of these results to an extrapolated all-Mo-surface DIII-D machine. The second is on the exposure to vertical displacement discharges and X-point plasma discharges of W-fuzz buttons, which were prepared by the PISCES (UCSD) laboratory. The surprising results are the robustness of the W-fuzz and that W impurity was not detected in the plasma core under the conditions studied.

Fusion Engineering (SOFE), 2013 IEEE 25th Symposium on; 01/2013

##### Article: OEDGE Assessment of Pressure and Power Balance Methods for Separatrix Identification

ABSTRACT: The OEDGE code is used to assess several methods of determining the upstream separatrix location. The inter-ELM phase of a well-diagnosed ELMing H-mode discharge is being used for this comparison. The OEDGE code utilizes 1D plasma fluid models calculated along the field lines on a 2D computational grid of a poloidal cross-section of the discharge magnetic geometry to produce a 2D model of the background plasma. Langmuir probe data at the targets are used as input to the 1D models.
Additional diagnostic measurements, including Thomson scattering, reciprocating probe, divertor spectroscopy and infra-red measurements of target heat flux, may be used to further constrain the plasma background determined by OEDGE. The plasma background thus found is then used to identify the location of the separatrix in the experimental data by comparing the upstream plasma profiles from OEDGE to the experimental measurements. The OEDGE result is then compared to the separatrix locations predicted using simple pressure balance and power balance considerations.

10/2012

##### Article: Near infrared spectroscopy of the DIII-D divertor

ABSTRACT: A high speed, high resolution near infrared (NIR) spectrometer has been installed at DIII-D to make first-of-its-kind observations of the 0.8-2.2 μm region in a tokamak divertor. The goals of this diagnostic are (1) to study Paschen spectra for line-averaged measurement of low temperature plasma parameters, (2) to benchmark the chemical and physically sputtered sources of neutral carbon using the lineshape of the CI 910 nm multiplet, and (3) to quantify contamination of the 0.75-1.1 μm region where Thomson-shifted laser light is measured by the Thomson scattering diagnostic. Diagnostic capabilities include a 300 mm, f/3.9 design, 300-2400 Gr/mm gratings providing optical resolution of ~0.65-0.04 nm, and readout at up to 900 frames/second. Data are presented in L-mode plasmas, and in H-mode between ELMs and during the ELM peak. Results acquired by this diagnostic will be applied to the design of a proposed divertor Thomson diagnostic for NSTX-U and aid validation of the Thomson system on ITER.

10/2012

##### Article: A two photon absorption laser induced fluorescence diagnostic for fusion plasmas

ABSTRACT: The quality of plasma produced in a magnetic confinement fusion device is influenced to a large extent by the neutral gas surrounding the plasma.
The plasma is fueled by the ionization of neutrals, and charge exchange interactions between edge neutrals and plasma ions are a sink of energy and momentum. Here we describe a diagnostic capable of measuring the spatial distribution of neutral gas in a magnetically confined fusion plasma. A high intensity (5 MW/cm^2), narrow bandwidth (0.1 cm^-1) laser is injected into a hydrogen plasma to excite the Lyman β transition via the simultaneous absorption of two 205 nm photons. The absorption rate, determined by measurement of subsequent Balmer α emission, is proportional to the number of particles with a given velocity. Calibration is performed in situ by filling the chamber to a known pressure of neutral krypton and exciting a transition close in wavelength to that used in hydrogen. We present details of the calibration procedure, including a technique for identifying saturation broadening, measurements of the neutral density profile in a hydrogen helicon plasma, and discuss the application of the diagnostic to plasmas in the DIII-D tokamak.

The Review of Scientific Instruments 10/2012; 83(10):10D701. · 1.52 Impact Factor

##### Article: Localized Fast-Ion Induced Heat Loads in Test Blanket Module Mockup Experiments on DIII-D

ABSTRACT: Localized hot spots can be created in ITER on the Test Blanket Modules (TBMs) because the ferritic steel of the TBMs distorts the local magnetic field near the modules and alters fast ion confinement. Predicting the TBM heat load levels is important for assessing their effects on the ITER first wall. Experiments in DIII-D were carried out with a mock-up of the ITER TBM ferromagnetic error field to provide data for validation of fast-ion orbit following codes. The front surface temperature of the protective TBM tiles was imaged directly with a calibrated infrared camera and heat loads were extracted.
The detailed spot sizes and measured heat loads are compared with results from heat load calculations performed with a suite of orbit following codes. The codes reproduce the hot spots well, thereby validating the codes and giving confidence in predictions for fast-ion heat loads in ITER.

10/2012

##### Article: Test Blanket Module Mockup Experiments in DIII-D

ABSTRACT: Recent DIII-D experiments have investigated the effects of localized magnetic field perturbations, using coils that approximate the magnetization of the test blanket modules (TBMs) in one ITER port. In H-mode discharges, compensation of the TBM field using an applied n=1 field yielded only partial recovery of the plasma rotation, and the compensation field that maximized plasma rotation differed significantly from the field that reduced the resonant magnetic response to a very low value. These results provide insight into the effects of error fields, and suggest an important role for non-resonant magnetic braking. In addition, measurements of localized heat deposition with the TBM field are being compared to orbit following calculations of fast ion loss, and a new fast ion detector has confirmed earlier observations of reduced 1 MeV triton confinement.

10/2012

##### Article: Ion beam analysis of 13C and deuterium deposition in DIII-D and their removal by in-situ oxygen baking

ABSTRACT: An experiment was conducted in DIII-D to examine carbon deposition when a secondary separatrix is near the wall. The magnetic configuration for this experiment was a biased double-null, similar to that foreseen for ITER. 13C methane was injected toroidally symmetrically near the secondary separatrix into ELMy H-mode deuterium plasmas. The resulting deposition of 13C was determined by nuclear reaction analysis.
These results show that very little of the injected 13C was deposited at the primary separatrix, whereas a large fraction of injected 13C was deposited close to the point of injection near the secondary separatrix. Six of the tiles were put back into DIII-D, where they were baked at 350-360 °C for 2 h at ~1 kPa in a 20% O2/80% He gas mixture. Subsequent ion beam analysis of these tiles showed that about 21% of the 13C and 54% of the deuterium were removed by the bake.

Physica Scripta Volume T 12/2011

##### Article: Poloidal distribution of recycling sources and core plasma fueling in DIII-D, ASDEX-Upgrade and JET L-mode plasmas

ABSTRACT: Deuterium fueling profiles across the separatrix have been calculated with the edge fluid codes UEDGE, SOLPS and EDGE2D/EIRENE for lower single null, ohmic and low-confinement plasmas in DIII-D, ASDEX Upgrade and JET. The fueling profiles generally peak near the divertor x-point, and broader profiles are predicted for the open divertor geometry and horizontal targets in DIII-D than for the more closed geometries and vertical targets in AUG and JET. Significant fueling from the low-field side midplane may also occur when assuming strong radial ion transport in the far scrape-off layer. The dependence of the fueling profiles on upstream density is investigated for all three devices, and between the different codes for a single device. The validity of the predictions is assessed for the DIII-D configuration by comparing the measured ion current to the main chamber walls at the low-field side and divertor targets, and deuterium emission profiles across the divertor legs and the high-field and low-field side midplane regions, to those calculated by UEDGE and SOLPS.

Plasma Physics and Controlled Fusion 11/2011; 53(12):124017. · 2.37 Impact Factor

##### Article: Portable rotating discharge plasma device (B. L. Dwyer, N. H. Brooks, R. L. Lee)

ABSTRACT: We constructed two devices for the purpose of educational demonstration: a rotating tube containing media of two densities to demonstrate axial confinement, and a similar device that uses pressure variation to convert a long plasma glow discharge into a long straight arc [1]. In the first device, the buoyant force is countered by the centripetal force, which confines less dense materials to the center of the column. Similarly, a plasma arc heats the gas through which it passes, creating a hot gaseous bubble that is less dense than the surrounding medium. Rotating its containment envelope stabilizes this gas bubble in an analogous manner to an air bubble in a rotating tube of water. In addition to stabilization, the rotating discharge also exhibits a decrease in buoyancy-driven convection currents. This limits the power loss to the walls, which decreases the field strength requirement for maintaining the arc. These devices demonstrate principles of electrodynamics, plasma physics, and fluid mechanics. They are portable and safe for classroom use. [1] N.H. Brooks, et al., J. Appl. Phys. 94, 1402 (2003).

11/2011

##### Article: Applications of Collisional Radiative Modeling of Helium and Deuterium for Image Tomography Diagnostic of Te, Ne, and ND in the DIII-D Tokamak

ABSTRACT: We apply new atomic modeling techniques to helium and deuterium for diagnostics in the divertor and scrape-off layer regions. Analysis of tomographically inverted images is useful for validating detachment prediction models and power balances in the divertor. We apply tomographic image inversion from fast tangential cameras of helium and Dα emission at the divertor in order to obtain 2D profiles of Te, Ne, and ND (neutral ion density profiles). The accuracy of the atomic models for He I will be cross-checked against Thomson scattering measurements of Te and Ne.
This work summarizes several current developments and applications of atomic modeling in diagnostics at the DIII-D tokamak.

11/2011

##### Article: A TALIF Diagnostic for the DIII-D Tokamak

ABSTRACT: The density profile of hydrogenic neutrals in the edge of DIII-D plays an important role in the problems of momentum transport, pedestal formation, and plasma-wall interaction, but an accurate measurement has proven difficult. A two-photon absorption laser induced fluorescence (TALIF) diagnostic is under construction and is intended to provide temporally and spatially resolved neutral density measurements in the pedestal region. This three-level TALIF scheme offers the advantages of direct excitation of ground state atoms, emission in the visible portion of the spectrum, a high degree of spatial localization, and the potential for a Doppler-free measurement. The large background of Dα emission, the principal challenge of the measurement, can be overcome by the focusing of a high power (1 MW) UV laser. Calculations of the SNR show that densities of 10^15 m^-3 or lower can be measured with a spatial resolution of 0.3 mm. We present design details of the proposed laser system, calculations of the expected performance in DIII-D and in a helicon source plasma, and measurements of the HI profile in the helicon plasma.

11/2011

##### Article: Detailed OEDGE Modeling of Core-Pedestal Fueling in DIII-D

ABSTRACT: The OEDGE code is used to model the deuterium neutral density and ionization distribution inside the separatrix for an attached L-mode SAPP discharge and an attached ELMy H-mode discharge. The background plasma solution is determined by empirical plasma reconstruction matching as many diagnostic measurements as possible. Recycling fluxes are obtained from measurements by Langmuir probes and spectroscopic measurements of Dα.
In addition, the sensitivity of the ionization source location to the details of the plasma solution in the divertor is examined. Several models for plasma-wall contact are used to estimate the strength of the wall recycling source. In the L-mode case, ionization profiles peak at the flux surface ~1.3 cm inboard of the separatrix (mapped to the outer midplane). Journal of Nuclear Materials 11/2011; • ##### Article: Effect of applied toroidal electric field on the growth/decay of plateau-phase runaway electron currents in DIII-D ABSTRACT: Large relativistic runaway electron currents (0.1–0.5 MA) persisting for ~100 ms are created in the DIII-D tokamak during rapid discharge shutdown caused by argon pellet injection. Slow upward and downward ramps in runaway currents were found in response to externally applied loop voltages. Comparison between the observed current growth/decay rate and the rate expected from the knock-on avalanche mechanism suggests that classical collisional dissipation of runaways alone cannot account for the measured growth/damping rates. It appears that a fairly constant anomalous dissipation rate of order 10 s−1 exists, possibly stemming from radial transport or direct orbit losses to the vessel walls, although the possibility of an apparent loss due to current profile shrinking cannot be ruled out at present. Nuclear Fusion 09/2011; 51(10):103026. 
http://math.stackexchange.com/questions/221809/numerical-solution-for-a-system-of-pdes-via-steady-state-with-time-integration-m
# Numerical solution for a system of PDEs via steady state with time integration method I'm working to solve for the steady-state short circuit current of a solar cell, using the coupled continuity equations with a drift-diffusion expression and Poisson's equation: D[n[x, t], t] - (1 / q) D[Jn[x, t], x] == G - R; D[p[x, t], t] + (1 / q) D[Jp[x, t], x] == G - R; Jn[x, t] == q \[Mu]n (n F + (kb T / q) D[n[x, t], x]); Jp[x, t] == q \[Mu]p (p F - (kb T / q) D[p[x, t], x]); D[D[\[Phi][x, t], x], x] == (q / (\[Epsilon]r \[Epsilon]0)) (p[x, t] - n[x, t]); In particular, I'm following the work from a literature paper that (no surprise) leaves out many details. The paper says to first find the equilibrium solution by solving Poisson's equation numerically with the equilibrium expressions: nboltz[x] = Nc Exp[(-kb T Log[Nc] eVperJ - q \[Phi][x]) / (kb T eVperJ)]; pboltz[x] = Nv Exp[(-kb T Log[Nv] eVperJ + q \[Phi][x]) / (kb T eVperJ)]; I can get this far using the boundary conditions below (for electric potential). However, it then says that the steady-state solutions are found using the time-evolution method of Davids et al., which uses an additional equation for the time derivative of the electric field: D[F[x, t], t] == - (1 / (\[Epsilon]r \[Epsilon]0)) (Jn[x, t] + Jp[x, t] - (1 / L) Integrate[ Jn[x, t] + Jp[x, t], {x, 0, L}]); In addition, I have the boundary conditions p[0, t] = Nv, n[0, t] = Nc Exp[(Ev - Ec) / (kb T eVperJ)], n[L, t] = Nc, p[L, t] = Nv Exp[(Ev - Ec) / (kb T eVperJ)]. Davids et al. says, "To find the steady state solution at an applied voltage bias, a time dependent potential ramp which stops at the desired voltage is applied to the right contact and the equations are integrated forward in time starting from thermal equilibrium until steady state is reached. The position independence of the total particle current J = Jn + Jp is used to verify that steady state has been reached." Conveniently, I am looking at the short-circuit case where the applied voltage bias is 0. 
However, I don't understand how one can integrate a steady state solution forward in time. Isn't a steady state solution fundamentally unchanging with time? Moreover, if integrating forward in time is just to help the solver, I am still unsure of how to implement this (I'm working in Mathematica). Hopefully this all formats well and thanks for the help!
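One way to see why time integration helps (an illustration added here, not part of the original post): the steady state is the t → ∞ limit of the transient problem, so marching the equations forward in time from any reasonable initial state is simply a robust iterative solver for the elliptic steady-state system (often called pseudo-transient continuation or relaxation). A minimal sketch on a toy 1-D diffusion problem, in Python rather than Mathematica:

```python
import numpy as np

# Toy illustration of pseudo-transient (time-marching) relaxation:
# we want the steady state of  u_t = u_xx + s(x)  on [0, 1] with u(0)=u(1)=0,
# i.e. the solution of the boundary-value problem  u_xx + s = 0.
nx = 51
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2          # explicit-Euler stability limit for diffusion
s = np.ones(nx)           # simple source term
u = np.zeros(nx)          # initial guess (the "thermal equilibrium" analogue)

for step in range(200000):
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    du = dt * (lap + s)
    du[0] = du[-1] = 0.0  # Dirichlet boundaries held fixed
    u += du
    if np.max(np.abs(du)) < 1e-12:  # solution no longer changes -> steady state
        break

# Exact steady solution of u'' + 1 = 0 with u(0)=u(1)=0 is u = x(1-x)/2
exact = 0.5 * x * (1.0 - x)
print(np.max(np.abs(u - exact)))
```

The same logic applies to the drift-diffusion system: start from the equilibrium solution, step the coupled equations forward, and declare steady state when the update (or, as Davids et al. suggest, the spatial variation of J = Jn + Jp) falls below a tolerance.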
https://www.physicsforums.com/threads/another-relativistic-particle-decay-question.884105/
# Another relativistic particle decay question Tags: 1. Sep 3, 2016 ### Elvis 123456789 1. The problem statement, all variables and given/known data Unstable particles cannot live very long. Their mean lifetime τ is defined by N(t) = N0 e^(−t/τ), i.e., after a time of t, the number of particles left is N0/e. (For muons, τ=2.2µs.) Due to time dilation and length contraction, unstable particles can still travel far if their speeds are high enough. For some particles, the mean lifetime is so small that it is more convenient to define τ using the quantity cτ (c is the speed of light). For example, the particle Λ has a cτ measured to be 7.89cm. a) If the Λ is traveling at 0.5c, how many of the Λ are left after traveling 7.89cm? b) How far would the Λ's have travelled, if only 50% of them are left? c) (Extra) Derive the general expression of N(L)/N0 for the Λ particles, as a function of L (distance travelled) and the speed v (arbitrary, not just always 0.5c) of the Λ particles. 2. Relevant equations N(t) = N0 e^(−t/τ), t_e = t_Λ·γ, L_Λ = L_e/γ, where t_e and L_e are the time and length measured in the earth frame of reference and t_Λ and L_Λ are the time and length measured in the lambda particle frame of reference. I did all the parts but I feel pretty unsure about it. These relativity questions just feel really ambiguous to me. I was hoping you guys could take a look and let me know if it seems ok. Thanks in advance! 3. The attempt at a solution Parts a.), b.), and c.) are in the attached image. I assumed that the cτ = 7.89 cm is in the particle's FR and for part a that the 7.89cm traveled was in the earth/lab FR 2. Sep 3, 2016 ### TSny But note that it would be nice to express your equations in terms of the defined quantity $c \tau_\Lambda$. Thus you can write $N = N_0 \exp \left(- \frac{L}{v \gamma \tau_\Lambda} \right)$ as $N = N_0 \exp \left(- \frac{L}{(v/c) \gamma (c \tau_\Lambda )} \right)$. 
Then you can just use the given value for $c \tau_\Lambda$ in the calculation for part (a) without having to find $v$ in m/s or $\tau_\Lambda$ in seconds. For part (c), I think they want an equation for the ratio $N/N_0$, which just requires a little change in what you wrote. They might prefer the equation to be written in terms of the quantity $c \tau_\Lambda$. But, maybe not. Last edited: Sep 4, 2016 3. Sep 5, 2016 ### David Lewis If there were no time dilation or length contraction, unstable particles would still travel far if their speeds are high enough.
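A quick numeric check of the thread's formulas (added for illustration, not from the posts themselves): with β = v/c = 0.5 and cτ_Λ = 7.89 cm, TSny's expression gives the surviving fraction for part (a) directly, and inverting it at N/N0 = 1/2 gives the distance asked for in part (b).

```python
import math

# Numeric check of N/N0 = exp(-L / ((v/c) * gamma * c*tau)), with c*tau = 7.89 cm.
c_tau = 7.89                    # cm, given value of c*tau for the Lambda
beta = 0.5                      # v/c
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# (a) fraction surviving after L = 7.89 cm at v = 0.5c; here L/(c*tau) = 1,
# so the exponent reduces to -1/(beta*gamma) = -sqrt(3)
frac = math.exp(-c_tau / (beta * gamma * c_tau))

# (b) distance at which 50% survive: set N/N0 = 1/2 and solve for L
L_half = math.log(2.0) * beta * gamma * c_tau   # in cm
print(frac, L_half)
```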
https://worldwidescience.org/topicpages/d/deformation+energy+absorption.html
#### Sample records for deformation energy absorption 1. Deformation and energy absorption properties of powder-metallurgy produced Al foams International Nuclear Information System (INIS) Michailidis, N.; Stergioudi, F.; Tsouknidas, A. 2011-01-01 Highlights: → Porous Al fabricated via a dissolution and sintering method using raw cane sugar. → Different deformation mode depending on the relative density of the foams. → Enhanced energy absorption by reducing pore size and relative density of the foam. → Pore size uniformity and sintering temperature affect energy absorption. - Abstract: Al-foams with relative densities ranging from 0.30 to 0.60 and mean pore sizes of 0.35, 0.70 and 1.35 mm were manufactured by a powder metallurgy technology, based on raw cane sugar as a space-holder material. Compressive tests were carried out to investigate the deformation and energy absorbing characteristics and mechanisms of the produced Al-foams. The deformation mode of low density Al-foams is dominated by the bending and buckling of cell walls and the formation of macroscopic deformation bands whereas that of high density Al-foams is predominantly attributed to plastic yielding. The energy absorbing capacity of Al-foams rises for increased relative density and compressive strength. The sintering temperature of Al-foams having similar relative densities has a marked influence on both energy absorbing efficiency and capacity. Pore size has a marginal effect on energy efficiency aside from Al-foams with mean pore size of 0.35 mm, which exhibit enhanced energy absorption as a result of increased friction during deformation at lower strain levels. 2. Structural Analysis of Shipping Casks, Vol. 9. Energy Absorption Capabilities of Plastically Deformed Struts Under Specified Impact Loading Conditions (Thesis) International Nuclear Information System (INIS) Davis, F.C. 
2001-01-01 The purpose of this investigation was to determine the energy absorption characteristics of plastically deformed inclined struts under impact loading. This information is needed to provide a usable method by which designers and analysts of shipping casks for radioactive or fissile materials can determine the energy absorption capabilities of external longitudinal fins on cylindrical casks under specified impact conditions. A survey of technical literature related to experimental determination of the dynamic plastic behavior of struts revealed no information directly applicable to the immediate problem, especially in the impact velocity ranges desired, and an experimental program was conducted to obtain the needed data. Mild-steel struts with rectangular cross sections were impacted by free-falling weights dropped from known heights. These struts or fin specimens were inclined at five different angles to simulate different angles of impact that fins on a shipping cask could experience under certain accident conditions. The resisting force of the deforming strut was measured and recorded as a function of time by using load cells instrumented with resistance strain gage bridges, signal conditioning equipment, an oscilloscope, and a Polaroid camera. The acceleration of the impacting weight was measured and recorded as a function of time during the latter portion of the testing program by using an accelerometer attached to the drop hammer, appropriate signal conditioning equipment, the oscilloscope, and the camera. A digital computer program was prepared to numerically integrate the force-time and acceleration-time data recorded during the tests to obtain deformation-time data. The force-displacement relationships were then integrated to obtain values of absorbed energy with respect to deformation or time. 
The results for various fin specimen geometries and impact angles are presented graphically, and these curves may be used to compute the energy absorption capacity of 3. Wave energy absorption by ducks OpenAIRE 2017-01-01 We study the absorption of wave energy by a single and multiple cam-shaped bodies referred to as ducks. Numerical models are developed under the assumptions of linear theory. We consider wave absorption by a single duck as well as by two lines of ducks meeting at an angle. 4. Wave energy absorption by ducks DEFF Research Database (Denmark) 2018-01-01 We study the absorption of wave energy by a single and multiple cam-shaped bodies referred to as ducks. Numerical models are developed under the assumptions of linear theory. We consider wave absorption by a single duck as well as by two lines of ducks meeting at an angle. 5. Shell effects in the nuclear deformation energy International Nuclear Information System (INIS) Ross, C.K. 1973-01-01 A new approach to shell effects in the Strutinsky method for calculating nuclear deformation energy is evaluated and the suggestion of non-conservation of angular momentum in the same method is resolved. Shell effects on the deformation energy in rotational bands of deformed nuclei are discussed. (B.F.G.) 6. Barriers in the energy of deformed nuclei Directory of Open Access Journals (Sweden) V. Yu. Denisov 2014-06-01 Full Text Available The interaction energy between two nuclei, taking their deformations into account, is studied. Coulomb and nuclear interaction energies, as well as the deformation energies of both nuclei, are taken into account in evaluating the interaction energy. It is shown that the barrier related to the interaction energy of two nuclei depends on the deformations, and the height of the minimal barrier is evaluated. It is found that heavier nucleus-nucleus systems have large deformation values at the lowest barrier.
The difference between the barrier between spherical nuclei and the lowest barrier between deformed nuclei increases with the mass and the charge of the interacting nuclei. 7. Compressive Behaviour and Energy Absorption of Aluminium Foam Sandwich Science.gov (United States) Endut, N. A.; Hazza, M. H. F. Al; Sidek, A. A.; Adesta, E. T. Y.; Ibrahim, N. A. 2018-01-01 The development of materials in automotive industries plays an important role in maintaining safety, performance and cost. Metal foams are one idea for new materials in automotive industries, since they can absorb energy when deformed and are good for crash management. Recently, new technology has been introduced to replace metallic foam with aluminium foam sandwich (AFS), due to its light weight and high energy absorption. Therefore, this paper provides reliable data that can be used to analyze the energy absorption behaviour of aluminium foam sandwich by means of compression tests. Six compression tests were carried out to analyze the stress-strain relationship in terms of energy absorption behavior. The effects of input variables, including the thickness of the aluminium foam core and the aluminium sheets, on energy absorption behavior were evaluated comprehensively. Stress-strain relationship curves were used for the energy absorption calculation of the aluminium foam sandwich. The result highlights that the energy absorption of the aluminium foam sandwich increases from 12.74 J to 64.42 J with increasing foam and skin thickness. 8. Energy absorption capabilities of composite sandwich panels under blast loads Science.gov (United States) Sankar Ray, Tirtha As blast threats on military and civilian structures continue to be a significant concern, there remains a need for improved design strategies to increase blast resistance capabilities.
The approach to blast resistance proposed here is focused on dissipating the high levels of pressure induced during a blast through maximizing the potential for energy absorption of composite sandwich panels, which are a competitive structural member type due to the inherent energy absorption capabilities of fiber reinforced polymer (FRP) composites. Furthermore, the middle core in the sandwich panels can be designed as a sacrificial layer allowing for a significant amount of deformation or progressive failure to maximize the potential for energy absorption. The research here is aimed at the optimization of composite sandwich panels for blast mitigation via energy absorption mechanisms. The energy absorption mechanisms considered include absorbed strain energy due to inelastic deformation as well as energy dissipation through progressive failure of the core of the sandwich panels. The methods employed in the research consist of a combination of experimentally-validated finite element analysis (FEA) and the derivation and use of a simplified analytical model. The key components of the scope of work then include: establishment of quantified energy absorption criteria, validation of the selected FE modeling techniques, development of the simplified analytical model, investigation of influential core architectures and geometric parameters, and investigation of influential material properties. For the parameters that are identified as being most-influential, recommended values for these parameters are suggested in conceptual terms that are conducive to designing composite sandwich panels for various blast threats. Based on reviewing the energy response characteristic of the panel under blast loading, a non-dimensional parameter AET/ET (absorbed energy, AET, normalized by total energy 9. 
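The aluminium-foam record above (record 7) obtains energy absorption from compressive stress–strain curves. As an illustration of that calculation (synthetic data, not the paper's measurements): the energy absorbed per unit volume is the area under the stress–strain curve, and the total absorbed energy scales with specimen volume.

```python
import numpy as np

# Hypothetical compression-test data for a foam specimen (assumed values):
# a plateau region followed by densification, typical of metal foams.
strain = np.array([0.0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5])        # dimensionless
stress = np.array([0.0, 1.0, 1.1, 1.15, 1.2, 1.4, 2.0]) * 1e6  # Pa

# Energy absorbed per unit volume = area under the curve (trapezoidal rule)
w = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))  # J/m^3

volume = 50e-3 * 50e-3 * 20e-3   # assumed 50 x 50 x 20 mm specimen, in m^3
E_abs = w * volume               # total absorbed energy, J
print(w, E_abs)
```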
Effect of microplastic deformation on the electron ultrasonic absorption in high-purity molybdenum monocrystals Energy Technology Data Exchange (ETDEWEB) Pal'-Val', P.P.; Kaufmann, Kh.J. 1983-03-01 The low temperature (100-6 K) linear absorption of ultrasound (88 kHz) by high purity molybdenum single crystals has been studied. Both unstrained samples and samples subjected to microplastic deformation (epsilon <= 0.45%) were used. Unstrained samples displayed at T<30 K a rapid increase in the absorption with lowering temperature, which is interpreted as an indication of electron viscosity due to electron-phonon collisions. After deformation this part of the absorption disappeared. This seems to suggest that microplastic deformation brings about in the crystal a sufficiently large number of defects that can compete with phonons in restricting the electron mean free path. A low temperature "dynamic annealing" has been revealed in strained samples, that is, almost complete recovery of the absorption under irradiation with high amplitude sound, epsilon_0 ≈ 10^-4, during 10 min, at 6 K. A new relaxation peak of absorption at 10 K has been found in strained samples. 10. Effect of microplastic deformation on the electron ultrasonic absorption in high-purity molybdenum monocrystals International Nuclear Information System (INIS) Pal'-Val', P.P.; Kaufmann, Kh.-J. 1983-01-01 The low temperature (100-6 K) linear absorption of ultrasound (88 kHz) by high purity molybdenum single crystals has been studied. Both unstrained samples and samples subjected to microplastic deformation (epsilon <= 0.45%) were used [...] almost complete recovery of the absorption under irradiation with high amplitude sound, epsilon_0 ≈ 10^-4, during 10 min, at 6 K. A new relaxation peak of absorption at 10 K has been found in strained samples. 11. Determination of shell energies. Nuclear deformations and fission barriers International Nuclear Information System (INIS) Koura, Hiroyuki; Tachibana, Takahiro; Uno, Masahiro; Yamada, Masami. 
1996-01-01 We have been studying a method of determining nuclear shell energies and incorporating them into a mass formula. The main feature of this method lies in estimating shell energies of deformed nuclei from spherical shell energies. We adopt three assumptions, from which the shell energy of a deformed nucleus is deduced to be a weighted sum of spherical shell energies of its neighboring nuclei. This shell energy should be called intrinsic shell energy since the average deformation energy also acts as an effective shell energy. The ground-state shell energy of a deformed nucleus and its equilibrium shape can be obtained by minimizing the sum of these two energies with respect to variation of deformation parameters. In addition, we investigate the existence of fission isomers for heavy nuclei with use of the obtained shell energies. (author) 12. Understanding Energy Absorption Behaviors of Nanoporous Materials Science.gov (United States) 2008-05-23 induced liquid infiltration in nanopores. J. Appl. Phys. 100, 014308.1-3 (2006). 26. Surani, F. B. and Qiao, Y. Energy absorption of a polyacrylic ...that the infiltration pressure decreases as the cation size increases (Fig. K-2). The ionic radii of cesium, potassium, sodium and lithium are... 13. Effect of microplastic deformation on the electron ultrasonic absorption in high-purity molybdenum monocrystals Energy Technology Data Exchange (ETDEWEB) Pal'-Val', P.P. (AN Ukrainskoj SSR, Kharkov. Fiziko-Tekhnicheskij Inst. Nizkikh Temperatur); Kaufmann, Kh.J. (Akademie der Wissenschaften der DDR, Berlin) 1983-03-01 The low temperature (100-6 K) linear absorption of ultrasound (88 kHz) by high purity molybdenum single crystals has been studied. Both unstrained samples and samples subjected to microplastic deformation (epsilon <= 0.45%) were used. 
Unstrained samples displayed at T<30 K a rapid increase in the absorption with lowering temperature, which is interpreted as an indication of electron viscosity due to electron-phonon collisions. After deformation this part of the absorption disappeared. This seems to suggest that microplastic deformation brings about in the crystal a sufficiently large number of defects that can compete with phonons in restricting the electron mean free path. A low temperature "dynamic annealing" has been revealed in strained samples, that is, almost complete recovery of the absorption under irradiation with high amplitude sound, epsilon_0 ≈ 10^-4, during 10 min, at 6 K. A new relaxation peak of absorption at 10 K has been found in strained samples. 14. Energy balance and deformation mechanisms of duplexes Science.gov (United States) Mitra, Gautam; Boyer, Steven E. A duplex consists of a series of imbricate faults that are asymptotic to a roof thrust and a floor thrust. Depending on the final orientations of the imbricate faults and the final position of the branch lines, a duplex may be hinterland-dipping, foreland-dipping, or an antiformal stack. The exact geometry depends on various factors such as the initial dimensions of the individual slices (horses), their lithology, the amount of displacement (normalized to size of horse) on each fault, and the mechanics of movement along each fault. The energy required in duplex formation can be determined by calculating the total work involved in emplacing each horse: this is given by W_t = W_p + W_b + W_g + W_i, where W_p is the work involved in initiating and propagating a fracture, W_b is the work involved in basal sliding, which may be frictional or some form of ductile flow, W_g is the work done against gravity during the emplacement of the horse, and W_i is the work involved in the internal deformation of the horse.
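The work balance for emplacing one horse can be sketched numerically; the magnitudes below are purely hypothetical, only the bookkeeping W_t = W_p + W_b + W_g + W_i comes from the abstract.

```python
# Hypothetical magnitudes (assumed, not from the paper) illustrating the
# work balance W_t = W_p + W_b + W_g + W_i for emplacing one horse.
W_p = 2.0e12   # J, fracture initiation and propagation (assumed)
W_b = 8.0e12   # J, basal sliding, frictional or ductile (assumed)
W_g = 3.0e12   # J, work done against gravity (assumed)
W_i = 1.5e12   # J, internal deformation of the horse (assumed)

W_t = W_p + W_b + W_g + W_i
print(W_t)
# Comparing W_t across candidate geometries indicates which duplex type
# (hinterland-dipping, foreland-dipping, antiformal stack) is energetically
# favored: the geometry minimizing W_t is the one expected to form.
```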
By calculating and comparing these work terms it is possible to predict the conditions under which the different types of duplexes will form. Normally, the development of a hinterland-dipping duplex is most likely. However, if deformation conditions are favorable, displacements on individual imbricate faults may be very large compared to the size of the horses, leading to the formation of either antiformal stacks or foreland-dipping duplexes. 15. The opacities of 12C-12C reaction and effect of deformed target nucleus on abrasion and absorption cross sections International Nuclear Information System (INIS) 1995-01-01 The values of the opacities for the 12C-12C reaction are calculated at different incident ion kinetic energies. The exact multiple scattering series for the scattering of two heavy ions, which was derived by Wilson, is used to calculate the abrasion and absorption cross sections of 16O-9Be and 16O-16O collisions, considering a harmonic oscillator matter density for both target and projectile as spherical nuclei. The effect of including the Pauli correlation is considered. The case of a deformed target is also investigated. Our results are compared with other calculations as well as with the experimental results. 16. Coupling q-Deformed Dark Energy to Dark Matter Directory of Open Access Journals (Sweden) Emre Dil 2016-01-01 Full Text Available We propose a novel coupled dark energy model which is assumed to occur as a q-deformed scalar field and investigate whether it will provide an expanding universe phase. We consider the q-deformed dark energy as coupled to dark matter inhomogeneities. We perform the phase-space analysis of the model by numerical methods and find the late-time accelerated attractor solutions. The attractor solutions imply that the coupled q-deformed dark energy model is consistent with the conventional dark energy models satisfying an acceleration phase of the universe.
At the end, we compare the cosmological parameters of deformed and standard dark energy models and interpret the implications. 17. The crack energy absorptive capacity of composites with fractal structure International Nuclear Information System (INIS) Lung, C.W. 1990-11-01 This paper discusses the energy absorptive capacity of composites with fibers of fractal structures. It is found that this kind of structure may increase the absorbed energy during crack propagation and hence the fracture toughness of composites. (author). 10 refs, 6 figs, 2 tabs 18. Experimental Characterization of the Energy Absorption of Functionally Graded Foam Filled Tubes Under Axial Crushing Loads Science.gov (United States) 2018-03-01 This paper deals with the energy absorption characterization of functionally graded foam (FGF) filled tubes under axial crushing loads by experimental methods. The FGF tubes are filled axially by gradient layers of polyurethane foams with different densities. The mechanical properties of the polyurethane foams are first obtained from axial compressive tests. Then quasi-static compressive tests are carried out for empty tubes, uniform foam filled tubes and FGF filled tubes. Before presenting the experimental test results, a nonlinear FEM simulation of the FGF filled tube is carried out in ABAQUS software to gain more insight into the crush deformation patterns, as well as the energy absorption capability of the FGF filled tube. A good agreement between the experimental and simulation results is observed. Finally, the experimental results show that an FGF filled tube has excellent energy absorption capacity compared to the ordinary uniform foam-filled tube with the same weight. 19. Seasonal Solar Thermal Absorption Energy Storage Development. 
Science.gov (United States) Daguenet-Frick, Xavier; Gantenbein, Paul; Rommel, Mathias; Fumey, Benjamin; Weber, Robert; Gooneseker, Kanishka; Williamson, Tommy 2015-01-01 This article describes a thermochemical seasonal storage with emphasis on the development of a reaction zone for an absorption/desorption unit. The heat and mass exchanges are modelled and the design of a suitable reaction zone is explained. A tube bundle concept is retained for the heat and mass exchangers and the units are manufactured and commissioned. Furthermore, experimental results of both absorption and desorption processes are presented and the exchanged power is compared to the results of the simulations. 20. Restricted mass energy absorption coefficients for use in dosimetry International Nuclear Information System (INIS) Brahme, A. 1977-02-01 When matter is irradiated by a photon beam the fraction of energy absorbed locally in some region R_Δ (where the size of the region R_Δ is related to the range of secondary electrons of some restriction energy Δ) is expressed by the restricted mass energy absorption coefficient. In this paper an example is given of how restricted mass energy absorption coefficients can be calculated from existing differential photon interaction cross sections. Some applications of restricted mass absorption coefficients in dosimetry are also given. (B.D.) 1. Hydrolysis Batteries: Generating Electrical Energy during Hydrogen Absorption. Science.gov (United States) Xiao, Rui; Chen, Jun; Fu, Kai; Zheng, Xinyao; Wang, Teng; Zheng, Jie; Li, Xingguo 2018-02-19 The hydrolysis reaction of aluminum can be decoupled into a battery by pairing an Al foil with a Pd-capped yttrium dihydride (YH2-Pd) electrode. This hydrolysis battery generates a voltage around 0.45 V and leads to hydrogen absorption into the YH2 layer. This represents a new hydrogen absorption mechanism featuring electrical energy generation during hydrogen absorption. 
The hydrolysis battery converts 8-15% of the thermal energy of the hydrolysis reaction into usable electrical energy, leading to much higher energy efficiency compared to that of direct hydrolysis. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim. 2. Energy balance and deformation at scission in 240Pu fission Directory of Open Access Journals (Sweden) Manuel Caamaño 2017-07-01 The experimental determination of the total excitation energy, the total kinetic energy, and the evaporation neutron multiplicity of fully identified fragments produced in transfer-induced fission of 240Pu, combined with reasonable assumptions, permits extraction of the intrinsic and collective excitation energy of the fragments as a function of their atomic number, along with their quadrupole deformation and their distance at scission. The results show that the deformation increases with the atomic number, Z, except for a local maximum around Z=44 and a minimum around Z=50, associated with the effect of deformed shells at Z∼44, N∼64, and spherical shells in 132Sn, respectively. The distance between the fragments also shows a minimum around Z1=44, Z2=50, suggesting a mechanism that links the effect of structure with the length of the neck at scission. 3. Optical absorption and energy transfer processes in dendrimers International Nuclear Information System (INIS) Reineker, P.; Engelmann, A.; Yudson, V.I. 2004-01-01 For dendrimers of various sizes the energy transfer and the optical absorption are investigated theoretically. The molecular subunits of a dendrimer are modeled as two-level systems. The electronic interaction between them is described via transfer integrals and the influence of vibrational degrees of freedom is taken into account in a first approach using a stochastic model. 
We discuss the time dependence of the energy transport and show that rim states of the dendrimer dominate the absorption spectra, that in general the electronic excitation energy is concentrated on peripheral molecules, and that the energetically lowest absorption peak is redshifted with increasing dendrimer size due to delocalization of the electronic excitation. 4. Energy absorption build-up factors in teeth International Nuclear Information System (INIS) Manjunatha, H.C.; Rudraswamy, B. 2012-01-01 A geometric progression (G.P.) fitting method has been used to compute the energy absorption build-up factor of teeth [enamel outer surface, enamel middle, enamel dentin junction towards enamel, enamel dentin junction towards dentin, dentin middle and dentin inner surface] for a wide energy range (0.015-15 MeV) up to a penetration depth of 40 mean free paths. The dependence of the energy absorption build-up factor on incident photon energy, penetration depth, electron density and effective atomic number has also been studied. The energy absorption build-up factors increase with the penetration depth and electron density of teeth, so the degree of violation of the Lambert-Beer law (I = I0 e^(-μt)) is smallest at the least penetration depth and electron density. The energy absorption build-up factors for different regions of teeth are not the same; hence the energy absorbed by the different regions of teeth is not uniform and depends on the composition of the medium. The relative dose of gamma in different regions of teeth is also estimated. Dosimetric implications of the energy absorption build-up factor in teeth have also been discussed. The estimated build-up factors in different regions of teeth may be useful in electron spin resonance dosimetry. (author) 5.
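The build-up correction described in entry 4 multiplies the narrow-beam Lambert-Beer attenuation by a factor B ≥ 1. A minimal sketch of the standard five-parameter geometric-progression (G.P.) fitting form follows; the parameter values used below (b, c, a, X_k, d) are illustrative placeholders, not the paper's fitted data for tooth tissue.

```python
import math

def narrow_beam_intensity(i0, mu, t):
    """Lambert-Beer law for a narrow (collimated) beam: I = I0 * exp(-mu * t)."""
    return i0 * math.exp(-mu * t)

def gp_buildup(x, b, c, a, xk, d):
    """Build-up factor B(x) at penetration depth x (in mean free paths),
    using the standard five-parameter G.P. fitting form."""
    k = c * x**a + d * (math.tanh(x / xk - 2.0) - math.tanh(-2.0)) / (1.0 - math.tanh(-2.0))
    if abs(k - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x
    return 1.0 + (b - 1.0) * (k**x - 1.0) / (k - 1.0)

def broad_beam_intensity(i0, mu, t, buildup):
    """Build-up-corrected (broad-beam) intensity: I = B * I0 * exp(-mu * t)."""
    return buildup * i0 * math.exp(-mu * t)
```

With B ≥ 1, the broad-beam (build-up-corrected) intensity is never smaller than the narrow-beam prediction, which is the sense in which build-up quantifies the "violation" of the Lambert-Beer law mentioned in the abstract.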
The Role of Absorption Cooling for Reaching Sustainable Energy Systems Energy Technology Data Exchange (ETDEWEB) Lindmark, Susanne 2005-07-01 This thesis focuses on the role and potential of absorption cooling in future energy systems. Two types of energy systems are investigated: a district energy system based on waste incineration and a distributed energy system with natural gas as fuel. In both cases, low temperature waste heat is used as driving energy for the absorption cooling. The main focus is to evaluate the absorption technology in an environmental perspective, in terms of reduced CO2 emissions. Economic evaluations are also performed. The reduction in electricity use when absorption cooling replaces compression cooling is quantified and expressed as an increased net electrical yield. The results show that absorption cooling is an environmentally friendly way to produce cooling, as it reduces the use of electrically driven cooling in the energy system and therefore also reduces global CO2 emissions. In the small-scale trigeneration system the electricity use is lowered by 84% compared to cooling production with compression chillers only. The CO2 emissions can be lowered to 45 CO2/MWh_c by using recoverable waste heat as driving heat for absorption chillers. However, according to the study, the most cost-effective cooling solution in a district energy system is a combination of absorption and compression cooling technologies. Absorption chillers have the potential to be suitable bottoming cycles for power production in distributed systems. Net electrical yields over 55% may be reached in some cases with gas motors and absorption chillers. This small-scale system for cogeneration of power and cooling shows electrical efficiencies comparable to large-scale power plants and may contribute to reducing the peak electricity demand associated with the cooling demand. 6.
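The electricity saving behind entry 5 comes from replacing a compression chiller's Q_c/COP electricity draw with the small auxiliary draw (pumps, fans) of a waste-heat-driven absorption chiller. A rough sketch; the compression COP of 3 and the 5% auxiliary electricity fraction are illustrative assumptions, not the thesis's figures.

```python
def chiller_electricity(q_cooling_mwh, cop_compression=3.0, aux_fraction_absorption=0.05):
    """Electricity needed to deliver a cooling demand q_cooling_mwh with a
    compression chiller (Q_c / COP) versus an absorption chiller driven by
    recovered waste heat, which needs only auxiliary electricity.
    Returns (e_compression, e_absorption, fractional_saving)."""
    e_compression = q_cooling_mwh / cop_compression
    e_absorption = q_cooling_mwh * aux_fraction_absorption
    saving = 1.0 - e_absorption / e_compression
    return e_compression, e_absorption, saving
```

Any CO2 benefit then follows from the grid emission factor applied to the avoided electricity; the actual figures in the thesis depend on its system boundaries and fuel mix.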
Energy absorption behaviors of nanoporous materials functionalized (NMF) liquids OpenAIRE Kim, Tae Wan 2011-01-01 For many decades, people have been actively investigating high-performance energy absorption materials, so as to develop lightweight and small-sized protective and damping devices, such as blast mitigation helmets, vehicle armors, etc. Recently, the high energy absorption efficiency of nanoporous materials functionalized (NMF) liquids has drawn considerable attention. An NMF liquid is usually a liquid suspension of nanoporous particles with large nanopore surface areas (100-2,000 m²/g). The ... 7. Material selection for elastic energy absorption in origami-inspired compliant corrugations International Nuclear Information System (INIS) Tolman, Sean S; Delimont, Isaac L; Howell, Larry L; Fullwood, David T 2014-01-01 Elastic absorption of kinetic energy and distribution of impact forces are required in many applications. Recent attention to the potential for using origami in engineering may provide new methods for energy absorption and force distribution. A three-stage strategy is presented for selecting materials for such origami-inspired designs that can deform to achieve a desired motion without yielding, absorb elastic strain energy, and be lightweight or cost effective. Two material indices are derived to meet these requirements based on compliant mechanism theory. Finite element analysis is used to investigate the effects of the material stiffness in the Miura-ori tessellation on its energy absorption and force distribution characteristics compared with a triangular wave corrugation. An example is presented of how the method can be used to select a material for a general energy absorption application of the Miura-ori. Whereas the focus of this study is the Miura-ori tessellation, the methods developed can be applied to other tessellated patterns used in energy absorbing or force distribution applications. (paper) 8.
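Entry 7 derives two material indices from compliant mechanism theory. The classical flexure indices σ_y/E (how far a member can bend before yielding) and σ_y²/E (elastic strain energy storable per unit volume) are a common choice and are used here for illustration; the property values below are rough textbook-order numbers, not data from the paper.

```python
# Illustrative material screening for origami-inspired compliant corrugations.
# sigma_y in MPa, E in GPa; values are rough textbook-order estimates.
MATERIALS = {
    "spring steel":  {"sigma_y": 1200.0, "E": 200.0},
    "Al 7075-T6":    {"sigma_y": 500.0,  "E": 71.0},
    "polycarbonate": {"sigma_y": 65.0,   "E": 2.3},
    "polypropylene": {"sigma_y": 35.0,   "E": 1.4},
}

def flexibility_index(m):
    """sigma_y / E: yield strain, i.e. how far a flexure can bend without yielding."""
    return m["sigma_y"] / (m["E"] * 1e3)   # convert E to MPa

def energy_index(m):
    """sigma_y^2 / E: elastic strain energy storable per unit volume (~MJ/m^3)."""
    return m["sigma_y"] ** 2 / (m["E"] * 1e3)

# Rank candidates by elastic energy storage capacity.
ranked = sorted(MATERIALS, key=lambda k: energy_index(MATERIALS[k]), reverse=True)
```

Note the two indices rank materials differently: polymers win on flexibility, high-strength metals on stored energy, which is why a multi-stage selection strategy like the paper's is needed.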
X-ray absorption intensity at high-energy region International Nuclear Information System (INIS) Fujikawa, Takashi; Kaneko, Katsumi 2012-01-01 We theoretically discuss X-ray absorption intensity in the high-energy region far from the deepest core threshold to explain the morphology-dependent mass attenuation coefficient of some carbon systems: carbon nanotubes (CNTs), highly oriented pyrolytic graphite (HOPG) and fullerenes (C60). The present theoretical approach is based on the many-body X-ray absorption theory including the intrinsic losses (shake-up losses). In the high-energy region the absorption coefficient has a correction term dependent on the solid state effects, given in terms of the polarization part of the screened Coulomb interaction W_p. We also discuss the tail of the valence band X-ray absorption intensity. In the carbon systems the C 2s contribution has some influence on the attenuation coefficient even in the high-energy region at 20 keV. 9. A new local thickening reverse spiral origami thin-wall construction for improving of energy absorption Science.gov (United States) Kong, C. H.; Zhao, X. L.; Hagiwara, I. R. 2018-02-01 As an effective and representative origami structure, the reverse spiral origami structure can effectively absorb energy in a crash test. The origami creases guide the deformation of the structure and avoid Euler buckling. However, the creases also weaken the supporting force, which may reduce the absorption of crash energy. In order to increase the supporting capacity of the reverse spiral origami structure, we propose a new locally thickened reverse spiral origami thin-wall construction. The reverse spiral origami thin-wall structure with thickened areas distributed along the longitudinal origami crease has a higher energy absorption capacity than the ordinary reverse spiral origami thin-wall structure. 10. FDTD modeling of solar energy absorption in silicon branched nanowires.
Science.gov (United States) Lundgren, Christin; Lopez, Rene; Redwing, Joan; Melde, Kathleen 2013-05-06 Thin film nanostructured photovoltaic cells are increasing in efficiency and decreasing the cost of solar energy. FDTD modeling shows that branched nanowire 'forests' have improved optical absorption in the visible and near-IR spectra over nanowire arrays alone, with a factor of 5 enhancement available at 1000 nm. Alternate BNW tree configurations are presented, achieving a maximum absorption of over 95% at 500 nm. 11. A simulation of laser energy absorption by nanowired surface Energy Technology Data Exchange (ETDEWEB) Vasconcelos, Miguel F.S.; Ramos, Alexandre F. [Universidade de São Paulo (USP), SP (Brazil). Escola de Artes, Ciências e Humanidades] 2017-07-01 Despite recent advances in research on laser inertial fusion energy, increasing the portion of laser energy absorbed by the target's surface remains an important challenge. The plasma formed during the initial instants of laser arrival shields the target and prevents the absorption of laser energy by the deeper layers of the material. One strategy to circumvent that effect is the construction of targets whose surfaces are populated with nanowires. Nanowired surfaces have increased absorption of laser energy and constitute a promising pathway for enhancing laser-matter coupling. In our work we present the results of simulations aiming to investigate how the target's geometrical properties might contribute to maximizing the laser energy absorbed by the material. Simulations have been carried out using the software FLASH, a multi-physics platform developed by researchers from the University of Chicago, written in FORTRAN 90 and Python. Different tools for generating the target's geometry and analyzing the results were developed using Python.
Our results show that a nanowired surface has increased energy absorption when compared with a non-wired surface. The software for visualization developed in this work also allowed an analysis of the spatial dynamics of the target's temperature, electron density, ionization levels and the temperature of the radiation emitted by it. (author) 13.
Absorptive form factors for high-energy electron diffraction International Nuclear Information System (INIS) Bird, D.M.; King, Q.A. 1990-01-01 The thermal diffuse scattering contribution to the absorptive potential in high-energy electron diffraction is calculated in the form of an absorptive contribution to the atomic form factor. To do this, the Einstein model of lattice vibrations is used, with isotropic Debye-Waller factors. The absorptive form factors are calculated as a function of scattering vector s and temperature factor M on a grid which enables polynomial interpolation of the results to be accurate to better than 2% for much of the ranges 0 ≤ Ms² ≤ 6 and 0 ≤ M ≤ 2 Å². The computed values, together with an interpolation routine, have been incorporated into a Fortran subroutine which calculates both the real and absorptive form factors for 54 atomic species. (orig.) 14. Deformation energy of a toroidal nucleus and plane fragmentation barriers International Nuclear Information System (INIS) Fauchard, C.; Royer, G. 1996-01-01 The path leading to pumpkin-like configurations and toroidal shapes is investigated using a one-parameter shape sequence. The deformation energy is determined within the analytical expressions obtained for the various shape-dependent functions and the generalized rotating liquid drop model, taking into account the proximity energy and the temperature. With increasing mass and angular momentum, a potential well appears in the toroidal shape path. For the heaviest systems, the pocket is large and locally favourable with respect to the plane fragmentation barriers, which might allow the formation of evanescent toroidal systems that would rapidly decay into several fragments to minimize the surface tension. (orig.) 15. Experimental analysis of energy absorption behaviour of Al-tube filled with pumice lightweight concrete under axial loading condition Science.gov (United States) Rajak, D. K.; Deshpande, P. G.; Kumaraswamidhas, L. A.
2017-08-01 This paper presents an experimental investigation of the compressive behaviour of a square tube filled with pumice lightweight concrete (PLC). A square section of 20×20×30 mm, which forms the backbone structure, is investigated. The compression deformation results show the folding mechanism, displacement values, and energy absorption. Aluminium thin-wall tubes filled with PLC revealed superior energy absorption capacity (EAC) under low strain rate at room temperature. The superior EAC, which results from the mutual deformation benefit between the aluminium section and the PLC, is also analysed. PLC was characterised by Fourier Transform Infrared (FTIR) spectroscopy, Field Emission Scanning Electron Microscopy (FESEM), and Energy Dispersive X-ray Spectrometry (EDX) analysis for a better understanding of the material behaviour. Individual and comparative load-bearing graphs are logged for better analysis. This novel approach aims to validate porous lightweight concrete as a better lightweight energy-absorbing filler material. 16. Investigation of Energy Absorption in Aluminum Foam Sandwich Panels By Drop Hammer Test: Experimental Results Directory of Open Access Journals (Sweden) 2016-05-01 Full Text Available Sandwich panel structures with an aluminum foam core and metal surfaces have light weight with high performance in dispersing energy. This has led to their widespread use in the absorption of energy. The cell structure of the foam core is subjected to plastic deformation at a constant stress level, which absorbs a lot of kinetic energy before destruction of the structure. In this research, by making samples of aluminum foam core sandwich panels with aluminum surfaces, experimental tests of low velocity impact by a drop machine are performed for different velocities and weights of projectile on samples of sandwich panels with aluminum foam core with relative densities of 18%, 23%, and 27%.
The output of the device is an acceleration-time diagram, obtained from an accelerometer located on the projectile. From the experimental tests, the effects of the weight, velocity and energy of the projectile and the density of the foam on the global deformation and the energy decrease rate of the projectile have been studied. The results of the experimental tests show that by increasing the density of the aluminum foam, the global deformation is reduced and the slope of the energy loss of the projectile increases. Also, by increasing the velocity of the projectile, the energy loss increases. 17. Energy Absorption of Monolithic and Fibre Reinforced Aluminium Cylinders NARCIS (Netherlands) De Kanter, J.L.C.G. 2006-01-01 Summary accompanying the thesis: Energy Absorption of Monolithic and Fibre Reinforced Aluminium Cylinders by Jens de Kanter This thesis presents the investigation of the crush behaviour of both monolithic aluminium cylinders and externally fibre-reinforced aluminium cylinders. The research is based 18. Energy and exergy analyses of the diffusion absorption refrigeration system International Nuclear Information System (INIS) Yıldız, Abdullah; Ersöz, Mustafa Ali 2013-01-01 This paper describes the thermodynamic analyses of a DAR (diffusion absorption refrigeration) cycle. The experimental apparatus is set up as an ammonia-water DAR cycle with helium as the auxiliary inert gas. A thermodynamic model including mass, energy and exergy balance equations is presented for each component of the DAR cycle, and this model is then validated by comparison with experimental data. In the thermodynamic analyses, energy and exergy losses for each component of the system are quantified and illustrated. The system's energy and exergy losses and efficiencies are investigated. The highest energy and exergy losses occur in the solution heat exchanger.
The highest energy losses in the experimental and theoretical analyses are found to be 25.7090 W and 25.4788 W respectively, whereas the corresponding exergy losses are calculated as 13.7933 W and 13.9976 W. While the energy efficiencies obtained from both the model and the experimental studies are calculated as 0.1858, the corresponding exergy efficiencies are found to be 0.0260 and 0.0356. - Highlights: • The diffusion absorption refrigerator system is designed, manufactured and tested. • The energy and exergy analyses of the system are presented theoretically and experimentally. • The energy and exergy losses are investigated for each component of the system. • The highest energy and exergy losses occur in the solution heat exchanger. • The energy and the exergy performances are also calculated. 19. Stochasticity of the energy absorption in the electron cyclotron resonance International Nuclear Information System (INIS) Gutierrez T, C.; Hernandez A, O. 1998-01-01 The mechanism of energy absorption at the electron cyclotron resonance is an open problem, since it can be considered from the stochastic point of view or related to a non-homogeneous but periodic spatial structure of the plasma. In this work, using the Bogoliubov averaging method for a multi-periodic system in the presence of resonances, the drift equations in the presence of an RF field were obtained for the case of electron cyclotron resonance, up to first-order terms in the inverse of the cyclotron frequency. The equation for the energy absorbed by the electrons is obtained in a simple model by the drift method. The stochastic character of the energy absorption is shown. (Author) 20. Energy absorption and exposure build-up factors in teeth International Nuclear Information System (INIS) Manjunatha, H.C.; Rudraswamy, B. 2010-01-01 Full text: Gamma and X-radiation are widely used in medical imaging and radiation therapy.
The user of radioisotopes must have knowledge of how radiation interacts with matter, especially with the human body, because when photons enter the medium/body they degrade their energy and build up in the medium, giving rise to secondary radiation, which can be estimated by a factor called the 'build-up factor'. It is essential to study the exposure build-up factor in radiation dosimetry. A G.P. fitting method has been used to compute the energy absorption and exposure build-up factors of teeth (enamel outer surface (EOS), enamel middle (EM), enamel dentin junction towards enamel (EDJE), enamel dentin junction towards dentin (EDJD), dentin middle (DM) and dentin inner surface (DIS)) for a wide energy range (0.015-15 MeV) up to a penetration depth of 40 mean free paths. The dependence of the energy absorption and exposure build-up factors on incident photon energy, penetration depth and effective atomic number has also been assessed. The relative dose distribution at a distance r from the point source is also estimated. The computed exposure and absorption build-up factors are useful to estimate the gamma and Bremsstrahlung radiation dose distribution in teeth, which is useful in clinical dosimetry. 1. Exploring the tensile strain energy absorption of hybrid modified epoxies containing soft particles International Nuclear Information System (INIS) 2011-01-01 Research highlights: → Two epoxy systems have been modified by a combination of fine and coarse modifiers. → While both hybrid systems reveal synergistic K_IC, no synergism is observed in the tensile test. → It is found that coarse particles induce stress concentration in hybrid samples. → Stress concentration leads to fracture of samples at lower energy absorption levels. -- Abstract: In this paper, the tensile strain energy absorption of two different hybrid modified epoxies has been systematically investigated.
In one system, epoxy has been modified by amine-terminated butadiene acrylonitrile (ATBN) and hollow glass spheres as fine and coarse modifiers, respectively. The other hybrid epoxy has been modified by the combination of ATBN and recycled tire particles. The fracture toughness measurements of the blends revealed synergistic toughening for both hybrid systems in some formulations. However, no evidence of synergism is observed in the tensile test of hybrid samples. Scanning electron microscopy (SEM), transmission optical microscopy (TOM) and finite element (FEM) simulation were utilized to study the deformation mechanisms of the hybrid systems in the tensile test. It is found that coarse particles induce stress concentration in hybrid samples. This produces non-uniform strain-localized regions which lead to fracture of the hybrid samples at lower tensile loading and energy absorption levels. 2. Research on a new wave energy absorption device Science.gov (United States) Lu, Zhongyue; Shang, Jianzhong; Luo, Zirong; Sun, Chongfei; Zhu, Yiming 2018-01-01 To reduce the impact of global warming and the energy crisis caused by polluting energy combustion, research on renewable and clean energy becomes more and more important. This paper presents the design of a new wave energy absorption device and introduces its mechanical structure. The flow tube model is analyzed, and the formulation of the proposed method is presented. To verify the principle of the wave absorbing device, an experiment was carried out in a laboratory environment, and the results of the experiment can be applied to optimizing the structural design for output power. 3.
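The tensile strain energy absorption studied in entry 1 above is the area under the stress-strain curve up to fracture. A minimal sketch using trapezoidal integration; the two curves are invented for illustration and are not data from the paper.

```python
def strain_energy_density(strain, stress):
    """Strain energy absorbed per unit volume: trapezoidal area under the
    stress-strain curve (stress in MPa, dimensionless strain -> MJ/m^3)."""
    area = 0.0
    for i in range(1, len(strain)):
        area += 0.5 * (stress[i] + stress[i - 1]) * (strain[i] - strain[i - 1])
    return area

# Invented curves: an unmodified blend fracturing early at higher stress vs.
# a toughened blend fracturing later at lower stress.
brittle = strain_energy_density([0.0, 0.01, 0.02], [0.0, 60.0, 65.0])
tough = strain_energy_density([0.0, 0.02, 0.06], [0.0, 45.0, 50.0])
```

Here the toughened curve absorbs more energy despite its lower peak stress; conversely, the stress concentrations described in the abstract truncate the curve early, cutting the absorbed energy even when fracture toughness shows synergism.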
Energy spectrum inverse problem of q-deformed harmonic oscillator and WKB approximation International Nuclear Information System (INIS) Sang, Nguyen Anh; Thuy, Do Thi Thu; Loan, Nguyen Thi Ha; Lan, Nguyen Tri; Viet, Nguyen Ai 2016-01-01 Using the connection between the q-deformed harmonic oscillator and a Morse-like anharmonic potential, we investigate the energy spectrum inverse problem. Considering that some energy levels of the spectrum of a q-deformed harmonic oscillator are known, we construct the corresponding Morse-like potential and then find the deformation parameter q. The possibility of using the WKB approximation in the energy spectrum inverse problem is discussed for the cases of a parabolic potential (harmonic oscillator) and a Morse-like potential (q-deformed harmonic oscillator). For practical problems, we propose a deformed three-level simple model, in which the parameter set of the Morse potential and the corresponding parameter set of level deformations are easily and explicitly defined. (paper) 4. Stored energy and annealing behavior of heavily deformed aluminium DEFF Research Database (Denmark) Kamikawa, Naoya; Huang, Xiaoxu; Kondo, Yuka 2012-01-01 It has been demonstrated in previous work that a two-step annealing treatment, including a low-temperature, long-time annealing and a subsequent high-temperature annealing, is a promising route to control the microstructure of a heavily deformed metal. In the present study, structural parameters are quantified such as boundary spacing, misorientation angle and dislocation density for 99.99% aluminium deformed by accumulative roll-bonding to a strain of 4.8. Two different annealing processes have been applied: (i) one-step annealing for 0.5 h at 100-400°C and (ii) two-step annealing for 6 h at 175°C
followed by 0.5 h annealing at 200-600°C, where the former treatment leads to discontinuous recrystallization and the latter to uniform structural coarsening. This behavior has been analyzed in terms of the relative change during annealing of energy stored as elastic energy in the dislocation structure... 5. Nuclear safeguards applications of energy-dispersive absorption edge densitometry International Nuclear Information System (INIS) Russo, P.A.; Hsue, S.T.; Langner, D.G.; Sprinkle, J.K. Jr. 1980-01-01 The principles and techniques of absorption edge densitometry in the energy-dispersive mode are summarized as they apply to the nondestructive assay of special nuclear materials. Five existing field instruments, designed for special nuclear materials accounting measurements, are described. Results of the testing of these instruments as well as recent laboratory results are used to define the capabilities of the technique for special nuclear materials accounting. Possibilities for future applications are reviewed. 14 figures 6. Energy Absorption in Chopped Carbon Fiber Compression Molded Composites International Nuclear Information System (INIS) Starbuck, J.M. 2001-01-01 In passenger vehicles the ability to absorb energy due to impact and be survivable for the occupant is called the "crashworthiness" of the structure. To identify and quantify the energy absorbing mechanisms in candidate automotive composite materials, test methodologies were developed for conducting progressive crush tests on composite plate specimens. The test method development and experimental set-up focused on isolating the damage modes associated with the frond formation that occurs in dynamic testing of composite tubes. Quasi-static progressive crush tests were performed on composite plates manufactured from chopped carbon fiber with an epoxy resin system using compression molding techniques. The carbon fiber was Toray T700 and the epoxy resin was YLA RS-35.
The effect of various material and test parameters on energy absorption was evaluated by varying the following parameters during testing: fiber volume fraction, fiber length, fiber tow size, specimen width, profile radius, and profile constraint condition. It was demonstrated during testing that the use of a roller constraint directed the crushing process and the load deflection curves were similar to progressive crushing of tubes. Of all the parameters evaluated, the fiber length appeared to be the most critical material parameter, with shorter fibers having a higher specific energy absorption than longer fibers. The combination of material parameters that yielded the highest energy absorbing material was identified. 7. Determining photon energy absorption parameters for different soil samples International Nuclear Information System (INIS) Kucuk, Nil; Cakir, Merve; Tumsavas, Zeynal 2013-01-01 The mass attenuation coefficients (μ_s) for five different soil samples were measured at 661.6, 1173.2 and 1332.5 keV photon energies. The soil samples were separately irradiated with 137Cs and 60Co (370 kBq) radioactive point gamma sources. The measurements were made by performing transmission experiments with a 2″ × 2″ NaI(Tl) scintillation detector, which had an energy resolution of 7% at 0.662 MeV for the gamma rays from the decay of 137Cs. The effective atomic numbers (Z_eff) and the effective electron densities (N_eff) were determined experimentally and theoretically using the obtained μ_s values for the soil samples. Furthermore, the Z_eff and N_eff values of the soil samples were computed for the total photon interaction cross-sections using theoretical data over a wide energy region ranging from 1 keV to 15 MeV. The experimental values of the soils were found to be in good agreement with the theoretical values. Sandy loam and sandy clay loam soils demonstrated poor photon energy absorption characteristics.
However, clay loam and clay soils had good photon energy absorption characteristics. (author) 8. Photosynthetic antennae systems: energy transport and optical absorption International Nuclear Information System (INIS) Reineker, P.; Supritz, Ch.; Warns, Ch.; Barvik, I. 2004-01-01 The energy transport and the optical line shape of molecular aggregates, modeling bacterial photosynthetic light-harvesting systems (chlorosomes in the case of Chlorobium tepidum or Chloroflexus aurantiacus and LH2 in the case of Rhodopseudomonas acidophila), is investigated theoretically. The molecular units are described by two-level systems with an average excitation energy ε and interacting with each other through nearest-neighbor interactions. For LH2 an elliptical deformation of the ring is also allowed. Furthermore, dynamic and, in the case of LH2, also quasi-static fluctuations of the local excitation energies are taken into account, simulating fast molecular vibrations and slow motions of the protein backbone, respectively. The fluctuations are described by Gaussian Markov processes in the case of the chlorosomes and by colored dichotomic Markov processes, with exponentially decaying correlation functions, with small (λ_s) and large (λ) decay constants, in the case of LH2. 9. Energy absorption in cold inhomogeneous plasmas - The Herlofson paradox. Science.gov (United States) Crawford, F. W.; Harker, K. J. 1972-01-01 Confirmation of Barston's (1964) conclusions regarding the underlying mechanism of the Herlofson paradox by examining in detail several analytically tractable cases of delta-function and sinusoidal excitation. The effects of collisions and nonzero electron temperature in determining the steady state fields and dissipation are considered. Energy absorption without dissipation in plasmas is shown to be analogous to that occurring after application of a signal to a network of lossless resonant circuits.
This analogy is pursued and is extended to cover Landau damping in a warm homogeneous plasma, in which the resonating elements are the electron streams making up the velocity distribution. Some of the practical consequences of resonant absorption are discussed, together with a number of paradoxical plasma phenomena which can also be elucidated by considering a superposition of normal modes rather than a single Fourier component. 10. Thermostatistic properties of a q-deformed ideal Fermi gas with a general energy spectrum International Nuclear Information System (INIS) Cai, Shukuan; Su, Guozhen; Chen, Jincan 2007-01-01 The thermostatistic problems of a q-deformed ideal Fermi gas in any dimensional space and with a general energy spectrum are studied, based on the q-deformed Fermi-Dirac distribution. The effects of the deformation parameter q on the properties of the system are revealed. It is shown that q-deformation results in some novel characteristics different from those of an ordinary system. Besides, it is found that the effects of the q-deformation on the properties of the Fermi systems are very different for different dimensional spaces and different energy spectra. 11. Nanostructures for Enhanced Light Absorption in Solar Energy Devices Directory of Open Access Journals (Sweden) Gustav Edman Jonsson 2011-01-01 Full Text Available The fascinating optical properties of nanostructured materials find important applications in a number of solar energy utilization schemes and devices. Nanotechnology provides methods for fabrication and use of structures and systems with sizes corresponding to the wavelength of visible light. This opens a wealth of possibilities to explore new, often resonance-type, phenomena observed when the object size and the electromagnetic field periodicity (light wavelength λ) match.
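Entries 3 and 10 above both rely on q-deformed algebra. One common convention (assumed here for illustration; the papers' exact deformation scheme may differ) replaces the integer n by the symmetric q-number [n]_q = (q^n - q^-n)/(q - q^-1), which recovers n as q → 1 and makes the oscillator spectrum anharmonic.

```python
def q_bracket(n, q):
    """Symmetric q-number [n]_q = (q^n - q^-n)/(q - q^-1); tends to n as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return float(n)
    return (q ** n - q ** (-n)) / (q - 1.0 / q)

def q_oscillator_levels(n_levels, q, hbar_omega=1.0):
    """Energy levels E_n = (hbar*omega/2) * ([n]_q + [n+1]_q), one common
    convention for the q-deformed oscillator; spacing is non-uniform for q != 1."""
    return [0.5 * hbar_omega * (q_bracket(n, q) + q_bracket(n + 1, q))
            for n in range(n_levels)]
```

For q = 1 the usual equidistant spectrum E_n = ħω(n + 1/2) is recovered; any q ≠ 1 produces non-uniform level spacing, which is the Morse-like feature the inverse problem of entry 3 exploits.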
Here we briefly review the effects and concepts of enhanced light absorption in nanostructures and illustrate them with specific examples from recent literature and from our studies. These include enhanced optical absorption of composite photocatalytically active TiO2/graphitic carbon films, systems with enhanced surface plasmon resonance, field-enhanced absorption in nanofabricated carbon structures with geometrical optical resonances, and excitation of waveguiding modes in supported nanoparticle assemblies. The case of Ag-particle plasmon-mediated chemistry of NO on a graphite surface is highlighted to illustrate the principle of plasmon-electron coupling in adsorbate systems. 12. Impurities in semiconductors: total energy and infrared absorption calculations International Nuclear Information System (INIS) Yndurain, F. 1987-01-01 A new method to calculate the electronic structure of infinite nonperiodic systems is discussed. The calculations are performed using atomic pseudopotentials and a basis of atomic Gaussian wave functions. The Hartree-Fock self-consistent equations are solved in the cluster-Bethe lattice system. Electron correlation is partially included in a second-order perturbation approximation. The formalism is applied to hydrogenated amorphous silicon. Total energy calculations of finite clusters of silicon atoms in the presence of impurities are also presented. The results show how atomic oxygen breaks the covalent silicon-silicon bond, forming a local configuration similar to that of SiO 2 . Calculations of the infrared absorption due to the presence of atomic oxygen in crystalline silicon are presented. A Born Hamiltonian is used to calculate the vibrational modes of the system, together with a simplified model to describe the infrared absorption mechanism. The interstitial and the substitutional cases are considered and analysed. The positions of the main infrared absorption peaks, their intensities and their isotope shifts are calculated.
The results are in satisfactory agreement with the available data. (author) [pt] 13. Energy dependences of absorption in beryllium windows and argon gas International Nuclear Information System (INIS) Chantler, C.T.; Staudenmann, J-P. 1994-01-01 As part of an ongoing work on x-ray form factors, new absorption coefficients are being evaluated for all elements, across the energy range from below 100 eV to above 100 keV. These new coefficients are applied herein to typical problems in synchrotron radiation stations, namely the use of beryllium windows and argon gas detectors. Results are compared with those of other authors. The electron-ion pair production process in ionization chambers is discussed, and the effects of 3d-element impurities are indicated. 15 refs., 6 figs 14. Surface wave energy absorption by a partially submerged bio-inspired canopy. Science.gov (United States) Nové-Josserand, C; Castro Hebrero, F; Petit, L-M; Megill, W M; Godoy-Diana, R; Thiria, B 2018-03-27 Aquatic plants are known to protect coastlines and riverbeds from erosion by damping waves and fluid flow. These flexible structures absorb the fluid-borne energy of an incoming flow by deforming mechanically. In this paper we focus on the mechanisms involved in these fluid-elasticity interactions, as an efficient energy harvesting system, using an experimental canopy model in a wave tank. We study an array of partially submerged flexible structures that are subjected to the action of a surface wave field, investigating in particular the role of spacing between the elements of the array in the ability of our system to absorb energy from the flow. The energy absorption potential of the canopy model is examined using global wave height measurements for the wave field and local measurements of the elastic energy based on the kinematics of each element of the canopy.
We study different canopy arrays and show in particular that flexibility improves wave damping by around 40%, of which half is potentially harvestable. 15. Energy Absorption Capacity in Natural Fiber Reinforcement Composites Structures Directory of Open Access Journals (Sweden) Elías López-Alba 2018-03-01 Full Text Available The study of natural fiber reinforced composite structures has attracted the attention of the automobile industry due to new regulations concerning the recyclability and reusability of materials while preserving and/or improving the mechanical characteristics. The influence of different parameters on the material behavior of natural fiber reinforced plastic structures has been investigated, showing the potential for transport applications in energy-absorbing structures. Two different woven fabrics (twill and hopsack) made of flax fibers, as well as a non-woven mat made of a mixture of hemp and kenaf fibers, were employed as reinforcing materials. These reinforcing textiles were impregnated with both HD-PE (high-density polyethylene) and PLA (polylactic acid) matrices, using a continuous compression molding press. The impregnated semi-finished laminates (so-called organic sheets) were thermoformed in a second step to half-tubes that were assembled through a vibration-welding process into cylindrical crash absorbers. The specimens were loaded in compression to determine the specific energy absorption capacity. Quasi-static test results were compared to dynamic test data obtained on a catapult arrangement. The differences in specific energy absorption (SEA) as a function of different parameters, such as the wall thickness, the weave type, the reinforcing textiles, and the matrix used, depending on the velocity of load application, were quantified. In the case of the quasi-static analysis, a 20% increase in the SEA value is observed when woven Hopsack fabric reinforcement is employed.
No influence of loading velocity on the SEA evaluation was observed at the higher speeds used to perform the experiments. The weave configuration (Hopsack) seems to be more stable against buckling effects at low loading rates, with 10% higher SEA values. An increase in SEA of up to 72% was observed for the PLA matrix when compared with HD 16. Studying energy absorption in tapered thick walled tubes Directory of Open Access Journals (Sweden) P. Hosseini Tehrani Full Text Available In many engineering structures, different energy absorption systems may be used to improve the crashworthiness capability of the system and to control the damage that may occur during an accident. Therefore, extensive research has been done on energy-absorbing cells. In this paper, energy absorption in tapered thick-walled tubes has been investigated. As a practical case, the study focuses on the crush element of the Siemens ER24PC locomotive. To investigate the performance of this part at the time of collision, it has been modeled in Abaqus software and its collision characteristics have been evaluated. Considering that the crash element folds at the time of collision, an analytical approach is presented for the calculation of the instantaneous folding force under axial load. The basis of this method is the definition and analysis of the main folding mechanism and the calculation of the average folding force. This method has been used for validation of the results of the numerical solution. Since the sheet thickness of the crash element is high and the element may rupture at the time of collision, several damage models have been used for the numerical simulations. One of the three damage models used in this paper is available in the software; code was written for the two other damage models, and the most suitable damage model was identified by comparing the results of the numerical solution with laboratory test results. In addition, the validity of the chosen damage model has been checked against the ECE R66 standard.
To improve the crashworthiness characteristics, measures such as the use of metal foam and the creation of triggers at suitable locations to reduce the maximum collision force have been examined. Finally, through different simulations, an optimal crush element has been introduced and its performance and efficiency have been evaluated. 17. Gamow-Teller decay and nuclear deformation: implementation of a new total absorption spectrometer, study of N ≅ Z isotopes of krypton and strontium International Nuclear Information System (INIS) Poirier, E. 2002-12-01 Nuclei with A ∼ 70 along the N=Z line are known to be the scene of phenomena closely related to nuclear deformation and are of particular interest, since theoretical mean-field calculations predict that a large part of the Gamow-Teller resonance might be located below the ground state of the mother nucleus and thus be accessible through β-decay studies. These results have shown the effect of the shape of the ground state on the intensity of the Gamow-Teller strength. Thus, the experimental determination, through β-decay, of the Gamow-Teller strength distribution and its comparison with theoretical predictions allow the quadrupole deformation parameter of the ground state of the parent nucleus to be pinned down. In order to study the neutron-deficient isotopes of krypton (A=72,73,74,75) and strontium (A=76,77,78) and to establish the β-strength over the full energy range, a new total absorption spectrometer (TAgS) has been built in the framework of an international collaboration and installed at the ISOLDE/CERN mass separator. For the data analysis, the response function R of the spectrometer has been calculated by means of Monte Carlo simulations, based on the GEANT4 code, and a statistical description of the level scheme in the daughter nucleus. The β-feeding distribution has been obtained from the experimental spectra through a method based on Bayes' theorem and then converted into Gamow-Teller strength.
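The feeding-extraction step just described (a measured spectrum equals the response matrix applied to the feeding distribution, inverted via Bayes' theorem) can be sketched with an iterative Bayesian (Richardson-Lucy-type) unfolding. The 3-level response matrix and feeding below are invented for illustration; the actual analysis uses a GEANT4-simulated response and many more bins.

```python
import numpy as np

def bayes_unfold(response, measured, iterations=2000):
    """Iteratively re-estimate a feeding distribution f from a measured
    spectrum m = response @ f.  response[i, j] is the probability that decay
    through level j yields a count in spectrum bin i (columns sum to 1)."""
    n_levels = response.shape[1]
    feeding = np.full(n_levels, measured.sum() / n_levels)  # flat prior
    for _ in range(iterations):
        folded = response @ feeding  # spectrum predicted by current feeding
        # Bayes' theorem: share each measured count among the feeding levels
        feeding = feeding * (response.T @ (measured / folded))
    return feeding

# Invented, well-conditioned 3-level response, for illustration only
R = np.array([[0.7, 0.2, 0.0],
              [0.2, 0.6, 0.3],
              [0.1, 0.2, 0.7]])
true_feeding = np.array([0.5, 0.3, 0.2])
recovered = bayes_unfold(R, R @ true_feeding)  # converges toward true_feeding
```

With noiseless data and an invertible response, the multiplicative update has the true feeding as its fixed point and preserves the total number of counts at every iteration.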
The results from the 74 Kr decay analysis allow the ground state of this nucleus to be described as a coexistence of an oblate shape and a prolate shape. In the case of 76 Sr, the experimental Gamow-Teller strength distribution strongly indicates a prolate deformation. (author) 18. Effect of hydrogen absorption on the localization of plastic deformation in a 316L stainless steel International Nuclear Information System (INIS) Aubert, I.; Olive, J.M. 2007-01-01 The aim of this work is to quantify the effects of absorbed hydrogen on the plastic deformation (at the grain scale) of 316L stainless steel polycrystals. Tensile tests in air have been carried out on specimens previously cathodically charged with hydrogen (135 wt.ppm) and on uncharged polycrystals. After the tensile tests, a statistically representative number of glide bands emerging at the surface was observed. In parallel to this experimental study, the plastic glide level in each grain has been obtained by a finite element method from the polycrystalline microstructure modeled on the EBSD cartography. The Zebulon code developed by the Ecole des Mines de Paris accounts for the plastic behaviour of the studied polycrystals using a crystalline plasticity model. The coupled analysis of the numerical and experimental results identifies the glide plane that produced the glide steps observed in each grain by AFM, which in turn quantifies the number of emergent dislocations needed to create the average glide band. It is then possible to characterize the modifications of the localization of plastic deformation in the 316L stainless steel induced by hydrogen absorption. (O.M.) 19. Some considerations of the energy spectrum of odd-odd deformed nuclei; Quelques considerations sur le spectre d'energie des noyaux impair-impair deformes Energy Technology Data Exchange (ETDEWEB) Alceanu-G, Pinho de; Picard, J [Commissariat a l'Energie Atomique, Saclay (France).
Centre d'Etudes Nucleaires 1965-07-01 The odd-odd deformed nuclei are described as a rotator plus two odd nucleons moving in orbitals {omega}{sub p} and {omega}{sub n} of the deformed potential. We investigate the energies and wave functions of the various states of the ({omega}{sub p}, {omega}{sub n}) configurations by calculating and numerically diagonalizing the Hamiltonian matrix (with R.P.C. and residual interactions). The Gallagher-Moszkowski coupling rules and the abnormal K = 0 rotational bands are discussed. (authors) [French] Les noyaux impair-impairs deformes sont decrits comme un rotateur plus deux nucleons non apparies dans les orbites {omega}{sub p} et {omega}{sub n} du potentiel deforme. Nous etudions le spectre d'energie et les fonctions d'onde des configurations ({omega}{sub p}, {omega}{sub n}) en tenant compte de l'interaction particule-rotation et de la force residuelle entre les deux nucleons celibataires. 20. The quasi-deuteron model for low energy pion absorption International Nuclear Information System (INIS) Gouweloos, M. 1986-01-01 In this thesis, pion absorption in complex nuclei is studied in the quasi-deuteron model (QDM), in which the pion is absorbed on a nucleon pair in the nucleus. The mechanism is studied in the low-energy domain, since there the in-medium (pi→NN) operator turns out to be of simple character. In Chapters 2 and 3 this operator is constructed and analytical expressions are derived for (pi,NN) distributions in a plane-wave impulse approximation for nuclei. The results turn out to be very useful for developing insight into the possibilities inherent in the QDM and for the interpretation of the results in later chapters. Chapters 4 to 6 are devoted to the more realistic distorted-wave calculations. In Chapter 4 the formal framework is presented and the calculational details are discussed. Chapters 5 and 6 contain the comparison to stopped-pion and in-flight data, respectively. In Chapter 7 the main results are summarized. (Auth.) 1.
Non-linear absorption for concentrated solar energy transport Energy Technology Data Exchange (ETDEWEB) Jaramillo, O. A; Del Rio, J.A; Huelsz, G [Centro de Investigacion de Energia, UNAM, Temixco, Morelos (Mexico) 2000-07-01 In order to determine the maximum solar energy that can be transported using SiO{sub 2} optical fibers, an analysis of non-linear absorption is required. In this work, we model the interaction between solar radiation and the SiO{sub 2} optical fiber core to determine the dependence of the absorption on the radiative intensity. Using Maxwell's equations we obtain the relation between the refractive index and the electric susceptibility up to second order in terms of the electric field intensity. This is not enough to obtain an explicit expression for the non-linear absorption. Thus, to obtain the non-linear optical response, we develop a microscopic model of damped, driven harmonic oscillators based on the Drude-Lorentz theory. We solve this model using experimental information for the SiO{sub 2} optical fiber, and we determine the frequency dependence of the non-linear absorption and the non-linear extinction of SiO{sub 2} optical fibers. Our results estimate that the average value over the solar spectrum of the non-linear extinction coefficient for SiO{sub 2} is k{sub 2}=10{sup -29}m{sup 2}V{sup -2}. With this result we conclude that the non-linear part of the absorption coefficient of SiO{sub 2} optical fibers during the transport of concentrated solar energy achieved by a circular concentrator is negligible, and therefore the use of optical fibers for solar applications is a viable option. [Spanish] Con el objeto de determinar la maxima energia solar que puede transportarse usando fibras opticas de SiO{sub 2} se requiere el analisis de absorcion no linear.
En este trabajo modelamos la interaccion entre la radiacion solar y el nucleo de la fibra optica de SiO{sub 2} para determinar la dependencia de la absorcion de la intensidad radiativa. Mediante el uso de las ecuaciones de Maxwell obtenemos la relacion entre el indice de refraccion y la susceptibilidad electrica hasta el segundo orden en terminos de la intensidad del campo electrico. Esto no es 2. Energy spectrum inverse problem of q-deformed harmonic oscillator and entanglement of composite bosons Science.gov (United States) Sang, Nguyen Anh; Thu Thuy, Do Thi; Loan, Nguyen Thi Ha; Lan, Nguyen Tri; Viet, Nguyen Ai 2017-06-01 Using the simple deformed three-level model (D3L model) proposed in our earlier work, we study the entanglement problem of composite bosons. Assuming the first three energy levels are known, we can obtain two energy separations and define the level deformation parameter δ. Using the connection between the q-deformed harmonic oscillator and a Morse-like anharmonic potential, the deformation parameter q can also be derived explicitly. As in Einstein's theory of special relativity, we introduce observer effects: an outside observer (looking from outside the system under study) and an inside observer (looking from inside the system under study). Corresponding to these observers, an outside entanglement entropy and an inside entanglement entropy are defined. As in the case of the Foucault pendulum in the problem of the Earth's rotation, our deformed energy level investigation might be useful for predicting environmental effects outside a confined box. 3. Influence of Stacking Fault Energy (SFE) on the deformation mode of stainless steels International Nuclear Information System (INIS) Li, X.; Van Renterghem, W.; Al Mazouzi, A. 2008-01-01 The susceptibility to irradiation-assisted stress corrosion cracking (IASCC) of stainless steels in light water reactors (LWR) can be caused by the localisation of deformation that takes place in these materials.
Dislocation channelling and twinning modes of deformation can induce localised plasticity leading to failure. Stacking fault energy (SFE) plays an important role in every process of plastic deformation behaviour, especially in twinning and dislocation channelling. In order to correlate localised deformation with stacking fault energy, this parameter has been experimentally determined by transmission electron microscope (TEM) using both dislocation node and multiple ribbons methods after compression in three different model alloys. Detailed deformation behaviour of three fabricated alloys with different stacking fault energy before and after tensile tests at temperatures from -150 deg C to 300 deg C, will be shown and discussed based on mechanical test and TEM observation. (authors) 4. Energy absorption buildup factors for thermoluminescent dosimetric materials and their tissue equivalence DEFF Research Database (Denmark) Manohara, S.R.; Hanagodimath, S.M.; Gerward, Leif 2010-01-01 Gamma ray energy-absorption buildup factors were computed using the five-parameter geometric progression (G-P) fitting formula for seven thermoluminescent dosimetric (TLD) materials in the energy range 0.015-15 MeV, and for penetration depths up to 40 mfp (mean free path). The generated energy-absorption... 5. Partitioning of elastic energy in open-cell foams under finite deformations International Nuclear Information System (INIS) Harb, Rani; Taciroglu, Ertugrul; Ghoniem, Nasr 2013-01-01 The challenges associated with the computational modeling and simulation of solid foams are threefold—namely, the proper representation of an intricate geometry, the capability to accurately describe large deformations, and the extremely arduous numerical detection and enforcement of self-contact during crushing. The focus of this study is to assess and accurately quantify the effects of geometric nonlinearities (i.e. 
finite deformations, work produced under buckling-type motions) on the predicted mechanical response of open-cell foams of aluminum and polyurethane prior to the onset of plasticity and contact. Beam elements endowed with three-dimensional finite deformation kinematics are used to represent the foam ligaments. Ligament cross-sections are discretized through a fiber-based formulation that provides accurate information regarding the onset of plasticity, given the uniaxial yield stress–strain data for the bulk material. It is shown that the (hyper-) elastic energy partition within ligaments is significantly influenced by kinematic nonlinearities, which frequently cause strong coupling between the axial, bending, shear and torsional deformation modes. This deformation mode-coupling is uniquely obtained as a result of evaluating equilibrium in the deformed configuration, and is undetectable when small deformations are assumed. The relationship between the foam topology and energy partitioning at various stages of moderate deformation is also investigated. Coupled deformation modes are shown to play an important role, especially in perturbed Kelvin structures where over 70% of the energy is stored in coupled axial-shear and axial-bending modes. The results from this study indicate that it may not always be possible to accurately simulate the onset of plasticity (and the response beyond this regime) if finite deformation kinematics are neglected 6. Study on Energy Absorption Capacity of Steel-Polyester Hybrid Fiber Reinforced Concrete Under Uni-axial Compression Science.gov (United States) 2018-05-01 This work presents the energy absorption capacity of hybrid fiber reinforced concrete made with hooked end steel fibers (0.5 and 0.75%) and straight polyester fibers (0.5, 0.8, 1.0 and 2.0%). Compressive toughness (energy absorption capacity) under uni-axial compression was evaluated on 100 × 200 mm size cylindrical specimens with varying steel and polyester fiber content. 
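The compressive toughness quoted in such studies is the energy absorbed, i.e. the area under the measured load-deformation curve. A generic sketch of that calculation, with made-up data points (not taken from the study):

```python
def toughness(load_kN, deflection_mm):
    """Energy absorbed: trapezoidal area under a load-deformation curve."""
    energy = 0.0
    for i in range(1, len(load_kN)):
        dx = deflection_mm[i] - deflection_mm[i - 1]
        energy += 0.5 * (load_kN[i] + load_kN[i - 1]) * dx
    return energy  # kN*mm, i.e. joules

# Made-up points sketching a softening post-peak response
deflection = [0.0, 0.5, 1.0, 1.5, 2.0]   # mm
load = [0.0, 120.0, 150.0, 110.0, 80.0]  # kN
energy_absorbed = toughness(load, deflection)  # -> 210.0 J
```

Fiber-reinforced specimens with a flatter post-peak branch enclose more area, which is exactly what the higher toughness values for the hybrid mixes express.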
Efficiency of the hybrid fiber reinforcement is studied with respect to fiber type, size and volume fractions in this investigation. The vertical displacement under uni-axial compression was measured under the applied loads and the load-deformation curves were plotted. From these curves the toughness values were calculated and the results were compared with steel and polyester as individual fibers. The hybridization of 0.5% steel + 0.5% polyester performed well in the post-peak region due to the addition of polyester fibers to the steel fibers, and the energy absorption value was 23% greater than for 0.5% steel FRC. Peak stress values were also higher in the hybrid series than with single fibers, and based on the results it is concluded that hybrid fiber reinforcement improves the toughness characteristics of concrete without affecting workability. 8. Variation of energy absorption buildup factors with incident photon energy and penetration depth for some commonly used solvents International Nuclear Information System (INIS) Singh, Parjit S.; Singh, Tejbir; Kaur, Paramjeet 2008-01-01 The G-P fitting method has been used to compute the energy absorption buildup factors of some commonly used solvents such as acetonitrile (C 2 H 3 N), butanol (C 4 H 9 OH), chlorobenzene (C 6 H 5 Cl), diethyl ether (C 4 H 10 O), ethanol (C 2 H 5 OH), methanol (CH 3 OH), propanol (C 3 H 7 OH) and water (H 2 O) for the wide energy range (0.015-15.0 MeV) up to a penetration depth of 10 mean free paths. The variation of the energy absorption buildup factor with chemical composition as well as incident photon energy for the selected solvents has been studied.
It has been observed that the maximum value of the energy absorption buildup factor shifts to slightly higher incident photon energy with increasing equivalent atomic number of the solvent, and that the solvent with the lowest equivalent atomic number possesses the maximum value of the energy absorption buildup factor 9. Doppler broadening and its contribution to Compton energy-absorption cross sections: An analysis of the Compton component in terms of mass-energy absorption coefficient International Nuclear Information System (INIS) Rao, D.V.; Takeda, T.; Itai, Y.; Akatsuka, T.; Cesareo, R.; Brunetti, A.; Gigante, G.E. 2002-01-01 Compton energy absorption cross sections are calculated using formulas based on a relativistic impulse approximation to assess the contribution of Doppler broadening and to examine the Compton profile literature and explore what, if any, effect our knowledge of this line broadening has on the Compton component in terms of the mass-energy absorption coefficient. Compton energy-absorption cross sections are evaluated for all elements, Z=1-100, and for photon energies 1 keV-100 MeV. Using these cross sections, the Compton component of the mass-energy absorption coefficient is derived in the energy region from 1 keV to 1 MeV for all the elements Z=1-100. The electron momentum prior to the scattering event should cause a Doppler broadening of the Compton line. The momentum resolution function is evaluated in terms of incident and scattered photon energy and scattering angle. The overall momentum resolution of each contribution is estimated for x-ray and γ-ray energies of experimental interest in the angular region 1°-180°. Also estimated is the Compton broadening, using the nonrelativistic formula in the angular region 1°-180°, for 17.44, 22.1, 58.83, and 60 keV photons for a few elements (H, C, N, O, P, S, K, and Ca) of biological importance. 11.
Numerical study on design for wave energy generation of a floater for energy absorption International Nuclear Information System (INIS) Li, Kui Ming; Parthasarathy, Nanjundan; Choi, Yoon Hwan; Lee, Yeon Won 2012-01-01 In order to design a wave energy generating system of the floater type, a 6-DOF motion technique was applied to a three-dimensional CFD analysis of a floating body, and its behavior was interpreted according to the nature of the incoming waves. Waves in a tank model were generated using a single floater and compared with those of a Pelamis wave energy converter. In this paper, we focus on four variables, namely the wave height, angular velocity, diameter and length of the floater. The process was carried out in three stages and it was found that there are energy absorption differences for different values of the wave height, length and diameter of the floater during simulation, thus leading to the necessity of an optimal design for wave energy generation 13. Imprints of the nuclear symmetry energy on gravitational waves from deformed pulsars International Nuclear Information System (INIS) Li, Baoan; Krastev, P.G. 2010-01-01 The density dependence of the nuclear symmetry energy is a critical input for understanding many interesting phenomena in astrophysics and cosmology. We report here the effects of the nuclear symmetry energy, partially constrained by terrestrial laboratory experiments, on the strength of gravitational waves (GWs) from deformed pulsars at both low and high rotational frequencies. (author) 14. Electromagnetic Energy Absorption due to Wireless Energy Transfer: A Brief Review Directory of Open Access Journals (Sweden) Syafiq A. 2016-01-01 Full Text Available This paper reviews the evaluation of compliance of wireless power transfer systems with human electromagnetic exposure limits. Methods for both numerical analysis and measurements are discussed. The objective is to evaluate the rate at which energy is absorbed by the human body when exposed to wireless energy transfer, although the approach can also be applied to the absorption of other forms of energy by tissue. An exposure assessment of a representative wireless power transfer system, under a limited set of operating conditions, is provided in order to estimate the maximum SAR levels. The aim of this review is to assess the possible side effects on the human body of utilizing wireless charging in daily life, so that precautions can be taken early when using wireless transfer. 15.
Relativistic Energy Analysis of Five-Dimensional q-Deformed Radial Rosen-Morse Potential Combined with q-Deformed Trigonometric Scarf Noncentral Potential Using Asymptotic Iteration Method International Nuclear Information System (INIS) Pramono, Subur; Suparmi, A.; Cari, Cari 2016-01-01 We study the exact solution of the Dirac equation in hyperspherical coordinates under the influence of separable q-deformed quantum potentials. The q-deformed hyperbolic Rosen-Morse potential is perturbed by q-deformed noncentral trigonometric Scarf potentials, all of which can be solved by using the Asymptotic Iteration Method (AIM). This work is limited to the spin symmetry case. The relativistic energy equation and the orbital quantum number equation l_(D-1) have been obtained using the Asymptotic Iteration Method. The upper radial wave function equations and angular wave function equations are also obtained by this method. The relativistic energy levels are numerically calculated using Matlab; increasing the radial quantum number n increases the bound-state relativistic energy level in both dimensions D=5 and D=3. The bound-state relativistic energy level decreases with increasing deformation parameter q and orbital quantum number n_l. 16. The Effects of Triggering Mechanisms on the Energy Absorption Capability of Circular Jute/Epoxy Composite Tubes under Quasi-Static Axial Loading Science.gov (United States) Sivagurunathan, Rubentheran; Lau Tze Way, Saijod; Sivagurunathan, Linkesvaran; Yaakob, Mohd. Yuhazri 2018-01-01 The use of composite materials has been growing over the years due to their superior mechanical properties, such as high tensile strength, high energy absorption capability, and corrosion resistance. In the present study, the energy absorption capability of circular jute/epoxy composite tubes was tested and evaluated.
To induce progressive crushing of the composite tubes, four different types of triggering mechanisms were used: no trigger, a single chamfered trigger, a double chamfered trigger, and a tulip trigger. A quasi-static axial loading test was carried out to understand the deformation patterns and the load-displacement characteristics of each composite tube. In addition, the energy absorption, crush force efficiency, peak load, mean load, and load-displacement history were examined and discussed. The primary results showed a significant influence of triggering on energy absorption capability, as stable progressive crushing occurred mostly in the triggered tubes rather than the non-triggered tubes. Overall, the tulip trigger configuration achieved the highest energy absorption. 17. Prediction of energy absorption characteristics of aligned carbon nanotube/epoxy nanocomposites International Nuclear Information System (INIS) Weidt, D; Figiel, Ł; Buggy, M 2012-01-01 This research aims ultimately at improving the impact performance of laminates by applying a coating of epoxy containing carbon nanotubes (CNTs). Here, 2D and 3D computational modelling was carried out to predict the energy absorption characteristics of aligned CNT/epoxy nanocomposites subjected to macroscopic compression under different strain rates (quasi-static and impact rates). The influence of the rate-dependent matrix behaviour, CNT aspect ratio and CNT volume fraction on the energy absorption characteristics of the nanocomposites was evaluated. A strong correlation between those parameters was found, which provides insight into the rate-dependent behaviour of the nanocomposites and can help to tune their energy absorption characteristics. 18. An activated energy approach for accelerated testing of the deformation of UHMWPE in artificial joints.
Science.gov (United States) Galetz, Mathias Christian; Glatzel, Uwe 2010-05-01 The deformation behavior of ultrahigh molecular weight polyethylene (UHMWPE) is studied in the temperature range of 23-80 degrees C. Samples are examined in quasi-static compression, tensile and creep tests to determine the accelerated deformation of UHMWPE at elevated temperatures. The deformation mechanisms under compression load can be described by a single strain-rate- and temperature-dependent Eyring process. The activation energy and volume of that process do not change between 23 degrees C and 50 degrees C. This suggests that the deformation mechanism under compression remains stable within this temperature range. Tribological tests are conducted to transfer this activated energy approach to the deformation behavior under loading typical of artificial knee joints. While this approach does not cover the wear mechanisms close to the surface, testing at higher temperatures is shown to have significant potential to reduce the testing time for lifetime predictions in terms of the macroscopic creep and deformation behavior of artificial joints. Copyright 2010. Published by Elsevier Ltd. 19. Deformed potential energy of $^{263}Db$ in a generalized liquid drop model CERN Document Server Chen Bao Qiu; Zhao Yao Lin; 10.1088/0256-307X/20/11/009 2003-01-01 The macroscopic deformed potential energy for the superheavy nucleus ^{263}Db, which governs the entrance and alpha-decay channels, is determined within a generalized liquid drop model (GLDM). A quasi-molecular shape is assumed in the GLDM, which includes volume, surface, and Coulomb energies, proximity effects, mass asymmetry, and an accurate nuclear radius. The microscopic single-particle energies are derived from a shell model in an axially deformed Woods-Saxon potential with a quasi-molecular shape. The shell correction is calculated by the Strutinsky method.
The total deformed potential energy of a nucleus can be calculated by the macro-microscopic method as the sum of the liquid-drop energy and the Strutinsky shell correction. The theory is applied to predict the deformed potential energy of the experiment ^{22}Ne + ^{241}Am → ^{263}Db* → ^{259}Db + 4n, which was performed on the Heavy Ion Accelerator in Lanzhou. It is found that the neck in the quasi-molecular shape is responsible for t... 20. Cooling performance and energy saving of a compression-absorption refrigeration system assisted by geothermal energy International Nuclear Information System (INIS) Kairouani, L.; Nehdi, E. 2006-01-01 The objectives of this paper are to develop a novel combined refrigeration system, and to discuss the thermodynamic analysis of the cycle and the feasibility of its practical development. The aim of this work was to study the possibility of using geothermal energy to supply a vapour absorption system cascaded with a conventional compression system. Three working fluids (R717, R22, and R134a) are selected for the conventional compression system, and the ammonia-water pair for the absorption system. The geothermal temperature source in the range 343-349 K supplies a generator operating at 335 K. Results show that the COP of the combined system is significantly higher than that of a single-stage refrigeration system. It is found that the COP can be improved by 37-54% compared with the conventional cycle under the same operating conditions, that is, an evaporation temperature of 263 K and a condensation temperature of 308 K. For industrial refrigeration, the proposed system constitutes an alternative solution for reducing energy consumption and greenhouse gas emissions 1. 3D Energy Absorption Diagram Construction of Paper Honeycomb Sandwich Panel Directory of Open Access Journals (Sweden) Dongmei Wang 2018-01-01 Full Text Available Paper honeycomb sandwich panel is an environment-sensitive material.
Its cushioning property is closely related to its structural factors, the temperature and humidity, and random shock and vibration events in the logistics environment. In order to visually characterize the cushioning property of paper honeycomb sandwich panel under different logistics conditions, the energy absorption equation per unit volume of paper honeycomb sandwich panel was constructed as a piecewise function. The three-dimensional (3D) energy absorption diagram of paper honeycomb sandwich panel was constructed by connecting the inflection points of the energy absorption curves. It takes into account the temperature, humidity, strain rate, and characteristics of the honeycomb structure. On the one hand, this diagram overcomes the limitation of the static compression curve of paper honeycomb sandwich panel, which depends on the test specimen and is applicable only to the standard condition. On the other hand, it overcomes the limitation of the conventional 2D energy absorption diagram, which carries less information. The elastic modulus was used to normalize the plateau stress and the energy absorption per unit volume. This makes the 3D energy absorption diagram universal for sandwich panels of different materials. It provides a new theoretical basis for optimized packaging design. 2. Effects of mechanical deformation on energy conversion efficiency of piezoelectric nanogenerators International Nuclear Information System (INIS) Yoo, Jinho; Kim, Wook; Choi, Dukhyun; Cho, Seunghyeon; Kim, Chang-Wan; Kwon, Jang-Yeon; Kim, Hojoong; Kim, Seunghyun; Chang, Yoon-Suk 2015-01-01 Piezoelectric nanogenerators (PNGs) are capable of converting energy from various mechanical sources into electric energy and have many attractive features such as continuous operation, replenishment and low cost. However, researchers are still studying novel material synthesis and interfacial control to improve the power production of PNGs.
In this study, we report the energy conversion efficiency (ECE) of PNGs as it depends on mechanical deformations such as bending and twisting. Since the output power of PNGs arises from the mechanical strain of the piezoelectric material, the power production and the ECE are critically dependent on the type of external mechanical deformation. Thus, we examine the output power of PNGs under bending and twisting. In order to clearly understand the ECE of PNGs in the presence of these external mechanical deformations, we determine the ECE of PNGs as the ratio of output electrical energy to input mechanical energy, where we take the input energy to be only the strain energy of the piezoelectric layer. We calculate the strain energy of the piezoelectric layer using numerical simulations of bending and twisting of the PNG. Finally, we demonstrate that the ECE of the PNG under twisting is much higher than that under bending, due to the combined effects of normal and lateral piezoelectric coefficients. Our results thus provide a design direction for PNG systems as high-performance power generators. (paper) 3. Plastic collapse and energy absorption of circular filled tubes under quasi-static loads by computational analysis Energy Technology Data Exchange (ETDEWEB) Beng, Yeo Kiam; Tzeng, Woo Wen [Universiti Malaysia Sabah, Sabah (Malaysia) 2017-02-15 This study presents the finite element analysis of plastic collapse and energy absorption of polyurethane-filled aluminium circular tubes under quasi-static transverse loading. Increasing focus has been given to impact damage of structures, where the energy absorbed during impact can be controlled by energy absorbers and devices designed to dissipate energy, so as to avoid total structural collapse.
The ABAQUS finite element analysis application was used to model and simulate the polyurethane-filled aluminium tubes, with different sets of diameter-to-thickness ratios and span lengths, subjected to a transverse three-point bending load. Different sets of polyurethane-filled aluminium tubes subjected to the transverse loading were modelled and simulated. The failure modes and mechanisms of the filled tubes, and their capability as energy absorbers to further improve and strengthen the empty tube, were also identified. The results showed that the plastic deformation response was affected by the geometric constraints and parameters of the specimens. The diameter-to-thickness ratio and span length were shown to play a crucial role in optimizing the PU-filled tube as an energy absorber. 4. Research on energy absorption efficiency in full-duplex multi-user broadcast channel Directory of Open Access Journals (Sweden) JIANG Fengju 2015-02-01 Full Text Available This paper studies the optimization of user energy absorption efficiency in a multi-user broadcast channel. It assumes that the user terminals operate in full-duplex mode, so that a user simultaneously receives energy and transmits information on the uplink. We maximize the minimum user uplink transmit power while ensuring that each user's energy absorption efficiency is greater than a threshold value and that the base station's downlink power emission limits are satisfied. Finally, simulation results confirm the effectiveness of the proposed algorithm. 5. The general form of the relaxation of a purely interfacial energy for structured deformations Czech Academy of Sciences Publication Activity Database Šilhavý, Miroslav 2017-01-01 Roč. 5, č. 2 (2017), s.
191-215 ISSN 2326-7186 Institutional support: RVO:67985840 Keywords: structured deformations * relaxation * subadditive envelope * interfacial energy Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://msp.org/memocs/2017/5-2/p04.xhtml 6. On the role of deformed Coulomb potential in fusion using energy ... Fusion probabilities; quadrupole deformation; Skyrme energy density formalism ... interaction time between the colliding nuclei is large and therefore, various features of ... parameters t0, t1, t2, t3 and W0, the values of which can be adjusted for ... 7. Residential solar air conditioning: Energy and exergy analyses of an ammonia–water absorption cooling system International Nuclear Information System (INIS) Aman, J.; Ting, D.S.-K.; Henshaw, P. 2014-01-01 Large-scale heat-driven absorption cooling systems are available in the marketplace for industrial applications, but the concept of a solar-driven absorption chiller for air-conditioning applications is relatively new. Absorption chillers have a lower efficiency than compression refrigeration systems when used at small scale, and this keeps absorption cooling systems out of air-conditioning applications in residential buildings. The potential of a solar-driven ammonia–water absorption chiller for residential air-conditioning applications is discussed and analyzed in this paper. A thermodynamic model has been developed based on a 10 kW air-cooled ammonia–water absorption chiller driven by solar thermal energy. Both energy and exergy analyses have been conducted to evaluate the performance of this residential-scale cooling system. The analyses uncovered that the absorber is where the most exergy loss occurs (63%), followed by the generator (13%) and the condenser (11%). Furthermore, the exergy losses of the condenser and absorber increase greatly with temperature, that of the generator less so, and the exergy loss in the evaporator is the least sensitive to increasing temperature.
-- Highlights: • A 10 kW solar-thermal-driven ammonia–water air-cooled absorption chiller is investigated. • Energy and exergy analyses have been done to enhance the thermal performance. • Low driving-temperature heat sources have been optimized. • The efficiencies of the major components have been evaluated 8. Micromechanics of Amorphous Metal/Polymer Hybrid Structures with 3D Cellular Architectures: Size Effects, Buckling Behavior, and Energy Absorption Capability. Science.gov (United States) Mieszala, Maxime; Hasegawa, Madoka; Guillonneau, Gaylord; Bauer, Jens; Raghavan, Rejin; Frantz, Cédric; Kraft, Oliver; Mischler, Stefano; Michler, Johann; Philippe, Laetitia 2017-02-01 By designing advantageous cellular geometries and combining the material size effects at the nanometer scale, lightweight hybrid microarchitectured materials with tailored structural properties are achieved. Prior studies reported the mechanical properties of high-strength cellular ceramic composites obtained by atomic layer deposition. However, few studies have examined the properties of similar structures with metal coatings. To determine the mechanical performance of polymer cellular structures reinforced with a metal coating, 3D laser lithography and electroless deposition of an amorphous layer of nickel-boron (NiB) are used for the first time to produce metal/polymer hybrid structures. In this work, the mechanical response of microarchitectured structures is investigated with an emphasis on the effects of the architecture and the amorphous NiB thickness on their deformation mechanisms and energy absorption capability. Microcompression experiments show an enhancement of the mechanical properties with the NiB thickness, suggesting that the deformation mechanism and the buckling behavior are controlled by the brittle-to-ductile transition in the NiB layer. In addition, the energy absorption properties demonstrate the possibility of tuning the energy absorption efficiency with adequate designs.
These findings suggest that microarchitectured metal/polymer hybrid structures are effective in producing materials with unique property combinations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. 9. Functionally graded biomimetic energy absorption concept development for transportation systems. Science.gov (United States) 2014-02-01 Mechanics of a functionally graded cylinder subject to static or dynamic axial loading is considered, including a potential application as energy absorber. The mass density and stiffness are power functions of the radial coordinate as may be the case... 10. Nuclear-deformation energies according to a liquid-drop model with a sharp surface International Nuclear Information System (INIS) Blocki, J.; Swiatecki, W.J. 1982-05-01 We present an atlas of 665 deformation-energy maps and 150 maps of other properties of interest, relevant for nuclear systems idealized as uniformly charged drops endowed with a surface tension. The nuclear shapes are parametrized in terms of two spheres modified by a smoothly fitted quadratic surface of revolution and are specified by three variables: asymmetry, sphere separation, and a neck variable (that goes over into a fragment-deformation variable after scission). The maps and related tables should be useful for the study of macroscopic aspects of nuclear fission and of collisions between any two nuclei in the periodic table 11. Effects of Gut Microbes on Nutrient Absorption and Energy Regulation OpenAIRE Krajmalnik-Brown, Rosa; Ilhan, Zehra-Esra; Kang, Dae-Wook; DiBaise, John K. 2012-01-01 Malnutrition may manifest as either obesity or undernutrition. Accumulating evidence suggests that the gut microbiota plays an important role in the harvest, storage, and expenditure of energy obtained from the diet. The composition of the gut microbiota has been shown to differ between lean and obese humans and mice; however, the specific roles that individual gut microbes play in energy harvest remain uncertain. 
The gut microbiota may also influence the development of conditions characteriz... 12. Spinal deformities after high-energy radiotherapy for nephroblastoma in children. Report of 82 cases Energy Technology Data Exchange (ETDEWEB) Zollner, G.; Pfeil, J.; Scheibel, P. 1987-01-01 A study of spinal deformities in 82 cases of nephroblastoma treated surgically or by chemotherapy and high-energy radiotherapy. This paper stresses the high incidence of radiation-induced scoliosis, which was observed in all patients who had reached adulthood. On the basis of a detailed analysis of the relations between the height of the vertebral body and that of the intervertebral space, the authors propose a theory of compensation of the reduction in vertebral height by displacement of the nucleus pulposus towards the irradiated side. This phenomenon prevents the development of scoliosis in subjects under the age of 11 years, despite an already well-established deformity of the vertebral bodies. The compensation decreases with increasing age and degree of scoliosis. This partly explains the increased degree of scoliotic deformity during puberty. Orthopaedic treatment therefore needs to be started relatively early. 13. Features of microplastic deformation of auxetic beryllium irradiated with high-energy electrons International Nuclear Information System (INIS) Rarans'kij, M.D.; Olyijnich-Lisyuk, A.V.; Tashchuk, O.Yu. 2016-01-01 Using the low-frequency internal friction (LFIF) method (1...3 Hz), the behavior of the dynamic torsion modulus (Gef), together with mathematical modeling of dislocation motion, was used to study microplastic deformation in naturally aged auxetic beryllium irradiated with high-energy (18 MeV) electrons. With increasing radiation dose, the IF and the dislocation velocity were found to increase by a factor of 2-3. A stage-like character of the microdeformation of auxetic Be was established.
Mathematical modeling showed that in the irradiated material deformation occurs through accelerated motion of twinning dislocations in the early stages, followed by anomalous dynamic deceleration of complete dislocations as the degree of deformation increases in the second stage. The theoretically estimated values are shown to be in good agreement with the experimentally determined ones. 14. Bio-Inspired Photon Absorption and Energy Transfer for Next Generation Photovoltaic Devices Science.gov (United States) Magsi, Komal Nature's solar energy harvesting system, photosynthesis, serves as a model for photon absorption, spectral broadening, and energy transfer. Photosynthesis harvests light far differently than photovoltaic cells do. These differences offer both engineering opportunities and scientific challenges, since not all of the natural photon absorption mechanisms are understood. In return, solar cells can be a very sensitive probe of the absorption characteristics of molecules capable of transferring charge to a conductive interface. The objective of this scientific work is the advancement of next-generation photovoltaics through the development and application of natural photo-energy transfer processes. Two scientific methods were used to enhance photon absorption and transfer. First, a detailed analysis of photovoltaic front-surface fluorescent spectral modification and light scattering by heterostructures was conducted. Phosphor-based spectral down-conversion is a well-known laser technology. The theoretical calculations presented here indicate that parasitic losses and light scattering within the spectral range are large enough to offset any expected gains. The second approach for enhancing photon absorption is based on bio-inspired mechanisms.
Key to the utilization of these natural processes is the development of a detailed scientific understanding and the application of these processes to cost-effective systems and devices. In this work both aspects are investigated. Dye-type solar cells were prepared and tested as a function of chlorophyll (or sodium-copper chlorophyllin) and accessory dyes. Förster has shown that the fluorescence ratio of chlorophyll is modified and broadened by separate photon absorption (sensitized absorption) through interaction with nearby accessory pigments. This work used the dye-type solar cell as a diagnostic tool by which to investigate photon absorption and photon energy transfer. These experiments shed 15. Guided-wave approaches to spectrally selective energy absorption Science.gov (United States) Stegeman, G. I.; Burke, J. J. 1987-01-01 Results of experiments designed to demonstrate spectrally selective absorption in dielectric waveguides on semiconductor substrates are reported. These experiments were conducted with three waveguides formed by sputtering films of PSK2 glass onto silicon-oxide layers grown on silicon substrates. The three waveguide samples were studied at 633 and 532 nm. The samples differed only in the thickness of the silicon-oxide layer, specifically 256 nm, 506 nm, and 740 nm. Agreement between theoretical predictions and measurements of propagation constants (mode angles) of the six or seven modes supported by these samples was excellent. However, the loss measurements were inconclusive because of high scattering losses in the structures fabricated (in excess of 10 dB/cm). Theoretical calculations indicated that the power distribution among all the modes supported by these structures will reach its steady-state value after a propagation length of only 1 mm. Accordingly, the measured loss rates were found to be almost independent of which mode was initially excited.
The excellent agreement between theory and experiment leads to the conclusion that low-loss waveguides would confirm the predicted loss rates. 16. Solvated electron: criticism of a suggested correlation of chemical potential with optical absorption energy International Nuclear Information System (INIS) Farhataziz, M. 1984-01-01 A recent theoretical treatment of the absorption spectrum of the solvated electron, e⁻_s, maintains that rigorously μ₀ ≥ -0.75 E_av, which gives the empirical relationship μ₀ ≥ -(0.93 ± 0.02) E_max. For e⁻_s in a particular solvent at a given temperature and pressure, μ₀, E_av and E_max are the standard chemical potential, the average energy of the absorption spectrum and the energy at the absorption maximum, respectively. The temperature and pressure effects on the absorption spectrum of e⁻_s in water and liquid ammonia do not support the equality sign in the above-cited relationships. The implications of the inequality expressed above are discussed for e⁻_s in water and liquid ammonia. (author) 17. High energy X-ray phase and dark-field imaging using a random absorption mask. Science.gov (United States) Wang, Hongchang; Kashyap, Yogesh; Cai, Biao; Sawhney, Kawal 2016-07-28 High-energy X-ray imaging has a unique advantage over conventional X-ray imaging, since it enables higher penetration into materials with significantly reduced radiation damage. However, the absorption contrast in the high-energy region is considerably low due to the reduced X-ray absorption cross section of most materials. Even though X-ray phase and dark-field imaging techniques can provide substantially increased contrast and complementary information, fabricating dedicated optics for high energies still remains a challenge. To address this issue, we present an alternative X-ray imaging approach that produces transmission, phase and scattering signals at high X-ray energies by using a random absorption mask.
Importantly, in addition to the synchrotron radiation source, this approach has been demonstrated for practical imaging applications with a laboratory-based microfocus X-ray source. This new imaging method could be potentially useful for studying thick samples or heavy materials in advanced materials science research. 18. Macroscopic-microscopic energy of rotating nuclei in the fusion-like deformation valley International Nuclear Information System (INIS) Gherghescu, R.A.; Royer, Guy 2000-01-01 The energy of rotating nuclei in the fusion-like deformation valley has been determined within a liquid drop model including the proximity energy, the two-center shell model and the Strutinsky method. The potential barriers of the ^{84}Zr, ^{132}Ce, ^{152}Dy and ^{192}Hg nuclei have been determined. A first minimum, having a microscopic origin and lodging the normally deformed states, disappears with increasing angular momentum. The microscopic and macroscopic energies contribute to generate a second minimum where superdeformed states may survive. It progressively becomes the lowest one at intermediate spins. At higher angular momenta, the minimum moves towards the foot of the external fission barrier, leading to hyperdeformed quasi-molecular states. (author) 19. Wave energy absorption by a floating air bag DEFF Research Database (Denmark) Kurniawan, Adi; Chaplin, John; Greaves, Deborah 2017-01-01 A floating air bag, ballasted in water, expands and contracts as it heaves under wave action. Connecting the bag to a secondary volume via a turbine transforms the bag into a device capable of generating useful energy from the waves. Small-scale measurements of the device reveal some interesting... 20.
Energy absorption coefficients for 662 keV gamma ray in some fatty acids International Nuclear Information System (INIS) Bhandal, G.S.; Singh, K.; Rama Rani; Vijay Kumar 1993-01-01 The mass energy absorption coefficient refers to the amount of energy dissipated by the secondary electrons set in motion as a result of interactions between incident photons and matter. Under certain conditions, the energy dissipated by electrons in a given volume can be equated to the energy absorbed in that volume. The absorbed energy is of basic interest in radiation dosimetry because it represents the amount of energy made available for the production of chemical or biological effects. A sphere transmission method is employed for the direct measurement of mass energy absorption coefficients at 662 keV in some fatty acids. Excellent agreement is obtained between the measured and theoretical values. (author). 6 refs., 1 fig., 1 tab 1. Evaluation of energy absorption performance of steel square profiles with circular discontinuities Directory of Open Access Journals (Sweden) Dariusz Szwedowicz Full Text Available This article details the experimental and numerical results on the energy absorption performance of square tubular profiles with circular discontinuities drilled lengthwise in the structure. A straight profile pattern was used as a baseline against which to compare the energy absorption of the profiles with discontinuities under quasi-static loads. The collapse mode and energy absorption behavior were modified by the circular holes. The holes were drilled symmetrically in two walls and located in three different positions along the profile length. The results showed better energy absorption performance for the circular discontinuities located at mid-height. With respect to a profile without holes, a maximum increase of 7% in energy absorption capacity was obtained experimentally.
Also, the numerical simulation confirmed that the implementation of circular discontinuities can reduce the peak load (Pmax) by 10%. An analysis has been conducted to compare the numerical results obtained by means of the finite element method with the experimental data captured using the testing machine. Finally, the discrete model of the tube with and without geometrical discontinuities shows very good agreement with the experimental results. 2. Variation of energy absorption buildup factors with incident photon energy and penetration depth for some commonly used solvents Energy Technology Data Exchange (ETDEWEB) Singh, Parjit S. [Department of Physics, Punjabi University, Patiala 147 002 (India)], E-mail: [email protected]; Singh, Tejbir [Department of Physics, Lovely Professional University, Phagwara 144 402 (India); Kaur, Paramjeet [IAS and Allied Services Training Centre, Punjabi University, Patiala 147 002 (India) 2008-06-15 The G.P. fitting method has been used to compute the energy absorption buildup factor of some commonly used solvents such as acetonitrile (C2H3N), butanol (C4H9OH), chlorobenzene (C6H5Cl), diethyl ether (C4H10O), ethanol (C2H5OH), methanol (CH3OH), propanol (C3H7OH) and water (H2O) for a wide energy range (0.015-15.0 MeV) up to a penetration depth of 10 mean free paths. The variation of the energy absorption buildup factor with chemical composition as well as incident photon energy for the selected solvents has been studied. It has been observed that the maximum value of the energy absorption buildup factor shifts to slightly higher incident photon energy with increasing equivalent atomic number of the solvent, and the solvent with the lowest equivalent atomic number possesses the maximum value of the energy absorption buildup factor. 3. Novel characteristics of energy spectrum for 3D Dirac oscillator analyzed via Lorentz covariant deformed algebra.
Science.gov (United States) Betrouche, Malika; Maamache, Mustapha; Choi, Jeong Ryeol 2013-11-14 We investigate the Lorentz-covariant deformed algebra for the Dirac oscillator problem, which is a generalization of the Kempf deformed algebra in 3+1-dimensional space-time, where Lorentz symmetry is preserved. The energy spectrum of the system is analyzed by taking advantage of the corresponding wave functions with explicit spin states. We obtained entirely new results from our development based on the Kempf algebra in comparison to the studies carried out with the non-Lorentz-covariant deformed one. A novel result of this research is that the quantized relativistic energy of the system in the presence of a minimal length cannot grow indefinitely as the quantum number n increases, but converges to a finite value; here c is the speed of light and β is a parameter that determines the scale of noncommutativity in space. Considering that the energy levels of the ordinary oscillator are equally spaced, which leads to monotonic growth of the quantized energy with increasing n, this result is very interesting. The physical meaning of this consequence is discussed in detail. 4. Large energy absorption in Ni-Mn-Ga/polymer composites International Nuclear Information System (INIS) Feuchtwanger, Jorge; Richard, Marc L.; Tang, Yun J.; Berkowitz, Ami E.; O'Handley, Robert C.; Allen, Samuel M. 2005-01-01 Ferromagnetic shape memory alloys can respond to a magnetic field or applied stress by the motion of twin boundaries, and hence they show large hysteresis or energy loss. Ni-Mn-Ga particles made by spark erosion have been dispersed and oriented in a polymer matrix to form pseudo 3:1 composites, which are studied under applied stress. Loss ratios have been determined from the stress-strain data. The loss ratios of the composites range from 63% to 67%, compared to only about 17% for the pure, unfilled polymer samples. 5.
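The loss ratio determined from stress-strain data is, in essence, the area of the hysteresis loop divided by the energy input along the loading branch. A hedged sketch of that bookkeeping (the stress-strain points are invented for illustration, not the measured composite curves):

```python
def loop_area(strain, stress):
    """Signed area enclosed by a closed stress-strain cycle
    (trapezoidal integral of stress d(strain) around the loop)."""
    area = 0.0
    n = len(strain)
    for i in range(n):
        j = (i + 1) % n
        area += 0.5 * (stress[i] + stress[j]) * (strain[j] - strain[i])
    return area

def loading_energy(strain, stress):
    """Energy input per unit volume along the loading branch only."""
    e = 0.0
    for i in range(1, len(strain)):
        e += 0.5 * (stress[i] + stress[i - 1]) * (strain[i] - strain[i - 1])
    return e

# Illustrative loading branch (stress in MPa vs strain), then a closed cycle
# that unloads back to zero strain along a lower-stress path.
load_eps = [0.0, 0.01, 0.02]
load_sig = [0.0, 10.0, 20.0]
cycle_eps = load_eps + [0.01, 0.0]
cycle_sig = load_sig + [5.0, 0.0]
loss_ratio = loop_area(cycle_eps, cycle_sig) / loading_energy(load_eps, load_sig)
```

A loss ratio near 0.65, as reported for the composites above, would mean roughly two thirds of the input mechanical energy is dissipated per cycle.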
Absorption of electromagnetic field energy by superfluid system of atoms with electric dipole moment International Nuclear Information System (INIS) Poluektov, Yu.M. 2014-01-01 The modified Gross-Pitaevskii equation, which takes into account relaxation and the interaction with an alternating electromagnetic field, is used to consider the absorption of electromagnetic field energy by a superfluid system on the assumption that the atoms have an intrinsic dipole moment. It is shown that the absorption may be resonant only if the dispersion curves of the electromagnetic wave and of the excitations of the superfluid system intersect. Remarkably, such a situation is possible if the superfluid system has a branch of excitations with an energy gap at low momenta. Experiments on the absorption of microwaves in superfluid helium are interpreted as evidence of the existence of such gap excitations. A possible modification of the excitation spectrum of superfluid helium in the presence of an excitation branch with an energy gap is discussed qualitatively. 6. Energy dependent saturable and reverse saturable absorption in cube-like polyaniline/polymethyl methacrylate film Energy Technology Data Exchange (ETDEWEB) Thekkayil, Remyamol [Department of Chemistry, Indian Institute of Space Science and Technology, Valiamala, Thiruvananthapuram 695 547 (India); Philip, Reji [Light and Matter Physics Group, Raman Research Institute, C.V.
Raman Avenue, Bangalore 560 080 (India); Gopinath, Pramod [Department of Physics, Indian Institute of Space Science and Technology, Valiamala, Thiruvananthapuram 695 547 (India); John, Honey, E-mail: [email protected] [Department of Chemistry, Indian Institute of Space Science and Technology, Valiamala, Thiruvananthapuram 695 547 (India) 2014-08-01 Solid films of cube-like polyaniline synthesized by the inverse microemulsion polymerization method have been fabricated in a transparent PMMA host by an in situ free radical polymerization technique, and are characterized by spectroscopic and microscopic techniques. The nonlinear optical properties are studied by the open aperture Z-scan technique employing 5 ns (532 nm) and 100 fs (800 nm) laser pulses. At a relatively low laser pulse energy of 5 μJ, the film shows saturable absorption in both the nanosecond and femtosecond excitation domains. An interesting switchover from saturable absorption to reverse saturable absorption is observed at 532 nm when the energy of the nanosecond laser pulses is increased. The nonlinear absorption coefficient increases with increasing polyaniline concentration, with a low optical limiting threshold, as required for a good optical limiter. - Highlights: • Synthesized cube-like polyaniline nanostructures. • Fabricated polyaniline/PMMA nanocomposite films. • At 5 μJ energy, saturable absorption is observed in both the ns and fs regimes. • Switchover from SA to RSA is observed as the laser pulse energy increases. • Film (0.1 wt % polyaniline) shows high β{sub eff} (230 cm GW{sup −1}) and a low limiting threshold at 150 μJ. 7. Absorption of short-pulse electromagnetic energy by a resistively loaded straight wire International Nuclear Information System (INIS) Miller, E.K.; Deadrick, F.J.; Landt, J.A. 1975-01-01 Absorption of short-pulse electromagnetic energy by a resistively loaded straight wire is examined.
Energy collected by the wire, load energy, peak load currents, and peak load voltages are found for a wide range of parameters, with particular emphasis on nuclear electromagnetic pulse (EMP) phenomena. A series of time-sequenced plots is used to illustrate pulse propagation on wires when loads and wire ends are encountered. 8. Interacting Dark Matter and q-Deformed Dark Energy Nonminimally Coupled to Gravity Directory of Open Access Journals (Sweden) Emre Dil 2016-01-01 In this paper, we propose a new approach to studying the dark sector of the universe by considering the dark energy as an emergent q-deformed bosonic scalar field which is not only interacting with the dark matter but also nonminimally coupled to gravity, in the framework of standard Einsteinian gravity. In order to analyze the dynamics of the system, we first give the quantum field theoretical description of the q-deformed scalar field dark energy and then construct the action and the dynamical structure of this interacting and nonminimally coupled dark sector. As a second issue, we perform a phase-space analysis of the model to check the reliability of our proposal by searching for stable attractor solutions implying the late-time accelerating expansion phase of the universe. 9. Energy absorption at high strain rate of glass fiber reinforced mortars Directory of Open Access Journals (Sweden) Fenu Luigi 2015-01-01 In this paper, the dynamic behaviour of cement mortars reinforced with glass fibers was studied. The influence of the addition of glass fibers on energy absorption and tensile strength at high strain rate was investigated. Static tests in compression, in tension and in bending were first performed. Dynamic tests by means of a Modified Hopkinson Bar were then carried out in order to investigate how glass fibers affect the energy absorption and tensile strength of the fiber reinforced mortar at high strain rate.
The Dynamic Increase Factor (DIF) was finally evaluated. 10. A Review on the Perforated Impact Energy Absorption of Kenaf Fibres Reinforced Composites Science.gov (United States) Ismail, Al Emran; Khalid, S. N. A.; Nor, Nik Hisyamudin Muhd 2017-10-01 This paper reviews the potential for mechanical energy absorption of natural fiber reinforced composites subjected to perforated impact. According to a literature survey, several research works discussing the impact performance of natural fiber reinforced composites are available; however, in most of these composites the fibers are randomly arranged. Due to the high demand for sustainable materials, much research is directed at enhancing the mechanical capability of natural fiber composites, with particular focus on the fiber architecture. It is therefore important to review the progress of impact energy absorption in woven fiber composites in order to identify research opportunities for the future. 11. Diaryl-substituted norbornadienes with red-shifted absorption for molecular solar thermal energy storage. Science.gov (United States) Gray, Victor; Lennartson, Anders; Ratanalert, Phasin; Börjesson, Karl; Moth-Poulsen, Kasper 2014-05-25 Red-shifting the absorption of norbornadienes (NBDs) into the visible region enables the photo-isomerization of NBDs to quadricyclanes (QCs) to be driven by sunlight. This is necessary in order to utilize the NBD-QC system for molecular solar thermal (MOST) energy storage. Reported here is a study of five diaryl-substituted norbornadienes. The introduced aryl groups induce a significant red-shift of the UV/vis absorption spectrum of the norbornadienes, and device experiments using a solar-simulator set-up demonstrate the potential use of these compounds for MOST energy storage. 12. Collisionless energy absorption in the short-pulse intense laser-cluster interaction International Nuclear Information System (INIS) Kundu, M.; Bauer, D. 2006-01-01 In a previous paper [Phys. Rev. Lett.
96, 123401 (2006)] we have shown by means of three-dimensional particle-in-cell simulations and a simple rigid-sphere model that nonlinear resonance absorption is the dominant collisionless absorption mechanism in the intense, short-pulse laser-cluster interaction. In this paper we present a more detailed account of the matter. In particular, we show that the absorption efficiency is almost independent of the laser polarization. In the rigid-sphere model, the absorbed energy increases by many orders of magnitude at a certain threshold laser intensity. The particle-in-cell results display maximum fractional absorption around the same intensity. We calculate the threshold intensity and show that it is underestimated by the common overbarrier ionization estimate. 13. Relationship between high-energy absorption cross section and strong gravitational lensing for black hole International Nuclear Information System (INIS) Wei Shaowen; Liu Yuxiao; Guo Heng 2011-01-01 In this paper, we obtain a relation between the high-energy absorption cross section and the strong gravitational lensing for a static and spherically symmetric black hole. It provides a possible way to measure the high-energy absorption cross section of a black hole from strong gravitational lensing through astronomical observation. More importantly, it allows us to compute the total energy emission rate for high-energy particles emitted from a black hole acting as a gravitational lens, and it tells us the frequency range in which the black hole emits most of its energy and in which the gravitational waves are most likely to be observed. We also apply the relation to the Janis-Newman-Winicour solution. The results suggest that we can test the cosmic censorship hypothesis through the observation of gravitational lensing by weakly naked singularities acting as gravitational lenses. 14.
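For the Schwarzschild case, the high-energy absorption cross section reduces to the textbook geometric-optics capture cross section set by the photon sphere, sigma = pi * b_c^2 with critical impact parameter b_c = 3*sqrt(3)*GM/c^2. A small sketch of that standard result (not the paper's general relation; constants are rounded):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def capture_cross_section(mass_kg):
    """High-energy (geometric-optics) absorption cross section of a
    Schwarzschild black hole: sigma = 27 * pi * (G M / c^2)^2."""
    b_crit = 3.0 * math.sqrt(3.0) * G * mass_kg / C**2  # critical impact parameter, m
    return math.pi * b_crit**2

sigma_sun = capture_cross_section(M_SUN)  # m^2, for a one-solar-mass hole
```

The M^2 scaling means the cross section grows quadratically with the lens mass, which is what makes the lensing-based measurement discussed above conceivable for massive objects.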
Energy absorption buildup factors of human organs and tissues at energies and penetration depths relevant for radiotherapy and diagnostics DEFF Research Database (Denmark) Manohara, S. R.; Hanagodimath, S. M.; Gerward, Leif 2011-01-01 Energy absorption geometric progression (GP) fitting parameters and the corresponding buildup factors have been computed for human organs and tissues, such as adipose tissue, blood (whole), cortical bone, brain (grey/white matter), breast tissue, eye lens, lung tissue, skeletal muscle, ovary..., testis, soft tissue, and soft tissue (4-component), for the photon energy range 0.015-15 MeV and for penetration depths up to 40 mfp (mean free path). The chemical composition of human organs and tissues is seen to influence the energy absorption buildup factors. It is also found that the buildup factor... of human organs and tissues changes significantly with the change of incident photon energy and effective atomic number, Zeff. These changes are due to the dominance of different photon interaction processes in different energy regions and the different chemical compositions of human organs and tissues... 15. Simultaneous evaluation of the shell and pairing corrections to the nuclear deformation energy: the case of odd systems Energy Technology Data Exchange (ETDEWEB) Benhamouda, N [Laboratoire de Physique Theorique, Faculte des Sciences, USTHB BP 32 El-Alia, 16111 Bab-Ezzouar, Algers (Algeria); Oudih, M R [CRNA, 2. Bd Frantz Fanon, BP 399 Alger-Gare, Algers (Algeria) 2002-09-15 A method of simultaneous evaluation of the shell and pairing corrections to the nuclear deformation energy, recently proposed for even-even nuclei, is generalized to the case of odd systems. By means of the blocked-level technique, a level density with explicit dependence on pairing correlations is defined. The microscopic corrections to the deformation energy are then determined by a procedure analogous to that of Strutinsky.
The method is applied to the ground state of europium isotopes using the single-particle energies of a deformed Woods-Saxon mean field. The obtained results are in good agreement with the experimental values. 17. Energy Absorption and Dynamic Deformation of Backing Material for Ballistic Evaluation of Body Armour OpenAIRE Debarati Bhattacharjee; Ajay Kumar; Ipsita Biswas 2014-01-01 The measurement of back face signature (BFS) or behind armour blunt trauma (BABT) is a critical aspect of the ballistic evaluation of body armour. BFS is the impact experienced by the armour-wearing body when subjected to a non-penetrating projectile. Mineral or polymeric clay is used to measure the BFS. In addition to stopping the projectile, the body armour can be used only when the BFS also falls within permissible limits. The extent of the BFS depends upon the behavior of the backing materia... 18.
Enhancement of optical absorption of Si (100) surfaces by low energy N+ ion beam irradiation Science.gov (United States) Bhowmik, Dipak; Karmakar, Prasanta 2018-05-01 The increase in the optical absorption efficiency of Si (100) surfaces under 7 keV and 8 keV N+ ion bombardment is reported here. A periodic ripple pattern forms on the surface, and silicon nitride is formed at the ion impact zones by this low-energy N+ ion bombardment [P. Karmakar et al., J. Appl. Phys. 120, 025301 (2016)]. The light absorption efficiency increases due to the presence of the silicon nitride compound as well as the surface nanopatterns. An Atomic Force Microscopy (AFM) study shows the formation of the periodic ripple pattern and an increase in surface roughness with N+ ion energy. The enhancement of optical absorption by the ion-bombarded Si, compared to bare Si, has been measured with a UV-visible spectrophotometer. 19. ADM mass and quasilocal energy of black hole in the deformed Horava-Lifshitz gravity International Nuclear Information System (INIS) Myung, Yun Soo 2010-01-01 Inspired by the Einstein-Born-Infeld black hole, we introduce the isolated horizon to study the Kehagias-Sfetsos (KS) black hole in the deformed Horava-Lifshitz gravity. This is because the KS black hole is closer to the Einstein-Born-Infeld black hole than to the Reissner-Nordstroem black hole. We find the horizon and ADM masses by using the first law of thermodynamics and the area-law entropy. The mass parameter m is identified with the quasilocal energy at infinity. Accordingly, we discuss the phase transition between the KS and Schwarzschild black holes by considering the heat capacity and free energy. 20. Energy absorption and failure response of silk/epoxy composite square tubes: Experimental DEFF Research Database (Denmark) Oshkovr, Simin Ataollahi; Taher, Siavash Talebi; A. Eshkoor, Rahim 2012-01-01 This paper focuses on the energy absorption and failure response of natural silk/epoxy composite square tubes.
The tested specimens featured a material combination of different lengths and the same number of natural silk/epoxy composite layers in the form of reinforced woven fabric in thermosetting epoxy... 1. Chemical absorption of acoustic energy due to an eddy in the western Bay of Bengal Digital Repository Service at National Institute of Oceanography (India) PrasannaKumar, S.; Navelkar, G.S.; Murty, T.V.R.; Somayajulu, Y.K.; Murty, C.S. Acoustic energy losses due to chemical absorption within the western Bay of Bengal in the presence of a subsurface meso-scale cold-core eddy have been analysed. These estimates, for two different frequencies (400 Hz and 10 kHz), find applications... 2. Optical absorption and energy transport in compact dendrimers with unsymmetrical branching International Nuclear Information System (INIS) Supritz, C.; Gounaris, V.; Reineker, P. 2008-01-01 We investigate the linear optical absorption and the energy transport in compact dendrimers with unsymmetrical branching, using the Frenkel exciton concept. The electron-phonon interaction is taken into account by introducing a heat bath that interacts with the exciton in a stochastic manner. 3. Laser Absorption and Energy Transfer in Foams of Various Pore Structures and Chemical Compositions, Czech Academy of Sciences Publication Activity Database Limpouch, J.; Borisenko, N.G.; Demchenko, N. N.; Gus´kov, S.Y.; Kasperczuk, A.; Khalenkov, A.M.; Kondrashov, V. N.; Krouský, Eduard; Kuba, J.; Mašek, Karel; Merkul´ev, A.Y.; Nazarov, W.; Pisarczyk, P.; Pisarczyk, T.; Pfeifer, Miroslav; Renner, Oldřich; Rozanov, V. B. 2006-01-01 Roč. 133, - (2006), s. 457-459 ISSN 1155-4339 R&D Projects: GA MŠk(CZ) LC528 Grant - others: INTAS(XE) 01-0572 Institutional research plan: CEZ:AV0Z10100523 Keywords: laser absorption * energy transfer * foam Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.315, year: 2006 4.
Stopping-power and mass energy-absorption coefficient ratios for Solid Water International Nuclear Information System (INIS) Ho, A.K.; Paliwal, B.R. 1986-01-01 The AAPM Task Group 21 protocol provides tables of ratios of average restricted stopping powers and ratios of mean energy-absorption coefficients for different materials. These values were based on the work of Cunningham and Schulz. We have calculated these quantities for Solid Water (manufactured by RMI), using the same x-ray spectra and method as those used by Cunningham and Schulz. These values should be useful to people who are using Solid Water for high-energy photon calibration. 5. Energy momentum tensor and marginal deformations in open string field theory International Nuclear Information System (INIS) Sen, Ashoke 2004-01-01 Marginal boundary deformations in a two-dimensional conformal field theory correspond to a family of classical solutions of the equations of motion of open string field theory. In this paper we develop a systematic method for relating the parameter labelling the marginal boundary deformation in the conformal field theory to the parameter labelling the classical solution in open string field theory. This is done by first constructing the energy-momentum tensor associated with the classical solution in open string field theory using the Noether method, and then comparing this to the answer obtained in the conformal field theory by analysing the boundary state. We also use this method to demonstrate that in open string field theory the tachyon lump solution on a circle of radius larger than one has vanishing pressure along the circle direction, as is expected for a co-dimension one D-brane. (author) 6. Plastic incompatibility stresses and stored elastic energy in plastically deformed copper Energy Technology Data Exchange (ETDEWEB) Baczmanski, A. [Faculty of Physics and Applied Computer Science, AGH-University of Science and Technology, al.
Mickiewicza 30, 30-059 Krakow (Poland)], E-mail: [email protected]; Hfaiedh, N.; Francois, M. [LASMIS, Universite de Technologie de Troyes, 11 rue Marie Curie, B.P. 2060, 10010 Troyes (France); Wierzbanowski, K. [Faculty of Physics and Applied Computer Science, AGH-University of Science and Technology, al. Mickiewicza 30, 30-059 Krakow (Poland) 2009-02-15 The X-ray diffraction method and a theoretical model of elastoplastic deformation were used to examine the residual stresses in polycrystalline copper. To this end, the {2 2 0} strain pole figures were determined for samples subjected to different magnitudes of tensile deformation. Using the diffraction data and the self-consistent model, the tensor of plastic incompatibility stress was found for each orientation of a polycrystalline grain. Crystallographic textures, macroscopic and second-order residual stresses were considered in the analysis. As a result, the distributions of elastic stored energy and von Mises equivalent stress were presented in Euler space and correlated with the preferred orientations of grains. Moreover, using the model prediction, the variation of the critical resolved shear stress with grain orientation was determined. 7. Improved model of activation energy absorption for different electrical breakdowns in semi-crystalline insulating polymers Science.gov (United States) Sima, Wenxia; Jiang, Xiongwei; Peng, Qingjun; Sun, Potao 2018-05-01 Electrical breakdown is an important physical phenomenon in electrical equipment and electronic devices. Many related models and theories of electrical breakdown have been proposed. However, a widely recognized understanding of the following phenomena is still lacking: impulse breakdown strength that varies with the waveform parameters, the decrease in AC breakdown strength with increasing frequency, and impulse breakdown strength that is higher than that under AC.
In this work, an improved model of activation energy absorption for different electrical breakdowns in semi-crystalline insulating polymers is proposed, based on the harmonic oscillator model. Simulation and experimental results show that the energy of trapped charges obtained under AC stress is higher than that under impulse voltage, and that the absorbed activation energy increases with increasing electric field frequency. Meanwhile, the frequency-dependent relative dielectric constant εr and dielectric loss tan δ also affect the absorption of activation energy. The absorbed activation energy and the modified trap level synergistically determine the breakdown strength. The mechanism analysis of breakdown strength under various voltage waveforms is consistent with the experimental results. Therefore, the proposed model of activation energy absorption may provide a new method for analyzing and explaining the breakdown phenomenon in semi-crystalline insulating polymers. 8. Constitutive equations for energy balance evaluation in metals under inelastic deformation Science.gov (United States) Kostina, A.; Plekhov, O.; Venkatraman, B. 2017-12-01 The work is devoted to the development of constitutive equations for energy balance evaluation in plastically deformed metals. The evolution of the defect system is described by a previously obtained model based on Boltzmann-Gibbs statistics. In the framework of this model, the collective behavior of mesodefect ensembles is taken into account by introducing an internal variable representing additional structural strain. This parameter enables the partition of plastic work into dissipated heat and stored energy. The proposed model is applied to the energy balance calculation for a Ti-1Al-1Mn specimen subjected to cyclic loading. Simulation results show that the model is able to describe an upward trend in the stored energy value with increasing load ratio.
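The partition of plastic work into dissipated heat and stored energy can be sketched, in its simplest classical form, with a constant Taylor-Quinney coefficient beta (the value 0.9 and the flow-curve points below are assumptions for illustration; the cited model computes the split from its internal structural-strain variable instead):

```python
def plastic_work(strain, stress):
    """Plastic work per unit volume (trapezoidal integral of stress d strain)."""
    w = 0.0
    for i in range(1, len(strain)):
        w += 0.5 * (stress[i] + stress[i - 1]) * (strain[i] - strain[i - 1])
    return w

def energy_balance(strain, stress, beta=0.9):
    """Split plastic work into heat and stored energy of cold work using a
    constant Taylor-Quinney coefficient beta (assumed, material dependent)."""
    w_p = plastic_work(strain, stress)
    return {"plastic_work": w_p, "heat": beta * w_p, "stored": (1.0 - beta) * w_p}

# Illustrative flow curve: constant 200 MPa flow stress over 10% plastic strain.
balance = energy_balance([0.0, 0.10], [200e6, 200e6])
```

The "upward trend in stored energy with load ratio" reported above corresponds, in this simplified picture, to an effectively decreasing beta over the loading history.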
Features of energy impact on a billet material when cutting with outstripping plastic deformation Directory of Open Access Journals (Sweden) V. M. Yaroslavtsev 2014-01-01 In recent decades, so-called combined machining methods, based on the parallel, serial, or parallel-serial combination of different types of energy input to the billet, have been designed and developed. Combining two or more sources of external energy in one machining method can address different technological tasks, such as improving a basic method to enhance its technical, economic, and technological indicators; expanding the technological capabilities of the method; and increasing the reliability and stability of the production process. The combined methods of machining are also considered a means of reducing the number of operations in a technological process and of increasing productivity. When developing combined technologies, one of the main scientific tasks is to define the general regularities of interaction and mutual influence of the energy fluxes brought to the machining zone. This mutual influence manifests itself in the resulting machining parameters and determines the most rational operating conditions of the process. In the context of research conducted at BMSTU on the combined cutting method with outstripping plastic deformation (OPD), the mutual influence of the energy components of machining has been quantitatively assessed. The paper shows a direct relationship between the rational ratio of the two types of mechanical energy brought into the machining zone, the machining conditions, and the optimum operating conditions. The paper offers a physical model of chip formation when machining with OPD.
The essence of the model is that the specific work spent on deforming the material of the cut-off layer is quantitatively compared for usual cutting and for cutting with OPD. It is experimentally confirmed that the final strain-deformed states of the cut-off layer material essentially coincide in both cases. 10. Numerical examinations of simplified spondylodesis models concerning energy absorption in magnetic resonance imaging Directory of Open Access Journals (Sweden) 2016-09-01 Metallic implants in magnetic resonance imaging (MRI) are a potential safety risk, since the energy absorption may increase the temperature of the surrounding tissue. The temperature rise is highly dependent on implant size. Numerical examinations can be used to calculate the energy absorption, in terms of the specific absorption rate (SAR), induced by MRI on orthopaedic implants. This research presents the impact of titanium osteosynthesis spine implants (spondylodesis), deduced from numerical examinations of energy absorption in simplified spondylodesis models placed in 1.5 T and 3.0 T MRI body coils. The implants are modelled along with a spine model consisting of vertebrae and disci intervertebrales, thus extending previous investigations [1], [2]. Increased SAR values are observed at the ends of long implants, while at the center the SAR is significantly lower. Sufficiently short implants show increased SAR along the complete length of the implant. A careful data analysis reveals that the particular anatomy, i.e. vertebrae and disci intervertebrales, has a significant effect on the SAR. On top of the SAR profile due to the implant length, considerable small-scale SAR variations are observed; e.g., SAR values at vertebrae are higher than at disc positions. 11. Integration of Semiconducting Sulfides for Full-Spectrum Solar Energy Absorption and Efficient Charge Separation.
Science.gov (United States) Zhuang, Tao-Tao; Liu, Yan; Li, Yi; Zhao, Yuan; Wu, Liang; Jiang, Jun; Yu, Shu-Hong 2016-05-23 The full harvest of solar energy by semiconductors requires a material that simultaneously absorbs across the whole solar spectrum and collects photogenerated electrons and holes separately. The stepwise integration of three semiconducting sulfides, namely ZnS, CdS, and Cu2-xS, into a single nanocrystal led to a unique ternary multi-node sheath ZnS-CdS-Cu2-xS heteronanorod for full-spectrum solar energy absorption. Localized surface plasmon resonance (LSPR) in the nonstoichiometric copper sulfide nanostructures enables effective NIR absorption. More significantly, the construction of p-n heterojunctions between Cu2-xS and CdS leads to staggered gaps, as confirmed by first-principles simulations. This band alignment causes effective electron-hole separation in the ternary system and hence enables efficient solar energy conversion. 12. Tunable evolutions of shock absorption and energy partitioning in magnetic granular chains Science.gov (United States) Leng, Dingxin; Liu, Guijie; Sun, Lingyu 2018-01-01 In this paper, we investigate the tunable characteristics of shock waves propagating in one-dimensional magnetic granular chains at various chain lengths and magnetic flux densities. Based on the Hertz contact theory and the Maxwell principle, a discrete element model with coupled elastic and field-induced interaction potentials of adjacent magnetic grains is proposed. We also present a hard-sphere approximation analysis to describe the energy partitioning features of magnetic granular chains.
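The elastic part of such a discrete element model is the Hertz contact law between adjacent spherical grains, F proportional to overlap^(3/2), which vanishes when the contact opens; the magnetic interaction then acts as an added, field-tunable precompression. A minimal sketch of just the Hertzian term (the material values are illustrative, and the magnetic force law is omitted):

```python
def hertz_force(overlap_m, radius_m, youngs_pa, poisson):
    """Hertz contact force between two identical elastic spheres:
    F = (E * sqrt(2 R) / (3 (1 - nu^2))) * overlap**1.5 for overlap > 0,
    and zero otherwise (grains cannot pull on each other)."""
    if overlap_m <= 0.0:
        return 0.0
    stiffness = youngs_pa * (2.0 * radius_m) ** 0.5 / (3.0 * (1.0 - poisson ** 2))
    return stiffness * overlap_m ** 1.5

# Illustrative steel-like grains: R = 5 mm, E = 200 GPa, nu = 0.3.
f1 = hertz_force(1e-6, 5e-3, 200e9, 0.3)  # force at 1 micron overlap
f2 = hertz_force(2e-6, 5e-3, 200e9, 0.3)  # force after doubling the overlap
```

The strongly nonlinear 3/2-power stiffening, together with the tension-free contact, is what gives granular chains their characteristic solitary-wave response.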
The results demonstrate that, for a fixed magnetic field strength, when the chain length is greater than twice the width of the solitary wave, the chain length has little effect on the output energy of the system; for a fixed chain length, the shock absorption and energy partitioning features of magnetic granular chains are remarkably influenced by varying magnetic flux densities. This study implies that the magnetic granular chain is a promising candidate for constructing adaptive shock-absorption components for impulse mitigation. 13. Impact of Tidal Level Variations on Wave Energy Absorption at Wave Hub Directory of Open Access Journals (Sweden) Valeria Castellucci 2016-10-01 The energy absorption of wave energy converters (WECs) with a limited stroke length, like the point absorbers developed at Uppsala University, depends on the sea level variation at the deployment site. In coastal areas characterized by high tidal ranges, the daily energy production of the generators is not optimal. The study presented in this paper quantifies the effects of the changing sea level at the Wave Hub test site, located off the south-west coast of England. This area is strongly affected by tides: the tidal height, calculated as the difference between the Mean High Water Spring and the Mean Low Water Spring, was about 6.6 m in 2014. The results are obtained from a hydro-mechanic model that analyzes the behaviour of the point absorber at the Wave Hub, taking into account the sea state occurrence scatter diagram and the tidal time series at the site. It turns out that the impact of the tide decreases the energy absorption by 53%. For this reason, the need for a tidal compensation system to be included in the design of the WEC becomes compelling. The economic advantages are evaluated for different scenarios: the economic analysis proposed within the paper allows an educated guess to be made about the profits.
The alternative of extending the stroke length of the WEC is investigated, and the gain in energy absorption is estimated. 14. Supporting Structure of the LSD Wave in an Energy Absorption Perspective International Nuclear Information System (INIS) Fukui, Akihiro; Hatai, Keigo; Cho, Shinatora; Arakawa, Yoshihiro; Komurasaki, Kimiya 2008-01-01 In Repetitively Pulsed (RP) Laser Propulsion, laser energy irradiated to a vehicle is converted to blast wave enthalpy during the Laser Supported Detonation (LSD) regime. Based on the post-LSD electron number density profiles measured by a two-wavelength Mach-Zehnder interferometer with line-focusing optics, the electron temperature and absorption coefficient were estimated assuming local thermal equilibrium. A 10 J/pulse CO2 laser was used. As a result, laser absorption was found to be completed in the layer between the shock wave and the electron density peak. Although the LSD-termination timing was not clear from the shock-front/ionization-front separation in the shadowgraph images, drastic changes were observed at the termination in the absorption layer thickness, from 0.2 mm to 0.5 mm, and in the peak heating rate, from 12-17x10^13 kW/m^3 to 5x10^13 kW/m^3. 15. Research of waste heat energy efficiency for absorption heat pump recycling thermal power plant circulating water Science.gov (United States) Zhang, Li; Zhang, Yu; Zhou, Liansheng; E, Zhijun; Wang, Kun; Wang, Ziyue; Li, Guohao; Qu, Bin 2018-02-01 The waste heat energy efficiency of an absorption heat pump recycling thermal power plant circulating water has been analyzed. After the heat pump was put into operation, its influences on the power generation and heat generation of the unit were taken into account. In light of the characteristics of the heat pump in different operating stages, its energy efficiency was evaluated comprehensively in terms of both electricity-side benefits and heat-side benefits, using the method of contrast tests.
This provides an energy-efficiency reference for projects of the same type. 16. Effect of laser energy on the deformation behavior in microscale laser bulge forming International Nuclear Information System (INIS) Zheng Chao; Sun Sheng; Ji Zhong; Wang Wei 2010-01-01 Microscale laser bulge forming is a high-strain-rate microforming method using the high-amplitude shock wave pressure induced by pulsed laser irradiation. The process can serve as a rapidly established and high-precision technique to impress microfeatures on thin sheet metals, and holds promise for manufacturing complex miniaturized devices. The present paper investigated the forming process using both numerical and experimental methods. The effect of laser energy on the microformability of pure copper was discussed in detail. A 3D measuring laser microscope was adopted to measure deformed regions under different laser energy levels. The deformation measurements showed that the experimental and numerical results were in good agreement. With the verified simulation model, the residual stress distribution at different laser energies was predicted and analyzed. Springback was found to be a key factor determining the distribution and magnitude of the compressive residual stress. In addition, the absorbent coating and the surface morphology of the formed samples were observed through the scanning electron microscope. The observation confirmed that the shock forming process was non-thermal, owing to the protection of the absorbent coating. 17.
Pervasive nanoscale deformation twinning as a catalyst for efficient energy dissipation in a bioceramic armour Science.gov (United States) Li, Ling; Ortiz, Christine 2014-05-01 Hierarchical composite materials design in biological exoskeletons achieves penetration resistance through a variety of energy-dissipating mechanisms while simultaneously balancing the need for damage localization to avoid compromising the mechanical integrity of the entire structure and to maintain multi-hit capability. Here, we show that the shell of the bivalve Placuna placenta (~99 wt% calcite), which possesses the unique optical property of ~80% total transmission of visible light, simultaneously achieves penetration resistance and deformation localization via increasing energy dissipation density (0.290 ± 0.072 nJ μm⁻³) by approximately an order of magnitude relative to single-crystal geological calcite (0.034 ± 0.013 nJ μm⁻³). P. placenta, which is composed of a layered assembly of elongated diamond-shaped calcite crystals, undergoes pervasive nanoscale deformation twinning (width ~50 nm) surrounding the penetration zone, which catalyses a series of additional inelastic energy-dissipating mechanisms such as interfacial and intracrystalline nanocracking, viscoplastic stretching of interfacial organic material, and nanograin formation and reorientation. 18. Empirical formulae for mass attenuation and energy absorption coefficients from 1 keV to 20 MeV International Nuclear Information System (INIS) Manjunatha, H.C.; Sowmya, N.; Seenappa, L.; Sridhar, K.N.; Hanumantharayappa, C. 2017-01-01 Mass attenuation and energy absorption coefficients represent the attenuation and absorption of X-rays and gamma rays in a material medium. A new empirical formula is proposed for mass attenuation and energy absorption coefficients in the region 1 < Z < 92 and from 1 keV to 20 MeV. The mass attenuation and energy absorption coefficients do not vary linearly with energy.
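The kind of fit this record describes can be illustrated with a minimal sketch. The power-law form below is an illustrative assumption, not the authors' published empirical formula; the data are synthetic and the function name is hypothetical.

```python
import math

def fit_power_law(energies, mus):
    """Least-squares fit of mu = a * E**(-b) in log-log space.
    A minimal stand-in for the nonlinear regressions the abstract
    mentions; the power-law form is an illustrative assumption."""
    xs = [math.log(e) for e in energies]
    ys = [math.log(m) for m in mus]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - slope * mx)
    return a, -slope  # mu goes as E**(-b), so b = -slope

# Synthetic attenuation data following an exact power law:
E = [0.01, 0.05, 0.1, 0.5, 1.0]        # photon energy, MeV
mu = [0.8 * e ** -0.7 for e in E]      # mu/rho, arbitrary units
a, b = fit_power_law(E, mu)            # recovers a ~ 0.8, b ~ 0.7
```

A real fit of this type would use tabulated attenuation data rather than a single power law over the whole 1 keV-20 MeV range.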
We have performed nonlinear regressions/nonlinear least-squares fittings and propose simple empirical relations between the mass attenuation coefficients (μ/ρ), the mass energy absorption coefficients (μ_en/ρ) and energy. We have compared the values produced by this formula with those of experiments. The good agreement of the present formula with experiments/previous models suggests that the present formulae could be used to evaluate mass attenuation and energy absorption coefficients in the region 1 < Z < 92. The formula is model-independent and is the first of its kind to produce mass attenuation and energy absorption coefficient values with only the photon energy as input, over the wide energy range 1 keV-20 MeV and the atomic number region 1 < Z < 92. This formula is very useful in the fields of radiation physics and dosimetry 19. Energy-Looping Nanoparticles: Harnessing Excited-State Absorption for Deep-Tissue Imaging. Science.gov (United States) Levy, Elizabeth S; Tajon, Cheryl A; Bischof, Thomas S; Iafrati, Jillian; Fernandez-Bravo, Angel; Garfield, David J; Chamanzar, Maysamreza; Maharbiz, Michel M; Sohal, Vikaas S; Schuck, P James; Cohen, Bruce E; Chan, Emory M 2016-09-27 Near infrared (NIR) microscopy enables noninvasive imaging in tissue, particularly in the NIR-II spectral range (1000-1400 nm) where attenuation due to tissue scattering and absorption is minimized. Lanthanide-doped upconverting nanocrystals are promising deep-tissue imaging probes due to their photostable emission in the visible and NIR, but these materials are not efficiently excited at NIR-II wavelengths due to the dearth of lanthanide ground-state absorption transitions in this window. Here, we develop a class of lanthanide-doped imaging probes that harness an energy-looping mechanism that facilitates excitation at NIR-II wavelengths, such as 1064 nm, that are resonant with excited-state absorption transitions but not ground-state absorption.
Using computational methods and combinatorial screening, we have identified Tm(3+)-doped NaYF4 nanoparticles as efficient looping systems that emit at 800 nm under continuous-wave excitation at 1064 nm. Using this benign excitation with standard confocal microscopy, energy-looping nanoparticles (ELNPs) are imaged in cultured mammalian cells and through brain tissue without autofluorescence. The 1 mm imaging depths and 2 μm feature sizes are comparable to those demonstrated by state-of-the-art multiphoton techniques, illustrating that ELNPs are a promising class of NIR probes for high-fidelity visualization in cells and tissue. 20. Time-resolved photoion imaging spectroscopy: Determining energy distribution in multiphoton absorption experiments Science.gov (United States) Qian, D. B.; Shi, F. D.; Chen, L.; Martin, S.; Bernard, J.; Yang, J.; Zhang, S. F.; Chen, Z. Q.; Zhu, X. L.; Ma, X. 2018-04-01 We propose an approach to determine the excitation energy distribution due to multiphoton absorption in the case of excited systems following decays to produce different ion species. This approach is based on the measurement of the time-resolved photoion position spectrum by using velocity map imaging spectrometry and an unfocused laser beam with a low fluence and homogeneous profile. Such a measurement allows us to identify the species and the origin of each ion detected and to depict the energy distribution using a pure Poisson's equation involving only one variable which is proportional to the absolute photon absorption cross section. A cascade decay model is used to build direct connections between the energy distribution and the probability to detect each ionic species. Comparison between experiments and simulations permits the energy distribution and accordingly the absolute photon absorption cross section to be determined. This approach is illustrated using C60 as an example. 
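The one-variable Poisson law mentioned in the photoion-imaging record can be sketched as follows; the function name and the units of `sigma` and `fluence` are illustrative assumptions, with the single Poisson parameter proportional to the absorption cross section as the abstract states.

```python
import math

def p_absorb(n, sigma, fluence):
    """Poisson probability of absorbing exactly n photons.
    The single parameter lam = sigma * fluence is proportional to
    the absolute absorption cross section, echoing the one-variable
    Poisson law in the abstract (units here are placeholders)."""
    lam = sigma * fluence
    return lam ** n * math.exp(-lam) / math.factorial(n)

# With mean photon number lam = 2, most molecules absorb 0-4 photons:
dist = [p_absorb(n, sigma=1.0, fluence=2.0) for n in range(5)]
```

Comparing such a distribution against the measured yields of each fragment-ion species is what lets the cross section be extracted.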
It may therefore be extended to a wide variety of molecules and clusters having decay mechanisms similar to those of fullerene molecules. 1. Importance of the green color, absorption gradient, and spectral absorption of chloroplasts for the radiative energy balance of leaves. Science.gov (United States) Kume, Atsushi 2017-05-01 2. Study and Optimization of Helicopter Subfloor Energy Absorption Structure with Foldcore Sandwich Structures Science.gov (United States) HuaZhi, Zhou; ZhiJin, Wang 2017-11-01 The intersection element is an important part of the helicopter subfloor structure. In order to improve its crashworthiness properties, the floor and the skin of the intersection element are replaced with foldcore sandwich structures. Foldcore is a high-energy-absorption structure. Compared with the original structure, the new intersection element shows better buffering and energy-absorption capacity. To reduce the structure's mass while keeping the crashworthiness requirements satisfied, optimization of the intersection element's geometric parameters is conducted. An optimization method using NSGA-II and Anisotropic Kriging is used. A significant CPU time saving can be obtained by replacing the numerical model with an Anisotropic Kriging surrogate model. The operation allows a 17.15% reduction of the intersection element's mass. 3. Efficient energy absorption of intense ps-laser pulse into nanowire target Energy Technology Data Exchange (ETDEWEB) Habara, H.; Honda, S.; Katayama, M.; Tanaka, K. A. [Graduate School of Engineering, Osaka University, 2-1 Suita, Osaka 565-0871 (Japan); Sakagami, H. [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan); Nagai, K.
[Laboratory for Chemistry and Life Science, Institute of Innovative Research, Tokyo Institute of Technology, Nagatsuda 4259, Midori-ku, Yokohama 226-8503, Kanagawa (Japan) 2016-06-15 The interaction between ultra-intense laser light and vertically aligned carbon nanotubes is investigated to demonstrate efficient laser-energy absorption in the ps laser-pulse regime. Results indicate a clear enhancement of the energy conversion from laser to energetic electrons and a simultaneously small plasma expansion on the surface of the target. A two-dimensional plasma particle calculation exhibits high absorption through laser propagation deep into the nanotube array, even for a dense array whose structure is much smaller than the laser wavelength. The propagation leads to the radial expansion of plasma perpendicular to the nanotubes rather than to the front side. These features may contribute to fast ignition in inertial confinement fusion and laser particle acceleration, both of which require high current and small surface plasma simultaneously. 4. Development of whole energy absorption spectrometer for decay heat measurement on fusion reactor materials Energy Technology Data Exchange (ETDEWEB) Maekawa, Fujio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment 1997-03-01 To measure decay heat in fusion reactor materials irradiated by D-T neutrons, a Whole Energy Absorption Spectrometer (WEAS) consisting of a pair of large BGO (bismuth-germanate) scintillators was developed. The feasibility of decay heat measurement with WEAS for various materials and for a wide range of half-lives (seconds-years) was demonstrated by experiments at FNS. Features of WEAS, such as high sensitivity, radioactivity identification, and a reasonably low experimental uncertainty of ~10%, were found. (author) 5.
Energy Absorption Mechanisms in Unidirectional Composites Subjected to Dynamic Loading Events Science.gov (United States) 2012-03-30 integral part of commercial, recreation, and defense markets. The proliferation of applications for fiber-reinforced composite technology can be in large...soft body armors. The growth of composites in high-performance markets continues to outpace the development of new and improved physics-based...pp. 718-730, 2008. 16. G. C. Jacob, J. F. Fellers, S. Simunovic, and J. M. Starbuck, "Energy Absorption in Polymer Composites for 6. Field distribution of a source and energy absorption in an inhomogeneous magneto-active plasma International Nuclear Information System (INIS) Galushko, N.P.; Erokhin, N.S.; Moiseev, S.S. 1975-01-01 In the present paper the distribution of source fields in a magnetoactive plasma is studied from the standpoint of the possibility of effective SHF heating of an inhomogeneous plasma in both the high (ω ≈ ω_pe) and low (ω ≈ ω_pi) frequency ranges, where ω_pe and ω_pi are the electron and ion plasma frequencies. The localization of the HF energy absorption regions in cold and hot plasma and the effect of plasma inhomogeneity and source dimensions on the absorption efficiency are investigated. The linear wave transformation in an inhomogeneous hot plasma is taken into consideration. Attention is paid to the difference between the region localization for collisional and non-collisional absorption. It has been shown that the HF energy dissipation in plasma particle collisions is localized in the region of thin jets going from the source; the radiation field has a sharp peak in this region. At the same time, non-collisional HF energy dissipation is spread over the plasma volume as a result of Cherenkov and cyclotron wave attenuation.
The essential contribution to the source field from resonances due to standing wave excitation in an inhomogeneous plasma shell near the source is pointed out 7. Experimental investigation on photothermal properties of nanofluids for direct absorption solar thermal energy systems International Nuclear Information System (INIS) He, Qinbo; Wang, Shuangfeng; Zeng, Shequan; Zheng, Zhaozhi 2013-01-01 Highlights: • The factors affecting the transmittance of Cu–H₂O nanofluids were studied with a UV–Vis–NIR spectrophotometer. • The optical properties of Cu–H₂O nanofluids were studied through a theoretical model. • Cu–H₂O nanofluids can enhance the absorption of solar energy. - Abstract: In this article, Cu–H₂O nanofluids were prepared through a two-step method. The transmittance of the nanofluids over the solar spectrum (250–2500 nm) was measured by a UV–Vis–NIR spectrophotometer based on the integrating sphere principle. The factors influencing the transmittance of the nanofluids, such as particle size, mass fraction and optical path, were investigated. The extinction coefficients measured experimentally were compared with the theoretically calculated values. Meanwhile, the photothermal properties of the nanofluids were also investigated. The experimental results show that the transmittance of Cu–H₂O nanofluids is much less than that of deionized water, and decreases with increasing nanoparticle size, mass fraction and optical depth. The highest temperature of the Cu–H₂O nanofluid (0.1 wt.%) was increased by up to 25.3% compared with deionized water. The good solar energy absorption of Cu–H₂O nanofluids indicates that they are suitable for direct absorption solar thermal energy systems 8. Absorption machines for heating and cooling in future energy systems - Final report Energy Technology Data Exchange (ETDEWEB) Tozer, R.; Gustafsson, M.
2000-12-15 After the Executive Summary and a brief introductory chapter, Chapter 2, Sorption Technologies for Heating and Cooling in Future Energy Systems, reviews the main types of sorption systems. Chapter 3, Market Segmentation, then considers the major segments of the market including residential, commercial/institutional and industrial, and the types of sorption hardware most suitable to each. The highly important residential and commercial/institutional markets are mostly concerned with air-conditioning of buildings. More applications are identified and discussed for the industrial market, including refrigeration, food-storage cooling, process cooling, and process heating at various temperature ranges from hot water for hand-washing to high temperature (greater than 130 °C). Other interesting industrial applications are absorption cooling or heating combined with co-generation, desiccant cooling, gas turbine inlet air cooling, combining absorption chillers with district heating systems, direct-fired absorption heat pumps (AHPs), and a closed greenhouse concept being developed for that economically important sector in the Netherlands. Most of the sorption market at this time comprises direct-fired absorption chillers, or hot water or steam absorption chillers indirectly driven by direct-fired boilers. Throughout the report, this category of absorption chillers is referred to generically as 'direct-fired'. In addition, this report covers absorption (reversible) heat pumps, absorption heat transformers, compression-absorption heat pumps, and adsorption chillers and heat pumps. Adsorption systems together with desiccant systems are also addressed. Chapter 4, Factors Affecting the Market, considers economic, environmental and policy issues. The geographical make-up of the world sorption market is then reviewed, followed by a number of practical operating and control considerations.
These include vacuum requirements, crystallisation, corrosion, maintenance, health and 9. Study on new energy development planning and absorptive capability of Xinjiang in China considering resource characteristics and demand prediction Science.gov (United States) Shao, Hai; Miao, Xujuan; Liu, Jinpeng; Wu, Meng; Zhao, Xuehua 2018-02-01 Xinjiang, an area where wind and solar energy resources are extremely rich, with good resource development characteristics, can provide support for regional power development and supply protection. This paper systematically analyzes the new energy resource and development characteristics of Xinjiang and carries out demand prediction and load-characteristics analysis for the Xinjiang power market. Combining the new energy development plan of Xinjiang and considering the construction of transmission channels, it analyzes the absorptive capability for new energy. This provides a reference for the comprehensive planning of new energy development in Xinjiang and the improvement of its new energy absorptive capacity. 10. Relativistic deformed mean-field calculation of binding energy differences of mirror nuclei International Nuclear Information System (INIS) Koepf, W.; Barreiro, L.A. 1996-01-01 Binding energy differences of mirror nuclei for A=15, 17, 27, 29, 31, 33, 39 and 41 are calculated in the framework of relativistic deformed mean-field theory. The spatial components of the vector meson fields and the photon are fully taken into account in a self-consistent manner. The calculated binding energy differences are systematically smaller than the experimental values and lend support to the existence of the Okamoto-Nolen-Schiffer anomaly found decades ago in nonrelativistic calculations. For the majority of the nuclei studied, however, the results are such that the anomaly is significantly smaller than the one obtained within state-of-the-art nonrelativistic calculations. (author). 35 refs 11.
Surface absorption in the 32S+24Mg interactions at energies near the Coulomb barrier International Nuclear Information System (INIS) Pacheco, J.C.; Sanchez, F.; Diaz, J.; Ferrero, J.L.; Bilwes, B.; Kadi-Hanifi, D. 1995-01-01 Elastic scattering of 32S on 24Mg has been measured at lab energies of 65.0, 75.0, 86.3, 95.0 and 110.0 MeV, and the data were systematically analysed with semi-phenomenological potentials. Using microscopic potentials we found similar results at the lowest incident energies, for which we have compared both the microscopic and semi-phenomenological potentials. It appears that the absorption takes place in a narrow range at the nuclear surface and is mainly due to the low-lying collective surface states. (author). 41 refs., 11 figs., 4 tabs Science.gov (United States) Banyasz, Akos; Ketola, Tiia; Martínez-Fernández, Lara; Improta, Roberto; Markovitsi, Dimitra 2018-04-17 13. Using waste heat of ship as energy source for an absorption refrigeration system International Nuclear Information System (INIS) Salmi, Waltteri; Vanttola, Juha; Elg, Mia; Kuosa, Maunu; Lahdelma, Risto 2017-01-01 Highlights: • A steady-state thermodynamic model is developed for absorption refrigeration in a ship. • The operation profile of a B.Delta37 bulk carrier is used as initial data. • The suitability of the water-LiBr and ammonia-water working pairs was validated. • The coefficient of performance (COP) was studied in ISO and tropical conditions. • Estimated energy savings were 47 and 95 tons of fuel every year. - Abstract: This work presents a steady-state thermodynamic model for absorption refrigeration cycles with water-LiBr and ammonia-water working pairs for application on a ship. The coefficient of performance was studied with different generator and evaporator temperatures in ISO and tropical conditions. Absorption refrigeration systems were examined using exhaust gases, jacket water, and scavenge air as energy sources.
Optimal generator temperatures for different refrigerant temperatures were found using different waste heat sources and for the absorption cycle itself. Critical temperature values (where the refrigeration power drops to zero) were defined. All of these values were used to evaluate the cooling power and energy production possibilities in a bulk carrier. The process data of the exhaust gas and cooling water flows in two different climate conditions (ISO and tropical) and the operation profiles of a B.Delta37 bulk carrier were used as initial data in the study. With the case ship data, a theoretical potential saving of 70% of the electricity used by the accommodation (AC) compressor in ISO conditions and 61% in tropical conditions was identified. These estimates correspond to between 47 and 95 tons of annual fuel savings, respectively. Moreover, jacket water heat recovery with a water-LiBr system has the potential to provide 2.2–4.0 times more cooling power than required during sea-time operations in ISO conditions, depending on the main engine load. 14. Special dynamic behavior of an aluminum alloy and effects on energy absorption in train collisions Directory of Open Access Journals (Sweden) Chao Yang 2016-05-01 Full Text Available Dynamic tension and compression tests were carried out for 5083-H111 aluminum alloy to investigate its dynamic mechanical behavior and its effect on the energy absorption characteristics of an energy-absorbing device. The material constitutive relations were obtained at various strain rates by means of these tests. Three material models were applied to the energy-absorbing device of railway vehicles. We investigated the influence of the material dynamic behavior on the energy absorption capability.
The results indicate that 5083-H111 aluminum alloy exhibits negative strain-rate sensitivity at medium-low strain rates, negative and then positive strain-rate sensitivity within the medium strain-rate range, and an obvious strain-rate strengthening effect at high strain rates. Moreover, the strain rate in a train collision spans orders of magnitude 0–2, which belongs to the medium strain-rate range. The energy actually absorbed by a structure made of 5083-H111 alloy is less than that of the same structure designed without regard to the strain-rate effect. 15. Optimization of operation of energy supply systems with co-generation and absorption refrigeration Directory of Open Access Journals (Sweden) Stojiljković Mirko M. 2012-01-01 Full Text Available Co-generation systems, together with absorption refrigeration and thermal storage, can result in substantial benefits from the economic, energy and environmental points of view. Optimization of the operation of such systems is important as a component of the entire optimization process in pre-construction phases, but also for short-term energy production planning and system control. This paper proposes an approach for the operational optimization of energy supply systems with small or medium scale co-generation, additional boilers and heat pumps, absorption and compression refrigeration, thermal energy storage and interconnection to the electric utility grid. In this case, the objective is to minimize the annual costs related to the plant's operation. The optimization problem is defined as mixed-integer nonlinear and solved by combining modern stochastic techniques, genetic algorithms and simulated annealing, with linear programming, using the object-oriented "ESO-MS" software solution for simulation and optimization of energy supply systems, developed as a part of this research.
This approach is applied to optimize a hypothetical plant that might be used to supply a real residential settlement in Niš, Serbia. Results are compared to the ones obtained after transforming the problem to mixed 0-1 linear form and applying the branch and bound method. 16. Investigation of human teeth with respect to the photon interaction, energy absorption and buildup factor Energy Technology Data Exchange (ETDEWEB) Kurudirek, Murat, E-mail: [email protected] [Faculty of Science, Department of Physics, Ataturk University, 25240 Erzurum (Turkey); Topcuoglu, Sinan [Faculty of Dentistry, Department of Endodontic, Ataturk University, 25240 Erzurum (Turkey) 2011-05-15 The effective atomic numbers and electron densities of human teeth have been calculated for total photon interaction (Z_eff,PI, N_e,eff,PI) and photon energy absorption (Z_eff,PEA, Z_eff,RW, N_e,eff,PEA) in the energy region 1 keV-20 MeV. Besides, the energy absorption (EABF) and exposure (EBF) buildup factors have been calculated for these samples by using the geometric progression fitting approximation in the energy region 0.015-15 MeV up to 40 mfp (mean free path). Wherever possible the results were compared with experiment. Effective atomic numbers (Z_eff,PI) of human teeth were calculated using different methods. Discrepancies were noted in Z_eff,PI between the direct and interpolation methods in the low and high energy regions, where absorption processes dominate, while good agreement was observed in the intermediate energy region, where Compton scattering dominates. Significant variations of up to 22% were observed between Z_eff,PI and Z_eff,PEA in the energy region 30-150 keV, which is the energy range used in dental cone beam computed tomography (CBCT) X-ray machines.
The Z_eff values of human teeth were found to vary within about 1% when different laser treatments were applied. In this variation, the Er:YAG laser-treated samples were found to be less affected than the Nd:YAG laser-treated ones when compared with the control group. Relative differences between EABF and EBF were found to be significantly high in the energy region 60 keV-1 MeV, even though they vary similarly with respect to the different parameters, viz. photon energy and penetration depth. 17. Electromagnetic radiation energy arrangement. [coatings for solar energy absorption and infrared reflection Science.gov (United States) Lipkis, R. R.; Vehrencamp, J. E. (Inventor) 1965-01-01 A solar energy collector and infrared energy reflector is described which comprises a vacuum-deposited layer of aluminum approximately 200 to 400 Angstroms thick on one side of a substrate. An adherent layer of titanium with a thickness of between 800 and 1000 Angstroms is vacuum deposited on the aluminum substrate and is substantially opaque to solar energy and substantially transparent to infrared energy. 18. Modular Hamiltonians for deformed half-spaces and the averaged null energy condition Science.gov (United States) Faulkner, Thomas; Leigh, Robert G.; Parrikar, Onkar; Wang, Huajia 2016-09-01 We study modular Hamiltonians corresponding to the vacuum state for deformed half-spaces in relativistic quantum field theories on R^{1,d-1}. We show that in addition to the usual boost generator, there is a contribution to the modular Hamiltonian at first order in the shape deformation, proportional to the integral of the null components of the stress tensor along the Rindler horizon. We use this fact along with monotonicity of relative entropy to prove the averaged null energy condition in Minkowski space-time. This subsequently gives a new proof of the Hofman-Maldacena bounds on the parameters appearing in CFT three-point functions.
Our main technical advance involves adapting newly developed perturbative methods for calculating entanglement entropy to the problem at hand. These methods were recently used to prove certain results on the shape dependence of entanglement in CFTs, and here we generalize these results to excited states and real-time dynamics. We also discuss the AdS/CFT counterpart of this result, making connection with the recently proposed gravitational dual for modular Hamiltonians in holographic theories. 19. Mass Absorption Coefficients At 661.6 keV Energy In Various Samples International Nuclear Information System (INIS) Suhariyono, Gatot; Bunawas 2000-01-01 Determination of mass absorption coefficients (μ_m) at 661.6 keV in various samples, such as lysine, coffee, chocolate, nutrisari, coconut oil, monosodium glutamate (MSG), tea, tin fish and soil, has been carried out experimentally. The μ_m study was carried out in an effort to make the measured Cs-137 concentrations in the samples more accurate, because as the sample density increases, the mass absorption coefficient (μ_m) decreases. The μ_m correction on the measurement of Cs-137 concentration in the various samples ranges between 0 and 13%; the highest is for the chocolate sample and the lowest for the tin fish sample. As the density of the samples decreases, the influence of μ_m on the counted Cs-137 concentration in the sample (Bq/kg) increases 20. The congruence energy: A contribution to nuclear masses and deformation energies International Nuclear Information System (INIS) Myers, W.D.; Swiatecki, W.J. 1995-06-01 The difference between measured binding energies and those calculated using a shell- and pairing-corrected Thomas-Fermi model can be described approximately by C(I) = -10 exp(-4.2|I|) MeV.
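The quoted correction is a simple closed form; a minimal numerical check, using the definition I = (N-Z)/A given in the companion 1997 record (the function name is illustrative):

```python
import math

def congruence_energy(N, Z):
    """C(I) = -10 * exp(-4.2 * |I|) MeV with I = (N - Z) / A, the
    relative neutron excess (A = N + Z), exactly as quoted in the
    congruence-energy abstracts."""
    I = (N - Z) / (N + Z)
    return -10.0 * math.exp(-4.2 * abs(I))

c_sym = congruence_energy(20, 20)    # N = Z: full -10 MeV correction
c_u238 = congruence_energy(146, 92)  # correction weakens as |I| grows
```

The exponential decay with |I| means the extra binding matters most for light, nearly symmetric nuclei.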
The authors' interpretation of this extra binding is in terms of the granularity of quantal nucleonic density distributions, which leads to a stronger interaction for a neutron and proton with congruent nodal structures of their wave functions. The predicted doubling of this congruence energy in fission is supported by an analysis of measured fission barriers and by a study of wave functions in a dividing Hill-Wheeler box potential. A semi-empirical formula for the shape-dependent congruence energy is described 1. The congruence energy: a contribution to nuclear masses, deformation energies and fission barriers International Nuclear Information System (INIS) Myers, W.D.; Swiatecki, W.J. 1997-01-01 The difference between measured binding energies and those calculated using a shell- and pairing-corrected Thomas-Fermi model can be described approximately by C(I) = -10 exp(-4.2|I|) MeV, where I = (N-Z)/A. Our interpretation of this extra binding is in terms of the granularity of quantal nucleonic density distributions, which leads to a stronger interaction for a neutron and proton with congruent nodal structures of their wave functions. The predicted doubling of this congruence energy in fission is supported by an analysis of measured fission barriers and by a study of wave functions in a dividing Hill-Wheeler box potential. A semi-empirical formula for the shape-dependent congruence energy is described. (orig.) 2.
Depth-selective X-ray absorption spectroscopy by detection of energy-loss Auger electrons Energy Technology Data Exchange (ETDEWEB) Isomura, Noritake, E-mail: [email protected] [Toyota Central R&D Labs., Inc., 41-1 Yokomichi, Nagakute, Aichi 480-1192 (Japan); Soejima, Narumasa; Iwasaki, Shiro [Toyota Central R&D Labs., Inc., 41-1 Yokomichi, Nagakute, Aichi 480-1192 (Japan); Nomoto, Toyokazu; Murai, Takaaki [Aichi Synchrotron Radiation Center (AichiSR), 250-3 Minamiyamaguchi-cho, Seto, Aichi 489-0965 (Japan); Kimoto, Yasuji [Toyota Central R&D Labs., Inc., 41-1 Yokomichi, Nagakute, Aichi 480-1192 (Japan) 2015-11-15 Graphical abstract: - Highlights: • A unique XAS method is proposed for depth profiling of chemical states. • PEY mode detecting energy-loss electrons enables a variation in the probe depth. • Si K-edge XAS spectra of the Si₃N₄/SiO₂/Si multilayer films have been investigated. • Deeper information was obtained in the spectra measured at larger energy loss. • The probe depth could be changed by the selection of the energy of the detected electrons. - Abstract: A unique X-ray absorption spectroscopy (XAS) method is proposed for depth profiling of chemical states in material surfaces. Partial electron yield mode detecting energy-loss Auger electrons, called the inelastic electron yield (IEY) mode, enables a variation in the probe depth. As an example, Si K-edge XAS spectra for a well-defined multilayer sample (Si₃N₄/SiO₂/Si) have been investigated using this method at various kinetic energies. We found that the peaks assigned to the layers from the top layer to the substrate appeared in the spectra in order of increasing energy loss relative to the Auger electrons. Thus, the probe depth can be changed by the selection of the kinetic energy of the energy-loss electrons in IEY-XAS. 3.
Depth-selective X-ray absorption spectroscopy by detection of energy-loss Auger electrons International Nuclear Information System (INIS) Isomura, Noritake; Soejima, Narumasa; Iwasaki, Shiro; Nomoto, Toyokazu; Murai, Takaaki; Kimoto, Yasuji 2015-01-01 Graphical abstract: - Highlights: • A unique XAS method is proposed for depth profiling of chemical states. • PEY mode detecting energy-loss electrons enables a variation in the probe depth. • Si K-edge XAS spectra of the Si_3N_4/SiO_2/Si multilayer films have been investigated. • Deeper information was obtained in the spectra measured at larger energy loss. • Probe depth could be changed by the selection of the energy of detected electrons. - Abstract: A unique X-ray absorption spectroscopy (XAS) method is proposed for depth profiling of chemical states in material surfaces. Partial electron yield mode detecting energy-loss Auger electrons, called the inelastic electron yield (IEY) mode, enables a variation in the probe depth. As an example, Si K-edge XAS spectra for a well-defined multilayer sample (Si_3N_4/SiO_2/Si) have been investigated using this method at various kinetic energies. We found that the peaks assigned to the layers from the top layer to the substrate appeared in the spectra in the order of increasing energy loss relative to the Auger electrons. Thus, the probe depth can be changed by the selection of the kinetic energy of the energy loss electrons in IEY-XAS. 4. Energy and exergy analysis of a double effect absorption refrigeration system based on different heat sources International Nuclear Information System (INIS) Kaynakli, Omer; Saka, Kenan; Kaynakli, Faruk 2015-01-01 Highlights: • Energy and exergy analysis was performed on double effect series flow absorption refrigeration system. • The refrigeration system runs on various heat sources such as hot water, hot air and steam. 
• A comparative analysis was carried out on these heat sources in terms of exergy destruction and mass flow rate of heat source. • The effect of heat sources on the exergy destruction of high pressure generator was investigated. - Abstract: Absorption refrigeration systems are environmentally friendly since they can utilize industrial waste heat and/or solar energy. In terms of the heat source of such systems, researchers usually consider a single type, such as hot water or steam. Some studies are independent of the type of heat source. In this study, energy and exergy analysis is performed on a double effect series flow absorption refrigeration system with water/lithium bromide as the working fluid pair. The refrigeration system runs on various heat sources such as hot water, hot air and steam via the High Pressure Generator (HPG), since hot water/steam and hot air are the most commonly available heat sources for absorption applications; however, the first law of thermodynamics alone may not be sufficient to analyze the absorption refrigeration system and to show the differences between the various types of heat source. On the other hand, operation temperatures of the overall system and its components have a major effect on their performance and functionality. In this regard, a parametric study is conducted here to investigate this effect on the heat capacity and exergy destruction of the HPG, the coefficient of performance (COP) of the system, and the mass flow rate of the heat sources. Also, a comparative analysis is carried out on several heat sources (e.g. hot water, hot air and steam) in terms of exergy destruction and mass flow rate of heat source. From the analyses it is observed that exergy destruction of the HPG increases at higher temperature of the heat sources, condenser and absorber, and lower 5. Does a deformation of special relativity imply energy dependent photon time delays? Science.gov (United States) Carmona, J. M.; Cortés, J. L.; Relancio, J. J.
2018-01-01 Theoretical arguments in favor of energy dependent photon time delays from a modification of special relativity (SR) have met with recent gamma ray observations that put severe constraints on the scale of such deviations. We review the generality of this theoretical prediction in the case of a deformation of SR and find that, at least in the simple model based on the analysis of photon worldlines which is commonly considered, there are many scenarios compatible with a relativity principle which do not contain a photon time delay. This will be the situation for any modified dispersion relation which reduces to E = |p| for photons, independently of the quantum structure of spacetime. This fact opens up the possibility of a phenomenologically consistent relativistic generalization of SR with a new mass scale many orders of magnitude below the Planck mass. 6. Development of energy-harvesting system using deformation of magnetic elastomer Science.gov (United States) Shinoda, Hayato; Tsumori, Fujio 2018-06-01 In this paper, we propose a power generation method using the deformation of a magnetic elastomer for vibration energy harvesting. The magnetic flux lines in the structure of the magnetic elastomer could be markedly changed if the properly designed structure was expanded and contracted in a static magnetic field. We set a coil on the magnetic elastomer to generate electricity by capturing this change in magnetic flux flow. We fabricated a centimeter-scale device and demonstrated that it generated a maximum voltage of 10.5 mV under 10 Hz vibration. We also simulated the change in the magnetic flux flow using finite element analysis, and compared the result with the experimental data. Furthermore, we evaluated the power generation of a miniaturized device. 7. Transient absorption microscopy studies of energy relaxation in graphene oxide thin film. 
Science.gov (United States) Murphy, Sean; Huang, Libai 2013-04-10 Spatial mapping of energy relaxation in graphene oxide (GO) thin films has been imaged using transient absorption microscopy (TAM). Correlated AFM images allow us to accurately determine the thickness of the GO films. In contrast to previous studies, correlated TAM-AFM allows determination of the effect of interactions of GO with the substrate and between stacked GO layers on the relaxation dynamics. Our results show that energy relaxation in GO flakes has little dependence on the substrate, number of stacked layers, and excitation intensity. This is in direct contrast to pristine graphene, where these factors have great consequences in energy relaxation. This suggests intrinsic factors rather than extrinsic ones dominate the excited state dynamics of GO films. 8. Transient absorption microscopy studies of energy relaxation in graphene oxide thin film International Nuclear Information System (INIS) Murphy, Sean; Huang, Libai 2013-01-01 Spatial mapping of energy relaxation in graphene oxide (GO) thin films has been imaged using transient absorption microscopy (TAM). Correlated AFM images allow us to accurately determine the thickness of the GO films. In contrast to previous studies, correlated TAM–AFM allows determination of the effect of interactions of GO with the substrate and between stacked GO layers on the relaxation dynamics. Our results show that energy relaxation in GO flakes has little dependence on the substrate, number of stacked layers, and excitation intensity. This is in direct contrast to pristine graphene, where these factors have great consequences in energy relaxation. This suggests intrinsic factors rather than extrinsic ones dominate the excited state dynamics of GO films. (paper) 9. Measurement of the thorium absorption cross section shape near thermal energy (LWBR development program) International Nuclear Information System (INIS) Green, L. 
1976-11-01 The shape of the thorium absorption cross section near thermal energies was investigated. This shape is dominated by one or more negative energy resonances whose parameters are not directly known, but must be inferred from higher energy data. Since the integral quantity most conveniently describing the thermal cross section shape is the Westcott g-factor, effort was directed toward establishing this quantity to high precision. Three nearly independent g-factor estimates were obtained from measurements on a variety of foils in three different neutron spectra provided by polyethylene-moderated neutrons from a 252 Cf source and from irradiations in the National Bureau of Standards ''Standard Thermal Neutron Density.'' The weighted average of the three measurements was 0.993 ± 0.004. This is in good agreement with two recent evaluations and supports the adequacy of the current cross section descriptions. 10. Co-60 irradiation facility for hens' eggs, radiation field parameters and energy absorption in the egg International Nuclear Information System (INIS) Giese, W.; Mueller-Buder, A. 1981-01-01 For irradiation experiments with 33 530 hens' eggs to test the effect of γ-rays on the hatchability of chickens, a 60 Co irradiation facility was constructed, which is described in this article. Physical parameters of the radiation field, such as the dose rate caused by a 60 Co point source at a distance r, the flux of γ-quanta and energy towards an egg and the role of 60 Co beta rays, are quantitatively described. The intensity decrease, the dose build-up factor and the energy absorption due to the interaction of γ-rays with atoms of the egg's content were calculated. Thus this contribution should give an impression of the physical processes involved in the γ-irradiation of eggs and of the magnitude of energy absorbed therein. (orig.) [de 11. 
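The intensity decrease and dose build-up factor mentioned in the egg-irradiation record combine in the standard broad-beam attenuation relation; a minimal sketch (the function name and the constant-B treatment are my simplifications — in practice B depends on μx and on the geometry):

```python
import math

def broad_beam_intensity(I0, mu, x, buildup=1.0):
    """Broad-beam gamma intensity behind an absorber of thickness x:
    I = I0 * B * exp(-mu * x), where mu is the linear attenuation
    coefficient and B >= 1 is the dose build-up factor that corrects
    the narrow-beam (purely exponential) law for scattered photons."""
    return I0 * buildup * math.exp(-mu * x)
```

With B = 1 this reduces to the familiar narrow-beam exponential attenuation.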
Gamma-ray energy absorption and exposure buildup factor studies in some human tissues with endometriosis Energy Technology Data Exchange (ETDEWEB) Kurudirek, Murat, E-mail: [email protected] [Faculty of Science, Department of Physics, Ataturk University, 25240 Erzurum (Turkey); Dogan, Bekir [Faculty of Science, Department of Physics, Ataturk University, 25240 Erzurum (Turkey); Ingec, Metin [Faculty of Medicine, Department of Obstetrics and Gynecology, Ataturk University, 25240 Erzurum (Turkey); Ekinci, Neslihan; Ozdemir, Yueksel [Faculty of Science, Department of Physics, Ataturk University, 25240 Erzurum (Turkey) 2011-02-15 Human tissues with endometriosis have been analyzed in terms of energy absorption (EABF) and exposure (EBF) buildup factors using the five-parameter geometric progression (G-P) fitting formula in the energy region 0.015-15 MeV up to a penetration depth of 40 mfp (mean free path). Chemical compositions of the tissue samples were determined using a wavelength dispersive X-ray fluorescence spectrometer (WDXRFS). Possible conclusions were drawn due to significant variations in EABF and EBF for the selected tissues when photon energy, penetration depth and chemical composition changed. Buildup factors so obtained may be of use when the method of choice for treatment of endometriosis is radiotherapy. 12. Mass energy-absorption coefficients and average atomic energy-absorption cross-sections for amino acids in the energy range 0.122-1.330 MeV Energy Technology Data Exchange (ETDEWEB) More, Chaitali V., E-mail: [email protected]; Lokhande, Rajkumar M.; Pawar, Pravina P., E-mail: [email protected] [Department of physics, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004 (India) 2016-05-06 Mass attenuation coefficients of amino acids such as n-acetyl-l-tryptophan, n-acetyl-l-tyrosine and d-tryptophan were measured in the energy range 0.122-1.330 MeV. 
A NaI(Tl) scintillation detection system was used to detect gamma rays with a resolution of 8.2% at 0.662 MeV. The measured attenuation coefficient values were then used to determine the mass energy-absorption coefficients (μ{sub en}/ρ) and average atomic energy-absorption cross sections (σ{sub a,en}) of the amino acids. Theoretical values were calculated based on XCOM data. Theoretical and experimental values are found to be in good agreement. 13. Investigation on energy absorption efficiency of each layer in ballistic armour panel for applications in hybrid design OpenAIRE Yang, Yanfei; Chen, Xiaogang 2017-01-01 This study aims to reveal the different energy absorption efficiency of each layer when an armour panel is under ballistic impact. Through Finite Element (FE) modelling and ballistic tests, it is found that when fabrics are layered up in a panel, energy absorption efficiency is only 30%–60% of that of an individual fabric layer with a free boundary condition. In addition, fabric layers in front, middle, and back exhibit different ballistic characteristics. Therefore, a new hybrid design principle has been pro... 14. Hybrid compression/absorption type heat utilization system (eco-energy city project) Energy Technology Data Exchange (ETDEWEB) Karimata, T.; Susami, S.; Ogawa, Y. [Research and Development Dept., EBARA Corp., Kanagawa pref. (Japan) 1999-07-01 This research is intended to develop a 'hybrid compression/absorption type heat utilization system' by combining an absorption process with a compression process in one circulation cycle. This system can produce chilling heat for ice thermal storage by utilizing low-temperature waste heat (lower than 100 °C) which is impossible to treat with a conventional absorption chiller. It means that this system will be able to solve the problem of a timing mismatch between waste heat and heat demand. The working fluid used in this proposed system should be suitable for producing ice, be safe, and not damage the ozone layer. 
In this project, new working fluids were sought as substitutes for the existing H{sub 2}O/LiBr or NH{sub 3}/H{sub 2}O. As interim results of this project, in 1997 a testing unit using NH{sub 3}/H{sub 2}O was built for demonstration of the system and evaluation of its characteristics, and R134a/E181 was found to be one of the good working fluids for this system. The COP (ratio of energy of ice produced to electric power provided) of this system using R134a/E181 is expected to achieve 5.5 by computer simulation. The testing unit with this working fluid was built recently and prepared for the tests to confirm the result of the simulation. (orig.) 15. A laser heating facility for energy-dispersive X-ray absorption spectroscopy DEFF Research Database (Denmark) Kantor, Innokenty; Marini, C.; Mathon, O. 2018-01-01 A double-sided laser heating setup for diamond anvil cells installed on the ID24 beamline of the ESRF is presented here. The setup geometry is specially adapted to the needs of energy-dispersive X-ray absorption spectroscopic (XAS) studies of materials under extreme pressure and temperature conditions. We illustrate the performance of the facility with a study on metallic nickel at 60 GPa. The XAS data provide the temperature of the melting onset and quantitative information on the structural parameters of the first coordination shell in the hot solid up to melting. 16. Integrated Spectral Energy Distributions and Absorption Feature Indices of Single Stellar Populations OpenAIRE Zhang, Fenghui; Han, Zhanwen; Li, Lifang; Hurley, Jarrod R. 2004-01-01 Using evolutionary population synthesis, we present integrated spectral energy distributions and absorption-line indices defined by the Lick Observatory image dissector scanner (referred to as Lick/IDS) system, for an extensive set of instantaneous burst single stellar populations (SSPs). The ages of the SSPs are in the range 1-19 Gyr and the metallicities [Fe/H] are in the range -2.3 to 0.2. 
Our models use the rapid single stellar evolution algorithm of Hurley, Pols and Tout for the stellar e... 17. Extension to Low Energies (<7 keV) of High Pressure X-Ray Absorption Spectroscopy International Nuclear Information System (INIS) Itie, J.-P.; Flank, A.-M.; Lagarde, P.; Idir, M.; Polian, A.; Couzinet, B. 2007-01-01 High pressure x-ray absorption has been performed down to 3.6 keV, thanks to the new LUCIA beamline (SLS, PSI) and to the use of perforated diamonds or Be gaskets. Various experimental geometries are proposed, depending on the energy of the edge and on the concentration of the studied element. A few examples will be presented: BaTiO3 at the titanium K edge, Zn0.95Mn0.05O at the manganese K edge, and KCl at the potassium K edge. 18. X-ray Absorption Spectroscopy Characterization of Electrochemical Processes in Renewable Energy Storage and Conversion Devices Energy Technology Data Exchange (ETDEWEB) Farmand, Maryam [George Washington Univ., Washington, DC (United States) 2013-05-19 The development of better energy conversion and storage devices, such as fuel cells and batteries, is crucial for reduction of our global carbon footprint and improving the quality of the air we breathe. However, both of these technologies face important challenges. The development of lower cost and better electrode materials, which are more durable and allow more control over the electrochemical reactions occurring at the electrode/electrolyte interface, is perhaps most important for meeting these challenges. Hence, full characterization of the electrochemical processes that occur at the electrodes is vital for intelligent design of more energy efficient electrodes. X-ray absorption spectroscopy (XAS) is a short-range order, element specific technique that can be utilized to probe the processes occurring at operating electrode surfaces, as well as for studying the amorphous materials and nano-particles making up the electrodes. 
It has been increasingly used in recent years to study fuel cell catalysts through application of the Δμ XANES technique, in combination with the more traditional X-ray Absorption Near Edge Structure (XANES) and Extended X-ray Absorption Fine Structure (EXAFS) techniques. The Δμ XANES data analysis technique, previously developed and applied to heterogeneous catalysts and fuel cell electrocatalysts by the GWU group, was extended in this work to provide for the first time space-resolved adsorbate coverages on both electrodes of a direct methanol fuel cell. Even more importantly, the Δμ technique was applied for the first time to battery-relevant materials, where bulk properties such as the oxidation state and local geometry of a cathode are followed. 19. On the determination of the energy of antiprotonic X-rays by critical absorption and the theoretical discussion of results International Nuclear Information System (INIS) Joedicke, B. 1985-06-01 This work examines the possibility of measuring the energies of antiprotonic X-rays by critical absorption. Scanning the periodic table, many isotopes are found where the energy of an antiprotonic X-ray coincides with a K-absorption-edge of a chemical element. Those candidates where the energy can be measured with high accuracy are discussed here. Also a computer program which calculates transition energies of antiprotonic atoms is examined. Necessary additions are listed and the corrections are shown. In combination with this program the candidates are the basis for a precise determination of the mass of the antiproton. (orig.) [de 20. Multiscale deformable registration for dual-energy x-ray imaging International Nuclear Information System (INIS) Gang, G. J.; Varon, C. A.; Kashani, H.; Richard, S.; Paul, N. S.; Van Metter, R.; Yorkston, J.; Siewerdsen, J. H. 
2009-01-01 Dual-energy (DE) imaging of the chest improves the conspicuity of subtle lung nodules through the removal of overlying anatomical noise. Recent work has shown double-shot DE imaging (i.e., successive acquisition of low- and high-energy projections) to provide detective quantum efficiency, spectral separation (and therefore contrast), and radiation dose superior to single-shot DE imaging configurations (e.g., with a CR cassette). However, the temporal separation between high-energy (HE) and low-energy (LE) image acquisition can result in motion artifacts in the DE images, reducing image quality and diminishing diagnostic performance. This has motivated the development of a deformable registration technique that aligns the HE image onto the LE image before DE decomposition. The algorithm reported here operates in multiple passes at progressively smaller scales and increasing resolution. The first pass addresses large-scale motion by means of mutual information optimization, while successive passes (2-4) correct misregistration at finer scales by means of normalized cross correlation. Evaluation of registration performance in 129 patients imaged using an experimental DE imaging prototype demonstrated a statistically significant improvement in image alignment. Specific to the cardiac region, the registration algorithm was found to outperform a simple cardiac-gating system designed to trigger both HE and LE exposures during diastole. Modulation transfer function (MTF) analysis reveals additional advantages in DE image quality in terms of noise reduction and edge enhancement. This algorithm could offer an important tool in enhancing DE image quality and potentially improving diagnostic performance. 1. Fractional energy absorption from beta-emitting particles in the rat lung International Nuclear Information System (INIS) Snipes, M.B. 
1977-01-01 Forty-four male Fischer-344 rats were exposed nose-only to an aerosol of 144 Ce in fused aluminosilicate particles to obtain a relatively insoluble lung burden of this material. Twenty-eight rats, ages 12 to 25 weeks with body weights of 183 to 337 grams, were analyzed seven to nine days after exposure; lung burdens were 13 to 82 nCi. An additional group of 16 rats was exposed when 12 weeks old and maintained for six months prior to analysis; body weights and lung burdens at six months after exposure ranged from 276 to 368 grams and 16 to 46 nCi, respectively. Lungs were analyzed, inflated and deflated, in a 4π beta spectrometer to determine fractional energy absorption for 144 Ce. Over the relatively narrow range of sizes, 0.88 to 1.66 grams, for lungs in this study the average fractional energy absorption and its standard deviation was 0.23 ± 0.078 for the inflated lung and 0.40 ± 0.087 for the deflated lung. 2. Damage Analysis and Evaluation of High Strength Concrete Frame Based on Deformation-Energy Damage Model Directory of Open Access Journals (Sweden) Huang-bin Lin 2015-01-01 Full Text Available A new method of characterizing the damage of high strength concrete structures is presented, which is based on the deformation energy double-parameter damage model and incorporates both of the main forms of damage by earthquakes: first-time damage beyond destruction and energy consumption. Firstly, test data of high strength reinforced concrete (RC) columns were evaluated. Then, the relationship between stiffness degradation, strength degradation, and ductility performance was obtained. An expression for damage in terms of model parameters was determined, as well as the critical input data for the restoring force model to be used in analytical damage evaluation. Experimentally, the unloading stiffness was found to be related to the cycle number. 
Then, a correction for this change was applied to better describe the unloading phenomenon and compensate for the shortcomings of structural elastic-plastic time-history analysis. The above algorithm was embedded into the IDARC program. Finally, a case study of high strength RC multistory frames was presented. Under various seismic wave inputs, the structural damages were predicted. The damage model and the correction algorithm for stiffness unloading were proved to be suitable and applicable in engineering design and damage evaluation of a high strength concrete structure. 3. On the Importance of Morphing Deformation Scheduling for Actuation Force and Energy Directory of Open Access Journals (Sweden) Roeland De Breuker 2016-11-01 Full Text Available Morphing aircraft offer superior properties as compared to non-morphing aircraft. They can achieve this by adapting their shape depending on the requirements of various conflicting flight conditions. These shape changes are often associated with large deformations and strains, and hence dedicated morphing concepts are developed to carry out the required changes in shape. Such intricate mechanisms are often heavy, which reduces, or even completely cancels, the performance increase of the morphing aircraft. Part of this weight penalty is determined by the required actuators and associated batteries, which are mainly driven by the required actuation force and energy. Two underexposed influences on the actuation force and energy are the flight condition at which morphing should take place and the order of the morphing manoeuvres, also called morphing scheduling. This paper aims at highlighting the importance of both influences by using a small Unmanned Aerial Vehicle (UAV) with different morphing mechanisms as an example. The results in this paper are generated using a morphing aircraft analysis and design code that was developed at the Delft University of Technology. 
The importance of the flight condition and a proper morphing schedule is demonstrated by investigating the required actuation forces for various flight conditions and morphing sequences. More importantly, the results show that there is not necessarily one optimal flight condition or morphing schedule and a tradeoff needs to be made. 4. Direct observation of radial distribution change during tensile deformation of metallic glass by high energy X-ray diffraction method Energy Technology Data Exchange (ETDEWEB) Nasu, Toshio, E-mail: [email protected] [Faculty of Education, Arts and Science, Yamagata University, 1-4-12 Kojirakawa, Yamagata, Yamagata, 990-8560 (Japan); Sasaki, Motokatsu [Faculty of Education, Arts and Science, Yamagata University, 1-4-12 Kojirakawa, Yamagata, Yamagata, 990-8560 (Japan); Usuki, Takeshi; Sekine, Mai [Faculty of Science, Yamagata University, Yamagata 990-8560 (Japan); Takigawa, Yorinobu; Higashi, Kenji [Graduate School of Engineering, Osaka Prefecture University, Sakai 599-8531 (Japan); Kohara, Shinji [Japan Synchrotron Radiation Research Institute, Harima Science Garden City, Sayo town, Hyogo 679-5198 (Japan); Sakurai, Masaki; Wei Zhang; Inoue, Akihisa [Institute for Materials Research, Tohoku University, Sendai 980-8577 (Japan) 2009-08-26 The purpose of this research is to investigate the micro-mechanism of the deformation behavior of metallic glasses. We report the results of direct observations of short-range and medium-range structural change during tensile deformation of metallic glasses by the high energy X-ray diffraction method. Cu{sub 50}Zr{sub 50} and Ni{sub 30}Zr{sub 70} metallic glass samples in ribbon shape (1.5 mm width and 25 μm thickness) were made using the rapid quenching method. Tensile deformation was applied to the sample using special equipment adapted for measuring the high energy X-ray diffraction. 
The peaks in the pair distribution function g(r) for Cu{sub 50}Zr{sub 50} and Ni{sub 30}Zr{sub 70} metallic glasses move back and forth in a zigzag manner during tensile deformation. These results of direct observation of the atomic distribution change for Cu{sub 50}Zr{sub 50} and Ni{sub 30}Zr{sub 70} metallic glass ribbons during tensile deformation suggest that micro-relaxations occur. 5. Role of rotational energy and deformations in the dynamics of {sup 6}Li+{sup 90}Zr reaction Energy Technology Data Exchange (ETDEWEB) Kaur, Gurvinder; Grover, Neha; Sandhu, Kirandeep; Sharma, Manoj K., E-mail: [email protected] 2014-07-15 In reference to recent experimental data, the dynamical cluster-decay model (DCM) has been applied to study the neutron evaporation residue (ER) cross sections of the intermediate mass nucleus {sup 96}Tc{sup ⁎} spread over a wide range of incident energy across the Coulomb barrier. In order to analyze the effect of rotational energy in the dynamics of the {sup 6}Li+{sup 90}Zr reaction, the cross sections have been calculated using the sticking (I{sub S}) and the non-sticking (I{sub NS}) limits of the moment of inertia with inclusion of quadrupole (β{sub 2}) deformation within the optimum orientation approach. The effect of either of the two approaches on the angular momentum, and hence the rotational energy associated with it, is assessed through the fragment mass distribution, preformation factor and the barrier penetrability. Also, the role of deformations is studied through a comparative analysis of the decay path for spherical and β{sub 2}-deformed fragmentation. The calculated evaporation residue cross sections show excellent agreement with the reported data at all incident energies for both the spherical and β{sub 2}-deformed approach. Finally, the incomplete fusion (ICF) process observed due to the loosely bound projectile {sup 6}Li is addressed within the framework of DCM. 6. Vibration energy absorption in the whole-body system of a tractor operator. 
Science.gov (United States) Szczepaniak, Jan; Tanaś, Wojciech; Kromulski, Jacek 2014-01-01 Many people are exposed to whole-body vibration (WBV) in their occupational lives, especially drivers of vehicles such as tractors and trucks. The main categories of effects from WBV are perception, degraded comfort, interference with activities, impaired health and occurrence of motion sickness. Absorbed power is defined as the power dissipated in a mechanical system as a result of an applied force. The vibration-induced injuries or disorders in a substructure of the human system are primarily associated with the vibration power absorption distributed in that substructure. The vibration power absorbed by the exposed body is a measure that combines both the vibration hazard and the biodynamic response of the body. The article presents a measurement method for determining the vibration power dissipated in the human whole-body system, called Vibration Energy Absorption (VEA). The vibration power is calculated from the real part of the force-velocity cross-spectrum. The absorbed power in the frequency domain can be obtained from the cross-spectrum of the force and velocity. In the context of the vibration energy transferred to a seated human body, the real component reflects the energy dissipated in the biological structure per unit of time, whereas the imaginary component reflects the energy stored/released by the system. The seated human is modeled as a series/parallel 4-DOF dynamic model. After introduction of the excitation, the response in particular segments of the model can be analyzed. As an example, the vibration power dissipated in an operator has been determined as a function of the agricultural combination operating speed 1.39-4.16 m s(-1). 7. 
Vibration energy absorption in the whole-body system of a tractor operator Directory of Open Access Journals (Sweden) Jan Szczepaniak 2014-06-01 Full Text Available Many people are exposed to whole-body vibration (WBV) in their occupational lives, especially drivers of vehicles such as tractors and trucks. The main categories of effects from WBV are perception, degraded comfort, interference with activities, impaired health and occurrence of motion sickness. Absorbed power is defined as the power dissipated in a mechanical system as a result of an applied force. The vibration-induced injuries or disorders in a substructure of the human system are primarily associated with the vibration power absorption distributed in that substructure. The vibration power absorbed by the exposed body is a measure that combines both the vibration hazard and the biodynamic response of the body. The article presents a measurement method for determining the vibration power dissipated in the human whole-body system, called Vibration Energy Absorption (VEA). The vibration power is calculated from the real part of the force-velocity cross-spectrum. The absorbed power in the frequency domain can be obtained from the cross-spectrum of the force and velocity. In the context of the vibration energy transferred to a seated human body, the real component reflects the energy dissipated in the biological structure per unit of time, whereas the imaginary component reflects the energy stored/released by the system. The seated human is modeled as a series/parallel 4-DOF dynamic model. After introduction of the excitation, the response in particular segments of the model can be analyzed. As an example, the vibration power dissipated in an operator has been determined as a function of the agricultural combination operating speed 1.39–4.16 m s[sup]-1[/sup]. 8. 
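The absorbed-power calculation described in the two records above (the real part of the force-velocity cross-spectrum) can be sketched with NumPy; the scaling convention and names are my assumptions, not the authors' code:

```python
import numpy as np

def absorbed_power_spectrum(force, velocity, fs):
    """One-sided cross-spectral density of force [N] and velocity [m/s].
    Its real part, integrated over frequency, gives the mean power
    dissipated in the body; the imaginary part reflects power that is
    stored and released elastically."""
    n = len(force)
    F = np.fft.rfft(np.asarray(force, dtype=float))
    V = np.fft.rfft(np.asarray(velocity, dtype=float))
    S_fv = 2.0 * np.conj(F) * V / (fs * n)  # one-sided scaling
    return np.fft.rfftfreq(n, 1.0 / fs), S_fv

# In-phase sinusoidal force and velocity: purely dissipative case.
fs, n = 1000.0, 1000
t = np.arange(n) / fs
force = 2.0 * np.sin(2 * np.pi * 10 * t)
velocity = 3.0 * np.sin(2 * np.pi * 10 * t)
freqs, S = absorbed_power_spectrum(force, velocity, fs)
mean_power = np.sum(S.real) * fs / n  # ≈ 0.5 * 2 * 3 = 3 W
```

For an in-phase force and velocity all the power appears in the real part, consistent with the interpretation quoted in the abstracts.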
Energy levels and far-infrared optical absorption of impurity doped semiconductor nanorings: Intense laser and electric fields effects Energy Technology Data Exchange (ETDEWEB) Barseghyan, M.G., E-mail: [email protected] 2016-11-10 Highlights: • The electron-impurity interaction on energy levels in nanoring have been investigated. • The electron-impurity interaction on far-infrared absorption have been investigated. • The energy levels are more stable for higher values of electric field. - Abstract: The effects of electron-impurity interaction on energy levels and far-infrared absorption in semiconductor nanoring under the action of intense laser and lateral electric fields have been investigated. Numerical calculations are performed using exact diagonalization technique. It is found that the electron-impurity interaction and external fields change the energy spectrum dramatically, and also have significant influence on the absorption spectrum. Strong dependence on laser field intensity and electric field of lowest energy levels, also supported by the Coulomb interaction with impurity, is clearly revealed. 9. A parameterization scheme for the x-ray linear attenuation coefficient and energy absorption coefficient. Science.gov (United States) Midgley, S M 2004-01-21 A novel parameterization of x-ray interaction cross-sections is developed, and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy dependent coefficients describe the Z-direction curvature of the cross-sections. The composition dependent quantities are the electron density and statistical moments describing the elemental distribution. 
We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV), with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements (1 ≤ Z ≤ 20), still fewer coefficients are needed. At higher energies, the parameterization uses fewer coefficients, with only two coefficients needed at megavoltage energies. 10. Numerical study for identification of influence of energy absorption and frontal crush for vehicle crashworthiness Science.gov (United States) Suman, Shwetabh; Shah, Haard; Susarla, Vaibhav; Ravi, K. 2017-11-01 According to statistics, nearly 400 people are killed in road accidents every day. Passenger safety has therefore become an important concern, and it can be addressed by improving the crashworthiness of the vehicle. During an impact, a large amount of energy is released which, if not absorbed, will be transmitted to the passenger compartment. For the safety of the passenger this energy has to be absorbed. The front rail is one of the main energy-absorbing components in the vehicle front structure. The structure and material of a component designed for crash are chosen on the basis of three parameters: specific energy absorption, mass of the front rail and maximum crush force. In this work, we consider different internal geometries with different materials to increase the energy-absorbing capacity of the front rail. Based on the extensive analysis carried out, aluminium emerges as the optimal material for frontal crash. 11.
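The three design parameters named in the crashworthiness entry above (specific energy absorption, mass, maximum crush force) can all be computed from a crush force-displacement trace. The sketch below uses a made-up, idealized force law and an assumed rail mass purely for illustration; none of the numbers come from the study.

```python
import numpy as np

# Made-up crush force-displacement trace for a hypothetical front rail:
# a mean plateau load with a sinusoidal ripple mimicking progressive folds.
x = np.linspace(0.0, 0.3, 301)               # axial crush (m)
force = 80e3 + 20e3 * np.sin(40.0 * x)       # crush force (N), assumed law

# Energy absorbed = area under the force-displacement curve (trapezoid rule)
energy_absorbed = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(x))  # J

peak_crush_force = force.max()               # N, maximum crush force
rail_mass = 4.2                              # kg, assumed front-rail mass
sea = energy_absorbed / rail_mass            # J/kg, specific energy absorption
```

A good crash structure maximizes `sea` while keeping `peak_crush_force` below the level that would injure occupants, which is why all three quantities are tracked together.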
Experimental investigation on charging and discharging performance of absorption thermal energy storage system International Nuclear Information System (INIS) Zhang, Xiaoling; Li, Minzhi; Shi, Wenxing; Wang, Baolong; Li, Xianting 2014-01-01 Highlights: • A prototype of ATES using LiBr/H2O was designed and built. • Charging and discharging performances of the ATES system were investigated. • ESE and ESD for cooling, domestic hot water and heating were obtained. - Abstract: Because of its high thermal storage density and small heat loss, absorption thermal energy storage (ATES) is known as a potential thermal energy storage (TES) technology. To investigate the performance of the ATES system with LiBr-H2O, a prototype with 10 kWh cooling storage capacity was designed and built. The experiments demonstrated that the charging and discharging processes succeed in producing 7 °C chilled water, 65 °C domestic hot water, or 43 °C heating water to meet the user's requirements. Characteristics such as temperature, concentration and power variation of the ATES system during the charging and discharging processes were investigated. The performance of the ATES system for supplying cooling, heating or domestic hot water was analyzed and compared. The results indicate that the energy storage efficiencies (ESE) for cooling, domestic hot water and heating are 0.51, 0.97 and 1.03, respectively, and the energy storage densities (ESD) for cooling, domestic hot water and heating reach 42, 88 and 110 kWh/m³, respectively. The performance is better than those of previous TES systems, which proves that the ATES system using LiBr-H2O may be a good option for thermal energy storage. 12. CT dose equilibration and energy absorption in polyethylene cylinders with diameters from 6 to 55 cm International Nuclear Information System (INIS) Li, Xinhua; Zhang, Da; Liu, Bob 2015-01-01 Purpose: ICRU Report No.
87 Committee and AAPM Task Group 200 designed a three-sectional polyethylene phantom of 30 cm in diameter and 60 cm in length for evaluating the midpoint dose D_L(0) and its rise-to-the-equilibrium curve H(L) = D_L(0)/D_eq from computed tomography (CT) scanning, where D_eq is the equilibrium dose. To aid the use of the phantom in radiation dose assessment and to gain an understanding of dose equilibration and energy absorption in polyethylene, the authors evaluated the short (20 cm) to long (60 cm) phantom dose ratio with a polyethylene diameter of 30 cm, assessed H(L) in polyethylene cylinders of 6-55 cm in diameter, and examined energy absorption in these cylinders. Methods: A GEANT4-based Monte Carlo program was used to simulate single axial scans of polyethylene cylinders (diameters 6-55 cm and length 90 cm, as well as diameter 30 cm and lengths 20 and 60 cm) on a clinical CT scanner (Somatom Definition dual source CT, Siemens Healthcare). Axial dose distributions were computed on the phantom central and peripheral axes. An average dose over the central 23 or 100 mm region was evaluated for modeling dose measurement using a 0.6 cm³ thimble chamber or a 10 cm long pencil ion chamber, respectively. The short (20 cm) to long (90 cm) phantom dose ratios were calculated for the 30 cm diameter polyethylene phantoms scanned at four tube voltages (80-140 kV) and a range of beam apertures (1-25 cm). H(L) was evaluated using the dose integrals computed with the 90 cm long phantoms. The resultant H(L) data were subsequently used to compute the fraction of the total energy absorbed inside or outside the scan range (E_in/E or E_out/E) on the phantom central and peripheral axes, where E = L·D_eq was the total energy absorbed along the z axis. Results: The midpoint dose in the 60 cm long polyethylene phantom was equal to that in the 90 cm long polyethylene phantom. The short-to-long phantom dose ratios changed with beam aperture and 13.
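The rise-to-equilibrium curve H(L) = D_L(0)/D_eq used in the entry above can be illustrated with a toy model. The dose-spread function below (a narrow primary core plus an exponential scatter tail) and its parameters are assumptions for demonstration only, not the paper's Monte Carlo results; the point is that the midpoint dose of a scan of length L is the integral of the dose-spread function over the scan range, so H(L) rises monotonically toward 1.

```python
import numpy as np

# Toy single-rotation dose-spread function along the phantom axis z (mm):
# a narrow primary core plus an exponential scatter tail (assumed shape).
z = np.linspace(-450.0, 450.0, 9001)
dz = z[1] - z[0]
dsf = 0.4 * np.exp(-np.abs(z) / 5.0) + 0.6 * np.exp(-np.abs(z) / 100.0)

# Equilibrium dose: the midpoint dose of an infinitely long scan
d_eq = np.sum(dsf) * dz

def H(L):
    """Rise-to-equilibrium value for a scan of length L (mm): D_L(0)/D_eq."""
    inside = np.abs(z) <= L / 2.0
    return np.sum(dsf[inside]) * dz / d_eq
```

Because the scatter tails extend far beyond the primary beam, short scans capture only part of the tail dose and H(L) stays well below 1 until L is several tail lengths long.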
CT dose equilibration and energy absorption in polyethylene cylinders with diameters from 6 to 55 cm Energy Technology Data Exchange (ETDEWEB) Li, Xinhua; Zhang, Da; Liu, Bob, E-mail: [email protected] [Division of Diagnostic Imaging Physics and Webster Center for Advanced Research and Education in Radiation, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States) 2015-06-15 Purpose: ICRU Report No. 87 Committee and AAPM Task Group 200 designed a three-sectional polyethylene phantom of 30 cm in diameter and 60 cm in length for evaluating the midpoint dose D_L(0) and its rise-to-the-equilibrium curve H(L) = D_L(0)/D_eq from computed tomography (CT) scanning, where D_eq is the equilibrium dose. To aid the use of the phantom in radiation dose assessment and to gain an understanding of dose equilibration and energy absorption in polyethylene, the authors evaluated the short (20 cm) to long (60 cm) phantom dose ratio with a polyethylene diameter of 30 cm, assessed H(L) in polyethylene cylinders of 6-55 cm in diameter, and examined energy absorption in these cylinders. Methods: A GEANT4-based Monte Carlo program was used to simulate single axial scans of polyethylene cylinders (diameters 6-55 cm and length 90 cm, as well as diameter 30 cm and lengths 20 and 60 cm) on a clinical CT scanner (Somatom Definition dual source CT, Siemens Healthcare). Axial dose distributions were computed on the phantom central and peripheral axes. An average dose over the central 23 or 100 mm region was evaluated for modeling dose measurement using a 0.6 cm³ thimble chamber or a 10 cm long pencil ion chamber, respectively. The short (20 cm) to long (90 cm) phantom dose ratios were calculated for the 30 cm diameter polyethylene phantoms scanned at four tube voltages (80-140 kV) and a range of beam apertures (1-25 cm). H(L) was evaluated using the dose integrals computed with the 90 cm long phantoms.
The resultant H(L) data were subsequently used to compute the fraction of the total energy absorbed inside or outside the scan range (E_in/E or E_out/E) on the phantom central and peripheral axes, where E = L·D_eq was the total energy absorbed along the z axis. Results: The midpoint dose in the 60 cm long polyethylene phantom was equal to that in the 90 cm long polyethylene phantom. The short-to-long phantom dose 14. Evaluation of bulk and surfaces absorption edge energy of sol-gel-dip-coating SnO2 thin films Directory of Open Access Journals (Sweden) Emerson Aparecido Floriano 2010-12-01 Full Text Available The absorption edge and the bandgap transition of sol-gel-dip-coating SnO2 thin films, deposited on quartz substrates, are evaluated from optical absorption data and temperature-dependent photoconductivity spectra. Structural properties of these films help the interpretation of the nature of the bandgap transition, since the obtained nanosized dimensions of crystallites are determinant for the dominant growth direction and, thus, the absorption energy. Electronic properties of the bulk and the (110) and (101) surfaces are also presented, calculated by means of density functional theory applied to periodic calculations at the B3LYP hybrid functional level. The experimentally obtained absorption edge is compared to the calculated energy band diagrams of the bulk and the (110) and (101) surfaces. The overall calculated electronic properties, in conjunction with structural and electro-optical experimental data, suggest that the nature of the bandgap transition is related to a combined effect of the bulk and the (101) surface, which presents a direct bandgap transition. 15.
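Evaluating an absorption edge from optical absorption data, as in entry 14 above, is commonly done with a Tauc-type plot. The sketch below generates synthetic direct-gap data obeying (αhν)² ∝ (hν − Eg) and recovers Eg by linear extrapolation to the energy axis; the gap value, prefactor and energy grid are assumptions, not the paper's measurements.

```python
import numpy as np

# Synthetic direct-allowed-transition data: (alpha*h*nu)^2 = C*(h*nu - Eg)
# above the edge, zero below it. Eg and C are assumed illustration values.
Eg_true = 3.6                                   # eV, assumed direct gap
E = np.linspace(3.0, 4.2, 200)                  # photon energy grid (eV)
tauc = np.where(E > Eg_true, 50.0 * (E - Eg_true), 0.0)

# Fit the linear region well above the edge; the intercept of the fitted
# line with the energy axis (where it reaches zero) estimates the band gap.
mask = tauc > 0.2 * tauc.max()
slope, intercept = np.polyfit(E[mask], tauc[mask], 1)
Eg_est = -intercept / slope                     # recovered gap (eV)
```

With noisy measured spectra, the choice of the "linear region" becomes the main source of uncertainty, which is one motivation for the differential dα/dE analysis described in a later entry.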
Anelastic deformation processes in metallic glasses and activation energy spectrum model NARCIS (Netherlands) Ocelik, Vaclav; Csach, K; Kasardova, A; Bengus, VZ 1997-01-01 The isothermal kinetics of anelastic deformation below the glass transition temperature (so-called 'stress induced ordering' or 'creep recovery' deformation) was investigated in Ni-Si-B metallic glass. The relaxation time spectrum model and two recently developed methods for its calculation from the 16. Experimental analysis of a diffusion absorption refrigeration system using alternative energy sources International Nuclear Information System (INIS) Soezen, A.; Oezbas, E. 2009-01-01 The continuous-cycle absorption refrigeration device is widely used in domestic refrigerators and recreational vehicles. It is also used in year-round air conditioning of both homes and larger buildings. The unit consists of four main parts: the boiler, condenser, evaporator and absorber. When the unit operates on kerosene or gas, the heat is supplied by a burner; this element is fitted underneath the central tube. When operating on electricity, the heat is supplied by an element inserted in the pocket. No moving parts are employed. The operation of the refrigerating mechanism is based on Dalton's law. In this study, an experimental analysis was performed of a diffusion absorption refrigeration system (DARS) using alternative energy sources such as solar and liquefied petroleum gas (LPG). Two basic DAR cycles were set up and investigated: i) In the first cycle (DARS-1), the condensate is sub-cooled prior to the evaporator entrance by the coupled evaporator/gas heat exchanger, similar to the design manufactured by Electrolux, Sweden. ii) In the second cycle (DARS-2), the condensate is not sub-cooled prior to the evaporator entrance and the gas heat exchanger is separated from the evaporator. (author) 17.
Energy and Momentum Relaxation Times of 2D Electrons Due to Near Surface Deformation Potential Scattering Science.gov (United States) 1997-03-01 The low-temperature energy and momentum relaxation rates of a 2D electron gas placed near the free or clamped surface of a semi-infinite sample are calculated. To describe the electron-acoustic phonon interaction with allowance for the surface effect, the method of elasticity-theory Green functions was used. This method allows one to take into account the reflection of acoustic waves from the surface and the related mutual conversion of LA and TA waves. It is shown that the strength of the deformation potential scattering at low temperatures substantially depends on the mechanical conditions at the surface: relaxation rates are suppressed for the free surface, while for the rigid one the rates are enhanced. The dependence of the conductivity on the distance between the 2D layer and the surface is discussed. The effect is most pronounced in the range of temperatures 2 s_l p_F < T < 2ħ s_l / d, where p_F is the Fermi momentum, s_l is the velocity of LA waves, and d is the width of the quantum well. 18. Non-thermal plasma instabilities induced by deformation of the electron energy distribution function Science.gov (United States) Dyatko, N. A.; Kochetov, I. V.; Napartovich, A. P. 2014-08-01 Non-thermal plasma is a key component in gas lasers, microelectronics, medical applications, waste gas cleaners, ozone generators, plasma igniters, flame holders, flow control in high-speed aerodynamics and others. A specific feature of non-thermal plasma is its high sensitivity to variations in governing parameters (gas composition, pressure, pulse duration, the E/N parameter). This sensitivity is due to complex deformations of the electron energy distribution function (EEDF) shape induced by variations in electric field strength, electron and ion number densities and the degree of gas excitation.
Particular attention in this article is paid to mechanisms of instabilities based on the non-linearity of plasma properties under specific conditions: gas composition, steady-state and decaying plasma produced by an electron beam or by an electric current pulse. The following effects are analyzed: negative differential electron conductivity; absolute negative electron mobility; stepwise changes of plasma properties induced by EEDF bi-stability; thermo-current instability; and the constriction of the glow discharge column in rare gases. Some of these effects were observed experimentally, and some of them were theoretically predicted and still await experimental confirmation. 19. Determination of the activation energy of A-center in the uniaxially deformed n-Ge single crystals Directory of Open Access Journals (Sweden) S. V. Luniov 2017-08-01 Full Text Available Based on solutions of the electroneutrality equation and experimental piezo-Hall-effect measurements, the dependences of the activation energy of the deep-level A-center on uniaxial pressure along the crystallographic directions [100], [110] and [111] are obtained for n-Ge single crystals irradiated by electrons with an energy of 10 MeV. Using the method of least squares, approximating polynomials for these dependences are obtained. It is shown that the activation energy of the A-center deep level decreases linearly over the entire range of uniaxial pressure along the crystallographic direction [100]. For uniaxial deformation along the crystallographic directions [110] and [111], a linear decrease of the activation energy is observed only at high uniaxial pressures, when the A-center deep level interacts with those minima of the germanium conduction band that become the lowest under deformation.
The various dependences of the activation energy of the A-center on the orientation of the deformation axis may be connected with features of its microstructure. 20. Hip fractures. Epidemiology, risk factors, falls, energy absorption, hip protectors, and prevention DEFF Research Database (Denmark) Lauritzen, J B 1997-01-01 The present review summarizes the pathogenic mechanisms leading to hip fracture based on epidemiological, experimental, and controlled studies. The estimated lifetime risk of hip fracture is about 14% in postmenopausal women and 6% in men. The incidence of hip fractures increases exponentially... Nursing home residents have a high risk of hip fracture (annual rate of 5-6%), and the incidence of falls is about 1,500 falls/1,000 persons/year. Most hip fractures are a result of a direct trauma against the hip. The incidence of falls on the hip among nursing home residents is about 290 falls/1,000 persons/year and about ...%, corresponding to 9 out of 247 residents saved from sustaining a hip fracture. The review points to the essentials of the development of hip fracture, which constitute: risk of fall, type of fall, type of impact, energy absorption, and lastly bone strength, which is the ultimate and last permissive factor... 1. Energy and Exergy Based Optimization of LiCl-Water Absorption Cooling System Directory of Open Access Journals (Sweden) Bhargav Pandya 2017-06-01 Full Text Available This study presents a thermodynamic analysis and optimization of a single-effect LiCl-H2O absorption cooling system. Thermodynamic models are implemented in Engineering Equation Solver to compute the optimum performance parameters. In this study, the cut-off temperature for operating the system has been obtained at various operating temperatures. The analysis shows that for a 3.59% rise in evaporator temperature, the required cut-off temperature decreased by 12.51%.
Through a realistic comparison of thermodynamic first- and second-law analyses, the optimum generator temperature from both the energy and exergy perspectives has been evaluated. It is found that the optimum generator temperature is a strong function of the evaporator and condenser temperatures. Thus, it is feasible to find the optimum generator temperature for various combinations of evaporator and condenser temperatures. Contour plots of the optimum generator temperature for several combinations of condenser and absorber temperatures have also been depicted. 2. A study of the energy absorption and exposure buildup factors of some anti-inflammatory drugs International Nuclear Information System (INIS) Ekinci, Neslihan; Kavaz, Esra; Özdemir, Yüksel 2014-01-01 3. Failure mechanism and supporting measures for large deformation of Tertiary deep soft rock Institute of Scientific and Technical Information of China (English) Guo Zhibiao; Wang Jiong; Zhang Yuelin 2015-01-01 The Shenbei mining area in China contains typical soft rock from the Tertiary Period. As mining depths increase, deep soft rock roadways are damaged by large deformations and constantly need to be repaired to meet safety requirements, which poses a serious safety risk. In this study, the characteristics of deformation and failure of a typical roadway were analyzed; the fundamental reason for the roadway deformation was that traditional support methods and materials cannot control the large deformation of deep soft rock. Deep soft rock support technology was developed based on constant-resistance energy absorption using constant-resistance large-deformation bolts. The correlative deformation mechanisms of the surrounding rock and the bolt were analyzed to understand the principle of constant-resistance energy absorption. The new technology works well on-site and provides a new method for the excavation of roadways in Tertiary deep soft rock. 4. Increase the absorption plasm and the flow of light energy in ultra ...
African Journals Online (AJOL) Silicon thin-film solar cells suffer from low absorption in the visible region, which reduces their efficiency. The use of metallic nanostructures helps to increase light absorption and allows the size of the entire structure to be reduced. The process of light absorption in solar cells is one of the factors in improving the performance of solar ... 5. Mass absorption and mass energy transfer coefficients for 0.4-10 MeV gamma rays in elemental solids and gases Energy Technology Data Exchange (ETDEWEB) Gurler, O. [Physics Department, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey)], E-mail: [email protected]; Oz, H. [Physics Department, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Yalcin, S. [Education Faculty, Kastamonu University, 37200 Kastamonu (Turkey); Gundogdu, O. [Department of Physics, University of Surrey, Guildford GU2 7XH (United Kingdom); NCCPM, Medical Physics, Royal Surrey County Hospital, GU2 7XX (United Kingdom) 2009-01-15 The mass energy absorption, mass energy transfer and mass absorption coefficients have been widely used for problems and applications involving dose calculations. Direct measurements of the coefficients are difficult, and theoretical computations are usually employed. In this paper, analytical equations are presented for determining the mass energy transfer and mass absorption coefficients for gamma rays with an incident energy range between 0.4 and 10 MeV in nitrogen, silicon, carbon, copper and sodium iodide. The mass absorption and mass energy transfer coefficients for gamma rays were calculated, and the results obtained were compared with the values reported in the literature. 6. Mass absorption and mass energy transfer coefficients for 0.4-10 MeV gamma rays in elemental solids and gases International Nuclear Information System (INIS) Gurler, O.; Oz, H.; Yalcin, S.; Gundogdu, O.
2009-01-01 The mass energy absorption, mass energy transfer and mass absorption coefficients have been widely used for problems and applications involving dose calculations. Direct measurements of the coefficients are difficult, and theoretical computations are usually employed. In this paper, analytical equations are presented for determining the mass energy transfer and mass absorption coefficients for gamma rays with an incident energy range between 0.4 and 10 MeV in nitrogen, silicon, carbon, copper and sodium iodide. The mass absorption and mass energy transfer coefficients for gamma rays were calculated, and the results obtained were compared with the values reported in the literature 7. Numerical simulation of mechanisms of deformation, failure and energy dissipation in porous rock media subjected to wave stresses Institute of Scientific and Technical Information of China (English) 2010-01-01 The pore characteristics, mineral compositions, physical and mechanical properties of the subarkose sandstones were acquired by means of CT scan, X-ray diffraction and physical tests. A few physical models possessing the same pore characteristics and matrix properties but different porosities compared to the natural sandstones were developed. The 3D finite element models of the rock media with varied porosities were established based on the CT image processing of the physical models and the MIMICS software platform. The failure processes of the porous rock media loaded by the split Hopkinson pressure bar (SHPB) were simulated by satisfying the elastic wave propagation theory. The dynamic responses, stress transition, deformation and failure mechanisms of the porous rock media subjected to the wave stresses were analyzed. It is shown that an explicit and quantitative analysis of the stress, strain and deformation and failure mechanisms of porous rocks under the wave stresses can be achieved by using the developed 3D finite element models. With applied wave stresses of certain
amplitude and velocity, no evident pore deformation was observed for the rock media with a porosity less than 15%. The deformation is dominantly the combination of microplasticity (shear strain), cracking (tensile strain) of the matrix and coalescence of the cracked regions around pores. Shear stresses lead to microplasticity, while tensile stresses result in cracking of the matrix. Cracking and coalescence of the matrix elements in the neighborhood of pores resulted from high transverse tensile stress or tensile strain which exceeded the threshold values. The simulation results of stress wave propagation, deformation and failure mechanisms and energy dissipation in porous rock media were in good agreement with the physical tests. The present study provides a reference for analyzing the intrinsic mechanisms of the complex dynamic response, stress transit mode, deformation and failure mechanisms and the disaster 8. The effects of heating temperatures and time on deformation energy and oil yield of sunflower bulk seeds in compression loading Science.gov (United States) Kabutey, A.; Herak, D.; Sigalingging, R.; Demirel, C. 2018-02-01 The deformation energy (J) and percentage oil yield (%) of sunflower bulk seeds under the influence of heat treatment temperatures and heating time were examined in a compression test using a universal compression testing machine and a vessel of 60 mm diameter with a plunger. The heat treatment temperatures were between 40 and 100 °C, and the heating time at the specific temperatures of 40 and 100 °C ranged from 15 to 75 minutes. The bulk sunflower seeds were measured at a pressing height of 60 mm and pressed at a maximum force of 100 kN and a speed of 5 mm/min. Based on the compression results, the deformation energy and oil yield increased with increasing heat treatment temperatures. The results were statistically significant (p < 0.05). 9.
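The deformation energy reported in the compression-test entry above is, in such tests, the area under the recorded force-deformation curve up to the maximum force. A minimal sketch with an assumed stiffening force law (not the measured seed data):

```python
import numpy as np

# Assumed stiffening compression response of a bulk sample up to 100 kN;
# the cubic law and the 45 mm travel are illustration values only.
x = np.linspace(0.0, 0.045, 451)          # plunger displacement (m)
force = 100e3 * (x / 0.045) ** 3          # compressive force (N)

# Deformation energy = area under the force-deformation curve (trapezoid rule)
deformation_energy = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(x))  # J
```

For this assumed cubic law the analytic value is 100 kN x 45 mm / 4 = 1125 J, which the trapezoid sum reproduces closely.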
Characterizing the spatio-temporal and energy-dependent response of riometer absorption to particle precipitation Science.gov (United States) Kellerman, Adam; Makarevich, Roman; Spanswick, Emma; Donovan, Eric; Shprits, Yuri 2016-07-01 Energetic electrons in the tens of keV range precipitate to the upper D- and lower E-region ionosphere and are responsible for enhanced ionization. The same particles are important in the inner magnetosphere, as they provide a source of energy for waves, and thus relate to relativistic electron enhancements in Earth's radiation belts. In situ observations of plasma populations and waves are usually limited to a single point, which complicates temporal and spatial analysis. Also, the lifespan of satellite missions is often limited to several years, which does not allow one to infer the long-term climatology of particle precipitation that is important for ionospheric conditions at high latitudes. Multi-point remote sensing of ionospheric plasma conditions can provide a global view of both ionospheric and magnetospheric conditions, and the coupling between magnetospheric and ionospheric phenomena can be examined on time-scales that allow comprehensive statistical analysis. In this study we utilize multi-point riometer measurements in conjunction with in situ satellite data and physics-based modeling to investigate the spatio-temporal and energy-dependent response of riometer absorption. Quantifying this relationship may be a key to future advancements in our understanding of the complex D-region ionosphere, and may lead to enhanced specification of auroral precipitation both during individual events and over climatological time-scales. 10. Study of electron transition energies between anions and cations in spinel ferrites using differential UV-vis absorption spectra International Nuclear Information System (INIS) Xue, L.C.; Wu, L.Q.; Li, S.Q.; Li, Z.Z.; Tang, G.D.; Qi, W.H.; Ge, X.S.; Ding, L.L.
2016-01-01 It is very important to determine the electron transition energies (E_tr) between anions and different cations in order to understand the electrical transport and magnetic properties of a material. Many authors have analyzed UV-vis absorption spectra using the curve (αhν)² vs E, where α is the absorption coefficient and E (= hν) is the photon energy. Such an approach can give only two band gap energies for spinel ferrites. In this paper, using differential UV-vis absorption spectra, dα/dE vs E, we have obtained the electron transition energies (E_tr) between the anions and the cations Fe2+ and Fe3+ at the (A) and [B] sites and Ni2+ at the [B] sites for the (A)[B]2O4 spinel ferrite samples CoxNi0.7-xFe2.3O4 (0.0 ≤ x ≤ 0.3), CrxNi0.7Fe2.3-xO4 (0.0 ≤ x ≤ 0.3) and Fe3O4. We suggest that differential UV-vis absorption spectra should be accepted as a general analysis method for determining electron transition energies between anions and cations. 11. Energy absorption during impact on the proximal femur is affected by body mass index and flooring surface. Science.gov (United States) Bhan, Shivam; Levine, Iris C; Laing, Andrew C 2014-07-18 Impact mechanics theory suggests that peak loads should decrease with an increase in system energy absorption. In light of the reduced hip fracture risk for persons with high body mass index (BMI) and for falls on soft surfaces, the purpose of this study was to characterize the effects of participant BMI, gender, and flooring surface on system energy absorption during lateral falls on the hip, using human volunteers. Twenty university-aged participants completed the study, with five men and five women in both the low BMI (<27.5 kg/m²) and high BMI (>27.5 kg/m²) groups. Participants underwent lateral pelvis release experiments from a height of 5 cm onto two common floors and four safety floors mounted on a force plate. A motion-capture system measured pelvic deflection.
The energy absorbed during the initial compressive phase of impact was calculated as the area under the force-deflection curve. System energy absorption was (on average) 3-fold greater for high compared to low BMI participants, but no effects of gender were observed. Even after normalizing for body mass, high BMI participants absorbed 1.8-fold more energy per unit mass. Additionally, three of four safety floors demonstrated significantly increased energy absorption compared to a baseline resilient-rolled-sheeting system (% increases ranging from 20.7 to 28.3). Peak system deflection was larger for high BMI persons and for impacts on several safety floors. This study indicates that energy absorption may be a common mechanism underlying the reduced risk of hip fracture for persons with high BMI and for those who fall on soft surfaces. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved. 12. The influence of multiscale fillers reinforcement into impact resistance and energy absorption properties of polyamide 6 and polypropylene nanocomposite structures International Nuclear Information System (INIS) Silva, Francesco; Njuguna, James; Sachse, Sophia; Pielichowski, Krzysztof; Leszczynska, Agnieszka; Giacomelli, Marco 2013-01-01 Highlights: ► Significant improvement in PA composites' impact resistance performance. ► Decrease in energy absorption capabilities of PP; this phenomenon is explained. ► Positive effects on mechanical and interphase properties of the matrix material. ► Transition from brittle to ductile fracture mode established. ► Two different toughening mechanisms were observed and explained. - Abstract: Three-phase composites (thermoplastic polymer, glass fibres and nano-particles) were investigated as an alternative to two-phase (polymer and glass-fibre) composites. The effect of matrix and reinforcement material on the energy absorption capabilities of composite structures was studied in detail in this paper.
Dynamic and quasi-static axial collapse of conical structures was conducted using a high-energy drop tower, as well as an Instron universal testing machine. The impact event was recorded using a high-speed camera and the fracture surface was investigated with scanning electron microscopy (SEM). Attention was directed towards the relation between the micro- and macro-fracture process, the crack propagation mechanism and the energy absorbed by the structure. The obtained results indicated an important influence of filler and matrix material on the energy absorption capabilities of the polymer composites. A significant increase in specific energy absorption (SEA) was observed in polyamide 6 (PA6) reinforced with nano-silica particles and glass spheres, whereas addition of montmorillonite (MMT) caused a decrease in that property. On the other hand, very little influence of the secondary reinforcement on the energy absorption capabilities of polypropylene (PP) composites was found. 13. Effect of cold deformation on latent energy value and high-temperature mechanical properties of 12Cr18Ni10Ti steel International Nuclear Information System (INIS) Maksimkin, O.P.; Shiganakov, Sh.B.; Gusev, M.N. 1997-01-01 Energetic and magnetic characteristics, as well as the high-temperature mechanical properties, are presented as functions of the preliminary cold deformation of 12Cr18Ni10Ti steel. It is shown that the stored energy in the steel grows with increasing deformation. Its growth rate increases after the onset of the martensitic γ→α′ transformation, when the relative stored energy first decreases and then remains practically constant. The mechanical properties of the steel at 1073 K are determined not only by the amount of cold deformation but also by the structural rearrangement that corresponds to deformations of 35-45% and is accompanied by α′-martensite formation and a change in the rate of energy accumulation.
Preliminary cold deformation (40-60%) does not improve the high-temperature plasticity of steel samples implanted with helium. 7 refs., 2 figs. 14. A Correction of Random Incidence Absorption Coefficients for the Angular Distribution of Acoustic Energy under Measurement Conditions DEFF Research Database (Denmark) Jeong, Cheol-Ho 2009-01-01 Most acoustic measurements are based on an assumption of ideal conditions. One such ideal condition is a diffuse and reverberant field. In practice, a perfectly diffuse sound field cannot be achieved in a reverberation chamber. Uneven incident energy density under measurement conditions can cause discrepancies between the measured value and the theoretical random incidence absorption coefficient. Therefore the angular distribution of the incident acoustic energy onto an absorber sample should be taken into account. The angular distribution of the incident energy density was simulated using the beam tracing method for various room shapes and source positions. The averaged angular distribution is found to be similar to a Gaussian distribution. As a result, an angle-weighted absorption coefficient was proposed by considering the angular energy distribution to improve the agreement between... 15. Effective atomic numbers for photon energy absorption of essential amino acids in the energy range 1 keV to 20 MeV International Nuclear Information System (INIS) Manohara, S.R.; Hanagodimath, S.M. 2007-01-01 Effective atomic numbers for photon energy absorption (Z_PEAeff) of the essential amino acids histidine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan and valine have been calculated by a direct method in the energy region of 1 keV to 20 MeV. The Z_PEAeff values have been found to change with energy and composition of the amino acids. The variations of the mass energy-absorption coefficient, the effective atomic number for photon interaction (Z_PIeff) and Z_PEAeff with energy are shown graphically.
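The angle-weighting idea in entry 14 can be sketched numerically: the ideal diffuse-field weighting (the Paris formula's sin 2θ) is replaced by the angular energy distribution actually present during measurement, found there to be roughly Gaussian. The α(θ) curve and the 45° mean / 20° spread of the Gaussian below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Angle-weighted absorption coefficient: weight α(θ) either by the
# ideal diffuse-field (Paris) distribution sin(2θ) or by a roughly
# Gaussian measured distribution, as entry 14 proposes. α(θ) and the
# Gaussian parameters are illustrative assumptions.
theta = np.linspace(0.0, np.pi / 2, 1000)  # incidence angle (rad)
alpha = 0.6 + 0.3 * np.cos(theta)          # illustrative α(θ)

w_paris = np.sin(2 * theta)                # ideal diffuse field
w_gauss = np.exp(-0.5 * ((theta - np.pi / 4) / np.deg2rad(20.0)) ** 2)

def weighted_alpha(w):
    """Energy-weighted absorption coefficient for weighting w(θ)."""
    return np.trapz(alpha * w, theta) / np.trapz(w, theta)

print(weighted_alpha(w_paris), weighted_alpha(w_gauss))
```

The difference between the two printed values is exactly the measurement-condition discrepancy the entry sets out to correct.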
Significant differences exist between Z_PIeff and Z_PEAeff in the energy region of 8-100 keV for histidine and threonine; 6-100 keV for leucine, lysine, tryptophan, phenylalanine and valine; and 15-400 keV for methionine. The effect of the absorption edge on effective atomic numbers, and the possibility of defining two sets of values of these parameters at the K-absorption edge of the high-Z element present in the amino acids, are discussed. The reasons for using Z_PEAeff rather than the commonly used Z_PIeff in medical radiation dosimetry for the calculation of absorbed dose in radiation therapy are also discussed. 16. Model of yield response of corn to plant population and absorption of solar energy. Directory of Open Access Journals (Sweden) Allen R Overman Full Text Available Biomass yield of agronomic crops is influenced by a number of factors, including crop species, soil type, applied nutrients, water availability, and plant population. This article is focused on the dependence of biomass yield (Mg ha⁻¹ and g plant⁻¹) on plant population (plants m⁻²). The analysis includes data from the literature for three independent studies with the warm-season annual corn (Zea mays L.) grown in the United States. Data are analyzed with a simple exponential mathematical model which contains two parameters, viz. Y_m (Mg ha⁻¹) for maximum yield at high plant population and c (m² plant⁻¹) for the population response coefficient. This analysis leads to a new parameter called the characteristic plant population, x_c = 1/c (plants m⁻²). The model is shown to describe the data rather well for the three field studies. In one study measurements were made of solar radiation at different positions in the plant canopy. The coefficient of absorption of solar energy was assumed to be the same as c and provided a physical basis for the exponential model.
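A sketch of entry 16's two-parameter exponential yield model: the abstract names Y_m (maximum yield) and c (population response coefficient), and the form Y(x) = Y_m·(1 − e^(−cx)) is an assumed reading consistent with the asymptotic approach to maximum yield described there. Parameter values are illustrative:

```python
import math

# Two-parameter exponential yield model; the functional form is an
# assumption consistent with the asymptotic behavior in entry 16.
# Parameter values are illustrative, not fitted to the studies.
Y_m = 12.0     # Mg/ha, maximum yield at high plant population
c = 0.25       # m²/plant, population response coefficient
x_c = 1.0 / c  # plants/m², characteristic plant population

def yield_per_area(x):
    """Biomass yield (Mg/ha) at plant population x (plants/m²)."""
    return Y_m * (1.0 - math.exp(-c * x))

# At x = x_c the model reaches 1 - 1/e ≈ 63.2% of maximum yield.
print(round(yield_per_area(x_c) / Y_m, 3))  # → 0.632
```

The characteristic population x_c = 1/c thus marks the population at which the canopy has "absorbed" most of the attainable yield, mirroring the solar-absorption interpretation of c.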
The three studies showed no definitive peak in yield with plant population, but generally exhibited an asymptotic approach to maximum yield with increased plant population. Values of x_c were very similar for the three field studies with the same crop species. 17. Finite temperature effects on the X-ray absorption spectra of energy related materials Science.gov (United States) Pascal, Tod; Prendergast, David 2014-03-01 We elucidate the role of room-temperature-induced instantaneous structural distortions in the Li K-edge X-ray absorption spectra (XAS) of crystalline LiF, Li2SO4, Li2O, Li3N and Li2CO3 using high resolution X-ray Raman spectroscopy (XRS) measurements and first-principles density functional theory calculations within the eXcited electron and Core Hole (XCH) approach. Based on thermodynamic sampling via ab-initio molecular dynamics (MD) simulations, we find calculated XAS in much better agreement with experiment than those computed using the rigid crystal structure alone. We show that local instantaneous distortion of the atomic lattice perturbs the symmetry of the Li 1s core-excited-state electronic structure, broadening spectral line-shapes and, in some cases, producing additional spectral features. This work was conducted within the Batteries for Advanced Transportation Technologies (BATT) Program, supported by the U.S. Department of Energy Vehicle Technologies Program under Contract No. DE-AC02-05CH11231. 18. A new method for the direct measurement of the energy absorption coefficient of gamma rays International Nuclear Information System (INIS) Bradley, D.A.; Chong, C.S.; Shukri, A.; Tajuddin, A.A.; Ghose, A.M. 1988-01-01 The most important primary interaction cross section of gamma radiation which is of interest in radiation dosimetry and health physics is the energy absorption coefficient μ_en of the medium under study. Direct measurement of μ_en is, however, difficult and recourse is taken to theoretical computations for its estimation.
In this study a new, simple and direct method for the determination of μ_en is reported. The method is based on paraxial sphere transmission using a proportional-response gamma detector. The bremsstrahlung originating from photoelectrons in the absorbing medium and fluorescence radiations from shielding etc. have been suppressed by using suitable filters. The effects of nonparaxiality due to finite sample thickness have been accounted for using extrapolation procedures. The deviation from proportionality and other corrections have been shown to be small. The measured value of μ_en for paraffin has been determined as (3.3 ± 0.2)×10⁻³ m²/kg. This compares favourably with the theoretically computed value of 3.35×10⁻³ m²/kg given by Hubbell et al. 19. Assessment of specific energy absorption rate (SAR) in the head from a TETRA handset International Nuclear Information System (INIS) Dimbylow, Peter; Khalid, Mohammed; Mann, Simon 2003-01-01 Finite-difference time-domain (FDTD) calculations of the specific energy absorption rate (SAR) from a representative TETRA handset have been performed in an anatomically realistic model of the head. TETRA (Terrestrial Trunked Radio) is a modern digital private mobile radio system designed to meet the requirements of professional users, such as the police and fire brigade. The current frequency allocations in the UK are 380-385 MHz and 390-395 MHz for the public sector network. A comprehensive set of calculations of SAR in the head was performed for positions of the handset in front of the face and at both sides of the head. The representative TETRA handset considered, operating at 1 W in normal use, will show compliance with both the ICNIRP occupational and public exposure restrictions. The handset with a monopole antenna operating at 3 W in normal use will show compliance with both the ICNIRP occupational and public exposure restrictions.
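Entry 18 removes finite-thickness effects by extrapolation. One minimal reading of that step (an interpretation, not the paper's stated procedure): fit apparent μ_en values measured at several sample thicknesses with a straight line and take the zero-thickness intercept. All numbers below are illustrative:

```python
import numpy as np

# Zero-thickness extrapolation sketch: apparent energy-absorption
# coefficients at several thicknesses are fitted linearly and the
# intercept is taken as μ_en. Values are illustrative, not the
# paper's data.
thickness = np.array([1.0, 2.0, 3.0, 4.0])               # cm
mu_apparent = np.array([3.25, 3.20, 3.15, 3.10]) * 1e-3  # m²/kg

slope, intercept = np.polyfit(thickness, mu_apparent, 1)
mu_en = intercept  # extrapolated zero-thickness value
print(f"mu_en ≈ {mu_en:.2e} m²/kg")
```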
The handset with a helical antenna operating at 3 W in normal use will show compliance with the ICNIRP occupational exposure restriction but will be over the public exposure restriction by up to ∼50% if kept in the position of maximum SAR for 6 min continuously. 20. Trapped Bose-Einstein condensates with Planck-scale induced deformation of the energy-momentum dispersion relation International Nuclear Information System (INIS) Briscese, F. 2012-01-01 We show that harmonically trapped Bose-Einstein condensates can be used to constrain Planck-scale physics. In particular we prove that a Planck-scale induced deformation of the Minkowski energy-momentum dispersion relation δE ≃ ξ₁mcp/(2M_p) produces a shift in the condensation temperature T_c of about ΔT_c/T_c⁰ ≃ 10⁻⁶ξ₁ for typical laboratory conditions. Such a shift allows one to bound the deformation parameter up to |ξ₁| ≤ 10⁴. Moreover we show that it is possible to enlarge ΔT_c/T_c⁰ and improve the bound on ξ₁ by lowering the frequency of the harmonic trap. Finally we compare the Planck-scale induced shift in T_c with similar effects due to interboson interactions and finite size effects. 1. Deformation-induced martensitic transformation in a 201 austenitic steel: The synergy of stacking fault energy and chemical driving force Energy Technology Data Exchange (ETDEWEB) Moallemi, M., E-mail: [email protected] [Department of Materials Engineering, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Kermanpur, A. [Department of Materials Engineering, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Najafizadeh, A. [Department of Materials Engineering, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Fould Institute of Technology, Fouladshahr, Isfahan, 8491663763 (Iran, Islamic Republic of); Rezaee, A.; Baghbadorani, H. Samaei; Nezhadfar, P.
Dastranjy [Department of Materials Engineering, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of)] 2016-01-20 The present study deals with the synergy of stacking fault energy and chemical driving force in the formation of deformation-induced martensite in a 201 austenitic stainless steel. The fraction of deformation-induced martensite was characterized by means of X-ray diffraction and magnetic induction techniques. The kinetics of martensite formation versus applied strain was evaluated through a sigmoidal model. It was shown that the volume fraction of α′-martensite is closely related to the driving force/SFE ratio of the alloy. The results also showed that the martensite content is similar in both the XRD and magnetic methods, and the applied sigmoidal model was consistent with the obtained experimental data. 2. On the Importance of Morphing Deformation Scheduling for Actuation Force and Energy NARCIS (Netherlands) De Breuker, R. 2016-01-01 Morphing aircraft offer superior properties as compared to non-morphing aircraft. They can achieve this by adapting their shape depending on the requirements of various conflicting flight conditions. These shape changes are often associated with large deformations and strains, and hence dedicated 3. Absorption of high-frequency electromagnetic energy in a high-temperature plasma Energy Technology Data Exchange (ETDEWEB) Sagdeyev, R S; Shafranov, V D 1958-07-01 In this paper an analysis of the cyclotron and Cherenkov mechanisms is given. These are the two fundamental mechanisms for collisionless absorption of electromagnetic radiation by a plasma in a magnetic field. The expressions for the dielectric permeability tensor, for a plasma with a nonisotropic temperature distribution in a magnetic field, are obtained by integrating the kinetic equation with Lagrangian particle co-ordinates in a form suitable to allow a comprehensive physical interpretation of the absorption mechanisms.
The oscillations of a plasma column stabilized by a longitudinal field have been analyzed. For a uniform plasma, the frequency spectrum has been obtained together with the direction of electromagnetic wave propagation when both the cyclotron and Cherenkov absorption mechanisms take place. The influence of nonlinear effects on the electromagnetic wave absorption and the role that cyclotron and Cherenkov absorption play in plasma heating have also been investigated. 4. Continuum corrections to the level density and its dependence on excitation energy, n-p asymmetry, and deformation International Nuclear Information System (INIS) Charity, R.J.; Sobotka, L.G. 2005-01-01 In the independent-particle model, the nuclear level density is determined from the neutron and proton single-particle level densities. The single-particle level density for the positive-energy continuum levels is important at high excitation energies for stable nuclei and at all excitation energies for nuclei near the drip lines. This single-particle level density is subdivided into compound-nucleus and gas components. Two methods are considered for this subdivision: In the subtraction method, the single-particle level density is determined from the scattering phase shifts. In the Gamov method, only the narrow Gamov states or resonances are included. The level densities calculated with these two methods are similar; both can be approximated by the backshifted Fermi-gas expression with level-density parameters that are dependent on A, but with very little dependence on the neutron or proton richness of the nucleus. However, a small decrease in the level-density parameter is predicted for some nuclei very close to the drip lines. The largest difference between the calculations using the two methods is the deformation dependence of the level density. The Gamov method predicts a very strong peaking of the level density at sphericity for high excitation energies.
This leads to a suppression of deformed configurations and, consequently, the fission rate predicted by the statistical model is reduced in the Gamov method. 5. On the uncertainties of photon mass energy-absorption coefficients and their ratios for radiation dosimetry Science.gov (United States) Andreo, Pedro; Burns, David T.; Salvat, Francesc 2012-04-01 A systematic analysis of the available data has been carried out for mass energy-absorption coefficients and their ratios for air, graphite and water for photon energies between 1 keV and 2 MeV, using representative kilovoltage x-ray spectra for mammography and diagnostic radiology below 100 kV, and for 192Ir and 60Co gamma-ray spectra. The aim of this work was to establish ‘an envelope of uncertainty’ based on the spread of the available data. Type A uncertainties were determined from the results of Monte Carlo (MC) calculations with the PENELOPE and EGSnrc systems, yielding mean values for µen/ρ with a given statistical standard uncertainty. Type B estimates were based on two groupings. The first grouping consisted of MC calculations based on a similar implementation but using different data and/or approximations. The second grouping was formed by various datasets, obtained by different authors or methods using the same or different basic data, and with different implementations (analytical, MC-based, or a combination of the two); these datasets were the compilations of NIST, Hubbell, Johns-Cunningham, Attix and Higgins, plus MC calculations with PENELOPE and EGSnrc. The combined standard uncertainty, uc, for the µen/ρ values for the mammography x-ray spectra is 2.5%, decreasing gradually to 1.6% for kilovoltage x-ray spectra up to 100 kV. For 60Co and 192Ir, uc is approximately 0.1%. The Type B uncertainty analysis for the ratios of µen/ρ values includes four methods of analysis and concludes that for the present data the assumption that the data interval represents 95% confidence limits is a good compromise.
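Entry 5 combines a Type A (statistical, Monte Carlo) and a Type B (spread between datasets and implementations) standard uncertainty into a combined standard uncertainty u_c. For independent components the GUM prescribes addition in quadrature; the component values below are illustrative, not taken from the paper:

```python
import math

# Combined standard uncertainty from independent Type A and Type B
# components, added in quadrature per the GUM. Values are illustrative.
u_A = 0.05  # %, Type A: MC statistical standard uncertainty
u_B = 2.4   # %, Type B: from the spread between datasets

u_c = math.sqrt(u_A ** 2 + u_B ** 2)
print(round(u_c, 2))  # → 2.4 (the Type B spread dominates)
```

As in the paper's numbers, when the dataset spread dwarfs the MC statistics, u_c is essentially the Type B component.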
For the mammography x-ray spectra, the combined standard uncertainties of (µen/ρ)graphite,air and (µen/ρ)graphite,water are 1.5%, and 0.5% for (µen/ρ)water,air, decreasing gradually down to uc = 0.1% for the three µen/ρ ratios for the gamma-ray spectra. The present estimates are shown to coincide well with those of Hubbell (1977 Rad. Res 6. Device and method for luminescence enhancement by resonant energy transfer from an absorptive thin film Science.gov (United States) Akselrod, Gleb M.; Bawendi, Moungi G.; Bulovic, Vladimir; Tischler, Jonathan R.; Tisdale, William A.; Walker, Brian J. 2017-12-12 Disclosed are a device and a method for the design and fabrication of the device for enhancing the brightness of luminescent molecules, nanostructures, and thin films. The device includes a mirror, a dielectric medium or spacer, an absorptive layer, and a luminescent layer. The absorptive layer is a continuous thin film of a strongly absorbing organic or inorganic material. The luminescent layer may be a continuous luminescent thin film or an arrangement of isolated luminescent species, e.g., organic or metal-organic dye molecules, semiconductor quantum dots, or other semiconductor nanostructures, supported on top of the absorptive layer. 7. ΔI = 2 energy staggering in normal deformed dysprosium nuclei Energy Technology Data Exchange (ETDEWEB) Riley, M.A.; Brown, T.B.; Archer, D.E. [Florida State Univ., Tallahassee, FL (United States)] [and others] 1996-12-31 Very high spin states (I ≥ 50ħ) have been observed in ¹⁵⁵,¹⁵⁶,¹⁵⁷Dy. The long regular band sequences, free from sharp backbending effects, observed in these dysprosium nuclei offer the possibility of investigating the occurrence of any ΔI = 2 staggering in normal deformed nuclei. Employing the same analysis techniques as used for superdeformed nuclei, certain bands do indeed demonstrate an apparent staggering, and this is discussed. 8.
Lower extremity energy absorption and biomechanics during landing, part II: frontal-plane energy analyses and interplanar relationships. Science.gov (United States) Norcross, Marc F; Lewek, Michael D; Padua, Darin A; Shultz, Sandra J; Weinhold, Paul S; Blackburn, J Troy 2013-01-01 Greater sagittal-plane energy absorption (EA) during the initial impact phase (INI) of landing is consistent with sagittal-plane biomechanics that likely increase anterior cruciate ligament (ACL) loading, but it does not appear to influence frontal-plane biomechanics. We do not know whether frontal-plane INI EA is related to high-risk frontal-plane biomechanics. To compare biomechanics among INI EA groups, determine if women are represented more in the high group, and evaluate interplanar INI EA relationships. Descriptive laboratory study. Research laboratory. Participants included 82 (41 men, 41 women; age = 21.0 ± 2.4 years, height = 1.74 ± 0.10 m, mass = 70.3 ± 16.1 kg) healthy, physically active volunteers. We assessed landing biomechanics with an electromagnetic motion-capture system and force plate. We calculated frontal- and sagittal-plane total, hip, knee, and ankle INI EA. Total frontal-plane INI EA was used to create high, moderate, and low tertiles. Frontal-plane knee and hip kinematics, peak vertical and posterior ground reaction forces, and peak internal knee-varus moment (pKVM) were identified and compared across groups using 1-way analyses of variance. We used a χ² analysis to evaluate male and female allocation to INI EA groups. We used simple, bivariate Pearson product moment correlations to assess interplanar INI EA relationships. The high-INI EA group exhibited greater knee valgus at ground contact, hip adduction at pKVM, and peak hip adduction than the low-INI EA group (P < .05). Greater frontal-plane INI EA was associated with less favorable frontal-plane biomechanics that likely result in greater ACL loading.
Women were more likely than men to use greater frontal-plane INI EA. The magnitudes of sagittal- and frontal-plane INI EA were largely independent. 9. Alternating magnetic field energy absorption in the dispersion of iron oxide nanoparticles in a viscous medium Energy Technology Data Exchange (ETDEWEB) Smolkova, Ilona S. [Centre of Polymer Systems, University Institute, Tomas Bata University in Zlin, nad Ovcirnou 3685, 760 01 Zlin (Czech Republic); Polymer Centre, Faculty of Technology, Tomas Bata University in Zlin, T.G. Masaryk Sq. 275, 762 72 Zlin (Czech Republic); Kazantseva, Natalia E., E-mail: [email protected] [Centre of Polymer Systems, University Institute, Tomas Bata University in Zlin, nad Ovcirnou 3685, 760 01 Zlin (Czech Republic); Babayan, Vladimir; Smolka, Petr; Parmar, Harshida; Vilcakova, Jarmila [Centre of Polymer Systems, University Institute, Tomas Bata University in Zlin, nad Ovcirnou 3685, 760 01 Zlin (Czech Republic); Schneeweiss, Oldrich; Pizurova, Nadezda [Institute of Physics of Materials, Academy of Sciences of the Czech Republic, Zizkova 22, 616 62 Brno (Czech Republic) 2015-01-15 Magnetic iron oxide nanoparticles were obtained by a coprecipitation method in a controlled growth process leading to the formation of uniform, highly crystalline nanoparticles with an average size of 13 nm, which corresponds to the superparamagnetic state. The nanoparticles obtained are a mixture of single-phase nanoparticles of magnetite and maghemite as well as nanoparticles of non-stoichiometric magnetite. Subsequent annealing of the nanoparticles at 300 °C in air for 6 h leads to full transformation to maghemite. This results in a reduced saturation magnetization (from 56 emu g{sup −1} to 48 emu g{sup −1}) but does not affect the heating ability of the nanoparticles.
A 2–7 wt% dispersion of as-prepared and annealed nanoparticles in glycerol provides a high heating rate in alternating magnetic fields allowed for application in magnetic hyperthermia; however, the specific loss power does not exceed 30 W g{sup −1}. This feature of the heat output is explained by the combined effect of magnetic interparticle interactions and the properties of the carrier medium. The nanoparticles coalesce during synthesis and form aggregates showing ferromagnetic-like behavior, with magnetization hysteresis, distinct sextets in the Mössbauer spectrum, and a blocking temperature well above room temperature, which accounts for the higher energy barrier for magnetization reversal. At the same time, the low specific heat capacity of glycerol intensifies heat transfer in the magnetic dispersion. However, the high viscosity of glycerol limits the specific loss power, since predominantly Néel relaxation accounts for the absorption of AC magnetic field energy. - Highlights: • Mixed phase iron oxide magnetic nanoparticles were obtained by coprecipitation. • A part of the nanoparticles was annealed at 300 °C to achieve single-phase γ-Fe{sub 2}O{sub 3}. • Nanoparticles revealed ferromagnetic-like behavior due to interparticle interactions. • Nanoparticles glycerol 10. Challenges for energy dispersive X-ray absorption spectroscopy at the ESRF: microsecond time resolution and Mega-bar pressures International Nuclear Information System (INIS) Aquilanti, G. 2002-01-01 This Thesis concerns the development of two different applications of energy-dispersive X-ray absorption spectroscopy at the ESRF: time-resolved studies pushed to microsecond time resolution and high-pressure studies at the limit of Mega-bar pressures. The work has been developed in two distinct parts, and the underlying theme has been the exploitation of the capabilities of an X-ray absorption spectrometer in dispersive geometry on a third-generation synchrotron source.
For time-resolved studies, the study of the triplet excited state following laser excitation of Pt₂(P₂O₅H₂)₄⁴⁻ has been chosen to push the technique to microsecond time resolution. In the high-pressure part, the suitability of the energy-dispersive X-ray absorption spectrometer for high-pressure studies using diamond anvil cells is stressed. Some technical developments carried out on beamline ID24 are discussed. Finally, the most extensive scientific part concerns a combined X-ray absorption and diffraction study of InAs under pressure. (author) 11. The selection of stopping power and mass energy absorption coefficient data for the HPA Code of Practice for dosimetry International Nuclear Information System (INIS) Williams, P.C. 1985-01-01 The author draws attention to a discussion by Cunningham and Schultz (1984) which states that, 'with the exception of the NACP and AAPM protocols, the selection of stopping power and energy absorption coefficient ratios has been based upon only the stated accelerating potential of the accelerator', and points out that the HPA Revised Code of Practice should be added to these exceptions. In calculating the HPA's new C_λ values, a similar, but not identical, approach was taken in order to determine the stopping power and absorption coefficient ratios at each radiation quality. It was recognised that the approximation of a spectrum to a monoenergetic spectrum of between 0.4 and 0.45 of the maximum energy, as had been done in calculating the values given in ICRU Report 14, was incorrect. (U.K.) 12.
From Semi- to Full-Two-Dimensional Conjugated Side-Chain Design: A Way toward Comprehensive Solar Energy Absorption Energy Technology Data Exchange (ETDEWEB) Chao, Pengjie [Department; School; Wang, Huan [Department; Qu, Shiwei [Department; Mo, Daize [Department; Meng, Hong [School; Chen, Wei [Materials; Institute; He, Feng [Department 2017-12-05 Two polymers with fully two-dimensional (2D) conjugated side chains, 2D-PTB-Th and 2D-PTB-TTh, were synthesized and characterized by simultaneously integrating the 2D-TT and 2D-BDT monomers onto the polymer backbone. As a result of the synergistic effect of the conjugated side chains on both monomers, the two polymers showed remarkably efficient absorption of sunlight and improved π–π intermolecular interactions for efficient charge-carrier transport. The optimized bulk heterojunction device based on 2D-PTB-Th and PC71BM shows a higher PCE of 9.13% compared to PTB7-Th with a PCE of 8.26%, which corresponds to an approximately 10% improvement in solar energy conversion. The fully 2D-conjugated side-chain concept reported here provides a new molecular design strategy for polymer materials with enhanced sunlight absorption and efficient solar energy conversion. 13. Evaluation of absorbents for an absorption heat pump using natural organic working fluids (eco-energy city project) Energy Technology Data Exchange (ETDEWEB) Hisajima, Daisuke; Sakiyama, Ryoko; Nishiguchi, Akira [Hitachi Ltd., Tsuchiura (Japan). Mechanical Engineering Research Lab. 1999-07-01 The present situation of electric power supply and energy consumption in Japan has made it necessary to develop a new absorption air conditioning system which has low electric energy consumption, uses natural organic refrigerants, and can work as a heat pump in winter. Estimating the vapor-liquid equilibrium of new pairs of working fluids is a prerequisite to developing the new absorption heat pump system.
In this phase of the work, methods for estimating vapor-liquid equilibrium that take into account intermolecular forces were investigated. Experimental and calculated data on mixtures of natural organic materials were considered to find optimum candidates, and a procedure for evaluation was then chosen. Several candidate absorbents were selected that use isobutane and dimethyl ether as refrigerants. (orig.) 14. Role of stacking fault energy on the deformation characteristics of copper alloys processed by plane strain compression International Nuclear Information System (INIS) El-Danaf, Ehab A.; Al-Mutlaq, Ayman; Soliman, Mahmoud S. 2011-01-01 Highlights: → Different compositions of Cu-Zn and Cu-Al alloys are plane strain compressed. → Strain hardening rates, microstructure and texture evolution are documented. → SFE has an indirect effect; rather, a critical dislocation density controls twinning. → Cu-Al exhibited the need for a higher dislocation density for twin initiation. → Onset of twinning occurs in the copper alloys tested with a normalized SFE ≤ 10⁻³. - Abstract: Samples of Cu-Al and Cu-Zn alloys with different compositions were subjected to large strains under plane strain compression (PSC), a process that simulates the rolling operation. Four compositions in the Cu-Al system, namely 1, 2, 4.7 and 7 wt.% Al, and three compositions in the Cu-Zn system, of 10, 20 and 30 wt.% Zn, were investigated. Adding Al or Zn to Cu effectively lowers the stacking fault energy (SFE) of the alloy and changes the deformation mechanism from dislocation slip to dislocation slip plus deformation twinning. True stress-true strain responses in PSC were documented, and the strain hardening rates were calculated and correlated to the evolved microstructure. The onset of twinning in low-SFE alloys was not directly related to the low value of SFE, but rather to the build-up of a critical dislocation density during strain hardening in the early stage of deformation (ε < 0.1).
The evolution of texture was documented for the Cu-Al samples using X-ray diffraction for samples plane strain compressed to true axial strains of 0.25, 0.5, 0.75 and 1.0. Orientation distribution function (ODF) plots were generated, and quantitative information on the volume fraction of ideal rolling orientations was extracted and correlated with the stacking fault energy. 15. Low-energy E1 transitions and octupole softness in odd-A deformed nuclei Energy Technology Data Exchange (ETDEWEB) Hagemann, G B [Niels Bohr Inst., Copenhagen (Denmark); Hamamoto, I [Lund Univ. (Sweden). Dept. of Mathematical Physics; Kownacki, J; Satula, W [Warsaw Univ. (Poland)] 1992-08-01 It is found that B(E1) values for yrast spectroscopy of deformed odd-A rare-earth nuclei calculated by using a model in which one quasiparticle is coupled to a rotor are more than an order of magnitude too small. Therefore, measured B(E1) values for {sup 169}Lu were analyzed by introducing parameters which effectively took octupole softness into account. Some preliminary results of the theoretical analysis presented in this paper still do not agree completely with experiment. 4 refs., 1 tab., 5 figs. 16. Simulations about self-absorption of tritium in titanium tritide and the energy deposition in a silicon Schottky barrier diode International Nuclear Information System (INIS) Li, Hao; Liu, Yebing; Hu, Rui; Yang, Yuqing; Wang, Guanquan; Zhong, Zhengkun; Luo, Shunzhong 2012-01-01 Simulations of the self-absorption of tritium electrons in titanium tritide films and the energy deposition in a silicon Schottky barrier diode were carried out using the Geant4 radiation transport toolkit. The energy consumed in each part of the Schottky radiovoltaic battery is simulated to indicate how to make the battery work better. The power and energy-conversion efficiency of the tritium silicon Schottky radiovoltaic battery in an optimized design are simulated. Good consistency with experiments is obtained.
- Highlights: ► Simulation of the energy conversion inside the radiovoltaic battery is carried out. ► Energy-conversion efficiency in the simulation shows good consistency with experimental result. ► Inadequacy of the present configuration is studied in this work and improvements are proposed. 17. Precipitation of ferromagnetic phase induced by defect energies during creep deformation in Type 304 austenitic steel International Nuclear Information System (INIS) Tsukada, Yuhki; Shiraki, Atsuhiro; Murata, Yoshinori; Takaya, Shigeru; Koyama, Toshiyuki; Morinaga, Masahiko 2010-01-01 The correlation of defect energies with precipitation of the ferromagnetic phase near M₂₃C₆ carbide during creep tests at high temperature in Type 304 austenitic steel was examined by estimating the defect energies near the carbide, based on micromechanics. As one of the defect energies, the precipitation energy was calculated by assuming M₂₃C₆ carbide to be a spherical inclusion. The other defect energy, creep dislocation energy, was calculated based on dislocation density data obtained from transmission electron microscopy observations of the creep samples. The dislocation energy density was much higher than the precipitation energy density in the initial stage of the creep process, when the ferromagnetic phase started to increase. Creep dislocation energy could be the main driving force for precipitation of the ferromagnetic phase. 18.
Precipitation of ferromagnetic phase induced by defect energies during creep deformation in Type 304 austenitic steel Energy Technology Data Exchange (ETDEWEB) Tsukada, Yuhki, E-mail: [email protected] [Department of Materials, Physics and Energy Engineering, Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603 (Japan); Shiraki, Atsuhiro; Murata, Yoshinori [Department of Materials, Physics and Energy Engineering, Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603 (Japan); Takaya, Shigeru [Japan Atomic Energy Agency, 4002 Narita-cho, O-arai-machi, Higashi-ibaraki-gun, Ibaraki 311-1393 (Japan); Koyama, Toshiyuki [National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki 305-0047 (Japan); Morinaga, Masahiko [Department of Materials, Physics and Energy Engineering, Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603 (Japan) 2010-06-15 The correlation of defect energies with precipitation of the ferromagnetic phase near M{sub 23}C{sub 6} carbide during creep tests at high temperature in Type 304 austenitic steel was examined by estimating the defect energies near the carbide, based on micromechanics. As one of the defect energies, the precipitation energy was calculated by assuming M{sub 23}C{sub 6} carbide to be a spherical inclusion. The other defect energy, creep dislocation energy, was calculated based on dislocation density data obtained from transmission electron microscopy observations of the creep samples. The dislocation energy density was much higher than the precipitation energy density in the initial stage of the creep process, when the ferromagnetic phase started to increase. Creep dislocation energy could be the main driving force for precipitation of the ferromagnetic phase. 19. LiCl Dehumidifier LiBr absorption chiller hybrid air conditioning system with energy recovery Science.gov (United States) Ko, Suk M. 
1980-01-01 This invention relates to a hybrid air conditioning system that combines a solar powered LiCl dehumidifier with a LiBr absorption chiller. The desiccant dehumidifier removes the latent load by absorbing moisture from the air, and the sensible load is removed by the absorption chiller. The desiccant dehumidifier is coupled to a regenerator and the desiccant in the regenerator is heated by solar heated hot water to drive the moisture therefrom before being fed back to the dehumidifier. The heat of vaporization expended in the desiccant regenerator is recovered and used to partially preheat the driving fluid of the absorption chiller, thus substantially improving the overall COP of the hybrid system. 20. Absorption dips at low x-ray energies in Cygnus X-1 International Nuclear Information System (INIS) Murdin, P. 1976-01-01 Three more looks with the Copernicus satellite at Cygnus X-1 have produced four more examples of absorption dips, decreases in the 2 to 7 keV flux from Cygnus X-1 with an increase of spectral hardness consistent with photoelectric absorption (Mason et al 1974). The nine now seen, including one by OSO-7 (Li and Clark 1974), are listed in Table 1. Their phase in the spectroscopic binary HD 226868 is also listed, calculated from a newer ephemeris than that in Mason et al (1974), adding the radial velocities by Bolton (1975) and unpublished RGO radial velocities from the 1975 season. (These elements do not differ significantly from Bolton's 1. Measurement of energy spectra of charged particles emitted after the absorption of stopped negative pions in carbon International Nuclear Information System (INIS) Mechtersheimer, G. 
1978-06-01 The energy spectra of charged particles (p, d, t, ³He, ⁴He and Li nuclei) emitted after the absorption of stopped negative pions in carbon targets of different thickness (1.227, 0.307, 0.0202 g/cm²) have been measured from the experimental threshold energy of about 0.5 MeV up to the kinematical limit of about 100 MeV. The experiments have been carried out at the biomedical pion channel πE3 of the Swiss Institute of Nuclear Research (SIN). (orig.) [de 2. Limiting absorption principle at low energies for a mathematical model of weak interaction: the decay of a boson International Nuclear Information System (INIS) Barbarouxa, J.M.; Guillot, J.C. 2009-01-01 We study the spectral properties of a Hamiltonian describing the weak decay of spin 1 massive bosons into the full family of leptons. We prove that the considered Hamiltonian is self-adjoint with a unique ground state, and we derive a Mourre estimate and a limiting absorption principle above the ground state energy and below the first threshold, for a sufficiently small coupling constant. As a corollary, we prove absence of eigenvalues and absolute continuity of the energy spectrum in the same spectral interval. (authors) 3. Dose and absorption spectra response of EBT2 Gafchromic film to high energy X-rays International Nuclear Information System (INIS) Butson, M.J.; Cheung, T.; Yu, P.K.N.; Alnawaf, H. 2009-01-01 Full text: With new advancements in radiochromic film designs and sensitivity to suit different niche applications, EBT2 is the latest offering for the megavoltage radiotherapy market. New construction specifications, including a different physical construction and the use of a yellow coloured dye, have provided the next generation radiochromic film for therapy applications. The film utilises the same active chemical for radiation measurement as its predecessor, EBT Gafchromic.
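Radiochromic film dosimetry of the kind described in record 3 is based on the net optical density, netOD = OD_after − OD_before with OD = −log10(T). A minimal sketch, assuming for illustration a linear response equal to the ~0.590 netOD per Gy quoted for EBT2 at its 636 nm peak (real workflows use a full calibration curve):

```python
import math

def net_optical_density(transmission_before, transmission_after):
    """Net change in optical density, OD = -log10(T), of a film
    scanned before and after irradiation."""
    return -math.log10(transmission_after) + math.log10(transmission_before)

NETOD_PER_GY = 0.590  # assumed linear sensitivity at 636 nm (illustrative)

def estimated_dose_gy(netod):
    """Dose estimate under the assumed linear calibration."""
    return netod / NETOD_PER_GY

netod = net_optical_density(0.80, 0.205)
print(f"netOD = {netod:.3f}, dose = {estimated_dose_gy(netod):.2f} Gy")
```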
Measurements have been performed using spectrophotometers to analyse the absorption spectra properties of this new EBT2 Gafchromic radiochromic film. Results have shown that whilst the physical coloration or absorption spectra of the film, which turns from yellow to green, as compared to EBT film (clear to blue), is significantly different due to the added yellow dye, the net change in absorption spectra properties for EBT2 is similar to the original EBT film. Absorption peaks are still located at the 636 nm and 585 nm positions. A net optical density change of 0.590 ± 0.020 (2SD) for a 1 Gy radiation absorbed dose using 6 MV x-rays when measured at the 636 nm absorption peak was found. This is compared to 0.602 ± 0.025 (2SD) for the original EBT film (2005 Batch) and 0.557 ± 0.027 (2009 Batch) at the same absorption peak. The yellow dye and the new coating material produce significantly different visible absorption spectra for the EBT2 film compared to EBT, especially at wavelengths below approximately 550 nm. At wavelengths above 550 nm, differences in absolute OD are seen; however, when dose analysis is performed at wavelengths above 550 nm using net optical density changes, no significant variations are seen. Comparing results of the late production EBT to new production EBT2 film, net optical density variations of approximately 10% to 15% are seen. As all new film batches should be calibrated for sensitivity upon arrival, this should not be of concern. 4. Design and analysis of a waste gasification energy system with solid oxide fuel cells and absorption chillers DEFF Research Database (Denmark) Rokni, Masoud 2018-01-01 Energy saving is an open point in most European countries, where energy policies are oriented to reduce the use of fossil fuels and greenhouse emissions, increase energy independence, and increase the use of renewable energies. In the last several years, new technologies have been developed and some......
of them received subsidies to increase installation and reduce cost. This article presents a new sustainable trigeneration system (power, heat and cool) based on a solid oxide fuel cell (SOFC) system integrated with an absorption chiller for special applications such as hotels, resorts, hospitals, etc....... with a focus on plant design and performance. The proposal system is based on the idea of gasifying the municipal waste, producing syngas serving as fuel for the trigeneration system. Such advanced system when improved is thus self-sustainable without dependency on net grid, district heating and district... 5. Deformation behavior of curling strips on tearing tubes Energy Technology Data Exchange (ETDEWEB) Choi, Ji Won; Kwon, Tae Soo; Jung, Hyun Seung; Kim, Jin Sung [Dept. of Robotics and Virtual Engineering, Korea University of Science and Technology, Seoul (Korea, Republic of) 2015-10-15 This paper discusses the analysis of the curl deformation behavior when a dynamic force is applied to a tearing tube installed on a flat die to predict the energy absorption capacity and deformation behavior. The deformation of the tips of the curling strips was obtained when the curl tips and tube body are in contact with each other, and a formula describing the energy dissipation rate caused by the deformation of the curl tips is proposed. To improve this formula, we focused on the variation of the curl radius and the reduced thickness of the tube. A formula describing the mean curl radius is proposed and verified using the curl radius measurement data of collision test specimens. These improved formulas are added to the theoretical model previously proposed by Huang et al. and verified from the collision test results of a tearing tube. 6. Photon mass energy absorption coefficients from 0.4 MeV to 10 MeV for silicon, carbon, copper and sodium iodide International Nuclear Information System (INIS) Oz, H.; Gurler, O.; Gultekin, A.; Yalcin, S.; Gundogdu, O. 
2006-01-01 The absorption coefficients have been widely used for problems and applications involving dose calculations. Direct measurements of the coefficients are difficult, and theoretical computations are usually employed. In this paper, analytical equations are presented for determining the mass energy absorption coefficients for gamma rays with an incident energy range between 0.4 MeV and 10 MeV in silicon, carbon, copper and sodium iodide. The mass energy absorption coefficients for gamma rays were calculated, and the results obtained were compared with the values reported in the literature. 7. Photon mass energy absorption coefficients from 0.4 MeV to 10 MeV for silicon, carbon, copper and sodium iodide Energy Technology Data Exchange (ETDEWEB) Oz, H.; Gurler, O.; Gultekin, A. [Uludag University, Bursa (Turkey)]; Yalcin, S. [Kastamonu University, Kastamonu (Turkey)]; Gundogdu, O. [University of Surrey, Guildford (United Kingdom)] 2006-07-15 The absorption coefficients have been widely used for problems and applications involving dose calculations. Direct measurements of the coefficients are difficult, and theoretical computations are usually employed. In this paper, analytical equations are presented for determining the mass energy absorption coefficients for gamma rays with an incident energy range between 0.4 MeV and 10 MeV in silicon, carbon, copper and sodium iodide. The mass energy absorption coefficients for gamma rays were calculated, and the results obtained were compared with the values reported in the literature. 8. An Overview on Impact Behaviour and Energy Absorption of Collapsible Metallic and Non-Metallic Energy Absorbers used in Automotive Applications Science.gov (United States) Shinde, R. B.; Mali, K. D. 2018-04-01 Collapsible impact energy absorbers play an important role in protecting automotive components from damage during collision. A collision between two objects results in damage to one or both of them.
Damage may take the form of cracks, fractures and scratches. Designers must understand how materials and objects behave under impact events. For these reasons, different types of collapsible impact energy absorbers have been developed. In the past, different studies were undertaken to improve such collapsible impact energy absorbers. This article highlights such studies on common shapes of collapsible impact energy absorbers and their impact behaviour under axial compression. The literature based on studies and analyses of the effects of different geometrical parameters on the crushing behaviour of impact energy absorbers is presented in detail. The energy absorber can take different shapes, such as circular tubes, square tubes, and frustums of cones and pyramids. The crushing behaviour of energy absorbers includes studies on crushing mechanics, modes of deformation, energy-absorbing capacity, and effects on peak and mean crushing loads. In this work, efforts are made to cover major outcomes from past studies on such behavioural parameters. Even though the major literature reviewed is related to metallic energy absorbers, emphasis is also laid on covering literature on the use of composite tubes, fiber metal lamination (FML) members, honeycomb plates and functionally graded thickness (FGT) tubes as collapsible impact energy absorbers. 9. Critical coupling and coherent perfect absorption for ranges of energies due to a complex gain and loss symmetric system International Nuclear Information System (INIS) 2014-01-01 We consider a non-Hermitian medium with a gain and loss symmetric, exponentially damped potential distribution to demonstrate different scattering features analytically. The conditions for critical coupling (CC) for a unidirectional wave and coherent perfect absorption (CPA) for bidirectional waves are obtained analytically for this system. The energy points at which total absorption occurs are shown to be the spectral singular points for the time reversed system.
The possible energies at which CC occurs for left and right incidence are different. We further obtain periodic intervals with increasing periodicity of energy for CC and CPA to occur in this system. -- Highlights: •Energy ranges for CC and CPA are obtained explicitly for complex WS potential. •Analytical conditions for CC and CPA for PT symmetric WS potential are obtained. •Conditions for left and right CC are shown to be different. •Conditions for CC and CPA are shown to be that of SS for the time reversed system. •Our model shows the great flexibility of frequencies for CC and CPA 10. Cooling performance and energy saving of a compression-absorption refrigeration system driven by a gas engine Energy Technology Data Exchange (ETDEWEB) Sun, Z.G.; Guo, K.H. [Sun Yat-Sen University, Guangzhou (China). Engineering School 2006-07-01 The prototype of combined vapour compression-absorption refrigeration system was set up, where a gas engine drove directly an open screw compressor in a vapour compression refrigeration chiller and waste heat from the gas engine was used to operate absorption refrigeration cycle. The experimental procedure and results showed that the combined refrigeration system was feasible. The cooling capacity of the prototype reached about 589 kW at the Chinese rated conditions of air conditioning (the inlet and outlet temperatures of chilled water are 12 and 7{sup o}C, the inlet and outlet temperatures of cooling water are 30 and 35{sup o}C, respectively). Primary energy rate (PER) and comparative primary energy saving were used to evaluate energy utilization efficiency of the combined refrigeration system. The calculated results showed that the PER of the prototype was about 1.81 and the prototype saved more than 25% of primary energy compared to a conventional electrically driven vapour compression refrigeration unit. Error analysis showed that the total error of the combined cooling system measurement was about 4.2% in this work. (author) 11. 
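The primary energy rate (PER) and comparative primary-energy saving used to evaluate record 10 can be checked with simple arithmetic; the reference electric chiller COP and grid conversion efficiency below are assumed illustrative values, not figures from the record:

```python
def primary_energy_rate(cooling_kw, fuel_input_kw):
    """PER: cooling delivered per unit of primary (fuel) energy input."""
    return cooling_kw / fuel_input_kw

def primary_energy_saving(per_system, per_reference):
    """Fractional primary-energy saving relative to a reference system
    delivering the same cooling."""
    return 1.0 - per_reference / per_system

# From the record: 589 kW of cooling at PER = 1.81 implies a primary
# fuel input of roughly 589 / 1.81 kW.
fuel_kw = 589 / 1.81

# Assumed reference: an electric chiller with COP 4.0 fed by a grid with
# 33% primary-to-electric conversion efficiency, so PER_ref = 4.0 * 0.33.
per_ref = 4.0 * 0.33
saving = primary_energy_saving(1.81, per_ref)
print(f"fuel input = {fuel_kw:.0f} kW, primary energy saving = {saving:.0%}")
```

With these assumed reference numbers the saving comes out just above 25%, consistent with the record's claim.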
Förster resonance energy transfer, absorption and emission spectra in multichromophoric systems. III. Exact stochastic path integral evaluation. Science.gov (United States) Moix, Jeremy M; Ma, Jian; Cao, Jianshu 2015-03-07 A numerically exact path integral treatment of the absorption and emission spectra of open quantum systems is presented that requires only the straightforward solution of a stochastic differential equation. The approach converges rapidly enabling the calculation of spectra of large excitonic systems across the complete range of system parameters and for arbitrary bath spectral densities. With the numerically exact absorption and emission operators, one can also immediately compute energy transfer rates using the multi-chromophoric Förster resonant energy transfer formalism. Benchmark calculations on the emission spectra of two level systems are presented demonstrating the efficacy of the stochastic approach. This is followed by calculations of the energy transfer rates between two weakly coupled dimer systems as a function of temperature and system-bath coupling strength. It is shown that the recently developed hybrid cumulant expansion (see Paper II) is the only perturbative method capable of generating uniformly reliable energy transfer rates and emission spectra across a broad range of system parameters. 12. A fast neutron and dual-energy gamma-ray absorption method (NEUDEG) for investigating materials using a 252Cf source International Nuclear Information System (INIS) Bartle, C. Murray 2014-01-01 DEXA (dual-energy X-ray absorption) is widely used in airport scanners, industrial scanners and bone densitometers. DEXA determines the properties of materials by measuring the absorption differences of X-rays from a bremsstrahlung tube source with and without filtering. Filtering creates a beam with a higher mean energy, which causes lower material absorption. 
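The dual-beam absorption difference just described can be sketched with Beer–Lambert log attenuations at two effective photon energies; the mass attenuation coefficients below are assumed order-of-magnitude values, and the sketch covers only the photon part of NEUDEG (the fast-neutron term, which can drive differences negative for hydrogen-rich materials, is omitted):

```python
def log_attenuation(mu_rho_cm2_g, areal_density_g_cm2):
    """Beer-Lambert log attenuation A = (mu/rho) * (rho * t)."""
    return mu_rho_cm2_g * areal_density_g_cm2

def absorption_difference(mu_low_e, mu_high_e, areal_density_g_cm2):
    """Unfiltered (lower mean energy) minus filtered (higher mean
    energy) attenuation, as in DEXA; it grows with effective atomic
    number because low-energy photoelectric absorption rises steeply."""
    return (log_attenuation(mu_low_e, areal_density_g_cm2)
            - log_attenuation(mu_high_e, areal_density_g_cm2))

# Assumed coefficients (cm^2/g) of the right order for ~60 vs ~120 keV:
polyethylene_diff = absorption_difference(0.20, 0.17, 2.0)
steel_diff = absorption_difference(1.20, 0.25, 2.0)
print(polyethylene_diff, steel_diff)
```

The higher-Z material yields the larger difference, which is the discrimination signal DEXA (and the photon part of NEUDEG) exploits.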
The absorption difference between measurements (those with a filter subtracted from those without a filter) is a positive number that increases with the effective atomic number of the material. In this paper, the concept of using a filter to create a dual beam and an absorption difference in materials is applied to radiation from a 252 Cf source, called NEUDEG (neutron and dual-energy gamma absorption). NEUDEG includes absorptions for fast neutrons as well as the dual photon beams and thus an incentive for developing the method is that, unlike DEXA, it is inherently sensitive to the hydrogen content of materials. In this paper, a model for the absorption difference and absorption sum in NEUDEG is presented using the combined gamma ray and fast neutron mass attenuation coefficients. Absorption differences can be either positive or negative in NEUDEG, increasing with increases in the effective atomic number and decreasing with increases in the hydrogen content. Sample sets of absorption difference curves are calculated for materials with typical gamma-ray and fast neutron mass attenuation coefficients. The model, which uses tabulated mass attenuated coefficients, agrees with experimental data for porcelain tiles and polyethylene sheets. The effects of “beam hardening” are also investigated. - Highlights: • Creation of a dual neutron/gamma beam from 252 Cf is described. • An absorption model is developed using mass attenuation coefficients. • A graphical method is used to show sample results from the model. • The model is successfully compared with experimental results. • The importance of 13. Free energy on a cycle graph and trigonometric deformation of heat kernel traces on odd spheres Science.gov (United States) Kan, Nahomi; Shiraishi, Kiyoshi 2018-01-01 We consider a possible ‘deformation’ of the trace of the heat kernel on odd dimensional spheres, motivated by the calculation of the free energy of a scalar field on a discretized circle. 
By using an expansion in terms of the modified Bessel functions, we obtain the values of the free energies after a suitable regularization. 14. Studies on effective atomic numbers for photon energy absorption and electron density of some narcotic drugs in the energy range 1 keV-20 MeV Science.gov (United States) Gounhalli, Shivraj G.; Shantappa, Anil; Hanagodimath, S. M. 2013-04-01 Effective atomic numbers for photon energy absorption ZPEA,eff, photon interaction ZPI,eff and electron density Nel have been calculated by a direct method in the photon-energy region from 1 keV to 20 MeV for narcotic drugs such as Heroin (H), Cocaine (CO), Caffeine (CA), Tetrahydrocannabinol (THC), Cannabinol (CBD), and Tetrahydrocannabivarin (THCV). The ZPEA,eff, ZPI,eff and Nel values have been found to change with energy and composition of the narcotic drugs. The energy dependence of ZPEA,eff, ZPI,eff and Nel is shown graphically. The maximum difference between the values of ZPEA,eff and ZPI,eff occurs at 30 keV, with a significant difference of 2 to 33% in the energy region 5-100 keV for all drugs. The reason for these differences is discussed. 15. Investigation of Absorption Cooling Application Powered by Solar Energy in the South Coast Region of Turkey Directory of Open Access Journals (Sweden) Ozgoren M. 2013-04-01 Full Text Available In this study, an absorption system using ammonia-water (NH3-H2O) solution has been theoretically examined in order to meet the cooling need of a detached building having a 150 m² floor area for the Antalya, Mersin and Mugla provinces in Turkey. Hourly dynamic cooling load capacities of the building were determined by using the Radiant Time Series (RTS) method in the chosen cities. For the analysis, hourly average meteorological data such as atmospheric air temperature and solar radiation belonging to the years 1998-2008 are used for performance prediction of the proposed system.
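The coefficient of performance computed in analyses like record 15 is the ratio of evaporator cooling to driving generator heat; a minimal energy-balance sketch with assumed, illustrative component duties (not the record's data):

```python
def absorption_cop(q_evaporator_kw, q_generator_kw, w_pump_kw=0.0):
    """Thermal COP of an absorption chiller: cooling delivered per unit
    of driving heat; solution-pump work is usually negligible."""
    return q_evaporator_kw / (q_generator_kw + w_pump_kw)

# Assumed duties chosen to land near the 0.52-0.53 COP range reported
# for the NH3-H2O cycle at a 110 °C generator temperature:
cop = absorption_cop(q_evaporator_kw=10.5, q_generator_kw=20.0)
print(f"COP = {cop:.3f}")
```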
Thermodynamic relations for each component of the absorption cooling system are explained, and coefficients of performance of the system are calculated. The maximum daily total radiation was calculated as 7173 W/m²·day on July 15, 7277 W/m²·day on July 19 and 7231 W/m²·day on July 19 for Mersin, Antalya and Mugla, respectively, on panels tilted 23° from the horizontal toward the south. The generator operating temperatures are considered between 90 and 130 °C, and 110 °C is found to be the optimum for maximum coefficient of performance (COP) at the time of highest solar radiation during the considered days for each province. The COP values vary between 0.521 and 0.530 for the provinces. In addition, absorber and condenser capacities and thermal efficiency for the absorption cooling system were calculated. The necessary evacuated tube collector area for the different provinces was found to be in the range of 45 m² to 47 m². It is shown that although the initial investment cost is higher for the proposed absorption cooling system, it is economically feasible because of its lower annual operation costs and can successfully be operated in the considered provinces. 16. Low-energy absorption and luminescence of higher plant photosystem II core samples International Nuclear Information System (INIS) Hughes, Joseph L.; Smith, Paul J.; Pace, Ron J.; Krausz, Elmars 2007-01-01 The charge-separating state of PSII has been recently assigned as a homogeneously broadened band peaking at 705 nm. The possibility of observing emission due to luminescence from the charge-separating state was investigated. Emission from the charge-separating state is predicted to be both broad and substantially Stokes shifted. Our PSII cores show an easily observable and broad emission peaking near 735 nm when excited at 707 nm and beyond for temperatures below 100 K, as well as the well-known F685 and F695 nm emission when excited at 633 nm.
However, the 735 nm emission bears a close correspondence to that previously reported for the light harvesting pigment of photosystem I (PSI), LHCI-730, and we attribute our observed emission to a minor contamination of our sample with this protein. High sensitivity circular dichroism (CD) spectra establish that LHCI and/or PSI contamination of our samples does not contribute significantly to the absorption seen in the 700-730 nm region. Furthermore, systematic illumination-induced absorption changes seen in this region are shown to quantitatively track with charge separation and the subsequent formation of the secondary plastoquinone acceptor (QA) anion. These results confirm that absorption in the 700-730 nm region is associated with the reaction centre of active PSII 17. Enhanced solar energy absorption by internally-mixed black carbon in snow grains Directory of Open Access Journals (Sweden) M. G. Flanner 2012-05-01 Full Text Available Here we explore light absorption by snowpack containing black carbon (BC) particles residing within ice grains. Basic considerations of particle volumes and BC/snow mass concentrations show that there are generally 0.05–10⁹ BC particles for each ice grain. This suggests that internal BC is likely distributed as multiple inclusions within ice grains, and thus the dynamic effective medium approximation (DEMA) (Chýlek and Srivastava, 1983) is a more appropriate optical representation for BC/ice composites than coated-sphere or standard mixing approximations. DEMA calculations show that the 460 nm absorption cross-section of BC/ice composites, normalized to the mass of BC, is typically enhanced by factors of 1.8–2.1 relative to interstitial BC. BC effective radius is the dominant cause of variation in this enhancement, compared with ice grain size and BC volume fraction. We apply two atmospheric aerosol models that simulate interstitial and within-hydrometeor BC lifecycles.
Although only ~2% of the atmospheric BC burden is cloud-borne, 71–83% of the BC deposited to global snow and sea-ice surfaces occurs within hydrometeors. Key processes responsible for within-snow BC deposition are development of hydrophilic coatings on BC, activation of liquid droplets, and subsequent snow formation through riming or ice nucleation by other species and aggregation/accretion of ice particles. Applying deposition fields from these aerosol models in offline snow and sea-ice simulations, we calculate that 32–73% of BC in global surface snow resides within ice grains. This fraction is smaller than the within-hydrometeor deposition fraction because meltwater flux preferentially removes internal BC, while sublimation and freezing within snowpack expose internal BC. Incorporating the DEMA into a global climate model, we simulate increases in BC/snow radiative forcing of 43–86%, relative to scenarios that apply external optical properties to all BC. We 18. Enhanced Solar Energy Absorption by Internally-mixed Black Carbon in Snow Grains Energy Technology Data Exchange (ETDEWEB) Flanner, M. G.; Liu, Xiaohong; Zhou, Cheng; Penner, Joyce E.; Jiao, C. 2012-05-30 Here we explore light absorption by snowpack containing black carbon (BC) particles residing within ice grains. Basic considerations of particle volumes and BC/snow mass concentrations show that there are generally 0.05–10⁹ BC particles for each ice grain. This suggests that internal BC is likely distributed as multiple inclusions within ice grains, and thus the dynamic effective medium approximation (DEMA) (Chylek and Srivastava, 1983) is a more appropriate optical representation for BC/ice composites than coated-sphere or standard mixing approximations. DEMA calculations show that the 460 nm absorption cross-section of BC/ice composites, normalized to the mass of BC, is typically enhanced by factors of 1.8-2.1 relative to interstitial BC.
BC effective radius is the dominant cause of variation in this enhancement, compared with ice grain size and BC volume fraction. We apply two atmospheric aerosol models that simulate interstitial and within-hydrometeor BC lifecycles. Although only {approx}2% of the atmospheric BC burden is cloud-borne, 71-83% of the BC deposited to global snow and sea-ice surfaces occurs within hydrometeors. Key processes responsible for within-snow BC deposition are development of hydrophilic coatings on BC, activation of liquid droplets, and subsequent snow formation through riming or ice nucleation by other species and aggregation/accretion of ice particles. Applying deposition fields from these aerosol models in offline snow and sea-ice simulations, we calculate that 32-73% of BC in global surface snow resides within ice grains. This fraction is smaller than the within-hydrometeor deposition fraction because meltwater flux preferentially removes internal BC, while sublimation and freezing within snowpack expose internal BC. Incorporating the DEMA into a global climate model, we simulate increases in BC/snow radiative forcing of 43-86%, relative to scenarios that apply external optical properties to all BC. We show that snow metamorphism 19. Deformation and failure in extreme regimes by high-energy pulsed lasers: A review Energy Technology Data Exchange (ETDEWEB) Remington, Tane P. [The University of California, San Diego, La Jolla, CA 92093 (United States); Remington, Bruce A. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Hahn, Eric N. 
[The University of California, San Diego, La Jolla, CA 92093 (United States); Meyers, Marc A., E-mail: [email protected] [The University of California, San Diego, La Jolla, CA 92093 (United States) 2017-03-14 The use of high-power pulsed lasers to probe the response of materials at pressures of hundreds of GPa up to several TPa, time durations of nanoseconds, and strain rates of 10{sup 6}–10{sup 10} s{sup −1} is revealing novel mechanisms of plastic deformation, phase transformations, and even amorphization. This unique experimental tool, aided by advanced diagnostics, analysis, and characterization, allows us to explore these new regimes that simulate those encountered in the interiors of planets. Fundamental materials science questions such as dislocation velocity regimes, the transition between thermally-activated and phonon drag regimes, the slip-twinning transition, the ultimate tensile strength of metals, and the dislocation mechanisms of void growth are being answered through this powerful tool. In parallel with experiments, molecular dynamics simulations provide modeling and visualization at comparable strain rates (10{sup 8}–10{sup 10} s{sup −1}) and time durations (hundreds of picoseconds). This powerful synergy is illustrated in our past and current work, using representative face-centered cubic (fcc) copper, body-centered cubic (bcc) tantalum and diamond cubic silicon as model structures. 20. Comparison between diffraction contrast tomography and high-energy diffraction microscopy on a slightly deformed aluminium alloy Directory of Open Access Journals (Sweden) 2016-01-01 Full Text Available The grain structure of an Al–0.3 wt%Mn alloy deformed to 1% strain was reconstructed using diffraction contrast tomography (DCT) and high-energy diffraction microscopy (HEDM).
14 equally spaced HEDM layers were acquired and their exact location within the DCT volume was determined using a genetic algorithm minimizing a function of the local disorientations between the two data sets. The microstructures were then compared in terms of the mean crystal orientations and shapes of the grains. The comparison shows that DCT can detect subgrain boundaries with disorientations as low as 1° and that HEDM and DCT grain boundaries are on average 4 µm apart from each other. The results are important for studies targeting the determination of grain volume. For the case of a polycrystal with an average grain size of about 100 µm, a relative deviation of ≤10% was found between the two techniques. 1. The Impacts of Different Expansion Modes on Performance of Small Solar Energy Firms: Perspectives of Absorptive Capacity Directory of Open Access Journals (Sweden) Hsing Hung Chen 2013-01-01 Full Text Available The characteristics of firm expansion by differentiated products and by diversified products are quite different. However, a study employing absorptive capacity to examine the impacts of different modes of expansion on the performance of small solar energy firms has not been reported before. A conceptual model to analyze the tension between strategies and corporate performance is therefore proposed to fill this gap. After practical investigation, the results show that stronger organizational institutions help small solar energy firms expanded by differentiated products increase consistency between strategies and corporate performance; conversely, stronger working attitudes with weak management controls help small solar energy firms expanded by diversified products reduce variance between strategies and corporate performance. 2. Key Factors Influencing the Energy Absorption of Dual-Phase Steels: Multiscale Material Model Approach and Microstructural Optimization Science.gov (United States) Belgasam, Tarek M.; Zbib, Hussein M.
2018-06-01 The increase in use of dual-phase (DP) steel grades by vehicle manufacturers to enhance crash resistance and reduce car body weight requires the development of a clear understanding of the effect of various microstructural parameters on the energy absorption in these materials. Accordingly, DP steelmakers are interested in predicting the effect of various microscopic factors as well as optimizing microstructural properties for application in crash-relevant components of vehicle bodies. This study presents a microstructure-based approach using a multiscale material and structure model. In this approach, Digimat and LS-DYNA software were coupled and employed to provide a full micro-macro multiscale material model, which is then used to simulate tensile tests. Microstructures with varied ferrite grain sizes, martensite volume fractions, and carbon content in DP steels were studied. The impact of these microstructural features at different strain rates on energy absorption characteristics of DP steels is investigated numerically using an elasto-viscoplastic constitutive model. The model is implemented in a multiscale finite-element framework. A comprehensive statistical parametric study using response surface methodology is performed to determine the optimum microstructural features for a required tensile toughness at different strain rates. The simulation results are validated using experimental data found in the literature. The developed methodology proved to be effective for investigating the influence and interaction of key microscopic properties on the energy absorption characteristics of DP steels. Furthermore, it is shown that this method can be used to identify optimum microstructural conditions at different strain-rate conditions. 3.
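The tensile toughness targeted in the DP-steel study above is simply the area under the stress–strain curve up to failure. A minimal numerical sketch (the saturating-hardening curve below is hypothetical, not data from the study):

```python
import math

# Hypothetical engineering stress-strain curve for a DP steel
# (illustrative saturating-hardening law, NOT data from the study above)
n = 400
strain = [0.15 * i / (n - 1) for i in range(n)]                  # engineering strain
stress = [800.0 * (1.0 - math.exp(-e / 0.02)) for e in strain]   # MPa

# Tensile toughness = area under the curve (MPa == MJ/m^3), trapezoidal rule
toughness = sum(
    0.5 * (stress[i] + stress[i + 1]) * (strain[i + 1] - strain[i])
    for i in range(n - 1)
)
print(f"tensile toughness ~ {toughness:.1f} MJ/m^3")
```

With these placeholder parameters the integral evaluates to roughly 10² MJ/m³; a real study would integrate measured or simulated curves at each strain rate instead.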
Axial Crushing and Energy Absorption of Empty and Foam Filled Jute-glass/ Epoxy Bi-tubes Directory of Open Access Journals (Sweden) 2016-01-01 Full Text Available Experimental work on the axial crushing of empty and polyurethane foam filled bi-tubular composite cone-tubes has been carried out. The hand lay-up method was used to fabricate the bi-tubes using woven roving glass, jute and hybrid jute-glass/epoxy materials. The tubes were of 56 mm diameter, and the cone top diameters were 65 mm. Cone semi-apical angles of 5°, 10°, 15°, 20° and 25° were examined. A height of 120 mm was maintained for all the fabricated specimens. Effects of the material used, cone semi-apical angle and foam filler on the load-displacement relation, maximum load, crush force efficiency, specific energy absorption and failure mode were investigated. Results show that the foam filler improved the progressive crushing process and increased the maximum load and the absorbed energy of the bi-tubes. The maximum crushing load and the specific energy absorption increased with increasing cone semi-apical angle up to 20° for the empty bi-tubes and up to 25° for the foam filled bi-tubes. A progressive failure mode with fiber and matrix cracking was observed at the top narrow side of the fractured bi-tubes as well as at the bottom surface of the 20° and 25° cone semi-apical angle bi-tubes. 4. Energy dependence of the absorptive potential for sub-Coulomb energy proton bombardment of zirconium and molybdenum isotopes International Nuclear Information System (INIS) Flynn, D.S.; Hershberger, R.L.; Gabbard, F. 1985-01-01 The measured (p,p) and (p,n) excitation functions for ⁹²,⁹⁴,⁹⁶Zr and ⁹⁵,⁹⁸,¹⁰⁰Mo were fitted in the energy range 2 3 for all isotopes studied as the proton bombarding energy is increased toward 15 MeV. This result is consistent with results from analyses at higher energies 5.
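Crush force efficiency and specific energy absorption, the metrics reported in the bi-tube crushing study above, follow directly from a load–displacement record. A sketch with invented data (the load curve and crushed mass are placeholders, not measurements from the paper):

```python
import math

# Hypothetical load-displacement record from an axial crush test:
# an initial peak followed by an idealized progressive-folding oscillation
n = 500
disp = [80.0 * i / (n - 1) for i in range(n)]            # mm
load = [30.0 + 10.0 * math.sin(d / 5.0) for d in disp]   # kN
load[0] = 45.0                                           # initial peak load, kN

# Absorbed energy = area under the load-displacement curve (kN*mm == J)
energy = sum(0.5 * (load[i] + load[i + 1]) * (disp[i + 1] - disp[i])
             for i in range(n - 1))

mean_load = energy / disp[-1]      # mean crushing load, kN
cfe = mean_load / max(load)        # crush force efficiency = mean / peak load
sea = energy / 150.0               # specific energy absorption, J/g
                                   # (assuming 150 g of crushed material)
print(f"CFE = {cfe:.2f}, SEA = {sea:.1f} J/g")
```

A foam filler that raises the mean plateau load without raising the initial peak would show up here as a higher CFE, which is one way the improvement reported above can be quantified.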
Electron beam absorption in solid and in water phantoms: depth scaling and energy-range relations International Nuclear Information System (INIS) Grosswendt, B.; Roos, M. 1989-01-01 In electron dosimetry energy parameters are used with values evaluated from ranges in water. The electron ranges in water may be deduced from ranges measured in solid phantoms. Several procedures recommended by national and international organisations differ both in the scaling of the ranges and in the energy-range relations for water. Using the Monte Carlo method the application of different procedures for electron energies below 10 MeV is studied for different phantom materials. It is shown that deviations in the range scaling and in the energy-range relations for water may accumulate to give energy errors of several per cent. In consequence energy-range relations are deduced for several solid phantom materials which enable a single-step energy determination. (author) 6. A microscopic description of absorption in high-energy string-brane collisions CERN Document Server D'Appollonio, Giuseppe; Russo, Rodolfo; Veneziano, Gabriele 2016-01-01 We study the collision of a highly energetic light closed string off a stack of Dp-branes at (sub)string-scale impact parameters and in a regime justifying a perturbative treatment. Unlike at larger impact parameters - where elastic scattering and/or tidal excitations dominate - here absorption of the closed string by the brane system, with the associated excitation of open strings living on it, becomes important. As a first step, we study this phenomenon at the disk level, in which the energetic closed string turns into a single heavy open string at rest whose particularly simple properties are described. 7. Variation of energy absorption and exposure buildup factors with incident photon energy and penetration depth for boro-tellurite (B2O3-TeO2) glasses Science.gov (United States) Sayyed, M. I.; Elhouichet, H. 
2017-01-01 The gamma-ray energy absorption buildup factors (EABF) and exposure buildup factors (EBF) of (100-x)TeO2-xB2O3 glass systems (where x=5, 10, 15, 20, 22.5 and 25 mol%) have been calculated in the energy region 0.015-15 MeV up to a penetration depth of 40 mfp (mean free path). The five-parameter (G-P) fitting method has been used to estimate both EABF and EBF values. Variations of EABF and EBF with incident photon energy and penetration depth have been studied. It was found that EABF and EBF values were higher in the intermediate energy region for all the glass systems. Furthermore, the boro-tellurite glass with 5 mol% B2O3 was found to present the lowest EABF and EBF values, hence it is a superior gamma-ray shielding material. The results indicate that the boro-tellurite glasses can be used as radiation shielding materials. 8. Quantum deformed magnon kinematics OpenAIRE Gómez, César; Hernández Redondo, Rafael 2007-01-01 The dispersion relation for planar N=4 supersymmetric Yang-Mills is identified with the Casimir of a quantum deformed two-dimensional kinematical symmetry, E_q(1,1). The quantum deformed symmetry algebra is generated by the momentum, energy and boost, with deformation parameter q=e^{2\pi i/\lambda}. Representing the boost as the infinitesimal generator for translations on the rapidity space leads to an elliptic uniformization with crossing transformations implemented through translations by t... 9. Deformations and strain energy in fragments of tempered glass: experimental and numerical investigation DEFF Research Database (Denmark) Nielsen, Jens Henrik; Bjarrum, Marie 2017-01-01 energy and thereby the stress in a fragment post failure. The FE-model has been established in previous work by Nielsen (Glass Struct Eng, 2016. doi: 10.1007/s40940-016-0036-z) and is applied here to the specific geometry and initial state of the investigated fragments. This is done by measuring... 10.
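The buildup-factor work above relies on the five-parameter geometric-progression (G-P) fitting form. A minimal sketch of that formula; the coefficients passed in below are placeholders for illustration, not fitted values from the paper:

```python
import math

def gp_buildup(x, b, c, a, Xk, d):
    """Five-parameter G-P fitting form for a buildup factor at
    penetration depth x (in mean free paths). b, c, a, Xk, d are the
    energy-dependent G-P fitting coefficients."""
    K = c * x**a + d * (math.tanh(x / Xk - 2.0) - math.tanh(-2.0)) / (1.0 - math.tanh(-2.0))
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x          # degenerate case K == 1
    return 1.0 + (b - 1.0) * (K**x - 1.0) / (K - 1.0)

# Placeholder coefficients for a single photon energy
print(gp_buildup(10.0, b=2.1, c=1.3, a=-0.08, Xk=14.0, d=0.05))
```

Note the built-in property that at x = 1 mfp the formula returns exactly b, which is why b is tabulated as the buildup factor at one mean free path.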
Low energy dislocation structures due to unidirectional deformation at low temperatures DEFF Research Database (Denmark) Hansen, Niels; Kuhlmann-Wilsdorf, D. 1986-01-01 The line energy of dislocations is {Gb²f(ν)/4π} ln(R/b), with R the range of the dislocation stress field from the axis. This equation implies that quasi-uniform distributions are unstable relative to dislocation clusters in which neighboring dislocations mutually screen their stress fields, correspondingly leaving the major fraction of the volume free of dislocations. The value of R decreases in the following order: pile-ups to dipolar mats, Taylor lattices, tilt and dipolar walls to dislocation cell structures. This is the same order in which dislocation structures tend to develop with increasing dislocation density and hence increased dislocation interactions, leading to the corresponding energy decrease per unit length of dislocation line. Taking into consideration also the longer-range "termination stresses" of finite dislocation boundaries, and minimizing the total energy, explains the size... 11. Theoretical relation between halo current-plasma energy displacement/deformation in EAST Science.gov (United States) Khan, Shahab Ud-Din; Khan, Salah Ud-Din; Song, Yuntao; Dalong, Chen 2018-04-01 In this paper, a theoretical model for calculating the halo current has been developed. This work is novel in that no theoretical calculations of the halo current have been reported so far; this is the first use of a theoretical approach. The research started by calculating points for the plasma energy in terms of poloidal and toroidal magnetic field orientations.
These calculations were then extended to compute the halo current and to develop the theoretical model. Two cases were considered for analyzing the plasma energy as it flows downward or upward to the divertor. Poloidal as well as toroidal movement of the plasma energy was investigated, and mathematical formulations were derived. Two conducting points with respect to (R, Z) were calculated for the halo current calculations and derivations. At first, the halo current was established on the outer plate in the clockwise direction. The maximum halo current was estimated to be about 0.4 times the plasma current. A Matlab program has been developed to calculate the halo current and the plasma energy calculation points. The main objective of the research was to establish a theoretical relation with experimental results so that the plasma behavior in any tokamak can be evaluated in advance. 12. Experimental study of radiative energy transport in dense plasmas by emission and absorption spectroscopy International Nuclear Information System (INIS) Dozieres, Maylis 2016-01-01 This PhD work is an experimental study, based on emission and absorption spectroscopy, of hot and dense nanosecond laser-produced plasmas. Atomic physics in such plasmas is a complex subject and of great interest, especially in the fields of astrophysics and inertial confinement fusion. From the atomic physics point of view, this means determining parameters such as the average ionization or opacity in plasmas at a given electronic temperature and density. Atomic physics codes need experimental data to be improved and validated so that they can be predictive for a wide range of plasmas. With this work we focus on plasmas whose electronic temperature varies from 10 eV to more than a hundred eV and whose density ranges from 10⁻⁵ to 10⁻² g/cm³.
In this thesis, two types of spectroscopic data are presented, both useful and necessary to the development of atomic physics codes because both are characteristic of the state of the studied plasma: 1) absorption spectra from Cu, Ni and Al plasmas close to local thermodynamic equilibrium; 2) emission spectra from non-local-thermodynamic-equilibrium plasmas of C, Al and Cu. This work highlights the different experimental techniques and various comparisons with atomic physics codes and hydrodynamics codes. (author) [fr 13. Precise determination of total absorption coefficients for low-energy gamma-quanta with Moessbauer effect International Nuclear Information System (INIS) Bonchev, T.; Statev, S.; Nejkov, Kh. 1980-01-01 A new method of determining the total absorption coefficient applying the Moessbauer effect is proposed. This method enables the accuracy of the measurement to be increased. The coefficient is measured with practically no background by using the recoilless part of the gamma radiation obtained from the Moessbauer source, with and without the sample between the source of the gamma-quanta and the detector. Moessbauer sources and absorbers with a single line and without an isomeric shift are used. The recoilless part of the radiation is obtained by the 'two-point' method, as the difference between the numbers of photons corresponding to the stationary source and to the vibrating one with a large mean square velocity, respectively. In the concrete measurements the sources ⁵⁷Co and ¹¹⁹Sn are used. The total absorption coefficient for different samples, ranging from water to lead, is determined. The mean square error of the mean result in all measurements is less than the mean statistical error of the coefficient. The obtained experimental data show a much smaller deviation from the theoretical data of the latest issue of the Storm-Israel tables than the one expected by their authors 14.
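The total absorption coefficient in the Moessbauer method above follows from the Beer–Lambert law applied to the recoilless counts with and without the sample in the beam. A schematic calculation (all count rates, thickness and density below are invented for illustration):

```python
import math

# Hypothetical recoilless count rates (background-free, per the method above)
I0 = 1.25e5   # counts without the sample
I  = 3.70e4   # counts with the sample in the beam
x  = 0.52     # sample thickness, cm
rho = 2.70    # sample density, g/cm^3 (e.g., aluminium)

# Beer-Lambert: I = I0 * exp(-mu * x)  =>  mu = ln(I0 / I) / x
mu = math.log(I0 / I) / x    # linear total absorption coefficient, 1/cm
mu_rho = mu / rho            # mass absorption coefficient, cm^2/g
print(f"mu = {mu:.3f} 1/cm, mu/rho = {mu_rho:.4f} cm^2/g")
```

Because only the recoilless (resonant) part of the radiation is used, the counts entering this formula are effectively free of non-resonant background, which is what gives the method its precision.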
Aluminium or copper substrate panel for selective absorption of solar energy Science.gov (United States) Roberts, M. L.; Sharpe, M. H.; Krupnick, A. C. (Inventor) 1979-01-01 A method for making panels which selectively absorb solar energy is disclosed. The panels are comprised of an aluminum substrate, a layer of zinc thereon, a layer of nickel over the zinc layer and an outer layer of solar energy absorbing nickel oxide or a copper substrate with a layer of nickel thereon and a layer of solar energy absorbing nickel oxide distal from the copper substrate. 15. Photoactuators for Direct Optical-to-Mechanical Energy Conversion: From Nanocomponent Assembly to Macroscopic Deformation. Science.gov (United States) Hu, Ying; Li, Zhe; Lan, Tian; Chen, Wei 2016-12-01 Photoactuators with integrated optical-to-mechanical energy conversion capacity have attracted growing research interest in the last few decades due to their unique features of remote control and their wide applications ranging from bionic robots, biomedical devices, and switches to motors. For the photoactuator design, the energy conversion route and structure assembly are two important parts, which directly affect the performance of the photoactuators. In particular, the architectural designs at the molecular, nano-, micro-, and macro- level, are found to play a significant role in accumulating molecular-scale strain/stress to macroscale strain/stress. Here, recent progress on photoactuators based on photochemical and photothermal effects is summarized, followed by a discussion of the important assembly strategies for the amplification of the photoresponsive components at nanoscale to macroscopic scale motions. The application advancement of current photoactuators is also presented. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. 16. 
Modeling of gamma ray energy-absorption buildup factors for thermoluminescent dosimetric materials using multilayer perceptron neural network DEFF Research Database (Denmark) Kucuk, Nil; Manohara, S.R.; Hanagodimath, S.M. 2013-01-01 In this work, multilayered perceptron neural networks (MLPNNs) were presented for the computation of the gamma-ray energy absorption buildup factors (BA) of seven thermoluminescent dosimetric (TLD) materials [LiF, BeO, Na2B4O7, CaSO4, Li2B4O7, KMgF3, Ca3(PO4)2] in the energy region 0.015–15 MeV, and for penetration depths up to 10 mfp (mean free path). The MLPNNs have been trained by a Levenberg–Marquardt learning algorithm. The developed model is in 99% agreement with the ANSI/ANS-6.4.3 standard data set. Furthermore, the model is fast and does not require tremendous computational efforts. The estimated BA... 17. Energy and parametric analysis of solar absorption cooling systems in various Moroccan climates Directory of Open Access Journals (Sweden) Y. Agrouaz 2017-03-01 Full Text Available The aim of this work is to investigate the energy performance of a solar cooling system using absorption technology under the Moroccan climate. The solar fraction and the coefficient of performance of the solar cooling system were evaluated for various climatic conditions. It is found that the system operating in Errachidia shows the best average annual solar fraction (of 30%) and COP (of 0.33) owing to the high solar capabilities of this region. Solar fraction values in other regions varied between 19% and 23%. Moreover, the coefficient of performance values in the same regions show a significant variation from 0.12 to 0.33 over the year. A detailed parametric study was also carried out to show the effect of the operating and design parameters on the solar air conditioner performance. 18. Universality, maximum radiation, and absorption in high-energy collisions of black holes with spin.
Science.gov (United States) Sperhake, Ulrich; Berti, Emanuele; Cardoso, Vitor; Pretorius, Frans 2013-07-26 We explore the impact of black hole spins on the dynamics of high-energy black hole collisions. We report results from numerical simulations with γ factors up to 2.49 and dimensionless spin parameter χ=+0.85, +0.6, 0, -0.6, -0.85. We find that the scattering threshold becomes independent of spin at large center-of-mass energies, confirming previous conjectures that structure does not matter in ultrarelativistic collisions. It has further been argued that in this limit all of the kinetic energy of the system may be radiated by fine tuning the impact parameter to threshold. On the contrary, we find that only about 60% of the kinetic energy is radiated for γ=2.49. By monitoring apparent horizons before and after scattering events we show that the "missing energy" is absorbed by the individual black holes in the encounter, and moreover the individual black-hole spins change significantly. We support this conclusion with perturbative calculations. An extrapolation of our results to the limit γ→∞ suggests that about half of the center-of-mass energy of the system can be emitted in gravitational radiation, while the rest must be converted into rest-mass and spin energy. 19. Energy dissipation of Alfven wave packets deformed by irregular magnetic fields in solar-coronal arches Science.gov (United States) Similon, Philippe L.; Sudan, R. N. 1989-01-01 The importance of field line geometry for shear Alfven wave dissipation in coronal arches is demonstrated. An eikonal formulation makes it possible to account for the complicated magnetic geometry typical in coronal loops. An interpretation of Alfven wave resonance is given in terms of gradient steepening, and dissipation efficiencies are studied for two configurations: the well-known slab model with a straight magnetic field, and a new model with stochastic field lines. 
It is shown that a large fraction of the Alfven wave energy flux can be effectively dissipated in the corona. 20. Solar powered absorption cycle heat pump using phase change materials for energy storage Science.gov (United States) Middleton, R. L. 1972-01-01 A solar powered heating and cooling system with possible application to residential homes is described. Operating principles of the system are defined and an illustration of a typical energy storage and exchange system is provided. 1. Rapid Quantification of Energy Absorption and Dissipation Metrics for PPE Padding Materials Science.gov (United States) 2010-01-22 dampers, i.e., Hooke's-law springs and viscous ... absorbing/dissipating materials. Input forces caused by blast pressures, determined from computational fluid dynamics (CFD) analysis and simulation ... simple lumped-parameter elements: spring, k (energy storage); damper, b (energy dissipation) ... 2. Theoretical analysis of piezoelectric energy harvesting from traffic induced deformation of pavements International Nuclear Information System (INIS) Xiang, H J; Wang, J J; Shi, Z F; Zhang, Z W 2013-01-01 The problem of energy harvesting using piezoelectric transducers for pavement system applications is formulated with a focus on moving vehicle excitations. The pavement behavior is described by an infinite Bernoulli–Euler beam subjected to a moving line load and resting on a Winkler foundation. A closed-form dynamic response of the pavement is determined by a Fourier transform and the residue theorem. The voltage and power outputs of the piezoelectric harvester embedded in the pavements are then obtained by the direct piezoelectric effect. A comprehensive parametric study is conducted to show the effect of damping, the Winkler modulus, and the velocity of moving vehicles on the voltage and power output of the piezoelectric harvester.
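For an infinite Bernoulli–Euler beam on a Winkler foundation, as in the pavement model above, the resonance ("critical") velocity of a moving load has the standard closed form v_cr = (4·k·EI/m²)^(1/4). A sketch with hypothetical pavement parameters (not values from the paper):

```python
# Hypothetical pavement parameters (placeholders, not values from the study)
EI = 3.0e7      # bending stiffness of the pavement beam, N*m^2
k  = 8.0e7      # Winkler foundation modulus, N/m^2
m  = 600.0      # mass per unit length of the beam, kg/m

# Critical velocity of a moving load on an infinite Bernoulli-Euler beam
# resting on a Winkler foundation: v_cr = (4*k*EI / m^2)**(1/4)
v_cr = (4.0 * k * EI / m**2) ** 0.25   # m/s
print(f"critical velocity ~ {v_cr:.0f} m/s ({v_cr * 3.6:.0f} km/h)")
```

Typical pavement parameters put v_cr far above normal traffic speeds, which is consistent with the sharp response growth only "close to" the critical velocity noted in the abstract.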
It is found that the output increases sharply when the velocity of the vehicle is close to the so-called critical velocity. (paper) 3. A new approach to measure the elasticity modulus for ceramics using the deformation energy method International Nuclear Information System (INIS) Foschini, Cesar R.; Souza, Edson A.; Borges, Ana F. S.; Pintao, Carlos A. 2016-01-01 This paper presents an alternative method to measure the modulus of elasticity to traction, E, for relatively limited sample sizes. We constructed a measurement system with a Force sensor (FS) and a Rotation movement sensor (RMS) to obtain a relationship between force (F) and bending (ΔL). It was possible by calculating the strain energy and the work of a constant force to establish a relationship between these quantities; the constant of proportionality in this relationship depends on E, I and L. I and L are the moment of inertia of the uniform cross-section in relation to an oriented axis and length, respectively, of the sample for bending. An expression that could achieve the value of E was deduced to study samples of Y-TZP ceramics. The advantages of this system compared to traditional systems are its low cost and practicality in determining E. 5. Comparison of radio frequency energy absorption in ear and eye region of children and adults at 900, 1800 and 2450 MHz International Nuclear Information System (INIS) Keshvari, J; Lang, S 2005-01-01 6. How does the plasmonic enhancement of molecular absorption depend on the energy gap between molecular excitation and plasmon modes: a mixed TDDFT/FDTD investigation. Science.gov (United States) Sun, Jin; Li, Guang; Liang, WanZhen 2015-07-14 A real-time time-dependent density functional theory coupled with the classical electrodynamics finite difference time domain technique is employed to systematically investigate the optical properties of hybrid systems composed of silver nanoparticles (NPs) and organic adsorbates. The results demonstrate that the molecular absorption spectra throughout the whole energy range can be enhanced by the surface plasmon resonance of Ag NPs; however, the absorption enhancement ratio (AER) for each absorption band differs significantly from the others, leading to the quite different spectral profiles of the hybrid complexes in contrast to those of isolated molecules or sole NPs. Detailed investigations reveal that the AER is sensitive to the energy gap between the molecular excitation and plasmon modes. As anticipated, two separate absorption bands, corresponding to the isolated molecules and sole NPs, have been observed at a large energy gap.
When the energy gap approaches zero, the molecular excitation strongly couples with the plasmon mode to form the hybrid exciton band, which possesses the significantly enhanced absorption intensity, a red-shifted peak position, a surprising strongly asymmetric shape of the absorption band, and the nonlinear Fano effect. Furthermore, the dependence of surface localized fields and the scattering response functions (SRFs) on the geometrical parameters of NPs, the NP-molecule separation distance, and the external-field polarizations has also been depicted. 7. Comparison of radio frequency energy absorption in ear and eye region of children and adults at 900, 1800 and 2450 MHz Energy Technology Data Exchange (ETDEWEB) Keshvari, J [Radio Technologies Laboratory, Nokia Research Centre, Itaemerenkatu 11-13, 00180 Helsinki FIN-00180 (Finland); Lang, S [Technology Platforms, Nokia Corporation, PO Box 301, FIN-00045 Nokia Group, Linnoitustie 6, 02600 ESPOO (Finland) 2005-09-21 8. Dissipation and accumulation of energy during plastic deformation of Armco -iron and 12Cr18Ni10Ti stainless steel irradiated by neutrons International Nuclear Information System (INIS) Toktogulova, D.; Maksimkin, O.; Gusev, M.; Garner, F. 2007-01-01 9. Molecular dynamics simulation of a nanofluidic energy absorption system: effects of the chiral vector of carbon nanotubes. Science.gov (United States) Ganjiani, Sayed Hossein; Hossein Nezhad, Alireza 2018-02-14 A Nanofluidic Energy Absorption System (NEAS) is a novel nanofluidic system with a small volume and weight. In this system, the input mechanical energy is converted to surface tension energy during liquid infiltration in the nanotube. The NEAS is made of a mixture of nanoporous material particles in a functional liquid. In this work, the effects of the chiral vector of a carbon nanotube (CNT) on the performance characteristics of the NEAS are investigated by using molecular dynamics simulation. 
For this purpose, six CNTs with different diameters for each type of armchair, zigzag and chiral, and several chiral CNTs with different chiral vectors (different values of indices (m,n)) are selected and studied. The results show that in the chiral CNTs, the contact angle shows the hydrophobicity of the CNT, and infiltration pressure is reduced by increasing the values of m and n (increasing the CNT diameter). Contact angle and infiltration pressure are decreased by almost 1.4% and 9% at all diameters, as the type of CNT is changed from chiral to zigzag and then to armchair. Absorbed energy density and efficiency are also decreased by increasing m and n and by changing the type of CNT from chiral to zigzag and then to armchair. 10. Application of Foldcore Sandwich Structures in Helicopter Subfloor Energy Absorption Structure Science.gov (United States) Zhou, H. Z.; Wang, Z. J. 2017-10-01 The intersection element is an important part of the helicopter subfloor structure. The numerical simulation model of the intersection element is established and the crush simulation is conducted. The simulation results agree well with the experiment results. In order to improve the buffering capacity and energy-absorbing capacity, the intersection element is redesigned. The skin and the floor in the intersection element are replaced with foldcore sandwich structures. The new intersection element is studied using the same simulation method as the typical intersection element. The analysis result shows that foldcore can improve the buffering capacity and the energy-absorbing capacity, and reduce the structure mass. 11. Wave energy absorption by a submerged air bag connected to a rigid float DEFF Research Database (Denmark) Kurniawan, Adi; Chaplin, J. R.; Hann, M. R. 2017-01-01 A new wave energy device features a submerged ballasted air bag connected at the top to a rigid float. 
Under wave action, the bag expands and contracts, creating a reciprocating air flow through a turbine between the bag and another volume housed within the float. Laboratory measurements... 12. Deformed special relativity with an energy barrier of a minimum speed International Nuclear Information System (INIS) Nassif, Claudio 2011-01-01 Full text: This research aims to introduce a new principle of symmetry in the flat space-time by means of the elimination of the classical idea of rest, and by including a universal minimum limit of speed in the quantum world. Such a limit, unattainable by the particles, represents a preferred inertial reference frame associated with a universal background field that breaks Lorentz symmetry. So there emerges a new relativistic dynamics where a minimum speed forms a lower energy barrier. One of the interesting implications of the existence of such a minimum speed is that it prevents the absolute zero temperature for an ultracold gas, according to the third law of thermodynamics. So we will be able to provide a fundamental dynamical explanation for the third law by means of a connection between such a phenomenological law and the new relativistic dynamics with a minimum speed. In other words, our investigation concerns the problem of the absolute zero temperature in the thermodynamics of an ideal gas. We have made a connection between the 3rd law of thermodynamics and the new dynamics with a minimum speed by means of a relation between the absolute zero temperature (T = 0 deg K) and a minimum average speed (V) for a gas with N particles (molecules or atoms).
Since T = 0 deg K is thermodynamically unattainable, we have shown this is due to the impossibility of reaching V from the new dynamics standpoint. (author) 13. Molecular design of photovoltaic materials for polymer solar cells: toward suitable electronic energy levels and broad absorption. Science.gov (United States) Li, Yongfang 2012-05-15 Bulk heterojunction (BHJ) polymer solar cells (PSCs) sandwich a blend layer of conjugated polymer donor and fullerene derivative acceptor between a transparent ITO positive electrode and a low work function metal negative electrode. In comparison with traditional inorganic semiconductor solar cells, PSCs offer a simpler device structure, easier fabrication, lower cost, and lighter weight, and these structures can be fabricated into flexible devices. But currently the power conversion efficiency (PCE) of the PSCs is not sufficient for future commercialization. The polymer donors and fullerene derivative acceptors are the key photovoltaic materials that will need to be optimized for high-performance PSCs. In this Account, I discuss the basic requirements and scientific issues in the molecular design of high efficiency photovoltaic molecules. I also summarize recent progress in electronic energy level engineering and absorption spectral broadening of the donor and acceptor photovoltaic materials by my research group and others. For high-efficiency conjugated polymer donors, key requirements are a narrower energy bandgap (E(g)) and broad absorption, relatively lower-lying HOMO (the highest occupied molecular orbital) level, and higher hole mobility. There are three strategies to meet these requirements: D-A copolymerization for narrower E(g) and lower-lying HOMO, substitution with electron-withdrawing groups for lower-lying HOMO, and two-dimensional conjugation for broad absorption and higher hole mobility. 
Moreover, better main chain planarity and less side chain steric hindrance could strengthen π-π stacking and increase hole mobility. Furthermore, the molecular weight of the polymers also influences their photovoltaic performance. To produce high efficiency photovoltaic polymers, researchers should attempt to increase molecular weight while maintaining solubility. High-efficiency D-A copolymers have been obtained by using benzodithiophene (BDT), dithienosilole 14. Benefit of energy absorption by the truck in a frontal car-to-truck collision NARCIS (Netherlands) 2000-01-01 EEVC Working Group 14 is investigating the effect of fixing energy absorbing front underrun protection systems (eaFUPS) to trucks instead of rigid devices in order to reduce the injury severity to car occupants in car-to-truck frontal collisions. Three car-to-truck crash tests with cars from 15. Surface deformation effects on stainless steel, Ni, Cu and Mo produced by medium energy He ions irradiation International Nuclear Information System (INIS) Constantinescu, B.; Florescu, V.; Sarbu, C. 1993-01-01 To investigate dose and energy dependence of surface deformation effects (blistering and flaking), different kinds of candidate CTR first wall materials as 12KH18N10T, W-4541, W-4016 and SS-304 stainless steels, Ni, Cu, Mo were irradiated at room temperature with 3.0, 4.7 and 6.8 MeV He + ions at IAP Cyclotron. The effects were investigated by means of a TEMSCAN 200 CX electron microscope and two metallographic Orthoplan Pol Leitz and Olympus microscopes. We observed two dose dependent main phenomena: blistering and flaking (craters). So, blisters occurrence on the irradiated surface is almost instantaneous when a critical dose (number of He ions accumulated in the region at the end of alpha particles range) is reached. Increasing irradiation dose, we reached flaking stage. 
So, isolated submicronic fissures along grain boundaries were observed on the blister skin, chronologically followed by large (5-20 μm) deep cracks of hundreds of microns in length, blister opening and, finally, the appearance of flaking. (author) 8 figs., 1 tab 16. Method and apparatus for simulating atmospheric absorption of solar energy due to water vapor and CO.sub.2 Science.gov (United States) Sopori, Bhushan L. 1995-01-01 A method and apparatus for improving the accuracy of the simulation of sunlight reaching the earth's surface includes a relatively small heated chamber having an optical inlet and an optical outlet, the chamber having a cavity that can be filled with a heated stream of CO.sub.2 and water vapor. A simulated beam comprising infrared and near infrared light can be directed through the chamber cavity containing the CO.sub.2 and water vapor, whereby the spectral characteristics of the beam are altered so that the output beam from the chamber contains wavelength bands that accurately replicate atmospheric absorption of solar energy due to atmospheric CO.sub.2 and moisture. 17. Method and apparatus for simulating atmospheric absorption of solar energy due to water vapor and CO{sub 2} Science.gov (United States) Sopori, B.L. 1995-06-20 A method and apparatus for improving the accuracy of the simulation of sunlight reaching the earth's surface includes a relatively small heated chamber having an optical inlet and an optical outlet, the chamber having a cavity that can be filled with a heated stream of CO{sub 2} and water vapor. A simulated beam comprising infrared and near infrared light can be directed through the chamber cavity containing the CO{sub 2} and water vapor, whereby the spectral characteristics of the beam are altered so that the output beam from the chamber contains wavelength bands that accurately replicate atmospheric absorption of solar energy due to atmospheric CO{sub 2} and moisture. 8 figs. 18.
A Method for Ship Collision Damage and Energy Absorption Analysis and its Validation DEFF Research Database (Denmark) Zhang, Shengming; Pedersen, Preben Terndrup 2017-01-01 For design evaluation, there is a need for a method which is fast, practical and yet accurate enough to determine the absorbed energy and collision damage extent in ship collision analysis. The most well-known simplified empirical approach to collision analysis was made probably by Minorsky, and its limitation is also well-recognised. The authors have previously developed simple expressions for the relation between the absorbed energy and the damaged material volume which take into account the structural arrangements, the material properties and the damage modes. The purpose of the present paper is to re-examine this method's validity and accuracy for ship collision damage analysis in ship design assessments by comprehensive validations with experimental results from the public domain. In total, 20 experimental tests have been selected, analysed and compared with the results calculated using... 19. Balancing Power Absorption and Fatigue Loads in Irregular Waves for an Oscillating Surge Wave Energy Converter: Preprint Energy Technology Data Exchange (ETDEWEB) Tom, Nathan M.; Yu, Yi-Hsiang; Wright, Alan D.; Lawson, Michael 2016-06-01 The aim of this paper is to describe how to control the power-to-load ratio of a novel wave energy converter (WEC) in irregular waves. The novel WEC that is being developed at the National Renewable Energy Laboratory combines an oscillating surge wave energy converter (OSWEC) with control surfaces as part of the structure; however, this work only considers one fixed geometric configuration. This work extends the optimal control problem so as to not solely maximize the time-averaged power, but to also consider the power-take-off (PTO) torque and foundation forces that arise because of WEC motion.
The objective function of the controller will include competing terms that force the controller to balance power capture with structural loading. Separate penalty weights were placed on the surge-foundation force and PTO torque magnitude, which allows the controller to be tuned to emphasize either power absorption or load shedding. Results of this study found that, with proper selection of penalty weights, gains in time-averaged power would exceed the gains in structural loading while minimizing the reactive power requirement. 20. Muscle tension increases impact force but decreases energy absorption and pain during visco-elastic impacts to human thighs. Science.gov (United States) Tsui, Felix; Pain, Matthew T G 2018-01-23 Despite uncertainty of its exact role, muscle tension has shown an ability to alter human biomechanical response and may have the ability to reduce impact injury severity. The aim of this study was to examine the effects of muscle tension on human impact response in terms of force and energy absorbed and the subjects' perceptions of pain. Seven male martial artists had a 3.9 kg medicine ball dropped vertically from seven different heights, 1.0-1.6 m in equal increments, onto their right thigh. Subjects were instructed to either relax or tense the quadriceps via knee extension (≥60% MVC) prior to each impact. F-scan pressure insoles sampling at 500 Hz recorded impact force and video was recorded at 1000 Hz to determine energy loss from the medicine ball during impact. Across all impacts force was 11% higher, energy absorption was 15% lower and time to peak force was 11% lower whilst perceived impact intensity was significantly lower when tensed. Whether muscle is tensed or not had a significant and meaningful effect on perceived discomfort. However, it did not relate to impact force between conditions and so tensing may alter localised injury risk during human on human type impacts. Copyright © 2017 Elsevier Ltd. All rights reserved. 1. 
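The penalty-weight formulation described in the oscillating-surge WEC record above can be illustrated with a toy cost function. This is a minimal sketch: the quadratic penalty form and every number below are assumptions for illustration, not the actual NREL controller or its data.

```python
# Toy power-vs-load trade-off objective for a wave energy converter (WEC)
# controller. The quadratic penalty form and all numbers are illustrative
# assumptions, not the actual NREL objective function.

def objective(tap, f_rms, tau_rms, w_force, w_torque):
    """Cost to minimize: negative time-averaged power (TAP) plus
    quadratic penalties on foundation force and PTO torque."""
    return -tap + w_force * f_rms ** 2 + w_torque * tau_rms ** 2

# Two hypothetical operating points: aggressive control captures more
# power but drives larger structural loads.
aggressive = dict(tap=120.0, f_rms=40.0, tau_rms=25.0)    # kW, kN, kN*m
conservative = dict(tap=90.0, f_rms=20.0, tau_rms=12.0)

# With zero penalty weights the optimizer prefers raw power capture;
# with nonzero weights the lightly loaded operating point wins.
print(objective(**aggressive, w_force=0.0, w_torque=0.0),
      objective(**conservative, w_force=0.0, w_torque=0.0))
print(objective(**aggressive, w_force=0.05, w_torque=0.05),
      objective(**conservative, w_force=0.05, w_torque=0.05))
```

Tuning `w_force` and `w_torque` shifts the optimum between power absorption and load shedding, which is the qualitative behaviour both wave-energy records report.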
A comprehensive study of the energy absorption and exposure buildup factors of different bricks for gamma-rays shielding Directory of Open Access Journals (Sweden) M.I. Sayyed Full Text Available The present investigation has been performed on different bricks for the purpose of gamma-ray shielding. The values of the mass attenuation coefficient (µ/ρ), energy absorption buildup factor (EABF) and exposure buildup factor (EBF) were determined and utilized to assess the shielding effectiveness of the bricks under investigation. The mass attenuation coefficients of the selected bricks were calculated theoretically using the WinXcom program and compared with the MCNPX code. Good agreement between WinXcom and MCNPX results was observed. Furthermore, the EABF and EBF have been discussed as functions of the incident photon energy and penetration depth. It has been found that the EABF and EBF values are very large in the intermediate energy region. The steel slag brick showed good shielding properties; consequently, this brick is eco-friendly and feasible compared with the other types of bricks used for construction. The results in this work should be useful in the construction of effective shielding against hazardous gamma-rays. Keywords: Brick, Mass attenuation coefficient, Buildup factor, G-P fitting, Radiation shielding 2.
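The WinXcom-style mass attenuation calculation mentioned in the brick-shielding record above rests on the standard mixture rule, (μ/ρ)_mix = Σᵢ wᵢ (μ/ρ)ᵢ over the constituents' weight fractions. A minimal sketch; the composition and coefficient values are placeholders for illustration, not tabulated data for any real brick.

```python
# Mixture rule behind WinXcom-style mass attenuation calculations:
# (mu/rho)_mix = sum_i w_i * (mu/rho)_i over weight fractions w_i.
# Composition and coefficients below are placeholders, not tabulated data.

def mass_attenuation(weight_fractions, mu_rho):
    """Weight-fraction-weighted sum of constituent mass attenuation
    coefficients (cm^2/g)."""
    total_w = sum(weight_fractions.values())
    assert abs(total_w - 1.0) < 1e-6, "weight fractions must sum to 1"
    return sum(w * mu_rho[el] for el, w in weight_fractions.items())

composition = {"O": 0.50, "Si": 0.30, "Al": 0.10, "Fe": 0.10}          # hypothetical brick
mu_rho_at_E = {"O": 0.0571, "Si": 0.0635, "Al": 0.0614, "Fe": 0.0599}  # placeholder values

print(round(mass_attenuation(composition, mu_rho_at_E), 5))
```

Real calculations would take the elemental coefficients from tabulated photon cross-section data at each energy of interest.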
Energy dependence of effective atomic numbers for photon energy absorption and photon interaction: Studies of some biological molecules in the energy range 1 keV-20 MeV DEFF Research Database (Denmark) Manohara, S.R.; Hanagodimath, S.M.; Gerward, Leif 2008-01-01 Effective atomic numbers for photon energy absorption, Z(PEA,eff), and for photon interaction, Z(PI,eff), have been calculated by a direct method in the photon-energy region from 1 keV to 20 MeV for biological molecules, such as fatty acids (lauric, myristic, palmitic, stearic, oleic, linoleic, linolenic, arachidonic, and arachidic acids), nucleotide bases (adenine, guanine, cytosine, uracil, and thymine), and carbohydrates (glucose, sucrose, raffinose, and starch). The Z(PEA,eff) and Z(PI,eff) values have been found to change with energy and composition of the biological molecules. The energy... 3. High energy, widely tunable Si-prism-array coupled terahertz-wave parametric oscillator with a deformed pump and optimal crystal location for angle tuning. Science.gov (United States) Zhang, Ruiliang; Qu, Yanchen; Zhao, Weijiang; Chen, Zhenlei 2017-03-20 A high energy, widely tunable Si-prism-array coupled terahertz-wave parametric oscillator (TPO) has been demonstrated by using a deformed pump. The deformed pump is cut from a beam spot of 2 mm in diameter by a 1-mm-wide slit. In comparison with a small pump spot (1-mm diameter), the THz-wave coupling area for the deformed pump is increased without limitation to the low-frequency end of the tuning range. Besides, the crystal location is specially designed to eliminate the alteration of the output position of the pump during angle tuning, so the initially adjusted nearest pumped region to the THz-wave exit surface is maintained throughout the tuning range. The tuning range is 0.58-2.5 THz for the deformed pump, while its low frequency end is limited at approximately 1.2 THz for the undeformed pump with 2 mm diameter.
The highest THz-wave output of 2 μJ, which is 2.25 times as large as that from the pump of 1 mm in diameter, is obtained at 1.15 THz under 38 mJ (300 MW/cm2) pumping. The energy conversion efficiency is 5.3×10^-5. 4. Optical absorption of BaF2 crystals with different prehistory when irradiated by high-energy electrons International Nuclear Information System (INIS) Chinkov, E P; Stepanov, S A; Shtan'ko, V F; Ivanova, T S 2016-01-01 The spectra of stable optical absorption of BaF 2 crystals containing uncontrollable impurities after irradiation with 3 MeV electrons are studied at room temperature. The dependence of the efficiency of stable color accumulation in the region of emerging crossluminescence on the absorption coefficients measured near the fundamental absorption edge in unirradiated crystals of various prehistory is traced. (paper) 5. Increase in the energy absorption of pulsed plasma by the formation of tungsten nanostructure Science.gov (United States) Sato, D.; Ohno, N.; Domon, F.; Kajita, S.; Kikuchi, Y.; Sakuma, I. 2017-06-01 The synergistic effects of steady-state and pulsed plasma irradiation to material have been investigated in the device NAGDIS-PG (NAGoya DIvertor Simulator with Plasma Gun). The duration of the pulsed plasma was ~0.25 ms. To investigate the pulsed plasma heat load on the materials, we developed a temperature measurement system using radiation from the sample in a high time resolution. The heat deposited in response to the transient plasma on a tungsten surface was revealed by using this system. When the nanostructures were formed by helium plasma irradiation, the temperature increase on the bulk sample was enhanced. The result suggested that the amount of absorbed energy on the surface was increased by the formation of nanostructures. The possible mechanisms causing the phenomena are discussed with the calculation of a sample temperature in response to the transient heat load. 6.
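As a quick arithmetic check of the terahertz parametric oscillator record above, the quoted conversion efficiency follows directly from the reported energies (2 μJ of THz output for a 38 mJ pump pulse):

```python
# Sanity check of the quoted THz conversion efficiency:
# 2 uJ of THz output per 38 mJ pump pulse should give roughly 5.3e-5.
E_thz = 2e-6     # THz-wave output energy, J
E_pump = 38e-3   # pump pulse energy, J
efficiency = E_thz / E_pump
print(f"{efficiency:.1e}")
```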
Wave energy absorption by a submerged air bag connected to a rigid float. Science.gov (United States) Kurniawan, A; Chaplin, J R; Hann, M R; Greaves, D M; Farley, F J M 2017-04-01 A new wave energy device features a submerged ballasted air bag connected at the top to a rigid float. Under wave action, the bag expands and contracts, creating a reciprocating air flow through a turbine between the bag and another volume housed within the float. Laboratory measurements are generally in good agreement with numerical predictions. Both show that the trajectory of possible combinations of pressure and elevation at which the device is in static equilibrium takes the shape of an S. This means that statically the device can have three different draughts, and correspondingly three different bag shapes, for the same pressure. The behaviour in waves depends on where the mean pressure-elevation condition is on the static trajectory. The captured power is highest for a mean condition on the middle section. 7. A Method for Ship Collision Damage and Energy Absorption Analysis and its Validation DEFF Research Database (Denmark) Zhang, Shengming; Pedersen, Preben Terndrup 2016-01-01 ...-examine this method's validity and accuracy for ship collision damage analysis in ship design assessments by comprehensive validations with the experimental results from the public domain. Twenty experimental tests have been selected, analysed and compared with the results calculated using the proposed method. It can... For design evaluation there is a need for a method which is fast, practical and yet accurate enough to determine the absorbed energy and collision damage extent in ship collision analysis. The most well-known simplified empirical approach to collision analysis was made probably by Minorsky and its...
limitation is also well-recognized. The authors have previously developed simple expressions for the relation between the absorbed energy and the damaged material volume which take into account the structural arrangements, the material properties and the damage modes. The purpose of the present paper is to re... 8. Balancing Power Absorption and Structural Loading for an Asymmetric Heave Wave-Energy Converter in Regular Waves: Preprint Energy Technology Data Exchange (ETDEWEB) 2016-07-01 The aim of this paper is to maximize the power-to-load ratio of the Berkeley Wedge: a one-degree-of-freedom, asymmetrical, energy-capturing, floating breakwater of high performance that is relatively free of viscosity effects. Linear hydrodynamic theory was used to calculate bounds on the expected time-averaged power (TAP) and corresponding surge restraining force, pitch restraining torque, and power take-off (PTO) control force when assuming that the heave motion of the wave energy converter remains sinusoidal. This particular device was documented to be an almost-perfect absorber if one-degree-of-freedom motion is maintained. The success of such or similar future wave energy converter technologies would require the development of control strategies that can adapt device performance to maximize energy generation in operational conditions while mitigating hydrodynamic loads in extreme waves to reduce the structural mass and overall cost. This paper formulates the optimal control problem to incorporate metrics that provide a measure of the surge restraining force, pitch restraining torque, and PTO control force. The optimizer must now handle an objective function with competing terms in an attempt to maximize power capture while minimizing structural and actuator loads. A penalty weight is placed on the surge restraining force, pitch restraining torque, and PTO actuation force, thereby allowing the control focus to be placed either on power absorption or load mitigation.
Thus, in achieving these goals, a per-unit gain in TAP would not lead to a greater per-unit demand in structural strength, hence yielding a favorable benefit-to-cost ratio. Demonstrative results in the form of TAP, reactive TAP, and the amplitudes of the surge restraining force, pitch restraining torque, and PTO control force are shown for the Berkeley Wedge example. 9. Förster resonance energy transfer, absorption and emission spectra in multichromophoric systems. II. Hybrid cumulant expansion. Science.gov (United States) Ma, Jian; Moix, Jeremy; Cao, Jianshu 2015-03-07 We develop a hybrid cumulant expansion method to account for the system-bath entanglement in the emission spectrum in the multi-chromophoric Förster transfer rate. In traditional perturbative treatments, the emission spectrum is usually expanded with respect to the system-bath coupling term in both real and imaginary time. This perturbative treatment gives a reliable absorption spectrum, where the bath is Gaussian and only the real-time expansion is involved. For the emission spectrum, the initial state is an entangled state of the system plus bath. Traditional perturbative methods are problematic when the excitations are delocalized and the energy gap is larger than the thermal energy, since the second-order expansion cannot predict the displacement of the bath. In the present method, the real-time dynamics is carried out by using the 2nd-order cumulant expansion method, while the displacement of the bath is treated more accurately by utilizing the exact reduced density matrix of the system. In a sense, the hybrid cumulant expansion is based on a generalized version of linear response theory with entangled initial states. 10. 
Ionic Liquid (1-Butyl-3-Methylimidazolium Methane Sulphonate) Corrosion and Energy Analysis for High Pressure CO2 Absorption Process Directory of Open Access Journals (Sweden) 2018-05-01 Full Text Available This study explores the possible use of ionic liquids as a solvent in a commercial high-pressure CO2 removal process, to gain environmental and energy benefits. There are two main constraints in realizing this: ionic liquids can be corrosive, specifically when mixed with a water/amine solution with dissolved O2 and CO2; and CO2 absorption within this process is not very well understood. Therefore, scavenging CO2 to ppm levels from process gas comes with several risks. We used 1-butyl-3-methylimidazolium methane sulphonate [bmim][MS] as an ionic liquid because of its high corrosiveness (due to its acidic nature) to estimate the ranges of expected corrosion in the process. The Tafel technique was used to determine these rates. Further, the process was simulated based on the conventional absorption–desorption process using ASPEN HYSYS v 8.6. After preliminary model validation with the amine solution, [bmim][MS] was modeled based on the properties found in the literature. The energy comparison was then provided and the optimum ratio of the ionic liquid/amine solution was calculated. 11. Electron energy-loss spectroscopy characterization and microwave absorption of iron-filled carbon-nitrogen nanotubes International Nuclear Information System (INIS) Che Renchao; Liang Chongyun; Shi Honglong; Zhou Xingui; Yang Xinan 2007-01-01 Iron-filled carbon-nitrogen (Fe/CN x ) nanotubes and iron-filled carbon (Fe/C) nanotubes were synthesized at 900 deg. C through a pyrolysis reaction of ferrocene/acetonitrile and ferrocene/xylene, respectively. The differences of structure and composition between the Fe/CN x nanotubes and Fe/C nanotubes were investigated by transmission electron microscopy and electron energy-loss spectroscopy (EELS).
It was found that the morphology of Fe/CN x nanotubes is more corrugated than that of the Fe/C nanotubes due to the incorporation of nitrogen. By comparing the Fe L 2,3 electron energy-loss spectra of Fe/CN x nanotubes to those of the Fe/C nanotubes, the electron states at the interface between Fe and the tubular wall of both Fe/CN x nanotubes and Fe/C nanotubes were investigated. At the boundary between Fe and the wall of a CN x nanotube, the additional electrons contributed from the doped 'pyridinic-like' nitrogen might transfer to the empty 3d orbital of the encapsulated iron, therefore leading to an intensity suppression of the iron L 2,3 edge and an intensity enhancement of the carbon K edge. However, such an effect could not be found in Fe/C nanotubes. Microwave absorption properties of both Fe/CN x and Fe/C nanocomposites at 2-18 GHz band were studied 12. Energy transmission transformer for a wireless capsule endoscope: analysis of specific absorption rate and current density in biological tissue. Science.gov (United States) Shiba, Kenji; Nagato, Tomohiro; Tsuji, Toshio; Koshiji, Kohji 2008-07-01 This paper reports on the electromagnetic influences on the analysis of biological tissue surrounding a prototype energy transmission system for a wireless capsule endoscope. Specific absorption rate (SAR) and current density were analyzed by electromagnetic simulator in a model consisting of primary coil and a human trunk including the skin, fat, muscle, small intestine, backbone, and blood. First, electric and magnetic strength in the same conditions as the analytical model were measured and compared to the analytical values to confirm the validity of the analysis. Then, SAR and current density as a function of frequency and output power were analyzed. The validity of the analysis was confirmed by comparing the analytical values with the measured ones. 
The SAR was below the basic restrictions of the International Commission on Nonionizing Radiation Protection (ICNIRP). At the same time, the results for current density show that the influence on biological tissue was lowest in the 300-400 kHz range, indicating that it was possible to transmit energy safely up to 160 mW. In addition, we confirmed that the current density has decreased by reducing the primary coil's current. 13. A novel application of reactive absorption to break the CO2–ethane azeotrope with low energy requirement International Nuclear Information System (INIS) 2013-01-01 Highlights: • Investigation of RA for the CO 2 –ethane azeotropic process using Hysys simulator. • Optimization of operating parameters to minimize energy demand in the proposed RA process. • Superior performance of the RA process compared to the conventional process. • Enhance in NGL production from 795 to 1120 mole/s compared to the conventional process. - Abstract: Azeotropic separation of ethane and CO 2 using reactive absorption (RA) is studied by Hysys process software. A new configuration of a RA process is proposed using diethanolamine (DEA) to break the azeotrope. Impacts of amine flow rate, amine inlet temperature and feed–inlet location are investigated to achieve an optimum condition of the process in terms of energy demand. The simulation results show that optimum values of amine flow rate, amine temperature and feed–inlet location are 1900 mole/s, 30 °C and 20th stage, respectively. It is found that the process including RA leads to a significant reduction in operating costs, compared to the conventional extractive process 14. 
Laser absorption and energy transfer in foams of various pore structures and chemical compositions International Nuclear Information System (INIS) Limpouch, J.; Kuba, J.; Borisenko, N.G.; Demchenko, N.N.; Gus'kov, S.Y.; Khalenkov, A.M.; Merkul'ev, Y.A.; Rozanov, V.B.; Kasperczuk, A.; Pisarczyk, T.; Kondrashov, V.N.; Limpouch, J.; Krousky, E.; Masek, K.; Pfeifer, M.; Renner, O.; Nazarov, W.; Pisarczyk, P. 2006-01-01 Interaction of sub-nanosecond intense laser pulses with foams containing fine and large pores has been studied experimentally. The foams included: fine-structured TMPTA (trimethylol propane tri-acrylate) foams, fine-structured TAC (cellulose tri-acetate) foams and rougher agar-agar foams. In all cases, an aluminum foil was placed at the rear side of the foam targets. Laser penetration and energy transport in the foam material are measured via streaked side-on X-ray slit images. Shock wave transition through the foam is detected via streaked optical self-emission from foil attached on the foam rear side. The shock transition time increases with the pore size, foam density, and also with the contents of high Z additions in plastic foams. Foil acceleration is observed via 3-frame interferometry. In the case of TAC foam with a density of 9.1 mg/cm 3 and small pores (D p = 1-3 μm), minor pre-heating of the foil at the target rear is observed at about 0.25 ns after emission from the front side, and at the same time a small signal appears on the optical streak. The laser is absorbed in the surface layer and then a thermal wave propagates into the foam with an average speed of 3.4×10^7 cm/s. This wave reaches the foil rear side 1.1 ns after the X-ray emission onset, earlier than the main optical emission, which appears at 2.1 ns. Comparison of experimental results with numerical simulations and an analytical model is underway 15.
Absorption of solar energy heats up our planet's surface and the atmosphere and makes life for us po Science.gov (United States) 2002-01-01 Credit: Image courtesy Barbara Summey, NASA Goddard Visualization Analysis Lab, based upon data processed by Takmeng Wong, CERES Science Team, NASA Langley Research Center Satellite: Terra Sensor: CERES Image Date: 09-30-2001 VE Record ID: 11546 Description: Absorption of solar energy heats up our planet's surface and the atmosphere and makes life for us possible. But the energy cannot stay bound up in the Earth's environment forever. If it did then the Earth would be as hot as the Sun. Instead, as the surface and the atmosphere warm, they emit thermal longwave radiation, some of which escapes into space and allows the Earth to cool. This false-color image of the Earth was produced on September 30, 2001, by the Clouds and the Earth's Radiant Energy System (CERES) instrument flying aboard NASA's Terra spacecraft. The image shows where more or less heat, in the form of longwave radiation, is emanating from the top of Earth's atmosphere. As one can see in the image, the thermal radiation leaving the oceans is fairly uniform. The blue swaths across the central Pacific represent thick clouds, the tops of which are so high they are among the coldest places on Earth. In the American Southwest, which can be seen in the upper righthand corner of the globe, there is often little cloud cover to block outgoing radiation and relatively little water to absorb solar energy. Consequently, the amount of outgoing radiation in the American Southwest exceeds that of the oceans. Also, that region was experiencing an extreme heatwave when these data were acquired. Recently, NASA researchers discovered that incoming solar radiation and outgoing thermal radiation increased in the tropics from the 1980s to the 1990s.
They believe that the reason for the unexpected increase has to do with an apparent change in circulation patterns around the globe, which effectively reduced the amount of water vapor and cloud cover in the upper reaches of the atmosphere 16. Study of a Steel’s Energy Absorption System for Heavy Quadricycles and Nonlinear Explicit Dynamic Analysis of its Behavior under Impact by FEM Science.gov (United States) López Campos, José Ángel; Segade Robleda, Abraham; Vilán Vilán, José Antonio; García Nieto, Paulino José; Blanco Cordero, Javier 2015-01-01 Current knowledge of the behavior of heavy quadricycles under impact is still very poor. One of the most significant causes is the lack of energy absorption in the vehicle frame or its steel chassis structure. For this reason, special steels (with yield stresses equal to or greater than 350 MPa) are commonly used in the automotive industry due to their great strain hardening properties along the plastic zone, which allows good energy absorption under impact. This paper presents a proposal for a steel quadricycle energy absorption system which meets the percentages of energy absorption for conventional vehicles systems. This proposal is validated by explicit dynamics simulation, which will define the whole problem mathematically and verify behavior under impact at speeds of 40 km/h and 56 km/h using the finite element method (FEM). One of the main consequences of this study is that this FEM–based methodology can tackle high nonlinear problems like this one with success, avoiding the need to carry out experimental tests, with consequent economical savings since experimental tests are very expensive. Finally, the conclusions from this innovative research work are given. PMID:28793607 17. 
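The impact speeds in the quadricycle study above fix the kinetic energy that the energy absorption system must manage, via E = ½mv². A quick sketch; the 450 kg vehicle mass is an assumed figure (the EU heavy-quadricycle mass limit), not a value taken from the paper.

```python
# Impact kinetic energy at the two crash-test speeds of the quadricycle
# study, E = 1/2 * m * v^2. The 450 kg mass is an assumption for
# illustration (EU heavy-quadricycle limit), not a value from the paper.
mass_kg = 450.0
for v_kmh in (40.0, 56.0):
    v_ms = v_kmh / 3.6                       # km/h -> m/s
    energy_kj = 0.5 * mass_kg * v_ms ** 2 / 1000.0
    print(f"{v_kmh:g} km/h -> {energy_kj:.1f} kJ")
```

The roughly twofold jump in energy between the two speeds is what makes the higher-speed case the more demanding design condition for the absorption system.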
Study of a Steel’s Energy Absorption System for Heavy Quadricycles and Nonlinear Explicit Dynamic Analysis of its Behavior under Impact by FEM Directory of Open Access Journals (Sweden) José Ángel López Campos 2015-10-01 Full Text Available Current knowledge of the behavior of heavy quadricycles under impact is still very poor. One of the most significant causes is the lack of energy absorption in the vehicle frame or its steel chassis structure. For this reason, special steels (with yield stresses equal to or greater than 350 MPa) are commonly used in the automotive industry due to their great strain hardening properties along the plastic zone, which allows good energy absorption under impact. This paper presents a proposal for a steel quadricycle energy absorption system which meets the percentages of energy absorption for conventional vehicles systems. This proposal is validated by explicit dynamics simulation, which will define the whole problem mathematically and verify behavior under impact at speeds of 40 km/h and 56 km/h using the finite element method (FEM). One of the main consequences of this study is that this FEM–based methodology can tackle high nonlinear problems like this one with success, avoiding the need to carry out experimental tests, with consequent economical savings since experimental tests are very expensive. Finally, the conclusions from this innovative research work are given. 18. Dynamic mechanical analysis and high strain-rate energy absorption characteristics of vertically aligned carbon nanotube reinforced woven fiber-glass composites Science.gov (United States) The dynamic mechanical behavior and energy absorption characteristics of nano-enhanced functionally graded composites, consisting of 3 layers of vertically aligned carbon nanotube (VACNT) forests grown on woven fiber-glass (FG) layer and embedded within 10 layers of woven FG, with polyester (PE) and... 19.
Determination of ash content of coal by mass absorption coefficient measurements at two X-ray energies International Nuclear Information System (INIS) Fookes, R.A.; Gravitis, V.L.; Watt, J.S. 1977-01-01 A method for determining the ash content of coal is proposed. It involves measurements proportional to mass absorption coefficients of coal at two X-ray energies. These measurements can be made using X-ray transmission or scatter techniques. Calculations based on transmission of narrow beams of X-rays have shown that ash can be determined to about 1 wt% (1 sigma) in coal of widely varying ash content and composition. Experimentally, ash content was determined to 0.67 wt% by transmission techniques and 1.0 wt% by backscatter techniques in coal samples from the Bulli seam, NSW, Australia, having ash in the range 11-34 wt%. For samples with a much wider range of coal composition (7-53 wt% ash and 0-25 wt% iron in the ash), ash content was determined by backscatter measurements to 1.62 wt%. The method produced ash determinations at least as accurate as those produced by the established technique which compensates for variation in iron content of the ash by X-ray fluorescence analysis for iron. Compared with the established technique, it has the advantage of averaging analysis over much larger volumes of coal, but the disadvantage that much more precise measurements of X-ray intensities are required. (author) 20. Plastic deformation NARCIS (Netherlands) Sitter, de L.U. 1937-01-01 § 1. Plastic deformation of solid matter under high confining pressures has been insufficiently studied. Jeffreys 1) devotes a few paragraphs to deformation of solid matter as a preface to his chapter on the isostasy problem. He distinguishes two properties of solid matter with regard to its 1.
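The two-energy coal-ash method above can be pictured as a two-component mixture model: the coal's mass absorption coefficient is a weight-fraction blend of "ash" and "organic" contributions, and each energy can be inverted for the ash fraction, with agreement between the two energies serving as a consistency check against composition (e.g. iron) variation. A sketch under that assumed model; all coefficient values are illustrative placeholders, not measured data.

```python
# Two-component model behind dual-energy ash gauging: at each X-ray energy E,
#   mu_mix(E) = w_ash * mu_ash(E) + (1 - w_ash) * mu_org(E),
# so the ash weight fraction can be recovered by inverting the linear mix.
# All coefficient values are illustrative placeholders, not measured data.

def ash_fraction(mu_mix, mu_ash, mu_org):
    """Invert the linear two-component mixture for the ash weight fraction."""
    return (mu_mix - mu_org) / (mu_ash - mu_org)

mu_ash = {"low_E": 1.20, "high_E": 0.25}   # cm^2/g, hypothetical
mu_org = {"low_E": 0.40, "high_E": 0.17}

w_true = 0.20                              # simulated 20 wt% ash sample
for E in ("low_E", "high_E"):
    mu_mix = w_true * mu_ash[E] + (1.0 - w_true) * mu_org[E]
    print(E, round(ash_fraction(mu_mix, mu_ash[E], mu_org[E]), 3))
```

With noiseless synthetic data both energies return the same fraction; in practice, disagreement between the two inversions signals a change in ash composition.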
X-ray absorption spectroscopy and high-energy XRD study of the local environment of copper in antibacterial copper-releasing degradable phosphate glasses OpenAIRE Pickup, David M.; Ahmed, Ifty; Fitzgerald, Victoria; Moss, Rob M.; Wetherall, Karen; Knowles, Jonathan C.; Smith, Mark E.; Newport, Robert J. 2006-01-01 Phosphate-based glasses of the general formula Na2O-CaO-P2O5 are degradable in an aqueous environment, and therefore can act as antibacterial materials through the inclusion of ions such as copper. In this study, CuO and Cu2O were added to Na2O-CaO-P2O5 glasses (1-20 mol% Cu) and X-ray absorption spectroscopy (XAS) and high-energy X-ray diffraction (HEXRD) were used to probe the local environment of the copper ions. Copper K-edge X-ray absorption near-edge structure (XANES) spectra confirm the oxi... 2. High-energy X-ray measurements of structural anisotropy and excess free volume in a homogeneously deformed Zr-based metallic glass International Nuclear Information System (INIS) Ott, R.T.; Kramer, M.J.; Besser, M.F.; Sordelet, D.J. 2006-01-01 We have used high-energy X-ray scattering to measure the structural anisotropy and excess free volume in a homogeneously deformed Zr-based metallic glass alloy. The scattering results show that bond length anisotropy is present in the samples following isothermal tensile creep deformation. The average atomic bond length in the direction parallel to the tensile loading axis is larger than that in the direction normal to the loading axis. The magnitude of the bond length anisotropy is found to be dependent on the gradient of macroscopic plastic strain along the gauge length. Furthermore, the scattering results show that the excess free volume also increases with increasing macroscopic plastic strain. Results from differential scanning calorimetry analysis of free volume variations along the gauge length of the creep samples are consistent with the results from the X-ray scattering experiments 3. 
Stochasticity of the energy absorption in the electron cyclotron resonance; Estocasticidad de la absorcion de energia en la resonancia electron-ciclotronica Energy Technology Data Exchange (ETDEWEB) Gutierrez T, C. [Departamento de Fisica, ININ, A.P. 18-1027, 11801 Mexico D.F. (Mexico); Hernandez A, O 1998-07-01 The mechanism of energy absorption at the electron cyclotron resonance is an open problem, since it may be considered from the stochastic point of view or related to a non-homogeneous but periodic spatial structure of the plasma. In this work, using the Bogoliubov averaging method for a multi-periodic system in the presence of resonances, the drift equations in the presence of an RF field were obtained for the case of electron cyclotron resonance, up to first-order terms with respect to the inverse of the cyclotron frequency. The equation for the energy absorbed by the electrons is obtained in a simple model by the drift method, and the stochastic character of the energy absorption is demonstrated. (Author) 4. Energy structure of fullerenes and carbon nanotubes International Nuclear Information System (INIS) Byszewski, P.; Kowalska, E. 1997-01-01 The absorption spectrum of C60 can be reasonably well reproduced theoretically with the use of quantum chemistry calculation methods. This allows investigation of the influence of a deformation of C60 on the absorption spectrum. The deformation of the electronic density on C60 can occur under the influence of molecules of a good solvent. Similar calculations of the energy structure of carbon nanotubes do not support the idea that their chirality may strongly influence the energy level distribution, in particular that it may open the energy gap of nanotubes. (author). 40 refs, 13 figs, 1 tab 5. Performance analysis of single stage LiBr-water absorption machine operated by waste thermal energy of internal combustion engine: Case study Science.gov (United States) Sharif, Hafiz Zafar; Leman, A. 
M.; Muthuraman, S.; Salleh, Mohd Najib Mohd; Zakaria, Supaat 2017-09-01 Combined heating, cooling, and power is also known as tri-generation. A tri-generation system can provide power, hot water, space heating and air-conditioning from a single source of energy. The objective of this study is to propose a method to evaluate the characteristics and performance of a single stage lithium bromide-water (LiBr-H2O) absorption machine operated with the waste thermal energy of an internal combustion engine, which is an integral part of a tri-generation system. Correlations for computer sensitivity analysis are developed in data-fit software for the (P-T-X), (H-T-X), saturated liquid (water), saturated vapor, saturation pressure and crystallization temperature curves of LiBr-H2O solution. A number of equations were developed with the data-fit software and exported into an Excel worksheet for the evaluation of parameters concerning the performance of the vapor absorption machine, such as the coefficient of performance, concentration of solution, mass flow rate, and size of the heat exchangers of the unit in relation to the generator, condenser, absorber and evaporator temperatures. The size of the vapor absorption machine within its crystallization limits, for cooling and heating by waste energy recovered from the exhaust gas and jacket water of an internal combustion engine, is also presented in this study, to save time and cost for facilities managers who are interested in utilizing the waste thermal energy of their buildings or premises for heating and air-conditioning applications. 6. Absorption heat pumps International Nuclear Information System (INIS) Formigoni, C. 1998-01-01 A brief description of the difference between a compression and an absorption heat pump is made, and the reasons why absorption systems have spread lately are given. Studies and projects recently started in the field of absorption heat pumps, as well as criteria usually followed in project development, are described. 
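The central performance figure evaluated in the absorption-machine study above, the coefficient of performance, is simply the evaporator cooling load divided by the driving heat supplied to the generator. A minimal energy-balance sketch; the function name and the load values are illustrative assumptions, not the study's data-fit correlations:

```python
def absorption_cop(q_evaporator_kw, q_generator_kw, pump_work_kw=0.0):
    """Cooling COP of a single-stage absorption machine: the
    refrigeration effect divided by the driving heat input
    (plus the usually negligible solution-pump work)."""
    return q_evaporator_kw / (q_generator_kw + pump_work_kw)

# Illustrative loads for an engine-waste-heat driven LiBr-H2O chiller:
cop = absorption_cop(q_evaporator_kw=35.0, q_generator_kw=50.0)
print(round(cop, 2))  # -> 0.7
```

Values around 0.7 are typical of single-stage LiBr-H2O machines, which is why recovering otherwise wasted engine heat, rather than burning extra fuel, makes the economics of tri-generation attractive.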
An outline (performance targets, basic components) of a project on a water/air absorption heat pump, running on natural gas or LPG, is given. The project was developed by the Robur Group as an evolution of a water absorption refrigerator operating with a water/ammonia solution, which has been on the market for a long time and recently innovated. Finally, a list of the main energy and cost advantages deriving from the use of absorption heat pumps is made [it 7. Description of the Rigid Triaxial Deformation at Low Energy in 76Ge with the Proton-Neutron Interacting Model IBM2 International Nuclear Information System (INIS) Zhang Da-Li; Ding Bin-Gang 2013-01-01 We investigate properties of the low-lying energy states for 76Ge within the framework of the proton-neutron interacting model IBM2, considering the validity of the Z = 38 subshell closure ⁸⁸Sr₅₀ as a doubly magic core. By introducing the quadrupole interactions among like bosons to the IBM2 Hamiltonian, the energy levels for both the ground-state and γ bands are reproduced well. Particularly, the doublet structure of the γ band and the energy staggering signature fit the experimental data correctly. The ratios of B(E2) transition strengths for some states of the γ band, and the g factors of the 2₁⁺ and 2₂⁺ states, are very close to the experimental data. The calculation result indicates that the nucleus exhibiting rigid triaxial deformation in the low-lying states can be described rather well by the IBM2 8. The influence of stacking fault energy on the mechanical behavior of Cu and Cu-Al alloys: Deformation twinning, work hardening, and dynamic recovery Science.gov (United States) Rohatgi, Aashish; Vecchio, Kenneth S.; Gray, George T. 2001-01-01 The role of stacking fault energy (SFE) in deformation twinning and work hardening was systematically studied in Cu (SFE ˜78 ergs/cm2) and a series of Cu-Al solid-solution alloys (0.2, 2, 4, and 6 wt pct Al with SFE ˜75, 25, 13, and 6 ergs/cm2, respectively). 
The materials were deformed under quasi-static compression and at strain rates of ˜1000/s in a Split-Hopkinson pressure bar (SHPB). The quasi-static flow curves of annealed 0.2 and 2 wt pct Al alloys were found to be representative of solid-solution strengthening and well described by the Hall-Petch relation. The quasi-static flow curves of annealed 4 and 6 wt pct Al alloys showed additional strengthening at strains greater than 0.10. This additional strengthening was attributed to deformation twins, and the presence of twins was confirmed by optical microscopy. The strengthening contribution of deformation twins was incorporated in a modified Hall-Petch equation (using the intertwin spacing as the “effective” grain size), and the calculated strength was in agreement with the observed quasi-static flow stresses. While the work-hardening rate of the low-SFE Cu-Al alloys was found to be independent of the strain rate, the work-hardening rate of Cu and the high-SFE Cu-Al alloys (low Al content) increased with increasing strain rate. The different trends in the dependence of work-hardening rate on strain rate were attributed to the difference in the ease of cross-slip (and, hence, the ease of dynamic recovery) in Cu and Cu-Al alloys. 9. Total photon absorption International Nuclear Information System (INIS) Carlos, P. 1985-06-01 The present discussion is limited to a presentation of the most recent total photonuclear absorption experiments performed with real photons at intermediate energy, and more precisely in the region of nucleon resonances. The main sources of real photons are briefly reviewed, as are the experimental procedures used for total photonuclear absorption cross-section measurements. The main results obtained below 140 MeV photon energy as well as above 2 GeV are recalled. The experimental study of total photonuclear absorption in the nuclear resonance region (140 MeV < E < 2 GeV) is still at its beginning and some results are presented 10. 
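The modified Hall-Petch argument in the Cu-Al abstract above is easy to make concrete: the relation predicts flow stress from an effective obstacle spacing, so substituting the much smaller intertwin spacing for the grain size raises the predicted strength. A minimal sketch with made-up constants (the σ₀ and k values below are illustrative, not the paper's fitted parameters):

```python
import math

def hall_petch(sigma0_mpa, k_mpa_sqrt_m, d_m):
    """Flow stress from the Hall-Petch relation
    sigma = sigma0 + k / sqrt(d), where d is the effective
    grain size (or intertwin spacing) in metres."""
    return sigma0_mpa + k_mpa_sqrt_m / math.sqrt(d_m)

# Illustrative: a 50-micron grain size versus a 1-micron intertwin
# spacing used as the "effective" grain size.
print(round(hall_petch(50.0, 0.15, 50e-6), 1))  # -> 71.2 (MPa)
print(round(hall_petch(50.0, 0.15, 1e-6), 1))   # -> 200.0 (MPa)
```

The 1/√d scaling is why deformation twins, which subdivide grains into much finer slip-obstacle spacings, show up as the extra strengthening seen above strains of 0.10 in the 4 and 6 wt pct Al alloys.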
Binding energy of donor impurity states and optical absorption in the Tietz-Hua quantum well under an applied electric field Science.gov (United States) Al, E. B.; Kasapoglu, E.; Sakiroglu, S.; Duque, C. A.; Sökmen, I. 2018-04-01 For a quantum well with the Tietz-Hua potential, the ground and some excited donor impurity binding energies and the total absorption coefficients, including linear and third-order nonlinear terms, for the transitions between the related impurity states are investigated with respect to the structure parameters and the impurity position, as well as the electric field strength. The binding energies were obtained using the effective-mass approximation within a variational scheme, and the optical transitions between any two impurity states were calculated by using the density matrix formalism and the perturbation expansion method. Our results show that the effects of the electric field and the structure parameters on the optical transitions are pronounced. Thus we can adjust the red or blue shift in the peak position of the absorption coefficient by changing the strength of the electric field as well as the structure parameters. 11. Investigation of hydrogen-deformation interactions in β-21S titanium alloy using thermal desorption spectroscopy International Nuclear Information System (INIS) Tal-Gutelmacher, E.; Eliezer, D.; Boellinghaus, Th. 2007-01-01 The focus of this paper is the investigation of the combined influence of hydrogen and pre-plastic deformation on hydrogen's absorption/desorption behavior, the microstructure and the microhardness of a single-phased β-21S alloy. In this study, thermal desorption spectroscopy (TDS) evaluation of various desorption and trapping parameters provides further insight into the relationships between hydrogen absorption/desorption processes and deformation, and their mutual influence on the microstructure and the microhardness of the β-21S alloy. 
TDS spectra were supported by other experimental techniques, such as X-ray diffraction, scanning and transmission electron microscopy, hydrogen quantity analyses and microhardness tests. Pre-plastic deformation, performed before the electrochemical hydrogenation of the alloy, significantly increased the hydrogen absorption capacity. Its influence was also evident in the notably expanded lattice parameter of the β-21S alloy after hydrogenation. However, no hydride precipitation was observed. An interesting softening effect of the pre-deformed hydrogenated alloy was revealed by microhardness tests. TDS demonstrated the significant effect of pre-plastic deformation on the hydrogen evolution process. Hydrogen desorption temperature and the activation energy for hydrogen release increased, additional trap states were observed and the amount of desorbed hydrogen decreased 12. Effects of Weave Styles and Crimp Gradients on Damage Tolerance and Energy-Absorption Capacities of Woven Kevlar/Epoxy Composites Science.gov (United States) 2015-09-01 Paul V. Cavallaro, Ranges, Engineering, and Analysis Department, Naval Undersea Warfare Center Division, Newport 13. Light absorption during alkali atom-noble gas atom interactions at thermal energies: a quantum dynamics treatment. Science.gov (United States) Pacheco, Alexander B; Reyes, Andrés; Micha, David A 2006-10-21 The absorption of light during atomic collisions is treated by coupling electronic excitations, treated quantum mechanically, to the motion of the nuclei described within a short de Broglie wavelength approximation, using a density matrix approach. 
The time-dependent electric dipole of the system provides the intensity of light absorption in a treatment valid for transient phenomena, and the Fourier transform of time-dependent intensities gives absorption spectra that are very sensitive to details of the interaction potentials of excited diatomic states. We consider several sets of atomic expansion functions and atomic pseudopotentials, and introduce new parametrizations to provide light absorption spectra in good agreement with experimentally measured and ab initio calculated spectra. To this end, we describe the electronic excitation of the valence electron of excited alkali atoms in collisions with noble gas atoms with a procedure that combines l-dependent atomic pseudopotentials, including two- and three-body polarization terms, and a treatment of the dynamics based on the eikonal approximation of atomic motions and time-dependent molecular orbitals. We present results for the collision-induced absorption spectra in the Li-He system at 720 K, which display both atomic and molecular transition intensities. 14. Measurement of the mass energy-absorption coefficient of air for X-rays in the range from 3 to 60 keV. Science.gov (United States) Buhr, H; Büermann, L; Gerlach, M; Krumrey, M; Rabus, H 2012-12-21 For the first time, the absolute photon mass energy-absorption coefficient of air in the energy range of 10 to 60 keV has been measured with relative standard uncertainties below 1%, considerably smaller than those of up to 2% assumed for calculated data. For monochromatized synchrotron radiation from the electron storage ring BESSY II, both the radiant power and the fraction of power deposited in dry air were measured using a cryogenic electrical substitution radiometer and a free air ionization chamber, respectively. The measured absorption coefficients were compared with state-of-the-art calculations and showed an average deviation of 2% from calculations by Seltzer. 
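The quantity behind the air measurement above, the fraction of incident radiant power deposited along an air column, can be sketched from the mass energy-absorption coefficient via an exponential attenuation law. A toy calculation; the coefficient value, air density and path length below are illustrative assumptions, not the BESSY II measurement conditions:

```python
import math

def deposited_fraction(mu_en_over_rho_cm2_g, rho_g_cm3, path_cm):
    """Fraction of incident photon energy deposited in a layer,
    1 - exp(-(mu_en/rho) * rho * t), in a simple thin-layer
    energy-absorption picture (scatter re-absorption neglected)."""
    return 1.0 - math.exp(-mu_en_over_rho_cm2_g * rho_g_cm3 * path_cm)

# Illustrative: dry air (rho ~ 1.2e-3 g/cm^3) over a 10 cm column,
# with an assumed mu_en/rho of 4.6 cm^2/g for low-energy X-rays.
f = deposited_fraction(4.6, 1.2e-3, 10.0)
print(round(f, 3))
```

Because only a few percent of the beam power is deposited even over tens of centimetres of air, the experiment needs both an absolute power standard (the cryogenic radiometer) and a precise ionization measurement to reach sub-1% uncertainty.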
However, they agree within 1% with data calculated earlier by Hubbell. In the course of this work, an improvement of the data analysis of a previous experimental determination of the mass energy-absorption coefficient of air in the range of 3 to 10 keV was found to be possible, and corrected values of this preceding study are given. 15. Mechanical energy losses in plastically deformed and electron plus neutron irradiated high purity single crystalline molybdenum at elevated temperatures Energy Technology Data Exchange (ETDEWEB) Zelada, Griselda I. [Laboratorio de Materiales, Escuela de Ingenieria Electrica, Facultad de Ciencias Exactas, Ingenieria y Agrimensura, Universidad Nacional de Rosario, Avda. Pellegrini 250, 2000 Rosario (Argentina); Lambri, Osvaldo Agustin [Laboratorio de Materiales, Escuela de Ingenieria Electrica, Facultad de Ciencias Exactas, Ingenieria y Agrimensura, Universidad Nacional de Rosario, Avda. Pellegrini 250, 2000 Rosario (Argentina); Instituto de Fisica Rosario - CONICET, Member of the CONICET's Research Staff, Avda. Pellegrini 250, 2000 Rosario (Argentina); Bozzano, Patricia B. [Laboratorio de Microscopia Electronica, Unidad de Actividad Materiales, Centro Atomico Constituyentes, Comision Nacional de Energia Atomica, Avda. Gral. Paz 1499, 1650 San Martin (Argentina); Garcia, Jose Angel [Departamento de Fisica Aplicada II, Facultad de Ciencias y Tecnologia, Universidad del Pais Vasco, Apdo. 644, 48080 Bilbao, Pais Vasco (Spain) 2012-10-15 Mechanical spectroscopy (MS) and transmission electron microscopy (TEM) studies have been performed in plastically deformed and electron plus neutron irradiated high purity single crystalline molybdenum, oriented for single slip, in order to study the dislocation dynamics in the temperature range within one third of the melting temperature. A damping peak related to the interaction of dislocation lines with both prismatic loops and tangles of dislocations was found. 
The peak temperature ranges between 900 and 1050 K, for an oscillating frequency of about 1 Hz. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) 16. Transfer involving deformed nuclei International Nuclear Information System (INIS) Rasmussen, J.O.; Guidry, M.W.; Canto, L.F. 1985-03-01 Results are reviewed of 1- and 2-neutron transfer reactions at near-barrier energies for deformed nuclei. Rotational angular momentum and excitation patterns are examined. A strong tendency to populate high-spin states within a few MeV of the yrast line is noted, and it is interpreted as preferential transfer to rotation-aligned states. 16 refs., 12 figs 17. Influences of thermal deformation of cavity mirrors induced by high energy DF laser to beam quality under the simulated real physical circumstances Science.gov (United States) Deng, Shaoyong; Zhang, Shiqiang; He, Minbo; Zhang, Zheng; Guan, Xiaowei 2017-05-01 The positive-branch confocal unstable resonator with an inhomogeneous gain medium was studied for the commonly used high-energy DF laser system. The fast-changing process of the resonator's eigenmodes was coupled with the slow-changing process of the thermal deformation of the cavity mirrors. The influences of the thermal deformation of the cavity mirrors on the outcoupled beam quality and on the transmission loss of the high-frequency components of the high-energy laser were computed. The simulations were carried out with programs written in MATLAB and the GLAD software, using a combination of finite elements and the Fox-Li iteration algorithm. Effects of thermal distortion, misalignment of the cavity mirrors and an inhomogeneous distribution of the gain medium were introduced to simulate the real physical circumstances of the laser cavity. 
The wavefront distribution and beam quality (including RMS of the wavefront, power in the bucket, Strehl ratio, diffraction limit β, position of the beam spot center, spot size and intensity distribution in the far field) of the distorted outcoupled beam were studied. The conclusions of the simulation agree with the experimental results. This work supplies reference values for the wavefront-correction range required of the adaptive optics system in the internal beam path. 18. Deformation microstructures DEFF Research Database (Denmark) Hansen, N.; Huang, X.; Hughes, D.A. 2004-01-01 Microstructural characterization and modeling has shown that a variety of metals deformed by different thermomechanical processes follows a general path of grain subdivision, by dislocation boundaries and high-angle boundaries. This subdivision has been observed down to very small structural scales, of the order of 10 nm, produced by deformation under large sliding loads. Limits to the evolution of microstructural parameters during monotonic loading have been investigated based on a characterization by transmission electron microscopy. Such limits have been observed at an equivalent strain of about 10... 19. Perceptual transparency from image deformation. Science.gov (United States) Kawabe, Takahiro; Maruya, Kazushi; Nishida, Shin'ya 2015-08-18 Human vision has a remarkable ability to perceive two layers at the same retinal locations, a transparent layer in front of a background surface. Critical image cues to perceptual transparency, studied extensively in the past, are changes in luminance or color that could be caused by light absorptions and reflections by the front layer, but such image changes may not be clearly visible when the front layer consists of a pure transparent material such as water. Our daily experiences with transparent materials of this kind suggest that an alternative potential cue of visual transparency is image deformations of a background pattern caused by light refraction. 
Although previous studies have indicated that these image deformations, at least static ones, play little role in perceptual transparency, here we show that dynamic image deformations of the background pattern, which could be produced by light refraction on a moving liquid's surface, can produce a vivid impression of a transparent liquid layer without the aid of any other visual cues as to the presence of a transparent layer. Furthermore, a transparent liquid layer perceptually emerges even from a randomly generated dynamic image deformation as long as it is similar to real liquid deformations in its spatiotemporal frequency profile. Our findings indicate that the brain can perceptually infer the presence of "invisible" transparent liquids by analyzing the spatiotemporal structure of dynamic image deformation, for which it uses a relatively simple computation that does not require high-level knowledge about the detailed physics of liquid deformation. 20. Effect of nonaxial and hexadecapole deformation on the hyperfine splitting of energy levels in 238U muonic atoms International Nuclear Information System (INIS) Bagaev, V.I.; Mikhajlov, I.N.; Ortlepp, Kh.G.; Fromm, V.D. 1979-01-01 The effect of nonaxial and hexadecapole deformation on the spectra of muonic atoms is considered, the rigid nonaxial rotator model being used. Experimental data on μ⁻ 238U obtained on the JINR synchrocyclotron are presented. The effect of the monopolar, quadrupolar and hexadecapolar parts of the potential on the muon spectrum is studied using a separated beam of negative 105 MeV/c muons, as the contribution of other harmonics is negligible. Wave functions of the 238U nucleus are determined in the framework of the Davydov-Filippov model. The values of the charge distribution parameters obtained for 238U are compared with available ones. The comparison shows that the effect of nuclear polarization on the quadrupolar splitting of n→n-1 transitions decreases with the growth of n. 
Quadrupolar splitting of the 4F→3D transitions is sufficiently large for experimental studies. Besides, vacuum polarization, radial charge distribution etc. produce an insignificant effect on the above transitions 1. Evaluation of intensity and energy interaction parameters for the complexation of Pr(III) with selected nucleoside and nucleotide through absorption spectral studies Science.gov (United States) Bendangsenla, N.; Moaienla, T.; David Singh, Th.; Sumitra, Ch.; Rajmuhon Singh, N.; Indira Devi, M. 2013-02-01 The interactions of Pr(III) with nucleosides and nucleotides have been studied in different organic solvents employing absorption difference and comparative absorption spectrophotometry. The magnitudes of the variations in both energy and intensity interaction parameters were used to explore the degree of outer- and inner-sphere co-ordination, the incidence of covalency and the extent of metal 4f-orbital involvement in chemical bonding. Various electronic spectral parameters like Slater-Condon (Fk), Racah (Ek), Landé parameter (ξ4f), nephelauxetic ratio (β), bonding (b1/2), percentage covalency (δ) and intensity parameters like oscillator strength (P) and Judd-Ofelt electric dipole intensity parameters (Tλ, λ = 2, 4, 6) have been evaluated. The variation of these evaluated parameters was employed to interpret the nature of binding of Pr(III) with the different ligands, i.e. adenosine/ATP in the presence and absence of Ca2+. 2. Volumetric Heat Generation and Consequence Raise in Temperature Due to Absorption of Neutrons from Thermal up to 14.9 MeV Energies CERN Document Server Massoud, E 2003-01-01 In this work, the heat generation rate and the consequent rise in temperature due to absorption of all neutrons from thermal energies (E < 0.025 eV) up to 14.9 MeV in water, paraffin wax, ordinary concrete and heavy concrete as selected hydrogenous materials are investigated. 
The neutron flux distributions are calculated by both the ANISN code and a three-group method in which the fast neutrons are expressed by the removal cross-section concept while the other two groups (epithermal and thermal) are treated by the diffusion equation. The heat generation can be calculated from the macroscopic neutron absorption cross section of each material or mixture multiplied by the corresponding neutron fluxes. The rise in temperature is then calculated by using both the heat generation and the thermal conductivity of the selected materials. Some results are compared with the available experimental and theoretical data and good agreement is achieved. 3. Deformation dependence of the isovector giant dipole resonance: The neodymium isotopic chain revisited Directory of Open Access Journals (Sweden) L.M. Donaldson 2018-01-01 Full Text Available Proton inelastic scattering experiments at energy Ep = 200 MeV and a spectrometer scattering angle of 0° were performed on 144,146,148,150Nd and 152Sm, exciting the IsoVector Giant Dipole Resonance (IVGDR). Comparison with results from photo-absorption experiments reveals a shift of resonance maxima towards higher energies for vibrational and transitional nuclei. The extracted photo-absorption cross sections in the most deformed nuclei, 150Nd and 152Sm, exhibit a pronounced asymmetry rather than a distinct double-hump structure expected as a signature of K-splitting. This behaviour may be related to the proximity of these nuclei to the critical point of the phase shape transition from vibrators to rotors with a soft quadrupole deformation potential. Self-consistent random-phase approximation (RPA) calculations using the SLy6 Skyrme force provide a relevant description of the IVGDR shapes deduced from the present data. 4. Deformation dependence of the isovector giant dipole resonance: The neodymium isotopic chain revisited Science.gov (United States) Donaldson, L. M.; Bertulani, C. A.; Carter, J.; Nesterenko, V. 
O.; von Neumann-Cosel, P.; Neveling, R.; Ponomarev, V. Yu.; Reinhard, P.-G.; Usman, I. T.; Adsley, P.; Brummer, J. W.; Buthelezi, E. Z.; Cooper, G. R. J.; Fearick, R. W.; Förtsch, S. V.; Fujita, H.; Fujita, Y.; Jingo, M.; Kleinig, W.; Kureba, C. O.; Kvasil, J.; Latif, M.; Li, K. C. W.; Mira, J. P.; Nemulodi, F.; Papka, P.; Pellegri, L.; Pietralla, N.; Richter, A.; Sideras-Haddad, E.; Smit, F. D.; Steyn, G. F.; Swartz, J. A.; Tamii, A. 2018-01-01 5. Calcium absorption International Nuclear Information System (INIS) Carlmark, B.; Reizenstein, P.; Dudley, R.A. 1976-01-01 The methods most commonly used to measure the absorption and retention of orally administered calcium are reviewed. Nearly all make use of calcium radioisotopes. The magnitude of calcium absorption and retention depends upon the chemical form and amount of calcium administered, and the clinical and nutritional status of the subject; these influences are briefly surveyed. (author) 6. Absorption studies International Nuclear Information System (INIS) Ganatra, R.D. 
1992-01-01 Absorption studies were once quite popular but hardly anyone does them these days. It is easier to estimate the blood level of the nutrient directly by radioimmunoassay (RIA). However, the information obtained by estimating the blood levels of the nutrients is not the same as that obtained from absorption studies. Absorption studies are primarily done to find out whether some of the essential nutrients are absorbed from the gut or not and, if they are absorbed, to determine how much is being absorbed. In the advanced countries, these tests were mostly done to detect pernicious anaemia, where vitamin B12 is not absorbed because of the lack of the intrinsic factor in the stomach. In the tropical countries, "malabsorption syndrome" is quite common. In this condition, several nutrients like fat, folic acid and vitamin B12 are not absorbed. It is possible to study the absorption of these nutrients by radioisotopic absorption studies 7. Two photon absorption energy transfer in the light-harvesting complex of photosystem II (LHC-II) modified with organic boron dye Science.gov (United States) Chen, Li; Liu, Cheng; Hu, Rui; Feng, Jiao; Wang, Shuangqing; Li, Shayu; Yang, Chunhong; Yang, Guoqiang 2014-07-01 The plant light-harvesting complexes of photosystem II (LHC-II) play important roles in collecting solar energy and transferring the energy to the reaction centers of photosystems I and II. A two-photon absorption compound, 4-(bromomethyl)-N-(4-(dimesitylboryl)phenyl)-N-phenylaniline (DMDP-CH2Br), was synthesized and covalently linked to the LHC-II, forming an LHC-II-dye complex which still maintained the biological activity of the LHC-II system. Under irradiation with femtosecond laser pulses at 754 nm, the LHC-II-dye complex can absorb two photons of the laser light effectively, compared with the wild-type LHC-II. The absorbed excitation energy is then transferred to chlorophyll a with an obvious fluorescence enhancement. 
The results are of interest and show potential for developing hybrid photosystems. 8. On the crush behavior of an ultra light multi-cell foam-filled composite structures for energy absorption: Part 2-Numerical simulation International Nuclear Information System (INIS) Taher, Siavash T.; Rizal Zahari; Faizal Mustapha; Ataollahi, Simin 2010-01-01 The present paper deals with the implementation of the explicit dynamic finite element analysis code module incorporated in the ANSYS/LS-DYNA computer software for the simulation of the crash behavior and energy absorption characteristics of a novel multi-cell, cost-effective, crashworthy composite sandwich structure. In a previous paper, the authors developed the concept of the triple-layered foam-filled block and presented experimental results on the crash behaviour and crashworthiness characteristics of such a structure. The obtained numerical results of the axial compression model of the composite blocks are compared with actual experimental data on crash energy absorption, load-displacement history and crush zone characteristics, showing very good agreement. Theoretical and experimental results showed good similarities in peak load, average load and energy absorption with and without the use of two types of collapse trigger mechanism. (author) 9. Quasilocal energy for three-dimensional massive gravity solutions with chiral deformations of AdS₃ boundary conditions Energy Technology Data Exchange (ETDEWEB) Garbarz, Alan, E-mail: [email protected] [Departamento de Física, Universidad de Buenos Aires FCEN-UBA, IFIBA-CONICET, Ciudad Universitaria, Pabellón I, 1428, Buenos Aires, Argentina and Instituto de Física de La Plata, Universidad Nacional de La Plata IFLP-UNLP, C.C. 
67 (Argentina); Giribet, Gaston, E-mail: [email protected], E-mail: [email protected]; Goya, Andrés, E-mail: [email protected], E-mail: [email protected] [Departamento de Física, Universidad de Buenos Aires FCEN-UBA, IFIBA-CONICET, Ciudad Universitaria, Pabellón I, 1428, Buenos Aires (Argentina); Leston, Mauricio, E-mail: [email protected] [Instituto de Astronomía y Física del Espacio IAFE-CONICET, Ciudad Universitaria, C.C. 67 Suc. 28, 1428, Buenos Aires (Argentina) 2015-03-26 We consider critical gravity in three dimensions; that is, the New Massive Gravity theory formulated about Anti-de Sitter (AdS) space with the specific value of the graviton mass for which it is dual to a two-dimensional conformal field theory with vanishing central charge. As happens with Kerr black holes in four-dimensional critical gravity, in three-dimensional critical gravity the Bañados-Teitelboim-Zanelli black holes have vanishing mass and vanishing angular momentum. However, provided suitable asymptotic conditions are chosen, the theory may also admit solutions carrying non-vanishing charges. Here, we give simple examples of exact solutions that exhibit falling-off conditions even weaker than those of so-called Log-gravity. For such solutions, we define the quasilocal stress-tensor and use it to compute conserved charges. Despite the drastic deformation of the AdS{sub 3} asymptotics, these solutions have finite mass and angular momentum, which are shown to be non-zero. 10. A numerical approach to model and predict the energy absorption and crush mechanics within a long-fiber composite crush tube Science.gov (United States) Pickett, Leon, Jr. Past research has conclusively shown that long fiber structural composites possess superior specific energy absorption characteristics as compared to steel and aluminum structures. However, destructive physical testing of composites is very costly and time consuming.
As a result, numerical solutions are desirable as an alternative to experimental testing. Up until this point, very little numerical work has been successful in predicting the energy absorption of composite crush structures. This research investigates the ability to use commercially available numerical modeling tools to approximate the energy absorption capability of long-fiber composite crush tubes. This study is significant because it provides a preliminary analysis of the suitability of LS-DYNA to numerically characterize the crushing behavior of a dynamic axial impact crushing event. Composite crushing theory suggests that several crushing mechanisms occur during a composite crush event. This research evaluates the capability and suitability of employing LS-DYNA to simulate the dynamic crush event of an E-glass/epoxy cylindrical tube. The model employed is the composite "progressive failure model", a much more limited failure model when compared to the experimental failure events which naturally occur. This numerical model employs (1) matrix cracking, (2) compression, and (3) fiber breakage failure modes only. The motivation for the work comes from the need to reduce the significant cost associated with experimental trials. This research chronicles some preliminary efforts to better understand the mechanics essential to this goal. The immediate goal is to begin to provide deeper understanding of a composite crush event and ultimately create a viable alternative to destructive testing of composite crush tubes. 11. Band-head spectra of low-energy single-particle excitations in some well-deformed, odd-mass heavy nuclei within a microscopic approach Energy Technology Data Exchange (ETDEWEB) Koh, Meng-Hock [Universiti Teknologi Malaysia, Skudai, Johor (Malaysia); Univ.
Bordeaux, CENBG, UMR5797, Gradignan (France); CNRS, IN2P3, CENBG, UMR5797, Gradignan (France); Duc, Dao Duy [Ton Duc Thang University, Division of Nuclear Physics, Ho Chi Minh City (Viet Nam); Ton Duc Thang University, Faculty of Applied Sciences, Ho Chi Minh City (Viet Nam); Nhan Hao, T.V. [Duy Tan University, Center of Research and Development, Danang (Viet Nam); Hue University, Center for Theoretical and Computational Physics, College of Education, Hue City (Viet Nam); Long, Ha Thuy [Hanoi University of Sciences, Vietnam National University, Hanoi (Viet Nam); Quentin, P. [Universiti Teknologi Malaysia, Skudai, Johor (Malaysia); Univ. Bordeaux, CENBG, UMR5797, Gradignan (France); CNRS, IN2P3, CENBG, UMR5797, Gradignan (France); Ton Duc Thang University, Division of Nuclear Physics, Ho Chi Minh City (Viet Nam); Bonneau, L. [Univ. Bordeaux, CENBG, UMR5797, Gradignan (France); CNRS, IN2P3, CENBG, UMR5797, Gradignan (France) 2016-01-15 In four well-deformed heavy odd nuclei, the energies of low-lying rotational band heads have been determined microscopically within a self-consistent Hartree-Fock-plus-BCS approach with blocking. A Skyrme nucleon-nucleon effective interaction has been used together with a seniority force to describe pairing correlations. Only such states which are phenomenologically deemed to be related to single-particle excitations have been considered. The polarization effects, including those associated with the genuine time-reversal symmetry breaking, have been fully taken into account within our model assumptions. The calculated spectra are in reasonably good qualitative agreement with available data for the considered odd-neutron nuclei. This is not so much the case for the odd-proton nuclei. A potential explanation for such a difference in behavior is proposed. (orig.) 12.
An overview of the Oil Palm Empty Fruit Bunch (OPEFB) potential as reinforcing fibre in polymer composite for energy absorption applications Directory of Open Access Journals (Sweden) Faizi M.K. 2017-01-01 Full Text Available The oil palm empty fruit bunch (OPEFB) natural fibres were comprehensively reviewed to assess their potential as reinforcing materials in polymer composites for energy absorption during low-velocity impact. The typical oil palm wastes include trunks, fronds, kernel shells, and empty fruit bunches. These tend to burden the industry players with disposal difficulties and escalate the operating cost. Thus, several initiatives have been employed to convert these wastes into value-added products. The objective of this study is to review the potential of oil palm empty fruit bunch (OPEFB) as a natural-fibre polymer composite reinforcement to absorb energy during low-velocity impact, as another option for value-added products. Initially, this paper reviews the local oil palm waste issues. Previous research on OPEFB polymer composites and their mechanical characterization is appraised. Their potential for energy absorption in low-velocity impact applications is also elaborated. The review suggests high potential for OPEFB as reinforcing material in composite structures. Furthermore, it is wise to convert the oil palm biomass waste into a beneficial composite, hence promoting a green environment. 13. Excited state electron and energy relays in supramolecular dinuclear complexes revealed by ultrafast optical and X-ray transient absorption spectroscopy.
Science.gov (United States) Hayes, Dugan; Kohler, Lars; Hadt, Ryan G; Zhang, Xiaoyi; Liu, Cunming; Mulfort, Karen L; Chen, Lin X 2018-01-28 The kinetics of photoinduced electron and energy transfer in a family of tetrapyridophenazine-bridged heteroleptic homo- and heterodinuclear copper(i) bis(phenanthroline)/ruthenium(ii) polypyridyl complexes were studied using ultrafast optical and multi-edge X-ray transient absorption spectroscopies. This work combines the synthesis of heterodinuclear Cu(i)-Ru(ii) analogs of the homodinuclear Cu(i)-Cu(i) targets with spectroscopic analysis and electronic structure calculations to first disentangle the dynamics at individual metal sites by taking advantage of the element and site specificity of X-ray absorption and theoretical methods. The excited state dynamical models developed for the heterodinuclear complexes are then applied to model the more challenging homodinuclear complexes. These results suggest that both intermetallic charge and energy transfer can be observed in an asymmetric dinuclear copper complex in which the ground state redox potentials of the copper sites are offset by only 310 meV. We also demonstrate the ability of several of these complexes to effectively and unidirectionally shuttle energy between different metal centers, a property that could be of great use in the design of broadly absorbing and multifunctional multimetallic photocatalysts. This work provides an important step toward developing both a fundamental conceptual picture and a practical experimental handle with which synthetic chemists, spectroscopists, and theoreticians may collaborate to engineer cheap and efficient photocatalytic materials capable of performing coulombically demanding chemical transformations. 14. 
Three-in-one approach towards efficient organic dye-sensitized solar cells: aggregation suppression, panchromatic absorption and resonance energy transfer Directory of Open Access Journals (Sweden) Jayita Patwari 2017-08-01 Full Text Available In the present study, protoporphyrin IX (PPIX) and squaraine (SQ2) have been used in a co-sensitized dye-sensitized solar cell (DSSC) to apply their high absorption coefficients in the visible and NIR regions of the solar spectrum and to probe the possibility of Förster resonance energy transfer (FRET) between the two dyes. FRET from the donor PPIX to the acceptor SQ2 was observed from a detailed investigation of the excited-state photophysics of the dye mixture, using time-resolved fluorescence decay measurements. The electron transfer time scales from the dyes to TiO2 have also been characterized for each dye. The current–voltage (I–V) characteristics and the wavelength-dependent photocurrent measurements of the co-sensitized DSSCs reveal that FRET between the two dyes increases the photocurrent as well as the efficiency of the device. From the absorption spectra of the co-sensitized photoanodes, PPIX was observed to act efficiently as a co-adsorbent and to reduce the dye aggregation problem of SQ2. This was further confirmed by comparing the device performance with that of a SQ2-sensitized DSSC with added chenodeoxycholic acid (CDCA). Apart from increasing the absorption window, the FRET-induced enhanced photocurrent and the anti-aggregating behavior of PPIX towards SQ2 are crucial points that improve the performance of the co-sensitized DSSC. 15.
On the spatio-temporal and energy-dependent response of riometer absorption to electron precipitation: drift-time and conjunction analyses in realistic electric and magnetic fields Science.gov (United States) Kellerman, Adam; Shprits, Yuri; Makarevich, Roman; Donovan, Eric; Zhu, Hui 2017-04-01 Riometers are low-cost passive radiowave instruments, located in both northern and southern hemispheres, that are capable of operating during quiet and disturbed conditions. Many instruments have been operating continuously for multiple solar cycles, making them a useful tool for long-term statistical studies and for real-time analysis and forecasting of space weather. Here we present recent and new analyses of the relationship between the riometer-measured cosmic noise absorption and electron precipitation into the D-region and lower E-region ionosphere. We utilize two techniques: a drift-time analysis in realistic electric and magnetic field models, where a particle is traced from one location to another and its energy is determined by the time delay between similar observations; and a conjunction analysis, where we directly compare precipitated fluxes from THEMIS and Van Allen Probes with the riometer absorption. In both cases we present a statistical analysis of the response of riometer absorption to electron precipitation as a function of MLAT, MLT, and geomagnetic conditions. 16. Folate absorption International Nuclear Information System (INIS) Baker, S.J. 1976-01-01 Folate is the generic term given to numerous compounds of pteroic acid with glutamic acid. Knowledge of absorption is limited because of the complexities introduced by the variety of compounds and because of the inadequacy of investigational methods. Two assay methods are in use, namely microbiological and radioactive. Techniques used to study absorption include measurement of urinary excretion, serum concentration, faecal excretion, intestinal perfusion, and haematological response.
It is probably necessary to test absorption of both pteroylmonoglutamic acid and one or more polyglutamates, and such tests would be facilitated by availability of synthesized compounds labelled with radioactive tracers at specifically selected sites. (author) 17. The correlation of local deformation and stress-assisted local phase transformations in MMC foams Energy Technology Data Exchange (ETDEWEB) Berek, H., E-mail: [email protected] [TU Bergakademie Freiberg, Agricolastraße 17, D-09599 Freiberg (Germany); Ballaschk, U.; Aneziris, C.G. [TU Bergakademie Freiberg, Agricolastraße 17, D-09599 Freiberg (Germany); Losch, K.; Schladitz, K. [Fraunhofer ITWM, Fraunhoferplatz 1, D-67663 Kaiserslautern (Germany) 2015-09-15 Cellular structures are of growing interest for industry, and are of particular importance for lightweight applications. In this paper, a special case of metal matrix composite foams (MMCs) is investigated. The investigated foams are composed of austenitic steel exhibiting transformation induced plasticity (TRIP) and magnesia partially stabilized zirconia (Mg-PSZ). Both components exhibit martensitic phase transformation during deformation, thus generating the potential for improved mechanical properties such as strength, ductility, and energy absorption capability. The aim of these investigations was to show that stress-assisted phase transformations within the ceramic reinforcement correspond to strong local deformation, and to determine whether they can trigger martensitic phase transformations in the steel matrix. To this end, in situ interrupted compression experiments were performed in an X-ray computed tomography device (XCT). By using a recently developed registration algorithm, local deformation could be calculated and regions of interest could be defined. Corresponding cross sections were prepared and used to analyze the local phase composition by electron backscatter diffraction (EBSD). 
The results show a strong correlation between local deformation and phase transformation. - Highlights: • In situ compressive deformation on MMC foams was performed in an XCT. • Local deformation fields and their gradient amplitudes were estimated. • Cross sections were manufactured containing defined regions of interest. • Local EBSD phase analysis was performed. • Local deformation and local phase transformation are correlated. 18. The correlation of local deformation and stress-assisted local phase transformations in MMC foams International Nuclear Information System (INIS) Berek, H.; Ballaschk, U.; Aneziris, C.G.; Losch, K.; Schladitz, K. 2015-01-01 19. A novel solar energy integrated low-rank coal fired power generation using coal pre-drying and an absorption heat pump International Nuclear Information System (INIS) Xu, Cheng; Bai, Pu; Xin, Tuantuan; Hu, Yue; Xu, Gang; Yang, Yongping 2017-01-01 Highlights: •An improved solar energy integrated LRC fired power generation is proposed. •Highly efficient and economically feasible solar energy conversion is achieved. •Cold-end losses of the boiler and condenser are reduced. •The energy and exergy efficiencies of the overall system are improved. -- Abstract: A novel solar energy integrated low-rank coal (LRC) fired power generation system using coal pre-drying and an absorption heat pump (AHP) was proposed. The proposed integrated system efficiently utilizes the solar energy collected from the parabolic trough to drive the AHP to absorb the low-grade waste heat of the steam cycle, providing a larger amount of heat at a temperature suitable for removing the coal's moisture before it enters the furnace. By employing the proposed system, the solar energy could be partially converted into the coal's high-grade heating value, and the cold-end losses of the boiler and the steam cycle could be reduced simultaneously, leading to highly efficient solar energy conversion together with a preferable overall thermal efficiency of the power generation.
The results of the detailed thermodynamic and economic analyses showed that using the proposed integrated concept in a typical 600 MW LRC-fired power plant could reduce the raw coal consumption by 4.6 kg/s, with overall energy and exergy efficiency improvements of 1.2 and 1.8 percentage points, respectively, when 73.0 MWth of solar thermal energy was introduced. The cost of the solar-generated electric power could be as low as 0.044/kW h. This work provides an improved concept to further advance solar energy conversion and utilisation in solar-hybrid coal-fired power generation. 20. Development of a methodology for low-energy X-ray absorption correction in biological samples using radiation scattering techniques International Nuclear Information System (INIS) Pereira, Marcelo O.; Anjos, Marcelino J.; Lopes, Ricardo T. 2009-01-01 Non-destructive techniques with X-rays, such as tomography, radiography and X-ray fluorescence, are sensitive to the attenuation coefficient and have a large field of applications in the medical as well as the industrial area. In the case of X-ray fluorescence analysis, knowledge of photon X-ray attenuation coefficients provides important information for obtaining the elemental concentration. On the other hand, mass attenuation coefficient values are usually determined by transmission methods, so the use of X-ray scattering can be considered as an alternative to transmission methods. This work proposes a new method to obtain the X-ray absorption curve through the superposition of the Rayleigh and Compton scattering peaks of the Lα and Lβ lines of tungsten (the L lines of an X-ray tube with a W anode). The absorption curve was obtained using standard samples with effective atomic number in the range from 6 to 16. The method was applied to certified samples of bovine liver (NIST 1577B), milk powder and V-10.
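The absorption corrections discussed above ultimately rest on the exponential attenuation law I/I0 = exp(-(μ/ρ)·ρ·t). A minimal sketch of the transmitted fraction and the resulting correction factor; the coefficient, density and thickness below are illustrative assumptions, not values from the paper:

```python
import math

def transmitted_fraction(mu_rho_cm2_g: float, rho_g_cm3: float, t_cm: float) -> float:
    """Beer-Lambert law: I/I0 = exp(-(mu/rho) * rho * t)."""
    return math.exp(-mu_rho_cm2_g * rho_g_cm3 * t_cm)

# Illustrative soft-tissue-like numbers: mu/rho = 0.2 cm^2/g,
# rho = 1.0 g/cm^3, t = 1 cm (all assumed for this sketch).
frac = transmitted_fraction(0.2, 1.0, 1.0)
absorption_correction = 1.0 / frac  # multiply the measured intensity by this
```

Determining μ/ρ from scattering rather than transmission, as in the method above, simply provides the input to this same correction.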
The experimental measurements were obtained using the portable EDXRF system of the Nuclear Instrumentation Laboratory (LIN-COPPE/UFRJ) with a tungsten (W) anode. (author) 1. Magnetic nanoparticles with high specific absorption rate of electromagnetic energy at low field strength for hyperthermia therapy Science.gov (United States) Shubitidze, Fridon; Kekalo, Katsiaryna; Stigliano, Robert; Baker, Ian 2015-03-01 Magnetic nanoparticles (MNPs), referred to as the Dartmouth MNPs, which exhibit a high specific absorption rate at low applied field strength, have been developed for hyperthermia therapy applications. The MNPs consist of small (2-5 nm) single crystals of gamma-Fe2O3 with saccharide chains implanted in their crystalline structure, forming 20-40 nm flower-like aggregates with a hydrodynamic diameter of 110-120 nm. The MNPs form stable (>12 months) colloidal solutions in water, exhibit no hysteresis under an applied quasistatic magnetic field, and produce a significant amount of heat at field strengths as low as 100 Oe at 99-164 kHz. The MNP heating mechanisms under an alternating magnetic field (AMF) are discussed and analyzed quantitatively based on (a) the calculated multi-scale MNP interactions obtained using a three-dimensional numerical model called the method of auxiliary sources, (b) measured MNP frequency spectra, and (c) quantified MNP friction losses based on magneto-viscous theory. The frequency responses and hysteresis curves of the Dartmouth MNPs are measured and compared to the modeled data. The specific absorption rate of the particles is measured at various AMF strengths and frequencies, and compared to commercially available MNPs. The comparisons demonstrate the superior heating properties of the Dartmouth MNPs at low field strengths, potentially extending therapy to deeper tumors that were previously non-viable targets and enabling the treatment of some of the most difficult cancers, such as pancreatic and rectal cancers, without damaging normal tissue. 2.
Quasar Absorption Studies Science.gov (United States) Mushotzky, Richard (Technical Monitor); Elvis, Martin 2004-01-01 The aim of the proposal is to investigate the absorption properties of a sample of intermediate-redshift quasars. The main goals of the project are: measure the redshift and the column density of the X-ray absorbers; test the correlation between absorption and redshift suggested by ROSAT and ASCA data; constrain the absorber ionization status and metallicity; constrain the absorber dust content and composition through comparison between the amount of X-ray absorption and optical dust extinction. Unanticipated low-energy cut-offs were discovered in ROSAT spectra of quasars and confirmed by ASCA, BeppoSAX and Chandra. In most cases it was not possible to constrain adequately the redshift of the absorber from the X-ray data alone. Two possibilities remain open: a) absorption at the quasar redshift; and b) intervening absorption. The evidence in favour of intrinsic absorption is all indirect. Sensitive XMM observations can discriminate between these different scenarios. If the absorption is at the quasar redshift, we can study whether the quasar environment evolves with cosmic time. 3. Negative optical absorption and up-energy conversion in dendrites of nanostructured silver grafted with α/β-poly(vinylidene fluoride) in small hierarchical structures Science.gov (United States) Phule, A. D.; Ram, S.; Shinde, S. K.; Choi, J. H.; Tyagi, A. K. 2018-04-01 We report that a negative optical absorption arises in a sharp band at 325 nm (energy hν2) in nanostructured silver (n-Ag) doped poly(vinylidene fluoride) (PVF2) in hybrid nanocomposite films (∼100 μm thickness). Two polymorphs, α- and β-PVF2, are co-stretched through the n-Ag crystallites in dendrites of hierarchical structures. A critical 0.5 wt% n-Ag dosage enhances the extinction coefficient of this band by as much as 2.009 × 10³, i.e.
a 30% value in the Ag surface-plasmon band at 350-650 nm (hν1). The electron donor Ag (4d10 5s1) bonds to an electron-acceptor CF2 moiety of PVF2; this tunes the dielectric field and sets up an up-energy conversion of the plasmon band. The FESEM and HRTEM images reveal fcc-Ag dendrites entangled with in-built PVF2 surface layers (2-3 nm thickness). The IR phonon bands show how an α → β-PVF2 transformation propagates onto a nascent n-Ag surface and how it is raised in small steps of 0.1 wt% up to 5.0 wt%. In a model scheme, we illustrate how a rigid core-shell capsule conducts a new energy-transfer mechanism to a cold surface plasmon (core) in a coherent collision, so as to balance a net value hν2 = h(ν3 − ν1). It absorbs light in a weak band at 210 nm (hν3) in a π → π* electron transition in the C=C bonds of the PVF2 (shell), resulting in a negative absorption in a coherent excitation of the energy carriers. A light emitter absorbing over a wide range of wavelengths (200-650 nm) offers a unique type of energy converter. 4. Improvement of Strength and Energy Absorption Properties of Porous Aluminum Alloy with Aligned Unidirectional Pores Using Equal-Channel Angular Extrusion Science.gov (United States) Yoshida, Tomonori; Muto, Daiki; Tamai, Tomoya; Suzuki, Shinsuke 2018-06-01 Porous aluminum alloy with aligned unidirectional pores was fabricated by dipping A1050 tubes into A6061 semi-solid slurry. The porous aluminum alloy was processed through Equal-Channel Angular Extrusion (ECAE) while preventing cracking and maintaining both the pore size and porosity by setting the insert material and applying back pressure. The specific compressive yield strength of the sample aged after 13 passes of ECAE was approximately 2.5 times higher than that of the solid-solutionized sample without ECAE.
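Figures such as the energy absorption E_V and efficiency η_V quoted for porous metals are conventionally read off the compressive stress-strain curve: E_V as the integral of stress over strain up to a cut-off strain, and η_V as E_V normalized by an ideal absorber holding peak stress over the whole range. A sketch under these common definitions (assumed here, since the abstract does not spell them out), applied to an idealized plateau curve:

```python
import numpy as np

def energy_absorption(strain, stress, eps_max=0.5):
    """E_V = integral of stress over strain up to eps_max (trapezoidal rule);
    eta_V = E_V / (peak stress * eps_max): efficiency relative to an ideal
    absorber that holds peak stress over the whole strain range."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    mask = strain <= eps_max
    s, sig = strain[mask], stress[mask]
    e_v = float(np.sum(0.5 * (sig[1:] + sig[:-1]) * np.diff(s)))
    eta_v = e_v / (float(sig.max()) * eps_max)
    return e_v, eta_v

# Idealized elastic-plateau curve: stress rises linearly, then holds 10 MPa.
eps = np.linspace(0.0, 0.5, 501)
sig = np.minimum(10.0, 200.0 * eps)  # MPa
e_v, eta_v = energy_absorption(eps, sig)
```

A long flat plateau, like the one the abstract attributes to continuous shear fracture, is exactly what pushes η_V toward 1.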
Both the energy absorption E_V and the energy absorption efficiency η_V after four passes of ECAE were approximately 1.2 times higher than those of the solid-solutionized sample without ECAE. The specific yield strength was improved via work hardening and precipitation following dynamic aging during ECAE. E_V was improved by the application of high compressive stress at the beginning of the compression owing to work hardening via ECAE. η_V was improved by a steep increase of stress at low compressive strain and by a gradual increase of stress in the range up to 50 pct of compressive strain. The gradual increase of stress was caused by continuous shear fracture in the metallic part, which was due to the high dislocation density and the existence of unidirectional pores parallel to the compressive direction in the structure. 5. Supersaturation-nucleation behavior of poorly soluble drugs and its impact on the oral absorption of drugs in thermodynamically high-energy forms. Science.gov (United States) Ozaki, Shunsuke; Minamisono, Takuma; Yamashita, Taro; Kato, Takashi; Kushida, Ikuo 2012-01-01 In order to better understand the oral absorption behavior of poorly water-soluble drugs, their supersaturation-nucleation behavior was characterized in fasted state simulated intestinal fluid. The induction time (t(ind)) for nucleation was measured for four model drugs: itraconazole, erlotinib, troglitazone, and PLX4032. Supersaturated solutions were prepared by the solvent shift method, and nucleation initiation was monitored by ultraviolet detection. The relationship between t(ind) and the degree of supersaturation was analyzed in terms of classical nucleation theory. The supersaturation stability so defined proved to be compound specific. Clinical data on oral absorption were investigated for drugs in thermodynamically high-energy forms, such as amorphous forms and salts, and compared with the in vitro supersaturation-nucleation characteristics.
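In classical nucleation theory, as used in the induction-time analysis above, ln t_ind scales linearly with 1/(ln S)², so a higher supersaturation ratio S means faster nucleation. A sketch of that scaling with illustrative constants A and B (hypothetical, not fitted values from the study):

```python
import math

def induction_time(s_ratio: float, a_s: float = 1e-3, b: float = 50.0) -> float:
    """Classical nucleation theory scaling: t_ind = A * exp(B / (ln S)^2).
    a_s (a pre-factor in seconds) and b (dimensionless, proportional to the
    cube of interfacial tension) are illustrative constants; S > 1 is the
    supersaturation ratio."""
    return a_s * math.exp(b / math.log(s_ratio) ** 2)

# Mildly supersaturated solutions nucleate far more slowly than strongly
# supersaturated ones, which is what makes the stability compound-specific.
t_slow = induction_time(2.0)
t_fast = induction_time(10.0)
```

Fitting measured t_ind against 1/(ln S)² on a log scale is the standard way such data are analyzed.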
Solubility-limited maximum absorbable dose was proportional to intestinal effective drug concentrations, which are related to supersaturation stability and thermodynamic solubility. Supersaturation stability was shown to be an important factor in determining the effect of high-energy forms. The characterization of supersaturation-nucleation behavior by the presented method is, therefore, valuable for assessing the potential absorbability of poorly water-soluble drugs. Copyright © 2011 Wiley-Liss, Inc. 6. Improvement of Strength and Energy Absorption Properties of Porous Aluminum Alloy with Aligned Unidirectional Pores Using Equal-Channel Angular Extrusion Science.gov (United States) Yoshida, Tomonori; Muto, Daiki; Tamai, Tomoya; Suzuki, Shinsuke 2018-04-01 7. Mineral absorption and excretion as affected by microbial phytase and their effect on energy metabolism in young piglets NARCIS (Netherlands) Kies, A.K.; Gerrits, W.J.J.; Schrama, J.W.; Heetkamp, M.J.W.; Linden, van der K.L.; Zandstra, T.; Verstegen, M.W.A. 2005-01-01 Positive effects of dietary phytase supplementation on pig performance are observed not only when phosphorus is limiting. Improved energy utilization might be one explanation. Using indirect calorimetry, phytase-induced changes in energy metabolism were evaluated in young piglets with adequate 8. Simulation and evaluation of the absorption edge subtraction technique in energy-resolved X-ray radiography applied to cultural heritage studies International Nuclear Information System (INIS) Leyva Pernia, Diana; Cabal Rodriguez, Ana E.; Pinnera Hernandez, Ibrahin; Leyva Fabelo, Antonio; Abreu Alfonso, Yamiel; Espen, Piet Van 2011-01-01 In this work, mathematical simulation of photon transport in matter was used to evaluate the potential of a new energy-resolved X-ray radiography system. The system is intended for investigation of cultural heritage objects, mainly paintings. The radiographic system uses polychromatic radiation from an X-ray tube and measures the spectrum transmitted through the object with an energy-dispersive X-ray detector on a pixel-by-pixel basis. Manipulation of the resulting data set allows constructing images with enhanced contrast for certain elements. Here the use of the absorption edge subtraction technique was emphasized. The simulated results were in good agreement with the experimental data. (author) 9.
Mass attenuation and mass energy absorption coefficients for 10 keV to 10 MeV photons; Coefficients d'attenuation massique et d'absorption massique en energie pour les photons de 10 keV a 10 MeV Energy Technology Data Exchange (ETDEWEB) Joffre, H; Pages, L [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires 1968-07-01 This report provides the elements needed to determine the values of the mass attenuation coefficients and mass energy absorption coefficients, for photons in the energy range 10 keV to 10 MeV, for the elements and mixtures required in the study of tissue-equivalent materials. After a short reminder of the definitions of the two coefficients, a compilation of these coefficients as a function of energy is given in table form for simple elements, certain mineral compounds, organic compounds, gases and, in particular, soft tissues. (author) 10. Development of Techniques for Investigating Energy Contributions to Target Deformation and Penetration During Reactive Projectile Hypervelocity Impact Science.gov (United States) 2011-07-01 inertial system, one can use the Bernoulli equation to describe the process [Wil03]. Assuming a stationary adiabatic process with an incompressible...cally the model is based on the conservation of momentum and energy, supplemented by a correctional term.
It can be seen that a smaller liner...system vjet minus the penetration velocity: vjet,theoretical = vjet − u (9) The whole process can now be described by a simplified Bernoulli formula 11. Modeling and numerical simulation of a novel solar-powered absorption air conditioning system driven by a bubble pump with energy storage Institute of Scientific and Technical Information of China (English) QIU Jia; LIANG Jian; CHEN GuangMing; DU RuXu 2009-01-01 This paper presents a novel solar-powered absorption air conditioning system driven by a bubble pump with energy storage. It solves the problem of unreliable solar energy supply by storing the working fluids and hence functions 24 h per day. First, the working principles are described and the dynamic models for the primary energy storage components are developed. Then, the system is evaluated based on a numerical simulation. Based on the meteorological data of a typical day in a subtropical area, with the area of the solar collector set at 19.15 m2, and the initial charging mass, mass fraction and temperature of the solution set at 379.5 kg, 54.16% and 34.5 ℃, respectively, it is found that the respective coefficients of performance (COP) of the air conditioning system and the entire system (including the solar panel) are 0.7771 and 0.4372. In particular, the energy storage density of the system is 206.69 MJ/m3, which is much greater than those of chilled water or hot water storage systems under comparable conditions. This makes the new system much more compact and efficient. Finally, an automatic control strategy is given to achieve the highest COP when solar energy fluctuates. 12. Limiting absorption principle at low energies for a mathematical model of weak interaction: the decay of a boson; Proprietes spectrales et principe d'absorption limite a faible energie pour un modele mathematique d'interaction faible: la desintegration d'un boson Energy Technology Data Exchange (ETDEWEB) Barbaroux, J.M.
[Centre de Physique Theorique, 13 - Marseille (France); Toulon-Var Univ. du Sud, Dept. de Mathematiques, 83 - La Garde (France); Guillot, J.C. [Centre de Mathematiques Appliquees, UMR 7641, Ecole Polytechnique - CNRS, 91 - Palaiseau (France) 2009-09-15 We study the spectral properties of a Hamiltonian describing the weak decay of spin 1 massive bosons into the full family of leptons. We prove that the considered Hamiltonian is self-adjoint, with a unique ground state, and we derive a Mourre estimate and a limiting absorption principle above the ground state energy and below the first threshold, for a sufficiently small coupling constant. As a corollary, we prove absence of eigenvalues and absolute continuity of the energy spectrum in the same spectral interval. (authors) 13. Gamma-ray energy absorption in an absorbing homogeneous medium. Applications to Oceanography and Geophysics (Gamma-ray spectroscopy from 500 to 1500 keV) International Nuclear Information System (INIS) Lapicque, G. 1980-01-01 The aim of this study is to establish a general algebraic approach for the calculation, without any computer program, of the full-energy peak efficiency of a detecting probe designed to measure the gamma activity of a radio-element in a (semi-)infinite homogeneous absorbing medium such as the sea. The radioactive source may be a point source or, most often, constitute an integral part of the medium. The proposed theory is valid for any purely absorptive process of particles moving along straight trajectories, diffusion effects being allowed for separately. The formulation assumes a spherical detector, and calculations are made for models having the same volume as two standard phosphors (10 cm × 8 cm and 5 cm × 4.5 cm) in the energy band 0.5 to 1.5 MeV. The parameters are the detector radius and, at energy E0, the absorption coefficients in the various media for gamma rays, together with the 'peak/total' ratio in the detector.
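As an aside, the exponential narrow-beam attenuation underlying these absorption-coefficient calculations can be sketched as follows; the μ/ρ value, density and path length are illustrative stand-ins (a water-like medium), not figures from the report:

```python
from math import exp

def transmitted_fraction(mu_over_rho_cm2_g, density_g_cm3, thickness_cm):
    """Narrow-beam (Beer-Lambert) transmitted fraction I/I0 = exp(-(mu/rho) * rho * x)."""
    return exp(-mu_over_rho_cm2_g * density_g_cm3 * thickness_cm)

# Illustrative values: mu/rho ~ 0.0707 cm^2/g (water near 1 MeV), 10 cm path.
frac = transmitted_fraction(0.0707, 1.0, 10.0)
print(f"transmitted fraction ~ {frac:.3f}")
```

In a scattering medium the detected fraction is larger than this narrow-beam estimate, which is exactly what buildup factors and peak/total ratios correct for.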
The fact that this latter factor, which varies with each trajectory, cannot be obtained with accuracy, constitutes the main limitation of the formulation. The comparison with experimental results obtained with a 10 cm × 8 cm phosphor at the C.F.R. (Centre des Faibles Radioactivites, Gif-sur-Yvette) and with various data indicates an error of about ±5% for a point source at contact and −30% for a homogeneously distributed source in an infinite medium. This latter value may be interpreted as a superiority of the spherical shape over the cylinder (used in practice), for detectors operating in infinite media. Calculations are made without allowing for the Compton effect, which is found to give an approximate correction of +5% in water for a band width of 10 keV in the MeV region. Finally, the shape of the detecting probe around the detector is shown to be immaterial under the assumption of a constant peak/total ratio [fr] 14. Bunionette deformity. Science.gov (United States) Cohen, Bruce E; Nicholson, Christopher W 2007-05-01 The bunionette, or tailor's bunion, is a lateral prominence of the fifth metatarsal head. Most commonly, bunionettes are the result of a widened 4-5 intermetatarsal angle with associated varus of the metatarsophalangeal joint. When symptomatic, these deformities often respond to nonsurgical treatment methods, such as wider shoes and padding techniques. When these methods are unsuccessful, surgical treatment is based on preoperative radiographs and associated lesions, such as hyperkeratoses. In rare situations, a simple lateral eminence resection is appropriate; however, the risk of recurrence or overresection is high with this technique. Patients with a lateral bow to the fifth metatarsal are treated with a distal chevron-type osteotomy. A widened 4-5 intermetatarsal angle often requires a diaphyseal osteotomy for correction. 15.
Disruption simulation experiments in a pulsed plasma accelerator - energy absorption and damage evolution on plasma facing materials International Nuclear Information System (INIS) Bolt, H.; Barabash, V.; Gervash, A.; Linke, J.; Lu, L.P.; Ovchinnikov, I.; Roedig, M. 1995-01-01 Plasma accelerators are used as test beds for disruption simulation experiments on plasma facing materials, because the incident energy fluxes and the discharge duration are of similar order as those expected during disruptions in ITER. The VIKA facility was used for the testing of materials under incident energies up to 5 kJ/cm2. Different carbon materials, SiC, stainless steel, TZM and tungsten have been tested. From the experimental results a scaling of the ablation with incident energy density was derived. The resulting ablation depth on carbon materials is roughly 2 μm per kJ/cm2 of incident energy density. For metals this ablation is much higher due to the partial loss of the melt layer from splashing. For stainless steel an ablation depth of 9.5 μm per kJ/cm2 was determined. The result of a linear scaling of the ablation depth with incident energy density is consistent with a previous calorimetric study. (orig.) 16. Gamma absorption meter International Nuclear Information System (INIS) Dincklage, R.D. von. 1984-01-01 The absorption meter consists of a radiation source, a trough for the absorbing liquid and a detector. It is characterized by the fact that there is a foil between the detector and the trough, made of a material whose binding energy of the K electrons is a little greater than the energy of the photons emitted by the radiation source. The source of radiation and foil are replaceable. (orig./HP) [de] 17.
Narrative absorption DEFF Research Database (Denmark) Narrative Absorption brings together research from the social sciences and Humanities to solve a number of mysteries: Most of us will have had those moments, of being totally absorbed in a book, a movie, or computer game. Typically we do not have any idea about how we ended up in such a state. No... 18. Layered surface structure of gas-atomized high Nb-containing TiAl powder and its impact on laser energy absorption for selective laser melting Science.gov (United States) Zhou, Y. H.; Lin, S. F.; Hou, Y. H.; Wang, D. W.; Zhou, P.; Han, P. L.; Li, Y. L.; Yan, M. 2018-05-01 Ti45Al8Nb alloy (in at.%) is designed to be an important high-temperature material. However, its fabrication through laser-based additive manufacturing is difficult to achieve. We present here that a good understanding of the surface structure of raw material (i.e. Ti45Al8Nb powder) is important for optimizing its process by selective laser melting (SLM). Detailed X-ray photoelectron spectroscopy (XPS) depth profiling and transmission electron microscopy (TEM) analyses were conducted to determine the surface structure of Ti45Al8Nb powder. An envelope structure (∼54.0 nm in thickness) was revealed for the powder, consisting of TiO2 + Nb2O5 (as the outer surface layer)/Al2O3 + Nb2O5 (as the intermediate layer)/Al2O3 (as the inner surface layer)/Ti45Al8Nb (as the matrix). During SLM, this layered surface structure interacted with the incident laser beam and improved the laser absorptivity of Ti45Al8Nb powder by ∼32.21%. SLM experiments demonstrate that the relative density of the as-printed parts can be realized to a high degree (∼98.70%), which confirms good laser energy absorption. Such layered surface structure with appropriate phase constitution is essential for promoting SLM of the Ti45Al8Nb alloy. 19. 
Novel powder/solid composites possessing low Young’s modulus and tunable energy absorption capacity, fabricated by electron beam melting, for biomedical applications International Nuclear Information System (INIS) Ikeo, Naoko; Ishimoto, Takuya; Nakano, Takayoshi 2015-01-01 Highlights: • We fabricated novel porous composites by electron beam melting. • The composites consist of necked powder and a melted solid framework. • Unmelted powder that is usually discarded was mechanically functionalized by necking. • The composites possess controllably low Young’s modulus and excellent toughness. • The composites would be promising for utilization in biomedical applications. - Abstract: A novel, hierarchical, porous composite from a single material composed of necked powder and melted solid, with tunable mechanical properties, is fabricated by electron beam melting and subsequent heat treatment. The composite demonstrates low Young’s modulus (⩽31 GPa) and excellent energy absorption capacity, both of which are necessary for use in orthopedic applications. To the best of our knowledge, this is the first report on the synthesis of a material combining controllably low Young’s modulus and excellent toughness. 20. Design, evaluation and recommendation effort relating to the modification of a residential 3-ton absorption cycle cooling unit for operation with solar energy Science.gov (United States) Merrick, R. H.; Anderson, P. P. 1973-01-01 The possible use of solar-energy-powered absorption units to provide cooling and heating of residential buildings is studied. Both the ammonia-water and the water-lithium bromide cycles are considered. It is shown that the air-cooled ammonia-water unit does not meet the criteria for COP and pump power on the cooling cycle, and the heat obtained from it acting as a heat pump is at too low a temperature. If the ammonia machine is water cooled it will meet the design criteria for cooling but cannot supply the heating needs.
The water-cooled lithium bromide unit meets the specified performance for cooling with appreciably lower generator temperatures and without a mechanical solution pump. It is recommended that in the demonstration project a direct expansion lithium bromide unit be used for cooling and an auxiliary duct coil using the solar-heated water be employed for heating. 1. Study of the absorption and energy transfer processes in inorganic luminescent materials in the UV and VUV region International Nuclear Information System (INIS) Mayolet, A. 1995-01-01 In order to find a green-emitting phosphor showing high quantum efficiency and a short decay time which can be used in the color Plasma Display Panels developed by the Thomson-TTE-TIV company, a VUV spectrophotometer was built at IPN Orsay; using the synchrotron radiation from the SUPER-ACO storage ring as an excitation source, it allows the simultaneous recording of the luminescence excitation and diffuse reflectivity spectra of inorganic compounds in the UV-VUV range. In addition, this experimental set-up enables us to determine the luminescence quantum efficiency of phosphors over the whole energy range of investigation. The chemical synthesis of rare-earth ortho- and metaborates and rare-earth ortho- and metaphosphates doped with the trivalent lanthanide ions cerium, praseodymium, europium and terbium has been carried out. The energy variation of the thresholds of the luminescence excitation mechanisms as a function of the nature and the structure of the host matrix is discussed. We have determined the influence of the nephelauxetic effect and the crystal field intensity on the energy of the f-d inter-configuration transitions. The variation of the luminescence quantum efficiency of the dopant ion is interpreted through the 'impurity bound exciton' model.
The systematic comparison of the spectroscopic properties of trivalent cerium and terbium ions in the Y(AG)G host-lattice series suggests that the self-ionized state of the luminescent center plays an important role in the rate of non-radiative relaxation. It is the redox power of the host matrix that imposes the energy of this state on the luminescent center. (author) 2. Energy, exergy, economic (3E) analyses and multi-objective optimization of vapor absorption heat transformer using NSGA-II technique International Nuclear Information System (INIS) Jain, Vaibhav; Sachdeva, Gulshan 2017-01-01 Highlights: • Study includes energy, exergy and economic analyses of absorption heat transformer. • It addresses multi-objective optimization study using NSGA-II technique. • Total annual cost and total exergy destruction are simultaneously optimized. • Results with multi-objective optimized design are more acceptable than others. - Abstract: The present paper addresses the energy, exergy and economic (3E) analyses of an absorption heat transformer (AHT) working with the LiBr-H2O fluid pair. The heat exchangers, namely the absorber, condenser, evaporator, generator and solution heat exchanger, are designed for the size and cost estimation of the AHT. Later, the effect of operating variables on the system performance, size and cost is examined. Simulation studies showed a conflict between the thermodynamic and economic performance of the system. The heat exchangers with lower investment cost showed high irreversible losses and vice versa. Thus, the operating variables of the system are determined economically as well as thermodynamically by implementing the non-dominated sorting genetic algorithm-II (NSGA-II) technique of multi-objective optimization.
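The core of the NSGA-II machinery mentioned above is the notion of non-dominated (Pareto-optimal) designs; a minimal sketch of that selection step, using invented (total annual cost, total exergy destruction) pairs rather than values from the study:

```python
def non_dominated(points):
    """Return the points not dominated by any other, minimizing both objectives.

    A point q dominates p if q is <= p in both objectives and strictly
    better in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Illustrative (cost, exergy destruction) design points, not from the paper:
designs = [(100, 9.0), (104, 8.2), (110, 8.0), (112, 8.5)]
front = non_dominated(designs)  # (112, 8.5) is dominated by (110, 8.0)
```

NSGA-II additionally ranks successive fronts and applies crowding-distance sorting, but the dominance test above is the basic building block.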
In the present work, if the cost-based optimized design is chosen, the total exergy destruction is 2.4% higher than its minimum possible value, whereas if the exergy-based optimized design is chosen, the total annual cost is 6.1% higher than its minimum possible value. With the multi-objective optimized design, on the other hand, the total annual cost and total exergy destruction are only 1.0% and 0.8%, respectively, above their minimum possible values. Thus, the multi-objective optimized design of the AHT is a better outcome than any single-objective optimized design. 3. Absorptive products International Nuclear Information System (INIS) Assarsson, P.G.; King, P.A. 1976-01-01 Applications for hydrophilic gels produced by the radiation-induced cross-linking in aqueous solution of polyethylene oxide and starch, as described in Norwegian patent 133501 (INIS RN 281494), such as sanitary napkins (diapers) and sanitary towels, are discussed. The process itself is also discussed, and results, expressed as the percentage of insoluble gel and its absorptive capacity for saline solution as functions of the ratio of polyethylene oxide to starch and the radiation dose, are presented. (JIW) 4. A comprehensive study on energy absorption and exposure buildup factors for some essential amino acids, fatty acids and carbohydrates in the energy range 0.015-15 MeV up to 40 mean free path International Nuclear Information System (INIS) Kurudirek, Murat; Ozdemir, Yueksel 2011-01-01 The gamma-ray energy absorption buildup factors (EABF) and exposure buildup factors (EBF) have been calculated for some essential amino acids, fatty acids and carbohydrates in the energy region 0.015-15 MeV up to a penetration depth of 40 mfp (mean free path). The five-parameter geometric progression (G-P) fitting approximation has been used to calculate both EABF and EBF. Variations of EABF and EBF with incident photon energy, penetration depth and weight fraction of elements have been studied.
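The five-parameter G-P fitting form is standard in buildup-factor work; a minimal sketch of how a buildup factor is evaluated from fitted parameters b, c, a, Xk and d (the parameter values below are illustrative, not taken from the paper's tables):

```python
from math import tanh

def gp_buildup(x_mfp, b, c, a, xk, d):
    """Geometric-progression (G-P) buildup factor B(E, x) for depth x in mfp.

    b, c, a, xk, d are the energy-dependent G-P fitting parameters;
    the form is valid for penetration depths up to about 40 mfp.
    """
    k = c * x_mfp**a + d * (tanh(x_mfp / xk - 2.0) - tanh(-2.0)) / (1.0 - tanh(-2.0))
    if abs(k - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x_mfp
    return 1.0 + (b - 1.0) * (k**x_mfp - 1.0) / (k - 1.0)

# Illustrative parameter set (not from the paper's tables):
B_10mfp = gp_buildup(10.0, b=2.0, c=1.3, a=-0.05, xk=14.0, d=0.02)
```

A useful sanity check of the form itself: at x = 1 mfp the expression collapses to B = b, independently of the other parameters (for K ≠ 1).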
While significant variations in EABF and EBF for amino acids and fatty acids have been observed in the intermediate energy region, where Compton scattering is the main photon interaction process, the values of EABF and EBF appear to be almost the same for all carbohydrates over the continuous energy region. It has been observed that the fatty acids have the largest EABF and EBF at 0.08 and 0.1 MeV, respectively, whereas the maximum values of EABF and EBF have been observed for amino acids and carbohydrates at 0.1 MeV. At the fixed energy of 1.5 MeV, the variation of EABF with penetration depth appears to be independent of the variations in chemical composition of the amino acids, fatty acids and carbohydrates. Significant variations were also observed between EABF and EBF, which may be due to the variations in chemical composition of the given materials. 5. Field Measurements of Water Continuum and Water Dimer Absorption by Active Long Path Differential Optical Absorption Spectroscopy (DOAS) OpenAIRE Lotter, Andreas 2006-01-01 Water vapor plays an important role in Earth's radiative budget since water molecules strongly absorb the incoming solar shortwave and the outgoing thermal infrared radiation. Superimposed on the water monomer absorption, a water continuum absorption has long been recognized, but its true nature still remains controversial. On the one hand, this absorption is explained by a deformation of the line shape of the water monomer absorption lines as a consequence of a molecular collision. On the o... 6. A novel deformation mechanism for superplastic deformation Energy Technology Data Exchange (ETDEWEB) Muto, H.; Sakai, M. (Toyohashi Univ. of Technology (Japan). Dept. of Materials Science) 1999-01-01 Uniaxial compressive creep tests with strain values up to −0.1 for a β-spodumene glass ceramic are conducted at 1060 °C.
From the observation of microstructural changes before and after creep deformation, it is shown that grain-boundary sliding takes place via the cooperative movement of groups of grains rather than of individual grains under large-scale deformation. The deformation process and the surface technique used in this work are applicable not only to explaining the deformation and flow of two-phase ceramics but also to superplastic deformation. (orig.) 12 refs. 7. Particle fracture and plastic deformation in vanadium pentoxide Indian Academy of Sciences (India) Particle fracture and plastic deformation in vanadium pentoxide powders induced by high-energy vibrational ball-milling ... Keywords. X-ray diffraction; ball-milling; plastic deformation; microstrain. ... Bulletin of Materials Science. 8. Deformation effects in the Si + C and Si + Si reactions Indian Academy of Sciences (India) The possible occurrence of highly deformed configurations is investigated in the ... Fusion–fission; nuclear deformation; exclusive light charge particle measurements. ... In hot rotating nuclei formed in heavy-ion reactions, the energy level. 9. Study of electron-beam-evaporated MgO films using electron diffraction, optical absorption and cathodoluminescence Energy Technology Data Exchange (ETDEWEB) Aboelfotoh, M.O.; Ramsey, J.N. 1982-05-21 Reflection high-energy electron diffraction, optical absorption and cathodoluminescence were used to study MgO films deposited onto fused silica, single-crystal silicon and LiF substrates at various temperatures. Results showed that some of the same optical absorption and emission bands observed in X- or UV-irradiated, additively colored or mechanically deformed MgO crystals were observed in evaporated MgO films. The peak positions and the relative peak intensities of the optical absorption and emission bands depended on the substrate temperature during film deposition as well as on the structure of the film.
The effect of heating the films in air and vacuum on the optical absorption and emission bands is also discussed. 10. Total Absorption Spectroscopy International Nuclear Information System (INIS) Rubio, B.; Gelletly, W. 2007-01-01 The problem of determining the distribution of beta decay strength (B(GT)) as a function of excitation energy in the daughter nucleus is discussed. Total Absorption Spectroscopy is shown to provide a way of determining the B(GT) precisely. A brief history of such measurements and a discussion of the advantages and disadvantages of this technique, is followed by examples of two recent studies using the technique. (authors) 11. The efficiency of utilization of metabolizable energy and apparent absorption of amino acids in sheep given spring- and autumn-harvested dried grass. Science.gov (United States) Macrae, J C; Smith, J S; Dewey, P J; Brewer, A C; Brown, D S; Walker, A 1985-07-01 Three experiments were conducted with sheep given spring-harvested dried grass (SHG) and autumn-harvested dried grass (AHG). The first was a calorimetric trial to determine the metabolizable energy (ME) content of each grass and the efficiency with which sheep utilize their extra ME intakes above the maintenance level of intake. The second examined the relative amounts of extra non-ammonia-nitrogen (NAN) and individual amino acids absorbed from the small intestine per unit extra ME intake as the level of feeding was raised from energy equilibrium (M) to approximately 1.5 M. The third was a further calorimetric trial to investigate the effect of an abomasal infusion of 30 g casein/d on the efficiency of utilization of AHG. The ME content of the SHG (11.8 MJ/kg dry matter (DM] was higher than that of AHG (10.0 MJ/kg DM). The efficiency of utilization of ME for productive purposes (i.e. above the M level of intake; kf) was higher when given SHG (kf 0.54 between M and 2 M) than when given AHG (kf 0.43 between M and 2 M). 
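The efficiency figures above translate directly into retained energy; a minimal arithmetic sketch using the reported kf values and an assumed maintenance requirement (the 10 MJ/d figure is illustrative, not from the paper):

```python
def energy_retained(me_intake_mj, maintenance_mj, kf):
    """Energy retained above maintenance: kf * (ME intake - maintenance ME), MJ/d."""
    return kf * (me_intake_mj - maintenance_mj)

# Assumed maintenance ME of 10 MJ/d, intake at the 2M level, with the
# reported kf values for the two grasses.
m = 10.0
shg = energy_retained(2 * m, m, 0.54)  # spring-harvested grass, ~5.4 MJ/d
ahg = energy_retained(2 * m, m, 0.43)  # autumn-harvested grass, ~4.3 MJ/d
```

Under these assumptions the spring-harvested grass would yield roughly 25% more retained energy at the same ME intake, which is the practical meaning of the kf difference.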
As the level of intake of each grass was raised from M to 1.5 M there was a greater increment in the amounts of NAN (P less than 0.001) and the total amino acid (P less than 0.05) absorbed from the small intestines when sheep were given the SHG (NAN absorption, SHG 5.4 g/d, AHG 1.5 g/d, SED 0.54; total amino acid absorption SHG 31.5 g/d, AHG 14.3 g/d, SED 5.24). Infusion of 30 g casein/d per abomasum of sheep given AHG at M and 1.5 M levels of intake increased (P less than 0.05) the efficiency of utilization of the herbage from kf 0.45 to kf 0.57. Consideration is given to the possibility that the higher efficiency of utilization of ME in sheep given SHG may be related to the amounts of extra glucogenic amino acids absorbed from the small intestine which provide extra reducing equivalents (NADPH) and glycerol phosphate necessary for the conversion of acetate into fatty acids. 12. Thermodynamic analysis of elastic-plastic deformation International Nuclear Information System (INIS) Lubarda, V. 1981-01-01 The complete set of constitutive equations which fully describes the behaviour of material in elastic-plastic deformation is derived on the basis of thermodynamic analysis of the deformation process. The analysis is done after the matrix decomposition of the deformation gradient is introduced into the structure of thermodynamics with internal state variables. The free energy function, is decomposed. Derive the expressions for the stress response, entropy and heat flux, and establish the evolution equation. Finally, we establish the thermodynamic restrictions of the deformation process. (Author) [pt 13. Dynamics of oxide growth on Pt nanoparticles electrodes in the presence of competing halides by operando energy dispersive X-Ray absorption spectroscopy KAUST Repository Minguzzi, Alessandro 2018-03-17 In this work we studied the kinetics of oxide formation and reduction on Pt nanoparticles in HClO4 in the absence and in the presence of Br− and Cl− ions. 
The study combines potential step methods (i.e. chronoamperometry and chronocoulometry) with energy dispersive X-ray absorption spectroscopy (ED-XAS), which in principle allows a complete XAS spectrum to be recorded on a timescale of milliseconds. Here, the information on the charge state and on the atomic surroundings of the considered element provided by XAS was exploited to monitor the degree of occupancy of the 5d states of Pt in the course of oxide formation and growth, and to elucidate the competing halide adsorption/desorption phenomena. Electrochemical methods and XAS agree on the validity of a log(t)-dependent growth of Pt oxide, which is significantly delayed in the presence of Cl− and Br− anions. In the proximity of formation of one monolayer, the growth is further slowed down. 15. The reliability of finite element analysis results of the low impact test in predicting the energy absorption performance of thin-walled structures Energy Technology Data Exchange (ETDEWEB) Alipour, R.; Farokhi Nejad, A.; Izman, S. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia) 2015-05-15 The application of dual phase steels (DPS) such as DP600 in the form of thin-walled structures in automotive components is continuously increasing as vehicle designers utilize modern steel grades and low-weight structures to improve structural performance, reduce vehicle weight and reinforce crash performance. The high cost of broad experimental investigations in this area can be avoided by using computers for structural analysis, substituting finite element analysis (FEA) for many experiments. Nevertheless, it must be verified that the selected method, including the element type and solution methodology, is capable of predicting real conditions. In this paper, numerical and experimental studies are carried out to specify the effect of element type selection and solution methodology on the results of finite element analysis, in order to investigate the energy absorption behavior of a DP600 thin-walled structure with three different geometries under low impact loading. The outcomes indicated that the combination of the implicit method and solid elements is in better agreement with the experiments. In addition, using a combination of shell element types with the implicit method reduces the simulation time remarkably, although the error of the results compared to the experiments increases to some extent. 16. Analysis of specific absorption rate and internal electric field in human biological tissues surrounding an air-core coil-type transcutaneous energy transmission transformer.
Science.gov (United States) Shiba, Kenji; Zulkifli, Nur Elina Binti; Ishioka, Yuji 2017-06-01 In this study, we analyzed the internal electric field E and specific absorption rate (SAR) of human biological tissues surrounding an air-core coil transcutaneous energy transmission transformer. Using an electromagnetic simulator, we created a model of human biological tissues consisting of a dry skin, wet skin, fat, muscle, and cortical bone. A primary coil was placed on the surface of the skin, and a secondary coil was located subcutaneously inside the body. The E and SAR values for the model representing a 34-year-old male subject were analyzed using electrical frequencies of 0.3-1.5 MHz. The transmitting power was 15 W, and the load resistance was 38.4 Ω. The results showed that the E values were below the International Commission on Non-ionizing Radiation Protection (ICNIRP) limit for the general public exposure between the frequencies of 0.9 and 1.5 MHz, and SAR values were well below the limit prescribed by the ICNIRP for the general public exposure between the frequencies of 0.3 and 1.2 MHz. 17. Hybrid Solar-Geothermal Energy Absorption Air-Conditioning System Operating with NaOH-H2O—Las Tres Vírgenes (Baja California Sur), “La Reforma” Case OpenAIRE Yuridiana Rocio Galindo-Luna; Efraín Gómez-Arias; Rosenberg J. Romero; Eduardo Venegas-Reyes; Moisés Montiel-González; Helene Emmi Karin Unland-Weiss; Pedro Pacheco-Hernández; Antonio González-Fernández; Jorge Díaz-Salgado 2018-01-01 Solar and geothermal energies are considered cleaner and more useful energy sources that can be used to avoid the negative environmental impacts caused by burning fossil fuels. Several works have reported air-conditioning systems that use solar energy coupled to geothermal renewable energy as a thermal source. In this study, an Absorption Air-Conditioning System (AACS) used sodium hydroxide-water (NaOH-H2O) instead of lithium bromide-water to reduce the cost. 
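For the SAR analyses in the transcutaneous-transformer entry above, the local quantity is defined pointwise as SAR = σ|E|²/ρ; a minimal sketch with illustrative tissue properties, not the dielectric values used in the study:

```python
def local_sar(sigma_s_per_m, e_rms_v_per_m, density_kg_per_m3):
    """Pointwise SAR = sigma * |E_rms|^2 / rho, in W/kg."""
    return sigma_s_per_m * e_rms_v_per_m**2 / density_kg_per_m3

# Illustrative muscle-like values (assumptions, not from the paper):
# conductivity 0.6 S/m, density 1050 kg/m^3, internal field 10 V/m rms.
sar = local_sar(0.6, 10.0, 1050.0)
```

Compliance assessments such as those against the ICNIRP limits then average this pointwise quantity over the whole body or over small masses of tissue.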
Low enthalpy geothermal heat was ... 18. Ternary Nonfullerene Polymer Solar Cells with 12.16% Efficiency by Introducing One Acceptor with Cascading Energy Level and Complementary Absorption. Science.gov (United States) Jiang, Weigang; Yu, Runnan; Liu, Zhiyang; Peng, Ruixiang; Mi, Dongbo; Hong, Ling; Wei, Qiang; Hou, Jianhui; Kuang, Yongbo; Ge, Ziyi 2018-01-01 A novel small-molecule acceptor, (2,2'-((5E,5'E)-5,5'-((5,5'-(4,4,9,9-tetrakis(5-hexylthiophen-2-yl)-4,9-dihydro-s-indaceno[1,2-b:5,6-b']dithiophene-2,7-diyl)bis(4-(2-ethylhexyl)thiophene-5,2-diyl))bis(methanylylidene)) bis(3-hexyl-4-oxothiazolidine-5,2-diylidene))dimalononitrile (ITCN), end-capped with electron-deficient 2-(3-hexyl-4-oxothiazolidin-2-ylidene)malononitrile groups, is designed, synthesized, and used as the third component in fullerene-free ternary polymer solar cells (PSCs). The cascaded energy-level structure enabled by the newly designed acceptor is beneficial to carrier transport and separation. Meanwhile, the three materials show complementary absorption in the visible region, resulting in efficient light harvesting. Hence, the PBDB-T:ITCN:IT-M ternary PSCs possess a high short-circuit current density (Jsc) under an optimal weight ratio of donors and acceptors. Moreover, the open-circuit voltage (Voc) of the ternary PSCs is enhanced with an increase of the third-acceptor ITCN content, which is attributed to the lowest unoccupied molecular orbital energy level of ITCN being higher than that of IT-M, as reflected in the higher Voc of the PBDB-T:ITCN binary system. Ultimately, the ternary PSCs achieve a power conversion efficiency of 12.16%, which is higher than that of the PBDB-T:IT-M-based PSCs (10.89%) and the PBDB-T:ITCN-based ones (2.21%). This work provides an effective strategy to improve the photovoltaic performance of PSCs. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. 19.
Plastic Deformation of Metal Surfaces DEFF Research Database (Denmark) Hansen, Niels; Zhang, Xiaodan; Huang, Xiaoxu 2013-01-01 of metal components. An optimization of processes and material parameters must be based on a quantification of stress and strain gradients at the surface and in the near-surface layer, where the structural scale can reach a few tens of nanometers. For such fine structures it is suggested to quantify structural parameters by TEM and EBSD and apply strength-structural relationships established for the bulk metal deformed to high strains. This technique has been applied to steel deformed by high energy shot peening and a calculated stress gradient at or near the surface has been successfully validated by hardness... 20. FDTD calculations of specific energy absorption rate in a seated voxel model of the human body from 10 MHz to 3 GHz Energy Technology Data Exchange (ETDEWEB) Findlay, R P; Dimbylow, P J [Health Protection Agency, Chilton, Didcot, Oxon OX11 0RQ (United Kingdom)] 2006-05-07 Finite-difference time-domain (FDTD) calculations have been performed to investigate the frequency dependence of the specific energy absorption rate (SAR) in a seated voxel model of the human body. The seated model was derived from NORMAN (NORmalized MAN), an anatomically realistic voxel phantom in the standing posture with arms to the side. Exposure conditions included both vertically and horizontally polarized plane wave electric fields between 10 MHz and 3 GHz. The resolution of the voxel model was 4 mm for frequencies up to 360 MHz and 2 mm for calculations in the higher frequency range. The reduction in voxel size permitted the calculation of SAR at these higher frequencies using the FDTD method. SAR values have been calculated for the seated adult phantom and scaled versions representing 10-, 5- and 1-year-old children under isolated and grounded conditions.
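The FDTD method named in the abstract above advances the electric and magnetic fields on a staggered grid in leapfrog fashion. A toy one-dimensional free-space sketch of that update, with an arbitrary soft Gaussian source and normalized units; this is only an illustration of the scheme, not the study's 3-D voxel solver:

```python
import numpy as np

# Minimal 1-D free-space FDTD (Yee) leapfrog: H is updated from the spatial
# difference of E, then E from the spatial difference of H. Grid size,
# Courant number and source are illustrative choices.

def fdtd_1d(n_cells=200, n_steps=300, courant=0.5):
    ez = np.zeros(n_cells)        # E field at integer grid points
    hy = np.zeros(n_cells - 1)    # H field at half-integer grid points
    for t in range(n_steps):
        hy += courant * np.diff(ez)           # update H from curl of E
        ez[1:-1] += courant * np.diff(hy)     # update E from curl of H
        ez[n_cells // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft source
    return ez

ez = fdtd_1d()
print("peak |Ez| =", np.abs(ez).max())
```

With a Courant number of 0.5 the 1-D scheme is stable, so the pulse simply propagates outward from the source cell.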
These scaled models do not exactly reproduce the dimensions and anatomy of children, but represent good geometric information for a seated child. Results show that, when the field is vertically polarized, the sitting position causes a second, smaller resonance condition not seen in resonance curves for the phantom in the standing posture. This occurs at ∼130 MHz for the adult model when grounded. Partial-body SAR calculations indicate that the upper and lower regions of the body have their own resonant frequency at ∼120 MHz and ∼160 MHz, respectively, when the grounded adult model is orientated in the sitting position. These combine to produce this second resonance peak in the whole-body averaged SAR values calculated. Two resonance peaks also occur for the sitting posture when the incident electric field is horizontally polarized. For the adult model, the peaks in the whole-body averaged SAR occur at ∼180 and ∼600 MHz. These peaks are due to resonance in the arms and feet, respectively. Layer absorption plots and colour images of SAR in individual voxels show the specific regions in which the 2. IBA in deformed nuclei International Nuclear Information System (INIS) Casten, R.F.; Warner, D.D. 1982-01-01 The structure and characteristic properties and predictions of the IBA in deformed nuclei are reviewed, and compared with experiment, in particular for 168 Er. Overall, excellent agreement, with a minimum of free parameters (in effect, two, neglecting scale factors on energy differences), was obtained. A particularly surprising, and unavoidable, prediction is that of strong β → γ transitions, a feature characteristically absent in the geometrical model, but manifest empirically.
Some discrepancies were also noted, principally for the K=4 excitation, and the detailed magnitudes of some specific B(E2) values. Considerable attention is paid to analyzing the structure of the IBA states and their relation to geometric models. The bandmixing formalism was studied to interpret both the aforementioned discrepancies and the origin of the β → γ transitions. The IBA states, extremely complex in the usual SU(5) basis, are transformed to the SU(3) basis, as is the interaction Hamiltonian. The IBA wave functions appear with much simplified structure in this way as does the structure of the associated B(E2) values. The nature of the symmetry breaking of SU(3) for actual deformed nuclei is seen to be predominantly ΔK=0 mixing. A modified, and more consistent, formalism for the IBA-1 is introduced which is simpler, has fewer free parameters (in effect, one, neglecting scale factors on energy differences), is in at least as good agreement with experiment as the earlier formalism, contains a special case of the O(6) limit which corresponds to that known empirically, and appears to have a close relationship to the IBA-2. The new formalism facilitates the construction of contour plots of various observables (e.g., energy or B(E2) ratios) as functions of N and χ_Q which allow the parameter-free discussion of qualitative trajectories or systematics 3. The effects of the electric and intense laser field on the binding energies of donor impurity states (1s and 2p±) and optical absorption between the related states in an asymmetric parabolic quantum well Science.gov (United States) Kasapoglu, E.; Sakiroglu, S.; Sökmen, I.; Restrepo, R. L.; Mora-Ramos, M. E.; Duque, C. A.
2016-10-01 We have calculated the effects of electric and intense laser fields on the binding energies of the ground and some excited states of conduction electrons coupled to shallow donor impurities as well as the total optical absorption coefficient for transitions between 1s and 2p± electron-impurity states in an asymmetric parabolic GaAs/Ga1-xAlxAs quantum well. The binding energies were obtained using the effective-mass approximation within a variational scheme. The total absorption coefficient (linear plus nonlinear) for transitions between any two impurity states was calculated from first- and third-order dielectric susceptibilities derived within a perturbation expansion for the density matrix formalism. Our results show that the effects of the electric field, intense laser field, and the impurity location on the binding energy of the 1s impurity state are more pronounced compared with other impurity states. If the well center position Lc is shifted, the effective well width decreases (increases), and thus we can obtain the red or blue shift in the resonant peak position of the absorption coefficient by changing the intensities of the electric and non-resonant intense laser field as well as dimensions of the well and impurity positions. 4. Absorption factor for cylindrical samples International Nuclear Information System (INIS) Sears, V.F. 1984-01-01 The absorption factor for the scattering of X-rays or neutrons in cylindrical samples is calculated by numerical integration for the case in which the absorption coefficients of the incident and scattered beams are not equal. An extensive table of values having an absolute accuracy of 10^-4 is given in a companion report [Sears (1983). Atomic Energy of Canada Limited, Report No. AECL-8176].
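The numerical integration behind such a cylindrical absorption factor can be sketched as averaging the transmission exp(−μ_in L_in − μ_out L_out) over the sample cross-section. The geometry below (infinite circular cylinder, incident beam along +x, scattering fixed at 90° along +y, unit radius) and the coefficient values are illustrative assumptions, not the tabulated cases of the report:

```python
import numpy as np

# Absorption factor for a cylinder with unequal incident/scattered
# absorption coefficients, by brute-force grid integration over the
# circular cross-section. Geometry and coefficients are illustrative.

def absorption_factor(mu_in, mu_out, R=1.0, n=400):
    x, y = np.meshgrid(np.linspace(-R, R, n), np.linspace(-R, R, n))
    inside = x**2 + y**2 <= R**2
    # Path of the incident ray (along +x) inside the cylinder up to (x, y).
    l_in = x + np.sqrt(np.clip(R**2 - y**2, 0.0, None))
    # Path of the scattered ray (along +y) from (x, y) out to the surface.
    l_out = np.sqrt(np.clip(R**2 - x**2, 0.0, None)) - y
    weight = np.exp(-mu_in * l_in - mu_out * l_out)
    return weight[inside].mean()  # average transmission over the section

A = absorption_factor(mu_in=0.5, mu_out=0.8)
print(f"A* = {A:.4f}")
```

For vanishing absorption the factor reduces to 1, which is a convenient sanity check on the geometry.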
In the present paper an asymptotic expression is derived for the absorption factor which can be used with an error of less than 10^-3 for most cases of interest in both neutron inelastic scattering and neutron diffraction in crystals. (Auth.) 5. Linear absorptive dielectrics Science.gov (United States) Tip, A. 1998-06-01 Starting from Maxwell's equations for a linear, nonconducting, absorptive, and dispersive medium, characterized by the constitutive equations D(x,t) = ε1(x)E(x,t) + ∫_{-∞}^{t} ds χ(x,t−s)E(x,s) and H(x,t) = B(x,t), a unitary time evolution and canonical formalism is obtained. Given the complex, coordinate- and frequency-dependent electric permittivity ε(x,ω), no further assumptions are made. The procedure leads to a proper definition of band gaps in the periodic case and a new continuity equation for energy flow. An S-matrix formalism for scattering from lossy objects is presented in full detail. A quantized version of the formalism is derived and applied to the generation of Čerenkov and transition radiation as well as atomic decay. The last case suggests a useful generalization of the density of states to the absorptive situation. 6. How deformation enhances mobility in a polymer glass Science.gov (United States) Lacks, Daniel 2013-03-01 Recent experiments show that deformation of a polymer glass can lead to orders-of-magnitude enhancement in the atomic level dynamics. To determine why this change in dynamics occurs, we carry out molecular dynamics simulations and energy landscape analyses. The simulations address the coarse-grained polystyrene model of Kremer and co-workers, and the dynamics, as quantified by the van Hove function, are examined as the glass undergoes shear deformation. In agreement with experiment, the simulations find that deformation enhances the atomic mobility.
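The constitutive relation quoted above, D(t) = ε1 E(t) + ∫_{-∞}^{t} χ(t−s)E(s) ds, can be evaluated numerically as a causal convolution of the field with the memory kernel. The exponential (Debye-type) kernel and sinusoidal drive below are illustrative assumptions, not the paper's susceptibility:

```python
import numpy as np

# Discretized causal convolution for D(t) = eps1*E(t) + (chi * E)(t).
# Kernel chi(tau) = exp(-tau) and the driving field are assumed examples.

dt = 0.01
t = np.arange(0.0, 10.0, dt)
E = np.sin(2 * np.pi * t)     # driving field (arbitrary)
chi = np.exp(-t)              # assumed causal memory kernel chi(tau)
eps1 = 1.0

# D[n] = eps1*E[n] + sum_{m <= n} chi[n-m] * E[m] * dt
D = eps1 * E + np.convolve(chi, E)[: len(t)] * dt
print("max |D| =", np.abs(D).max())
```

Truncating `np.convolve` to the first `len(t)` samples keeps exactly the causal sums over s ≤ t, matching the one-sided integral.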
The enhanced mobility is shown to arise from two mechanisms: First, active deformation continually reduces barriers for hopping events, and the importance of this mechanism is modulated by the rate of thermally activated transitions between adjacent energy minima. Second, deformation moves the system to higher-energy regions of the energy landscape, characterized by lower barriers. Both mechanisms enhance the dynamics during deformation, and the second mechanism is also relevant after deformation has ceased. 7. Problem of "deformed" superheavy nuclei International Nuclear Information System (INIS) Sobiczewski, A.; Patyk, Z.; Muntian, I. 2000-08-01 The problem of experimental confirmation of the deformed shapes of superheavy nuclei situated in the neighbourhood of 270Hs is discussed. Measurement of the energy E(2+) of the lowest 2+ state in even-even species of these nuclei is considered as a method for this confirmation. The energy is calculated in the cranking approximation for heavy and superheavy nuclei. The branching ratio p(2+)/p(0+) between α decay of a nucleus to this lowest 2+ state and to the ground state 0+ of its daughter is also calculated for these nuclei. The results indicate that a measurement of the energy E(2+) for some superheavy nuclei by electron or α spectroscopy is a promising method for the confirmation of their deformed shapes. (orig.) 8. Geometry and dynamics of particle emission from strongly deformed nuclei International Nuclear Information System (INIS) Aleshin, V.P. 1995-01-01 By using our semiclassical approach to particle evaporation from deformed nuclei, we analyze the heuristic models of particle emission from deformed nuclei which are used in the codes GANES, ALICE, and EVAP. The calculations revealed that the heuristic models are reasonable for particle energy spectra but fail, at large deformations, to describe the angular distributions 9.
Deformation mechanisms of nanotwinned Al Energy Technology Data Exchange (ETDEWEB) Zhang, Xinghang [Texas A & M Univ., College Station, TX (United States) 2016-11-10 The objective of this project is to investigate the role of different types of layer interfaces on the formation of high density stacking fault (SF) in Al in Al/fcc multilayers, and understand the corresponding deformation mechanisms of the films. Stacking faults or twins can be intentionally introduced (via growth) into certain fcc metals with low stacking fault energy (such as Cu, Ag and 330 stainless steels) to achieve high strength, high ductility, superior thermal stability and good electrical conductivity. However it is still a major challenge to synthesize these types of defects into metals with high stacking fault energy, such as Al. Although deformation twins have been observed in some nanocrystalline Al powders by low temperature, high strain rate cryomilling or in Al at the edge of crack tip or indentation (with the assistance of high stress intensity factor), these deformation techniques typically introduce twins sporadically and the control of deformation twin density in Al is still not feasible. This project is designed to test the following hypotheses: (1) Certain type of layer interfaces may assist the formation of SF in Al, (2) Al with high density SF may have deformation mechanisms drastically different from those of coarse-grained Al and nanotwinned Cu. To test these hypotheses, we have performed the following tasks: (i) Investigate the influence of layer interfaces, stresses and deposition parameters on the formation and density of SF in Al. (ii) Understand the role of SF on the deformation behavior of Al. In situ nanoindentation experiments will be performed to probe deformation mechanisms in Al. 
The major findings related to the formation mechanism of twins and mechanical behavior of nanotwinned metals include the followings: 1) Our studies show that nanotwins can be introduced into metals with high stacking fault energy, in drastic contrast to the general anticipation. 2) We show two strategies that can effectively introduce growth twins in 11. Deformation Behavior of Press Formed Shell by Indentation and Its Numerical Simulation Directory of Open Access Journals (Sweden) Minoru Yamashita 2015-01-01 Full Text Available Deformation behavior and energy absorbing performance of the press formed aluminum alloy A5052 shells were investigated to obtain the basic information regarding the mutual effect of the shell shape and the indentor. Flat top and hemispherical shells were indented by the flat- or hemispherical-headed indentor. Indentation force in the rising stage was sharper for both shell shapes when the flat indentor was used. Remarkable force increase due to high in-plane compressive stress arisen by the appropriate tool constraint was observed in the early indentation stage, where the hemispherical shell was deformed with the flat-headed indentor. This aspect is preferable for energy absorption performance per unit mass. Less fluctuation in indentation force was achieved in the combination of the hemispherical shell and similar shaped indentor. The consumed energy in the travel length of the indentor equal to the shell height was evaluated. The increase ratio of the energy is prominent when the hemispherical indentor is replaced by a flat-headed one in both shell shapes. Finite element simulation was also conducted. Deformation behaviors were successfully predicted when the kinematic hardening plasticity was introduced in the material model. 12.
Deformation twinning: Influence of strain rate Energy Technology Data Exchange (ETDEWEB) Gray, G.T. III 1993-11-01 Twins in most crystal structures, including advanced materials such as intermetallics, form more readily as the temperature of deformation is decreased or the rate of deformation is increased. Both parameters lead to the suppression of thermally-activated dislocation processes which can result in stresses high enough to nucleate and grow deformation twins. Under high-strain rate or shock-loading/impact conditions deformation twinning is observed to be promoted even in high stacking fault energy FCC metals and alloys, composites, and ordered intermetallics which normally do not readily deform via twinning. Under such conditions and in particular under the extreme loading rates typical of shock wave deformation the competition between slip and deformation twinning can be examined in detail. In this paper, examples of deformation twinning in the intermetallics TiAl, Ti-48Al-1V and Ni3Al as well as in the cermet Al-B4C as a function of strain rate will be presented. Discussion includes: (1) the microstructural and experimental variables influencing twin formation in these systems and twinning topics related to high-strain-rate loading, (2) the high velocity of twin formation, and (3) the influence of deformation twinning on the constitutive response of advanced materials. 13. Photoelectric absorption cross sections with variable abundances Science.gov (United States) Balucinska-Church, Monika; Mccammon, Dan 1992-01-01 Polynomial fit coefficients have been obtained for the energy dependences of the photoelectric absorption cross sections of 17 astrophysically important elements. These results allow the calculation of X-ray absorption in the energy range 0.03-10 keV in material with noncosmic abundances. 14. Simulation of rock deformation behavior Directory of Open Access Journals (Sweden) Я. И.
Рудаев 2016-12-01 Full Text Available The task of simulating the deformation behavior of geomaterials under compression, including the post-peak (over-extreme) branch, is addressed. The physical nature of the variability of rock properties, as an initially inhomogeneous material, is explained by the superposition of deformation and structural transformations of an evolutionary type within open nonequilibrium systems. The description of deformation and failure of rock is therefore related to a hierarchy of instabilities within a system far from thermodynamic equilibrium. It is generally recognized that the energy function of the current stress-strain state is a superposition of a potential component and a disturbance, which includes an imperfection parameter accounting for defects not only existing in the initial state but also appearing under load. The equation of state is obtained by minimizing the energy function with respect to the order parameter. The imperfection parameter is expressed through the strength deterioration, which is viewed as an internal parameter of state. The evolution of the strength deterioration is studied with the help of the Fokker–Planck equation, whose steady-state form corresponds to static loading of the rock. Here the diffusion coefficient is assumed to be constant, while the function reflecting internal sliding and loosening of the geomaterial is taken as the antigradient of an elementary catastrophe. The equation of state is thus supplemented with a relation between the imperfection and strength-deterioration parameters, and the deformation process is identified with the evolution of a dissipative medium, coupled with irreversible structural fluctuations. The theoretical results are validated against experimental data obtained by subjecting rock specimens to compression. 15.
Dual-energy X-ray analysis using synchrotron computed tomography at 35 and 60 keV for the estimation of photon interaction coefficients describing attenuation and energy absorption. Science.gov (United States) Midgley, Stewart; Schleich, Nanette 2015-05-01 A novel method for dual-energy X-ray analysis (DEXA) is tested using measurements of the X-ray linear attenuation coefficient μ. The key is a mathematical model that describes elemental cross sections using a polynomial in atomic number. The model is combined with the mixture rule to describe μ for materials, using the same polynomial coefficients. Materials are characterized by their electron density Ne and statistical moments Rk describing their distribution of elements, analogous to the concept of effective atomic number. In an experiment with materials of known density and composition, measurements of μ are written as a system of linear simultaneous equations, which is solved for the polynomial coefficients. DEXA itself involves computed tomography (CT) scans at two energies to provide a system of non-linear simultaneous equations that are solved for Ne and the fourth statistical moment R4. Results are presented for phantoms containing dilute salt solutions and for a biological specimen. The experiment identifies 1% systematic errors in the CT measurements, arising from third-harmonic radiation, and 20-30% noise, which is reduced to 3-5% by pre-processing with the median filter and careful choice of reconstruction parameters. DEXA accuracy is quantified for the phantom as the mean absolute differences for Ne and R4: 0.8% and 1.0% for soft tissue and 1.2% and 0.8% for bone-like samples, respectively. The DEXA results for the biological specimen are combined with model coefficients obtained from the tabulations to predict μ and the mass energy absorption coefficient at energies of 10 keV to 20 MeV. 16. 
Photoelectric atomic absorption cross sections for elements Z = 6 to 54 in the medium energy X-ray range (5 to 25 keV). Pt. 1 International Nuclear Information System (INIS) Hildebrandt, G.; Stephenson, J.D.; Wagenfeld, H. 1975-01-01 Photoelectric atomic absorption cross sections have been calculated by means of hydrogen-like eigenfunctions for the atomic K, L, M and N sub-shells of the elements Z = 6 to 54, using revised screening constants and an extension of the theory. The absorption cross sections have been further separated into dipole and quadrupole components so that the numerical data can also be applied to the Borrmann effect. (orig.) [de 17. Deformation properties of lead isotopes International Nuclear Information System (INIS) Tolokonnikov, S. V.; Borzov, I. N.; Lutostansky, Yu. S.; Saperstein, E. E. 2016-01-01 The deformation properties of a long lead isotopic chain up to the neutron drip line are analyzed on the basis of the energy density functional (EDF) in the FaNDF 0 Fayans form. The question of whether the ground state of neutron-deficient lead isotopes can have a stable deformation is studied in detail. The prediction of this deformation is contained in the results obtained on the basis of the HFB-17 and HFB-27 Skyrme EDF versions and reported on the Internet. The present analysis reveals that this is at odds with experimental data on charge radii and magnetic moments of odd lead isotopes. The Fayans EDF version predicts a spherical ground state for all light lead isotopes, but some of them (for example, 180 Pb and 184 Pb) prove to be very soft, that is, close to the point of a phase transition to a deformed state. Also, the results obtained in our present study are compared with the predictions of some other Skyrme EDF versions, including SKM*, SLy4, SLy6, and UNE1. By and large, their predictions are closer to the results arising upon the application of the Fayans functional.
For example, the SLy4 functional predicts, in just the same way as the FaNDF 0 functional, a spherical shape for all nuclei of this region. The remaining three Skyrme EDF versions lead to a deformation of some light lead isotopes, but their number is substantially smaller than that in the case of the HFB-17 and HFB-27 functionals. Moreover, the respective deformation energy is substantially lower, which gives grounds to hope for the restoration of a spherical shape upon going beyond the mean-field approximation, which we use here. Also, the deformation properties of neutron-rich lead isotopes are studied up to the neutron drip line. Here, the results obtained with the FaNDF 0 functional are compared with the predictions of the HFB-17, HFB-27, SKM*, and SLy4 Skyrme EDF versions. All of the EDF versions considered here predict the existence of a region where neutron-rich lead isotopes undergo 18. Study of the influence of chemical binding on resonant absorption and scattering of neutrons; Etude de l'influence des liaisons chimiques sur l'absorption et la diffusion des neutrons aux energies de resonance Energy Technology Data Exchange (ETDEWEB) Naberejnev, D.G. [Aix-Marseille-1 Univ., 13 - Marseille (France)] 1999-02-01 At present, the problem of taking into account the crystalline binding in the heavy-nuclei resonance range is not correctly treated in nuclear data processing codes. The present work deals separately with resonant absorption and scattering of neutrons. The influence of crystalline binding is considered for both types of reactions in the harmonic crystal framework. The harmonic crystal model is applied to the study of resonant absorption cross sections to show the inconsistency of the free gas model widely in use in reactor neutronics. The errors due to the use of the latter were found to be non-negligible.
These errors should be corrected by introducing a more elaborated harmonic crystal model in codes for resonances analysis and on the nuclear data processing stage. Currently the influence of crystalline binding on transfer cross section in the resonance domain is taken into account in a naive manner using the model of the free nucleus at rest in the laboratory system. In this work I present a formalism (Uncoupled Phonon Approximation) which permits to consider in more detail the crystalline structure of the nuclear fuel. This formalism shows new features in comparison with the static model. (author) 20. Optical absorption, luminescence, and energy transfer processes studies for Dy3+/Tb3+-codoped borate glasses for solid-state lighting applications Science.gov (United States) Lakshminarayana, G.; Kaky, Kawa M.; Baki, S. O.; Lira, A.; Caldiño, U.; Kityk, I. V.; Mahdi, M. A. 2017-10-01 By using melt quenching technique, good optical quality singly doped Dy3+ or Tb3+ and Dy3+/Tb3+-codoped borate glasses were synthesized and studied by optical absorption, excitation, emission and decay lifetimes curve analysis. Following the absorption spectrum, the evaluated Judd-Ofelt (J-O) intensity parameters (Ωλ (λ = 2, 4 and 6)) were used to calculate the transition probability (AR), the branching ratio (βR), and the radiative lifetime (τR) for different luminescent transitions such as 4I15/2 → 6H15/2, 4F9/2 → 6H15/2, 4F9/2 → 6H13/2, 4F9/2 → 6H11/2 and 4F9/2 → 6H9/2,6F11/2 for the 0.5 mol % singly Dy3+-doped glass. The βR calculated (65%) indicates that for lasing applications, 4F9/2 → 6H13/2 emission transition is highly suitable. For all the Dy3+/Tb3+-codoped glasses, Tb3+: 5D3→7F6 emission decay lifetime curves are found to be non-exponential in nature for different concentrations of Dy3+ codoping. Using the Inokuti-Hirayama model, these nonexponential decay curves were analyzed to identify the nature of the energy transfer (ET) processes and here the electric dipole-dipole interaction is dominant for the ET. Based on the excitation and emission spectra and decay lifetimes curve analysis, the cross relaxation and ET processes between Dy3+ and Tb3+ were confirmed. For the 0.5 mol % Tb3+ and 2.0 mol % Dy3+-codoped glass, the evaluated Tb3+→Dy3+ ET efficiency (η) is found to be 45% under 369 nm excitation.
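An energy-transfer efficiency like the one quoted above is conventionally estimated from donor decay lifetimes measured with and without the acceptor present. A minimal sketch, assuming the standard lifetime ratio η = 1 − τ_DA/τ_D; the lifetime values are placeholders chosen for illustration, not measurements from the paper:

```python
# ET efficiency from decay lifetimes: eta = 1 - tau_DA / tau_D,
# where tau_D is the donor lifetime alone and tau_DA the donor lifetime
# in the presence of the acceptor. Lifetimes below are assumed values.

def transfer_efficiency(tau_donor_alone, tau_donor_with_acceptor):
    """eta = 1 - tau_DA / tau_D, from measured decay lifetimes."""
    return 1.0 - tau_donor_with_acceptor / tau_donor_alone

eta = transfer_efficiency(tau_donor_alone=2.0, tau_donor_with_acceptor=1.1)
print(f"eta = {eta:.0%}")  # -> "eta = 45%" for these placeholder lifetimes
```

The same ratio is often computed from integrated emission intensities instead of lifetimes when the decays are strongly non-exponential.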
Further, for Tb3+/Dy3+-codoped glasses, an enhancement of Tb3+ green emission is observed up to 1.5 mol % Dy3+ codoping, and this is due to the non-radiative resonant ET from Dy3+ to Tb3+ upon 395 nm excitation. For the singly 0.5 mol % Dy3+- or 0.5 mol % Tb3+-doped glass, the calculated color coordinates (x,y) and correlated color temperatures (CCT) fall in the neutral white or warm white light regions, whereas for the Dy3+/Tb3+-codoped glasses the (x,y) and CCT values fall in the yellowish-green region with respect to the different Dy3 1. Fine structure in deformed proton emitting nuclei International Nuclear Information System (INIS) Sonzogni, A. A.; Davids, C. N.; Woods, P. J.; Seweryniak, D.; Carpenter, M. P.; Ressler, J. J.; Schwartz, J.; Uusitalo, J.; Walters, W. B. 1999-01-01 In a recent experiment to study the proton radioactivity of the highly deformed 131 Eu nucleus, two proton lines were detected. The higher-energy one was assigned to the ground-state to ground-state decay, while the lower-energy one to the decay from the ground state to the 2+ state. This constitutes the first observation of fine structure in proton radioactivity. With these four measured quantities (the two proton energies, the half-life and the branching ratio), it is possible to determine the Nilsson configuration of the ground state of the proton emitting nucleus as well as the 2+ energy and nuclear deformation of the daughter nucleus. These results will be presented and discussed 2. X-ray absorption in atomic potassium International Nuclear Information System (INIS) Gomilsek, Jana Padeznik; Kodre, Alojz; Arcon, Iztok; Nemanic, Vincenc 2008-01-01 A new high-temperature absorption cell for potassium vapor is described. The X-ray absorption coefficient of atomic potassium is determined in the energy interval of 600 eV above the K edge, where thresholds for simultaneous excitations of 1s and outer electrons, down to the [1s2p] excitation, appear.
The result also represents the atomic absorption background for XAFS (X-ray absorption fine structure) analysis. The K ionization energy in the potassium vapor is determined and compared with theoretical data and with the value for the metal 3. Energy Absorption Contribution and Strength in Female Athletes at Return to Sport After Anterior Cruciate Ligament Reconstruction: Comparison With Healthy Controls. Science.gov (United States) Boo, Marie E; Garrison, J Craig; Hannon, Joseph P; Creed, Kalyssa M; Goto, Shiho; Grondin, Angellyn N; Bothwell, James M 2018-03-01 Female patients are more likely to suffer a second anterior cruciate ligament (ACL) injury after ACL reconstruction (ACLR) and return to sport (RTS) compared with healthy female controls. Few studies have examined the energy absorption contribution (EAC) that could lead to this subsequent injury. It was hypothesized that the ACLR group would demonstrate an altered EAC between joints (hip, knee, and ankle) but no difference in quadriceps, hip abduction, or hip external rotation (ER) strength at the time of RTS. Cross-sectional study; Level of evidence, 3. A total of 34 female participants (ACLR: n = 17; control: n = 17) were enrolled in the study and matched for age and activity level. Jump landing performance for the initial 50 milliseconds of landing of a lateral-vertical jump was assessed using a 10-camera 3-dimensional motion capture system and 2 force plates. Isokinetic quadriceps strength was measured using a Biodex machine, and hip abduction and ER isometric strength were measured using a handheld dynamometer. All values were normalized to the participant's height and weight. A 1-way multivariate analysis of variance was used to assess between-group differences in the EAC at the hip, knee, and ankle. Two 1-way analyses of variance were used to independently examine quadriceps, hip abduction, and hip ER strength between the groups.
Significant differences in the EAC were found between the groups for the involved hip (P = .002), uninvolved hip (P = .005), and involved ankle (P = .023). There were no between-group differences in the EAC for the involved or uninvolved knee or the uninvolved ankle. Patients who underwent ACLR demonstrated significantly decreased quadriceps strength on the involved limb (P = .02) and decreased hip ER strength on both the involved (P = .005) and uninvolved limbs (P = .002). No significant strength differences were found between the groups for the uninvolved quadriceps or for involved or uninvolved hip abduction. At RTS, patients who underwent ACLR 4. Developing a Virtual Rock Deformation Laboratory Science.gov (United States) Zhu, W.; Ougier-simonin, A.; Lisabeth, H. P.; Banker, J. S. 2012-12-01 Experimental rock physics plays an important role in advancing earthquake research. Despite its importance in geophysics, reservoir engineering, waste deposits and energy resources, most geology departments in U.S. universities do not have rock deformation facilities. A virtual deformation laboratory can serve as an efficient tool to help geology students, nationally and internationally, learn about rock deformation. Working with computer science engineers, we built a virtual deformation laboratory that aims at fostering user interaction to facilitate classroom and outreach teaching and learning. The virtual lab is centered around a triaxial deformation apparatus in which laboratory measurements of mechanical and transport properties such as stress, axial and radial strains, acoustic emission activities, wave velocities, and permeability are demonstrated. A student user can create her avatar to enter the virtual lab.
In the virtual lab, the avatar can browse and choose among various rock samples, determine the testing conditions (pressure, temperature, strain rate, loading paths), then operate the virtual deformation machine to observe how deformation changes physical properties of rocks. Actual experimental results on the mechanical, frictional, sonic, acoustic and transport properties of different rocks at different conditions are compiled. The data acquisition system in the virtual lab is linked to the compiled experimental data. Structural and microstructural images of deformed rocks are uploaded and linked to different deformation tests. The integration of the microstructural image and the deformation data allows the student to visualize how forces reshape the structure of the rock and change the physical properties. The virtual lab is built using the Game Engine. The geological background, outstanding questions related to the geological environment, and physical and mechanical concepts associated with the problem will be illustrated on the web portal. In 5. A measuring method of photo-electric cross section. Application to high-Z elements between 40 keV and 220 keV. Measurement of K absorption edge energy of Au, Th, U, Pu International Nuclear Information System (INIS) Chartier, J.-L. 1977-09-01 This study first describes a bent crystal monochromator developed for the production of monochromatic beams in a continuous energy range from 30 to 250 keV; it is completed by a metrological application of the device (determination of K absorption edge energy of Au, Th, U, Pu). A method and the associated experimental procedure were developed to measure the photo-electric cross section for high-Z elements; the results are presented with a relative uncertainty ranging between 3 and 6%. Finally, the experimental values are compared with values calculated from theories using self-consistent potential models [fr 6.
Auxetic hexachiral structures with wavy ligaments for large elasto-plastic deformation Science.gov (United States) Zhu, Yilin; Wang, Zhen-Pei; Hien Poh, Leong 2018-05-01 The hexachiral structure is in-plane isotropic in small deformation. When subjected to large elasto-plastic deformation, however, the hexachiral structure tends to lose its auxeticity and/or isotropy—properties which are desirable in many potential applications. The objective of this study is to improve these two mechanical properties, without significantly compromising the effective yield stress, in the regime with significant material and geometrical nonlinearity effects. It is found that the deformation mechanisms underlying the auxeticity and isotropy properties of a hexachiral structure are largely influenced by the extent of rotation of the central ring in a unit cell. To facilitate the development of this deformation mechanism, an improved design with wavy ligaments is proposed. The improved performance of the proposed hexachiral structure is demonstrated. An initial study on possible applications as a protective material is next carried out, where the improved hexachiral design is shown to exhibit higher specific energy absorption capacity compared to the original design, as well as standard honeycomb structures. 7. Deformations of superconformal theories Energy Technology Data Exchange (ETDEWEB) Córdova, Clay [School of Natural Sciences, Institute for Advanced Study,1 Einstein Drive, Princeton, NJ 08540 (United States); Dumitrescu, Thomas T. [Department of Physics, Harvard University,17 Oxford Street, Cambridge, MA 02138 (United States); Intriligator, Kenneth [Department of Physics, University of California,9500 Gilman Drive, San Diego, La Jolla, CA 92093 (United States) 2016-11-22 We classify possible supersymmetry-preserving relevant, marginal, and irrelevant deformations of unitary superconformal theories in d≥3 dimensions. Our method only relies on symmetries and unitarity. 
Hence, the results are model-independent and do not require a Lagrangian description. Two unifying themes emerge: First, many theories admit deformations that reside in multiplets together with conserved currents. Such deformations can lead to modifications of the supersymmetry algebra by central and non-central charges. Second, many theories with a sufficient amount of supersymmetry do not admit relevant or marginal deformations, and some admit neither. The classification is complicated by the fact that short superconformal multiplets display a rich variety of sporadic phenomena, including supersymmetric deformations that reside in the middle of a multiplet. We illustrate our results with examples in diverse dimensions. In particular, we explain how the classification of irrelevant supersymmetric deformations can be used to derive known and new constraints on moduli-space effective actions. 8. Mechanics of deformable bodies CERN Document Server Sommerfeld, Arnold Johannes Wilhelm 1950-01-01 Mechanics of Deformable Bodies: Lectures on Theoretical Physics, Volume II covers topics on the mechanics of deformable bodies. The book discusses the kinematics, statics, and dynamics of deformable bodies; the vortex theory; as well as the theory of waves. The text also describes the flow with given boundaries. Supplementary notes on selected hydrodynamic problems and supplements to the theory of elasticity are provided. Physicists, mathematicians, and students taking related courses will find the book useful. 9. ''Identical'' bands in normally-deformed nuclei International Nuclear Information System (INIS) Garrett, J.D.; Baktash, C.; Yu, C.H. 1990-01-01 Gamma-ray transition energies in neighboring odd- and even-mass nuclei for normally-deformed nuclear configurations are analyzed in a manner similar to recent analyses for superdeformed states. The moment of inertia is shown to depend on pair correlations and the aligned angular momentum of the odd nucleon.
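For reference, the link between measured gamma-ray transition energies and the moment of inertia in such rotational-band analyses follows from the standard rigid-rotor relations (textbook identities, not results specific to this record):

```latex
% Rotational band energies:
%   E(I) = \frac{\hbar^2}{2\mathcal{J}}\, I(I+1)
% Stretched E2 transition energy between states I and I-2:
E_\gamma(I) = E(I) - E(I-2) = \frac{\hbar^2}{2\mathcal{J}}\,(4I-2),
% so the (kinematic) moment of inertia is read off directly from
% the measured gamma-ray energies:
\mathcal{J} = \frac{\hbar^2\,(4I-2)}{2\,E_\gamma(I)}.
```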
The implications of this analysis for ''identical'' super-deformed bands are discussed. 26 refs., 9 figs 10. Equilibrium Droplets on Deformable Substrates: Equilibrium Conditions. Science.gov (United States) Koursari, Nektaria; Ahmed, Gulraiz; Starov, Victor M 2018-05-15 Equilibrium conditions of droplets on deformable substrates are investigated, and it is proven using Jacobi's sufficient condition that the obtained solutions really provide equilibrium profiles of both the droplet and the deformed support. At equilibrium, the excess free energy of the system should have a minimum value, which means that both necessary and sufficient conditions of the minimum should be fulfilled. Only in this case do the obtained profiles provide the minimum of the excess free energy. The necessary condition of the equilibrium means that the first variation of the excess free energy should vanish and the second variation should be positive. These two conditions alone, however, do not prove that the obtained profiles correspond to the minimum of the excess free energy, and they could not. It is necessary to check whether the sufficient condition of the equilibrium (Jacobi's condition) is satisfied. To the best of our knowledge, Jacobi's condition has never been verified for any already published equilibrium profiles of both the droplet and the deformable substrate. A simple model of the equilibrium droplet on the deformable substrate is considered, and it is shown that the deduced profiles of the equilibrium droplet and deformable substrate satisfy Jacobi's condition, that is, really provide the minimum of the excess free energy of the system. To simplify calculations, a simplified linear disjoining/conjoining pressure isotherm is adopted. It is shown that both necessary and sufficient conditions for equilibrium are satisfied. For the first time, the validity of Jacobi's condition is verified.
The latter proves that the developed model really provides (i) the minimum of the excess free energy of the system droplet/deformable substrate and (ii) equilibrium profiles of both the droplet and the deformable substrate. 11. Comparative analysis of performance and techno-economics for a H{sub 2}O-NH{sub 3}-H{sub 2} absorption refrigerator driven by different energy sources Energy Technology Data Exchange (ETDEWEB) Abdullah, Mohammad Omar; Hieng, Tang Chung [Department of Chemical Engineering and Energy Sustainability, 94300 Kota Samarahan, Sarawak, East Malaysia (Malaysia)] 2010-05-15 The objectives of the present work are twofold. First, it evaluates the transient temperature performance of the H{sub 2}O-NH{sub 3}-H{sub 2} absorption cooling machine system's components under two types of energy sources, i.e. conventional electric energy from the grid (electric) and fuel energy from liquid petroleum gas (LPG). The results show that the performance of the various components under the different energy sources is largely consistent. For the evaporator, the system with electric supply has a shorter start-up time, around 6 min earlier than the system run with LPG. Meanwhile, the system powered by LPG produced a lower cooling temperature of around -9 °C, compared to the system run on electricity, which produced a temperature of around -7 °C. An economic study was subsequently carried out for three different energy sources, i.e. electricity, LPG and solar (photovoltaic) energy. From the techno-economic analyses, it was found that conventional grid electricity is still the best energy source for short-term application, as far as the present location and conditions are concerned. LPG is the next most attractive energy source, especially at locations with a constant LPG supply; photovoltaic solar energy is attractive for long-term consideration since it has zero fuel cost and is environmentally friendly, but has the highest initial cost. (author) 12.
Ground state properties of exotic nuclei in the deformed medium-mass region International Nuclear Information System (INIS) Manju; Chatterjee, R.; Singh, Jagjit; Shubhchintak 2017-01-01 The dipole moment, size of the nucleus and other ground state properties of the deformed nuclei 37 Mg and 31 Ne are presented. Furthermore, with this deformed wave function, the electric dipole strength distribution for the deformed nuclei 37 Mg and 31 Ne is calculated. This will allow us to investigate the two-dimensional scaling phenomenon with two parameters: quadrupole deformation and separation energy 13. κ-deformed Dirac oscillator in an external magnetic field Science.gov (United States) Chargui, Y.; Dhahbi, A.; Cherif, B. 2018-04-01 We study the solutions of the (2 + 1)-dimensional κ-deformed Dirac oscillator in the presence of a constant transverse magnetic field. We demonstrate how the deformation parameter affects the energy eigenvalues of the system and the corresponding eigenfunctions. Our findings suggest that this system could be used to detect experimentally the effect of the deformation. We also show that the hidden supersymmetry of the non-deformed system reduces to a hidden pseudo-supersymmetry having the same algebraic structure as a result of the κ-deformation. 14. Investigation of the neutron emission spectra of some deformed nuclei for (n, xn) reactions up to 26 MeV energy International Nuclear Information System (INIS) Kaplan, A.; Bueyuekuslu, H.; Tel, E.; Aydin, A.; Boeluekdemir, M.H. 2011-01-01 In this study, neutron-emission spectra produced by (n, xn) reactions up to 26 MeV for some deformed target nuclei such as 165 Ho, 181 Ta, 184 W, 232 Th and 238 U have been investigated. Also, the mean free path parameter's effect on (n, xn) neutron-emission spectra has been examined. In the calculations, pre-equilibrium neutron-emission spectra have been calculated by using the newly evaluated hybrid model, the geometry-dependent hybrid model, the full exciton model and the cascade exciton model.
The reaction equilibrium component has been calculated by the Weisskopf-Ewing model. The obtained results have been discussed and compared with the available experimental data and found to be in agreement. (author) 15. A novel SUSY energy bound-states treatment of the Klein-Gordon equation with PT-symmetric and q-deformed parameter Hulthén potential Science.gov (United States) Aktas, M. 2018-01-01 In this study, we focus on investigating the exact relativistic bound-state spectra for supersymmetric, PT-supersymmetric and non-Hermitian versions of the q-deformed parameter Hulthén potential. The Hamiltonian hierarchy mechanism, namely the factorization method, is adopted within the framework of SUSYQM. This algebraic approach is used in solving the Klein-Gordon equation with the potential cases. The results, obtained analytically through straightforward calculations, take consistent forms for certain values of q. The results may be of particular interest for such applications. That is, they can be involved in determining the quantum structural properties of molecules for ro-vibrational states, and the optical spectra characteristics of semiconductor devices with regard to the lattice dynamics. They are also employed to construct the broken or unbroken case of the supersymmetric particle model concerning the interaction between the elementary particles. 16. Intracrystalline deformation of calcite NARCIS (Netherlands) Bresser, J.H.P. de 1991-01-01 It is well established from observations on natural calcite tectonites that intracrystalline plastic mechanisms are important during the deformation of calcite rocks in nature. In this thesis, new data are presented on fundamental aspects of deformation behaviour of calcite under conditions where 17. The Spherical Deformation Model DEFF Research Database (Denmark) Hobolth, Asgar 2003-01-01 Miller et al. (1994) describe a model for representing spatial objects with no obvious landmarks.
Each object is represented by a global translation and a normal deformation of a sphere. The normal deformation is defined via the orthonormal spherical-harmonic basis. In this paper we analyse the s... 18. The evolution of internal stress and dislocation during tensile deformation in a 9Cr ferritic/martensitic (F/M) ODS steel investigated by high-energy X-rays International Nuclear Information System (INIS) Zhang, Guangming; Zhou, Zhangjian; Mo, Kun; Miao, Yinbin; Liu, Xiang; Almer, Jonathan; Stubbins, James F. 2015 An application of high-energy wide angle synchrotron X-ray diffraction to investigate the tensile deformation of 9Cr ferritic/martensitic (F/M) ODS steel is presented. With tensile loading and in-situ X-ray exposure, the lattice strain development of the matrix was determined. The lattice strain was found to decrease with increasing temperature, and the difference in Young's modulus of six different reflections at different temperatures reveals the temperature dependence of elastic anisotropy. The mean internal stress was calculated and compared with the applied stress, showing that the strengthening factor increased with increasing temperature, indicating that the oxide nanoparticles have a good strengthening impact at high temperature. The dislocation density and character were also measured during tensile deformation. The dislocation density decreased with increasing temperature due to the greater mobility of dislocations at high temperature. The dislocation character was determined by best-fit methods for different dislocation average contrasts with various levels of uncertainty. The results show that edge-type dislocations dominate the plastic strain at room temperature (RT) and 300 °C, while screw-type dislocations dominate at 600 °C.
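Lattice strains of the kind tracked in the diffraction study above are obtained from Bragg-peak shifts; a minimal sketch of the underlying arithmetic (the wavelength and diffraction angles are assumed example values, not measurements from this record):

```python
import math

# Lattice strain from a diffraction peak shift.
# Bragg's law (first order): lambda = 2 d sin(theta)  =>  d = lambda / (2 sin(theta))
# strain = (d_loaded - d_unloaded) / d_unloaded

def d_spacing(wavelength_angstrom: float, two_theta_deg: float) -> float:
    """Interplanar spacing from Bragg's law (n = 1)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))

wavelength = 0.1173  # Angstrom; an illustrative high-energy synchrotron value
d0 = d_spacing(wavelength, two_theta_deg=4.100)  # unloaded reference peak (assumed)
d = d_spacing(wavelength, two_theta_deg=4.096)   # peak shifted to lower angle under load
strain = (d - d0) / d0
print(f"lattice strain: {strain:.3e}")  # positive: shift to lower angle = tension
```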
The dominance of edge character in 9Cr F/M ODS steels at RT and 300 °C is likely due to the stronger pinning by the nanoparticles of the more mobile edge dislocations compared with screw dislocations, while the predominantly screw-type dislocation structure at 600 °C may be explained by the activated cross slip of screw segments. - Highlights: • The tensile deformation of 9Cr ODS steel was studied by synchrotron irradiation. • The evolution of internal mean stress was calculated. • The evolution of dislocation character was determined by the best-fit method. • Edge type dominates plasticity at RT and 300 °C, while screw type dominates at 600 °C. 19. The evolution of internal stress and dislocation during tensile deformation in a 9Cr ferritic/martensitic (F/M) ODS steel investigated by high-energy X-rays Energy Technology Data Exchange (ETDEWEB) Zhang, Guangming [School of Materials Science and Engineering, University of Science and Technology, Beijing, Beijing 100083 (China); Department of Nuclear, Plasma and Radiological Engineering, University of Illinois at Urbana-Champaign, IL 61801 (United States); Zhou, Zhangjian, E-mail: [email protected] [School of Materials Science and Engineering, University of Science and Technology, Beijing, Beijing 100083 (China); Mo, Kun [Nuclear Engineering Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Miao, Yinbin; Liu, Xiang [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois at Urbana-Champaign, IL 61801 (United States); Almer, Jonathan [X-ray Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Stubbins, James F. [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois at Urbana-Champaign, IL 61801 (United States)] 2015-12-15 An application of high-energy wide angle synchrotron X-ray diffraction to investigate the tensile deformation of 9Cr ferritic/martensitic (F/M) ODS steel is presented.
With tensile loading and in-situ X-ray exposure, the lattice strain development of the matrix was determined. The lattice strain was found to decrease with increasing temperature, and the difference in Young's modulus of six different reflections at different temperatures reveals the temperature dependence of elastic anisotropy. The mean internal stress was calculated and compared with the applied stress, showing that the strengthening factor increased with increasing temperature, indicating that the oxide nanoparticles have a good strengthening impact at high temperature. The dislocation density and character were also measured during tensile deformation. The dislocation density decreased with increasing temperature due to the greater mobility of dislocations at high temperature. The dislocation character was determined by best-fit methods for different dislocation average contrasts with various levels of uncertainty. The results show that edge-type dislocations dominate the plastic strain at room temperature (RT) and 300 °C, while screw-type dislocations dominate at 600 °C. The dominance of edge character in 9Cr F/M ODS steels at RT and 300 °C is likely due to the stronger pinning by the nanoparticles of the more mobile edge dislocations compared with screw dislocations, while the predominantly screw-type dislocation structure at 600 °C may be explained by the activated cross slip of screw segments. - Highlights: • The tensile deformation of 9Cr ODS steel was studied by synchrotron irradiation. • The evolution of internal mean stress was calculated. • The evolution of dislocation character was determined by the best-fit method. • Edge type dominates plasticity at RT and 300 °C, while screw type dominates at 600 °C. 20. A transient absorption study of allophycocyanin Indian Academy of Sciences (India) Transient dynamics of allophycocyanin trimers and monomers are observed by using the pump-probe transient absorption technique.
The origin of spectral components of the transient absorption spectra is discussed in terms of both kinetics and spectroscopy. We find that the energy gap between the ground and excited ... 1. Energy Harvesting Through Optical Properties of TiO2 and C-TiO2 Nanofluid for Direct Absorption Solar Collectors OpenAIRE alagappan, subramaniyan; Subramaniyan, A. L.; Lakshmi Priya, S.; Ilangovan, R. 2016-01-01 Nanofluids are tailored suspensions of nanoparticles in a suitable base fluid. The discovery of nanofluids by Stephen Choi opened up a new heat transfer mechanism. Since then, considerable research has been carried out to explore the thermal, electrical and magnetic properties of nanofluids. Nanofluids have shown enhanced electrical and thermal conductivities. Nanofluids have also proven to be potential candidates for direct absorption solar collectors (DASC). The present work investigates the effect of nanopartic... 2. Hybrid Solar-Geothermal Energy Absorption Air-Conditioning System Operating with NaOH-H2O—Las Tres Vírgenes (Baja California Sur), “La Reforma” Case Directory of Open Access Journals (Sweden) Yuridiana Rocio Galindo-Luna 2018-05-01 Full Text Available Solar and geothermal energies are considered cleaner and more useful energy sources that can be used to avoid the negative environmental impacts caused by burning fossil fuels. Several works have reported air-conditioning systems that use solar energy coupled to geothermal renewable energy as a thermal source. In this study, an Absorption Air-Conditioning System (AACS) used sodium hydroxide-water (NaOH-H2O) instead of lithium bromide-water to reduce the cost. Low-enthalpy geothermal heat was derived from two shallow wells, 50 and 55 m deep. These wells are of interest due to the thermal recovery (temperature vs. time) of 56.2 °C that was possible at the maximum depth, which can be used for the first stage of the process.
These wells were coupled with solar energy as a geothermal energy application for direct uses such as air-conditioning systems. We studied the performance of an absorption cooling system operating with a NaOH-H2O mixture and using a parabolic trough plant coupled with a low-enthalpy geothermal heat system as a hybrid heat source, as an alternative process that can help reduce operating costs and carbon dioxide emissions. The numerical heat transfer results showed the maximum convective heat transfer coefficient, as a function of fluid velocity, and the maximum temperature for depths greater than 40 m. The results showed that the highest temperatures occur at low fluid velocities, less than or equal to 5.0 m/s. Under these conditions, reaching temperatures between 51.0 and 56.2 °C in the well was possible, which is the temperature required of the geothermal source for the solar energy process. A water stream was used as the working fluid in the parabolic trough collector field. During the evaluation stage, the average experimental storage tank temperature achieved by the parabolic trough plant was 93.8 °C on October 23 and 92.9 °C on October 25, 2017. The numerical simulation used to evaluate the performance of the absorption cycle used a generator 3. The evolution of internal stress and dislocation during tensile deformation in a 9Cr ferritic/martensitic (F/M) ODS steel investigated by high-energy X-rays Energy Technology Data Exchange (ETDEWEB) Zhang, Guangming; Zhou, Zhangjian; Mo, Kun; Miao, Yinbin; Liu, Xiang; Almer, Jonathan; Stubbins, James F. 2015-12-01 An application of high-energy wide angle synchrotron X-ray diffraction to investigate the tensile deformation of 9Cr ferritic/martensitic (F/M) ODS steel is presented. With tensile loading and in-situ X-ray exposure, the lattice strain development of the matrix was determined.
The lattice strain was found to decrease with increasing temperature, and the difference in Young's modulus of six different reflections at different temperatures reveals the temperature dependence of elastic anisotropy. The mean internal stress was calculated and compared with the applied stress, showing that the strengthening factor increased with increasing temperature, indicating that the oxide nanoparticles have a good strengthening impact at high temperature. The dislocation density and character were also measured during tensile deformation. The dislocation density decreased with increasing temperature due to the greater mobility of dislocations at high temperature. The dislocation character was determined by best-fit methods for different dislocation average contrasts with various levels of uncertainty. The results show that edge-type dislocations dominate the plastic strain at room temperature (RT) and 300 °C, while screw-type dislocations dominate at 600 °C. The dominance of edge character in 9Cr F/M ODS steels at RT and 300 °C is likely due to the stronger pinning by the nanoparticles of the more mobile edge dislocations compared with screw dislocations, while the predominantly screw-type dislocation structure at 600 °C may be explained by the activated cross slip of screw segments. 4. Tin Oxide Crystals Exposed by Low-Energy {110} Facets for Enhanced Electrochemical Heavy Metal Ions Sensing: X-ray Absorption Fine Structure Experimental Combined with Density-Functional Theory Evidence. Science.gov (United States) Jin, Zhen; Yang, Meng; Chen, Shao-Hua; Liu, Jin-Huai; Li, Qun-Xiang; Huang, Xing-Jiu 2017-02-21 Herein, we revealed that the electrochemical behavior in the detection of heavy metal ions (HMIs) depends largely on the exposed facets of SnO2 nanoparticles. Compared to the high-energy {221} facet, the low-energy {110} facet of SnO2 possessed better electrochemical performance.
The adsorption/desorption tests, density-functional theory (DFT) calculations, and X-ray absorption fine structure (XAFS) studies showed that the lower barrier energy of surface diffusion on the {110} facet was critical for the superior electrochemical properties: it favored ion diffusion on the electrode, further leading to the enhanced electrochemical performance. Through the combination of experiments and theoretical calculations, a reliable interpretation of the mechanism for electroanalysis of HMIs with nanomaterials exposed by different crystal facets has been provided. Furthermore, it provides a deep insight into the key factors for improving the electrochemical performance of HMI detection, so as to design high-performance electrochemical sensors. 5. Is nucleon deformed? International Nuclear Information System (INIS) Abbas, Afsar 1992-01-01 The surprising answer to this question is: yes. The evidence comes from a study of the quark model of the single nucleon and when it is found in a nucleus. It turns out that many of the long-standing problems of the Naive Quark Model are taken care of if the nucleon is assumed to be deformed. Only one value of the parameter P_D ∼ 1/4 (which specifies the deformation) fits g_A (the axial-vector coupling constant) for all the semileptonic decays of baryons, the F/D ratio, the pion-nucleon-delta coupling constant f(πNΔ), the delta-delta coupling constant f(πΔΔ), the M1 transition moment μ(ΔN) and g_1(p), the spin structure function of the proton. All this gives a strong hint that both the neutron and the proton are deformed. It is important to look for further signatures of this deformation. When this deformed nucleon finds itself in a nuclear medium, its deformation decreases. So much so that in a heavy nucleus the nucleons are actually spherical. We look into the Gamow-Teller strengths, magnetic moments and magnetic transition strengths in nuclei to study this property. (author). 15 refs 6.
D-xylose absorption Science.gov (United States) ... this page: //medlineplus.gov/ency/article/003606.htm D-xylose absorption To use the sharing features on this page, please enable JavaScript. D-xylose absorption is a laboratory test to determine ... 7. Study on the deformations appearing in a high-energy accelerator superconducting magnets as a result of heat and electromagnetic stresses International Nuclear Information System (INIS) Greben', L.I.; Mironov, E.S.; Mustafin, Kh.Kh. 1979-01-01 Techniques for numerical calculations are briefly described and the results of studying the deformation distribution are given in a two-dimensional model of the SPD-3 superconduction dipole magnet. The SPD-3 model incorporates a multilayer winding with an internal diameter of 85 mm and an external diameter of 157 mm placed on a stainless-steel tube having a thichness of 5 mm. The 5-cm stainless steel binding provides strong preliminary compression of the winding. It is shown that on internal surfaces of the winding, variations in radial displacements do not exceed +-0.002 mm. Input of current corresponding to a 4.3 T induction in the aperture center results in additional radial displacements in the magnet. Azimuthal displacements in the winding increase by more than 10 times. In the magnet design version having no internal tube, considerable variations have been observed in values of radial displacements in the winding (to +-0.1 mm) while azimuthal displacements in the winding first layer have rea have reached 0.1 mm. It is shown that such substantial displacements may have a pronounced effect of the field distribution in the aperture 8. 
The Influence of Deformation on the Surface Structure of Silicon Under Irradiation by^{86}$Kr Ions with Energy 253 MeV CERN Document Server Vlasukova, L A; Hofmann, A; Komarov, F F; Semina, V K; Yuvchenko, V N 2006-01-01 The influence of the previously produced deformation in silicon structure by means of macro-scratch surface covering on the sputtering processes under following irradiation by swift$^{86}$Kr ions is studied. The significant leveling of surface relief of irradiated silicon was observed using atomic force microscopy method (AFM), in particular it takes place for smoothing of micro-scratches produced by mechanical polishing of silicon initial plates. The experimental studies of irradiated surface allowed one to conclude that it is impossible to explain the surface changes only by elastic cascade mechanism as it was calculated using the computer code TRIM-98, because the calculated sputtered layers of silicon at ion fluence$\\Phi_{\\rm Kr} = 1{.}3\\cdot10^{14}$ion/cm$^{2}$should be$\\Delta H_{\\rm Sputtering}^{\\rm Kr} = 5{.}5\\cdot10^{-3 }{\\AA}. Correspondingly, the surface changes should be explained by one of mechanisms of inelastic sputtering. The macro-cracks on the surface were observed near the scratches. I... 9. How To Tackle the Issues in Free Energy Simulations of Long Amphiphiles Interacting with Lipid Membranes: Convergence and Local Membrane Deformations DEFF Research Database (Denmark) Filipe, H. A. L.; Moreno, M. J.; Rog, T. 2014-01-01 One of the great challenges in membrane biophysics is to find a means to foster the transport of drugs across complex membrane structures. In this spirit, we elucidate methodological challenges associated with free energy computations of complex chainlike molecules across lipid membranes....... As an appropriate standard molecule to this end, we consider 7-nitrobenz-2-oxa-1,3-diazol-4-yl-labeled fatty amine, NBD-C-n, which is here dealt with as a homologous series with varying chain lengths. 
We found the membrane-water interface region to be highly sensitive to details in free energy computations. Despite...... of radius 1.7 nm from the amphiphile. Importantly, the free energy results given by PGC were found to be qualitatively consistent with experimental data, while the PGD results were not. We conclude that with long amphiphiles there is reason for concern with regard to computations of their free energy... 10. Deformation properties of lead isotopes Energy Technology Data Exchange (ETDEWEB) Tolokonnikov, S. V.; Borzov, I. N.; Lutostansky, Yu. S.; Saperstein, E. E., E-mail: [email protected] [National Research Center Kurchatov Institute (Russian Federation) 2016-01-15 The deformation properties of a long lead isotopic chain up to the neutron drip line are analyzed on the basis of the energy density functional (EDF) in the FaNDF{sup 0} Fayans form. The question of whether the ground state of neutron-deficient lead isotopes can have a stable deformation is studied in detail. The prediction of this deformation is contained in the results obtained on the basis of the HFB-17 and HFB-27 Skyrme EDF versions and reported on Internet. The present analysis reveals that this is at odds with experimental data on charge radii and magnetic moments of odd lead isotopes. The Fayans EDF version predicts a spherical ground state for all light lead isotopes, but some of them (for example, {sup 180}Pb and {sup 184}Pb) prove to be very soft—that is, close to the point of a phase transition to a deformed state. Also, the results obtained in our present study are compared with the predictions of some other Skyrme EDF versions, including SKM*, SLy4, SLy6, and UNE1. By and large, their predictions are closer to the results arising upon the application of the Fayans functional. For example, the SLy4 functional predicts, in just the same way as the FaNDF{sup 0} functional, a spherical shape for all nuclei of this region. 
The remaining three Skyrme EDF versions lead to a deformation of some light lead isotopes, but their number is substantially smaller than that in the case of the HFB-17 and HFB-27 functionals. Moreover, the respective deformation energy is substantially lower, which gives grounds to hope for the restoration of a spherical shape upon going beyond the mean-field approximation, which we use here. Also, the deformation properties of neutron-rich lead isotopes are studied up to the neutron drip line. Here, the results obtained with the FaNDF{sup 0} functional are compared with the predictions of the HFB-17, HFB-27, SKM*, and SLy4 Skyrme EDF versions. All of the EDF versions considered here predict the existence of a region where neutron 11. Absorption and excretion tests International Nuclear Information System (INIS) Berberich, R. 1988-01-01 The absorption and excretion of radiopharmaceuticals is still of interest in diagnostic investigations of nuclear medicine. In this paper the most common methods of measuring absorption and excretion are described. The performance of the different tests and their standard values are discussed. More over the basic possibilities of measuring absorption and excretion including the needed measurement equipments are presented. (orig.) [de 12. Extremely deformable structures CERN Document Server 2015-01-01 Recently, a new research stimulus has derived from the observation that soft structures, such as biological systems, but also rubber and gel, may work in a post critical regime, where elastic elements are subject to extreme deformations, though still exhibiting excellent mechanical performances. This is the realm of ‘extreme mechanics’, to which this book is addressed. The possibility of exploiting highly deformable structures opens new and unexpected technological possibilities. 
In particular, the challenge is the design of deformable and bi-stable mechanisms which can reach superior mechanical performances and can have a strong impact on several high-tech applications, including stretchable electronics, nanotube serpentines, deployable structures for aerospace engineering, cable deployment in the ocean, but also sensors and flexible actuators and vibration absorbers. Readers are introduced to a variety of interrelated topics involving the mechanics of extremely deformable structures, with emphasis on ... 13. Diffeomorphic Statistical Deformation Models DEFF Research Database (Denmark) Hansen, Michael Sass; Hansen, Mads/Fogtman; Larsen, Rasmus 2007-01-01 In this paper we present a new method for constructing diffeomorphic statistical deformation models in arbitrary dimensional images with a nonlinear generative model and a linear parameter space. Our deformation model is a modified version of the diffeomorphic model introduced by Cootes et al....... The modifications ensure that no boundary restriction has to be enforced on the parameter space to prevent folds or tears in the deformation field. For straightforward statistical analysis, principal component analysis and sparse methods, we assume that the parameters for a class of deformations lie on a linear...... with ground truth in form of manual expert annotations, and compared to Cootes's model. We anticipate applications in unconstrained diffeomorphic synthesis of images, e.g. for tracking, segmentation, registration or classification purposes.... 14. Low-Absorption Liquid Crystals for Infrared Beam Steering Science.gov (United States) 2015-09-30 controlled the curing temperature at 0oC to obtain small domain size and fast response time is expected. Here, a UV light-emitting diode ( LED ) lamp ...absorption; def.=deformation; w =weak absorption; v.=variable intensity) [B. D. 
Mistry, A Handbook of Spectroscopic Data: Chemistry- UV , IR, PMR, CNMR and...contributed by the core structure and terminal groups. Due to UV instability of double bonds and carbon-carbon triple bonds, conjugated phenyl rings have 15. The Spherical Deformation Model DEFF Research Database (Denmark) Hobolth, Asgar 2003-01-01 Miller et al. (1994) describe a model for representing spatial objects with no obvious landmarks. Each object is represented by a global translation and a normal deformation of a sphere. The normal deformation is defined via the orthonormal spherical-harmonic basis. In this paper we analyse the s...... a single central section of the object. We use maximum-likelihood-based inference for this purpose and demonstrate the suggested methods on real data.... 16. Variation of low temperature internal friction of microplastic deformation of high purity molybdenum single crystals International Nuclear Information System (INIS) Pal-Val, P.P.; Kaufmann, H.J. 1984-01-01 Amplitude and temperature spectra of ultrasound absorption in weakly deformed high purity molybdenum single crystals of different orientations were measured. The results were discussed in terms of parameter changes related to quasiparticle or dislocation oscillations, respectively, dislocation point defect interactions as well as defect generation at microplastic deformation. (author) 17. Variation of low temperature internal friction of microplastic deformation of high purity molybdenum single crystals Energy Technology Data Exchange (ETDEWEB) Pal-Val, P.P. (AN Ukrainskoj SSR, Kharkov. Fiziko-Tekhnicheskij Inst. Nizkikh Temperatur); Kaufmann, H.J. (Akademie der Wissenschaften der DDR, Berlin) 1984-08-01 Amplitude and temperature spectra of ultrasound absorption in weakly deformed high purity molybdenum single crystals of different orientations were measured. 
The results were discussed in terms of parameter changes related to quasiparticle or dislocation oscillations, respectively, dislocation point defect interactions as well as defect generation at microplastic deformation. 18. Calcium absorption and achlorhydria International Nuclear Information System (INIS) Recker, R.R. 1985-01-01 Defective absorption of calcium has been thought to exist in patients with achlorhydria. The author compared absorption of calcium in its carbonate form with that in a pH-adjusted citrate form in a group of 11 fasting patients with achlorhydria and in 9 fasting normal subjects. Fractional calcium absorption was measured by a modified double-isotope procedure with 0.25 g of calcium used as the carrier. Mean calcium absorption (+/- S.D.) in the patients with achlorhydria was 0.452 +/- 0.125 for citrate and 0.042 +/- 0.021 for carbonate (P less than 0.0001). Fractional calcium absorption in the normal subjects was 0.243 +/- 0.049 for citrate and 0.225 +/- 0.108 for carbonate (not significant). Absorption of calcium from carbonate in patients with achlorhydria was significantly lower than in the normal subjects and was lower than absorption from citrate in either group; absorption from citrate in those with achlorhydria was significantly higher than in the normal subjects, as well as higher than absorption from carbonate in either group. Administration of calcium carbonate as part of a normal breakfast resulted in completely normal absorption in the achlorhydric subjects. These results indicate that calcium absorption from carbonate is impaired in achlorhydria under fasting conditions. Since achlorhydria is common in older persons, calcium carbonate may not be the ideal dietary supplement 19. Systematics of triaxial deformation in Xe, Ba, and Ce nuclei International Nuclear Information System (INIS) Yan, J.; Vogel, O.; von Brentano, P.; Gelberg, A. 
1993-01-01 The (β,γ) deformation parameters of even-even Xe, Ba, and Ce nuclei have been calculated by using the triaxial rotor model. Deformation parameters calculated, on one hand, from decay properties and, on the other hand, from energies are in good agreement. The smooth dependence of the deformation parameters on Z and N is discussed. The results are compared with those extracted from properties of odd-A nuclei 20. Dynamic deformation theory of spherical and deformed light and heavy nuclei with A = 12-240 International Nuclear Information System (INIS) Kumar, Krishna. 1979-01-01 Deformation dependent wave functions are calculated for different types of even-even nuclei (spherical, transitional, deformed; light, medium, heavy) without any fitting parameters. These wave functions are employed for the energies, B(E2)'s, quadrupole and magnetic moments of selected nuclei with A = 12-240 (with special emphasis on 56 Fe, 154 Gd), and for neutron cross sections of 148 Sm, 152 Sm 1. A practical method for determining γ-ray full-energy peak efficiency considering coincidence-summing and self-absorption corrections for the measurement of environmental samples after the Fukushima reactor accident Energy Technology Data Exchange (ETDEWEB) Shizuma, Kiyoshi, E-mail: [email protected] [Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527 (Japan); Oba, Yurika; Takada, Momo [Graduate School of Integrated Arts and Sciences, Hiroshima University, Higashi-Hiroshima 739-8521 (Japan) 2016-09-15 A method for determining the γ-ray full-energy peak efficiency at positions close to three Ge detectors and at the well port of a well-type detector was developed for measuring environmental volume samples containing {sup 137}Cs, {sup 134}Cs and {sup 40}K. The efficiency was estimated by considering two correction factors: coincidence-summing and self-absorption corrections. 
The coincidence-summing correction for a cascade transition nuclide was estimated by an experimental method involving measuring a sample at the far and close positions of a detector. The derived coincidence-summing correction factors were compared with those of analytical and Monte Carlo simulation methods and good agreements were obtained. Differences in the matrix of the calibration source and the environmental sample resulted in an increase or decrease of the full-energy peak counts due to the self-absorption of γ-rays in the sample. The correction factor was derived as a function of the densities of several matrix materials. The present method was applied to the measurement of environmental samples and also low-level radioactivity measurements of water samples using the well-type detector. 2. Viscoelastic deformation of lipid bilayer vesicles† Science.gov (United States) Wu, Shao-Hua; Sankhagowit, Shalene; Biswas, Roshni; Wu, Shuyang; Povinelli, Michelle L. 2015-01-01 Lipid bilayers form the boundaries of the cell and its organelles. Many physiological processes, such as cell movement and division, involve bending and folding of the bilayer at high curvatures. Currently, bending of the bilayer is treated as an elastic deformation, such that its stress-strain response is independent of the rate at which bending strain is applied. We present here the first direct measurement of viscoelastic response in a lipid bilayer vesicle. We used a dual-beam optical trap (DBOT) to stretch 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) giant unilamellar vesicles (GUVs). Upon application of a step optical force, the vesicle membrane deforms in two regimes: a fast, instantaneous area increase, followed by a much slower stretching to an eventual plateau deformation. From measurements of dozens of GUVs, the average time constant of the slower stretching response was 0.225 ± 0.033 s (standard deviation, SD). 
Increasing the fluid viscosity did not affect the observed time constant. We performed a set of experiments to rule out heating by laser absorption as a cause of the transient behavior. Thus, we demonstrate here that the bending deformation of lipid bilayer membranes should be treated as viscoelastic. PMID:26268612 3. Viscoelastic deformation of lipid bilayer vesicles. Science.gov (United States) Wu, Shao-Hua; Sankhagowit, Shalene; Biswas, Roshni; Wu, Shuyang; Povinelli, Michelle L; Malmstadt, Noah 2015-10-07 Lipid bilayers form the boundaries of the cell and its organelles. Many physiological processes, such as cell movement and division, involve bending and folding of the bilayer at high curvatures. Currently, bending of the bilayer is treated as an elastic deformation, such that its stress-strain response is independent of the rate at which bending strain is applied. We present here the first direct measurement of viscoelastic response in a lipid bilayer vesicle. We used a dual-beam optical trap (DBOT) to stretch 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) giant unilamellar vesicles (GUVs). Upon application of a step optical force, the vesicle membrane deforms in two regimes: a fast, instantaneous area increase, followed by a much slower stretching to an eventual plateau deformation. From measurements of dozens of GUVs, the average time constant of the slower stretching response was 0.225 ± 0.033 s (standard deviation, SD). Increasing the fluid viscosity did not affect the observed time constant. We performed a set of experiments to rule out heating by laser absorption as a cause of the transient behavior. Thus, we demonstrate here that the bending deformation of lipid bilayer membranes should be treated as viscoelastic. 4. 
Trinuc: a fortran program for the identification, and the energy and momentum determination of the light charged particles (p,d,t) and neutrons emitted after the absorption at the rest of negative pions in light nuclei International Nuclear Information System (INIS) Rui, R. 1982-01-01 Energy, momentum and missing mass spectra, angular distribution of two particles (n,n), (p,p), (n,d) and (n,t) detected in coincidence experiments, have been calculated with this program. At this moment only π - absorption reactions on 12 C nuclei have been studied, even if the program is adaptable to execute these calculations for any kind of target nucleus. The π - + 12 C experiment was performed at the triumf meson facility - Vancouver B.C., Canada -, under the supervision of prof. C. Cernigoi. The instrumental apparatus used to perform the experiment consisted of a beam telescope (ref. 1), of four large are a plastic counters NC (ref. 2) and of a large telescope plastic counter RT (ref.3). The tecniques of time of flight (TOF) e x de (only for charged particles) have been used to deduce the energy and identify the mass of the detected particles 5. Ab Initio Potential Energy Surfaces for Both the Ground (X̃1A′ and Excited (A∼1A′′ Electronic States of HSiBr and the Absorption and Emission Spectra of HSiBr/DSiBr Directory of Open Access Journals (Sweden) Anyang Li 2012-01-01 Full Text Available Ab initio potential energy surfaces for the ground (X̃1A′ and excited (A˜A′′1 electronic states of HSiBr were obtained by using the single and double excitation coupled-cluster theory with a noniterative perturbation treatment of triple excitations and the multireference configuration interaction with Davidson correction, respectively, employing an augmented correlation-consistent polarized valence quadruple zeta basis set. 
The calculated vibrational energy levels of HSiBr and DSiBr of the ground and excited electronic states are in excellent agreement with the available experimental band origins. In addition, the absorption and emission spectra of HSiBr and DSiBr were calculated using an efficient single Lanczos propagation method and are in good agreement with the available experimental observations. 6. Aerosol Absorption Measurements in MILAGRO. Science.gov (United States) Gaffney, J. S.; Marley, N. A.; Arnott, W. P.; Paredes-Miranda, L.; Barnard, J. C. 2007-12-01 During the month of March 2006, a number of instruments were used to determine the absorption characteristics of aerosols found in the Mexico City Megacity and nearby Valley of Mexico. These measurements were taken as part of the Department of Energy's Megacity Aerosol Experiment - Mexico City (MAX-Mex) that was carried out in collaboration with the Megacity Interactions: Local and Global Research Observations (MILAGRO) campaign. MILAGRO was a joint effort between the DOE, NSF, NASA, and Mexican agencies aimed at understanding the impacts of a megacity on the urban and regional scale. A super-site was operated at the Instituto Mexicano de Petroleo in Mexico City (designated T-0) and at the Universidad Technologica de Tecamac (designated T-1) that was located about 35 km to the north east of the T-0 site in the State of Mexico. A third site was located at a private rancho in the State of Hidalgo approximately another 35 km to the northeast (designated T-2). Aerosol absorption measurements were taken in real time using a number of instruments at the T-0 and T-1 sites. These included a seven wavelength aethalometer, a multi-angle absorption photometer (MAAP), and a photo-acoustic spectrometer. Aerosol absorption was also derived from spectral radiometers including a multi-filter rotating band spectral radiometer (MFRSR). 
The results clearly indicate that there is significant aerosol absorption by the aerosols in the Mexico City megacity region. The absorption can lead to single scattering albedo reduction leading to values below 0.5 under some circumstances. The absorption is also found to deviate from that expected for a "well-behaved" soot anticipated from diesel engine emissions, i.e. from a simple 1/lambda wavelength dependence for absorption. Indeed, enhanced absorption is seen in the region of 300-450 nm in many cases, particularly in the afternoon periods indicating that secondary organic aerosols are contributing to the aerosol absorption. This is likely due 7. Transmission coefficents in strongly deformed nuclei International Nuclear Information System (INIS) Aleshin, V.P. 1996-01-01 By using our semiclassical approach to particle evaporation from deformed nuclei developed earlier, we analyze here the heuristic methods of taking into account the effects of shape deformations on particle emission. These methods are based on the 'local' transmission coefficients in which the effective barrier depends on the angle with respect to the symmetry axis. The calculations revealed that the heuristic models are reasonable for particle energy spectra but fail, at large deformations, to describe the angular distributions. In A∼160 nuclei with axis ratio in the vicinity of 2:1 at temperatures of 2-3 MeV, the W (90 )/W(0 ) anisotropies of α particles with respect to the nuclear spin are 1.5 to 3 times larger than our approach predicts. The influence of spin alignment on particle energy spectra is discussed shortly. (orig.) 8. Recrystallization of magnesium deformed at low temperatures International Nuclear Information System (INIS) Fromageau, R.; Pastol, J.L.; Revel, G. 1978-01-01 The recrystallization of magnesium was studied after rolling at temperatures ranging between 248 and 373 K. 
For zone refined magnesium the annealing behaviour as observed by electrical resistivity measurements showed two stages at about 250 K and 400 K due respectively to recrystallization and grain growth. The activation energy associated with the recrystallization stage was 0.75 +- 0.01 eV. In less pure magnesium, with nominal purity 99.99 and 99.9%, the recrystallization stage was decomposed into two substages. Activation energies were determined in relation with deformation temperature and purity. The magnesium of intermediate purity (99.99%) behaved similarly to the lowest purity metal when it was deformed at high temperature and to the purest magnesium when the deformation was made at low temperature. This behaviour was discussed in connection with the theories of Luecke and Cahn. (Auth.) 9. Turbulent effective absorptivity and refractivity International Nuclear Information System (INIS) Rax, J.M. 1984-09-01 The problem of wave propagation in a turbulent magnetized plasma is investigated. Considering small scale, low frequency density fluctuations we solve the Maxwell equations and show that the eikonal approximation remains valid with an effective refractivity and an effective absorptivity taking into account the energy diffusion due to the turbulent motion. Then the result is applied to the problem of lower hybrid waves scattering by drift waves density fluctuations in tokamaks 10. Probing the global potential energy minimum of (CH2O)2: THz absorption spectrum of (CH2O)2 in solid neon and para-hydrogen. 
Science.gov (United States) Andersen, J; Voute, A; Mihrin, D; Heimdal, J; Berg, R W; Torsson, M; Wugt Larsen, R 2017-06-28 The true global potential energy minimum configuration of the formaldehyde dimer (CH 2 O) 2 , including the presence of a single or a double weak intermolecular CH⋯O hydrogen bond motif, has been a long-standing subject among both experimentalists and theoreticians as two different energy minima conformations of C s and C 2h symmetry have almost identical energies. The present work demonstrates how the class of large-amplitude hydrogen bond vibrational motion probed in the THz region provides excellent direct spectroscopic observables for these weak intermolecular CH⋯O hydrogen bond motifs. The combination of concentration dependency measurements, observed isotopic spectral shifts associated with H/D substitutions and dedicated annealing procedures, enables the unambiguous assignment of three large-amplitude infrared active hydrogen bond vibrational modes for the non-planar C s configuration of (CH 2 O) 2 embedded in cryogenic neon and enriched para-hydrogen matrices. A (semi)-empirical value for the change of vibrational zero-point energy of 5.5 ± 0.3 kJ mol -1 is proposed for the dimerization process. These THz spectroscopic observations are complemented by CCSD(T)-F12/aug-cc-pV5Z (electronic energies) and MP2/aug-cc-pVQZ (force fields) electronic structure calculations yielding a (semi)-empirical value of 13.7 ± 0.3 kJ mol -1 for the dissociation energy D 0 of this global potential energy minimum. 11. [Study on lead absorption in pumpkin by atomic absorption spectrophotometry]. Science.gov (United States) Li, Zhen-Xia; Sun, Yong-Dong; Chen, Bi-Hua; Li, Xin-Zheng 2008-07-01 A study was carried out on the characteristic of lead absorption in pumpkin via atomic absorption spectrophotometer. 
The results showed that lead absorption amount in pumpkin increased with time, but the absorption rate decreased with time; And the lead absorption amount reached the peak in pH 7. Lead and cadmium have similar characteristic of absorption in pumpkin. 12. Spectroscopic and piezospectroscopic studies of the energy states of boron in silicon International Nuclear Information System (INIS) Lewis, R.A.; Fisher, P.; McLean, N.A. 1994-01-01 The p 3/2 optical absorption spectrum of boron impurity in silicon has been re-examined at high resolution. The precise transition energies measured agree with energies previously reported. In addition, energies for several previously unrecognised transitions are given as well as values for the absorption strengths and line widths. The measured transition energies and absorption strengths correlate very well with several recent calculations of binding energies and oscillator strengths, respectively. This excellent agreement between experiment and theory motivates a renumbering of the spectral lines which is not expected to require future modification. High-resolution piezospectroscopy of the p 3/2 series has also been undertaken. Small stresses were used to minimise the effect of interactions and permit accurate determination of the deformation potential constants. The deformation potential constants are found to be in fair agreement with previous experimental values and good agreement with recent theory. Experimental values for several of these are given for the first time, as are isotropic deformation potential constants of several excited states relative to the ground state. 58 refs., 14 figs 13. Probing the global potential energy minimum of (CH2O)2: THz absorption spectrum of (CH2O)2 in solid neon and para-hydrogen DEFF Research Database (Denmark) Andersen, Jonas; Voute, A.; Mihrin, Dmytro 2017-01-01 )2 embedded in cryogenic neon and enriched para-hydrogen matrices. 
A (semi)-empirical value for the change of vibrational zero-point energy of 5.5 ± 0.3 kJ mol−1 is proposed for the dimerization process. These THz spectroscopic observations are complemented by CCSD(T)-F12/aug-cc-pV5Z (electronic......The true global potential energy minimum configuration of the formaldehyde dimer (CH2O)2, including the presence of a single or a double weak intermolecular CH⋯O hydrogen bond motif, has been a long-standing subject among both experimentalists and theoreticians as two different energy minima...... conformations of Cs and C2h symmetry have almost identical energies. The present work demonstrates how the class of large-amplitude hydrogen bond vibrational motion probed in the THz region provides excellent direct spectroscopic observables for these weak intermolecular CH⋯O hydrogen bond motifs... 14. Light energy management in peach: utilization, photoprotection , photodamage and recovery. Maximizing light absorption in orchard is not always the best solution OpenAIRE Losciale, Pasquale 2008-01-01 The relation between the intercepted light and orchard productivity was considered linear, although this dependence seems to be more subordinate to planting system rather than light intensity. At whole plant level not always the increase of irradiance determines productivity improvement. One of the reasons can be the plant intrinsic un-efficiency in using energy. Generally in full light only the 5 – 10% of the total incoming energy is allocated to net photosynthesis. Therefore preserving or i... 15. Reproducibility of The Random Incidence Absorption Coefficient Converted From the Sabine Absorption Coefficient DEFF Research Database (Denmark) Jeong, Cheol-Ho; Chang, Ji-ho 2015-01-01 largely depending on the test room. 
Several conversion methods for porous absorbers from the Sabine absorption coefficient to the random incidence absorption coefficient were suggested by considering the finite size of a test specimen and non-uniformly incident energy onto the specimen, which turned out...... resistivity optimization outperforms the surface impedance optimization in terms of the reproducibility.... 16. Deformed baryons: constituent quark model vs. bag model International Nuclear Information System (INIS) Iwamura, Y.; Nogami, Y. 1985-01-01 Recently Bhaduri et al. developed a nonrelativistic constituent quark model for deformed baryons. In that model the quarks move in a deformable mean field, and the deformation parameters are determined by minimizing the quark energy subject to the constraint of volume conservation. This constraint is an ad hoc assumption. It is shown that, starting with a bag model, a model similar to that of Bhaduri et al. can be constructed. The deformation parameters are determined by the pressure balance on the bag surface. There is, however, a distinct difference between the two models with respect to the state dependence of the ''volume''. Implications of this difference are discussed 17. An experimental study of plastic deformation of materials DEFF Research Database (Denmark) Knudsen, Tine The thesis falls in three parts, focusing on different aspects of plastic deformation of metals. Part I investigates the dislocation structures induced by hot deformation and compares these with the structures after cold deformation. In particular, it is shown that the dislocation structures...... after cold deformation by calorimetry and by analysis of the dislocation structure. The stored energy measured by calorimetry is found to be larger than that determined from the dislocation structure by a factor between 1.9 and 2.7, and this factor decreases with the plastic strain. Part III aimed... 18. 
Autogenous Deformation of Concrete DEFF Research Database (Denmark) Autogenous deformation of concrete can be defined as the free deformation of sealed concrete at a constant temperature. A number of observed problems with early-age cracking of high-performance concretes can be attributed to this phenomenon. During the last 10 years, this has led to an increased… focus on autogenous deformation both within concrete practice and concrete research. Since 1996 the interest has been significant enough to hold yearly international conferences entirely devoted to this subject. The papers in this publication were presented at two consecutive half-day sessions… at the American Concrete Institute's Fall Convention in Phoenix, Arizona, October 29, 2002. All papers have been reviewed according to ACI rules. This publication, as well as the sessions, was sponsored by ACI Committee 236, Material Science of Concrete. The 12 presentations from 8 different countries indicate… 19. Interfacial Bubble Deformations Science.gov (United States) Seymour, Brian; Shabane, Parvis; Cypull, Olivia; Cheng, Shengfeng; Feitosa, Klebert Soap bubbles floating at an air-water interface experience deformations as a result of surface tension and hydrostatic forces. In this experiment, we investigate the nature of such deformations by taking cross-sectional images of bubbles of different volumes. The results show that as their volume increases, bubbles transition from a spherical to a hemispherical shape. The deformation of the interface also changes with bubble volume, with the capillary rise converging to the capillary length as volume increases. The profiles of the top and bottom of the bubble and the capillary rise are completely determined by the volume and pressure differences. James Madison University Department of Physics and Astronomy, 4VA Consortium, Research Corporation for Advancement of Science. 20.
Analysis of the local structure of InN with a bandgap energy of 0.8 and 1.9 eV and annealed InN using X-ray absorption fine structure measurements Energy Technology Data Exchange (ETDEWEB) Miyajima, Takao [Materials Laboratories, Sony Corporation, 4-14-1 Asahi-cho, Atsugi, Kanagawa 243-0014 (Japan)]; Kudo, Yoshihiro [Materials Analysis Lab., Sony Corporation, 4-18-1 Okada, Atsugi, Kanagawa 243-0021 (Japan)]; Wakahara, Akihiro [Dept. of Electrical and Electronic Engineering, Toyohashi Univ. of Tech., Toyohashi 441-8580 (Japan)]; Yamaguchi, Tomohiro; Araki, Tsutomu; Nanishi, Yasushi [Dept. of Photonics, Ritsumeikan Univ., 1-1-1 Nojihigashi, Kusatsu, Shiga 525-8577 (Japan)] 2006-06-15 We compared the local structure around In atoms in microwave-excited MOCVD- and MBE-grown InN films, which show absorption edges at 1.9 and 0.8 eV, respectively. The coordination numbers of the 1st-nearest-neighbor N atoms and the 2nd-nearest-neighbor In atoms for MBE-grown InN were n(N)=3.9±0.5 and n(In)=12.4±0.9, close to the ideal values of n(N)=4 and n(In)=12 for InN without defects. By thermal annealing, the structure of MBE-grown InN was changed from InN to In2O3, and the absorption edge was changed from 0.8 to 3.5 eV. However, the microwave-excited MOCVD-grown InN had no In2O3 structure, and had reduced coordination numbers of the 2nd-nearest-neighbor In atoms of n(In)=10.6–11.7. From these results, we conclude that the origin of the 1.9-eV absorption edge of InN is the imperfections (defects) of the In lattice sites of InN, rather than the generation of In2O3, which has a bandgap energy of 3.5 eV. (copyright 2006 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.) 1.
Analysis of the local structure of InN with a bandgap energy of 0.8 and 1.9 eV and annealed InN using X-ray absorption fine structure measurements International Nuclear Information System (INIS) Miyajima, Takao; Kudo, Yoshihiro; Wakahara, Akihiro; Yamaguchi, Tomohiro; Araki, Tsutomu; Nanishi, Yasushi 2006-01-01 We compared the local structure around In atoms in microwave-excited MOCVD- and MBE-grown InN films, which show absorption edges at 1.9 and 0.8 eV, respectively. The coordination numbers of the 1st-nearest-neighbor N atoms and the 2nd-nearest-neighbor In atoms for MBE-grown InN were n(N)=3.9±0.5 and n(In)=12.4±0.9, close to the ideal values of n(N)=4 and n(In)=12 for InN without defects. By thermal annealing, the structure of MBE-grown InN was changed from InN to In2O3, and the absorption edge was changed from 0.8 to 3.5 eV. However, the microwave-excited MOCVD-grown InN had no In2O3 structure, and had reduced coordination numbers of the 2nd-nearest-neighbor In atoms of n(In)=10.6–11.7. From these results, we conclude that the origin of the 1.9-eV absorption edge of InN is the imperfections (defects) of the In lattice sites of InN, rather than the generation of In2O3, which has a bandgap energy of 3.5 eV. (copyright 2006 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.) 2. Large scale study on the variation of RF energy absorption in the head and brain regions of adults and children and evaluation of the SAM phantom conservativeness International Nuclear Information System (INIS) Keshvari, J; Kivento, M; Christ, A; Bit-Babik, G 2016-01-01 This paper presents the results of two computational large-scale studies using highly realistic exposure scenarios, MRI-based human head and hand models, and two mobile phone models.
The objectives are (i) to study the relevance of age when people are exposed to RF by comparing adult and child heads and (ii) to analyze and discuss the conservativeness of the SAM phantom for all age groups. Representative use conditions were simulated using detailed CAD models of two mobile phones operating between 900 MHz and 1950 MHz including configurations with the hand holding the phone, which were not considered in most previous studies. The peak spatial-average specific absorption rate (psSAR) in the head and the pinna tissues is assessed using anatomically accurate head and hand models. The first of the two mentioned studies involved nine head-, four hand- and two phone-models, the second study included six head-, four hand- and three simplified phone-models (over 400 configurations in total). In addition, both studies also evaluated the exposure using the SAM phantom. Results show no systematic differences between psSAR induced in the adult and child heads. The exposure level and its variation for different age groups may be different for particular phones, but no correlation between psSAR and model age was found. The psSAR from all exposure conditions was compared to the corresponding configurations using SAM, which was found to be conservative in the large majority of cases. (paper) 3. Large scale study on the variation of RF energy absorption in the head & brain regions of adults and children and evaluation of the SAM phantom conservativeness Science.gov (United States) Keshvari, J.; Kivento, M.; Christ, A.; Bit-Babik, G. 2016-04-01 This paper presents the results of two computational large scale studies using highly realistic exposure scenarios, MRI based human head and hand models, and two mobile phone models. The objectives are (i) to study the relevance of age when people are exposed to RF by comparing adult and child heads and (ii) to analyze and discuss the conservativeness of the SAM phantom for all age groups. 
Representative use conditions were simulated using detailed CAD models of two mobile phones operating between 900 MHz and 1950 MHz including configurations with the hand holding the phone, which were not considered in most previous studies. The peak spatial-average specific absorption rate (psSAR) in the head and the pinna tissues is assessed using anatomically accurate head and hand models. The first of the two mentioned studies involved nine head-, four hand- and two phone-models, the second study included six head-, four hand- and three simplified phone-models (over 400 configurations in total). In addition, both studies also evaluated the exposure using the SAM phantom. Results show no systematic differences between psSAR induced in the adult and child heads. The exposure level and its variation for different age groups may be different for particular phones, but no correlation between psSAR and model age was found. The psSAR from all exposure conditions was compared to the corresponding configurations using SAM, which was found to be conservative in the large majority of cases. 4. Wood drying project with solar energy and absorption plant; Proyecto de un secador de madera con energia solar termica y una planta de absorcion Energy Technology Data Exchange (ETDEWEB) Corretger, J. M.; Lara, J.; Arnau, J.; Marquez, A. 2004-07-01 Wood drying is currently carried out in tunnel dryers, using a hot air flow through the wood to remove the water. These processes are suitable for drying ordinary wood that does not require special control of the drying velocity. However, it may be necessary to control the drying velocity at any moment of the process in order to dry some high-quality woods. This implies combining heating, cooling, dehumidification and humidification processes.
The aim of this project is to dry noble woods with a complex drying process, in order to improve the quality of the products and to increase energy savings through free-cooling operations and advanced control strategies, enhanced by using solar energy to produce cold and hot water. The energy savings will reduce the energy bill and significantly lessen the environmental impact. (Author) 5. Joining by plastic deformation DEFF Research Database (Denmark) Mori, Ken-ichiro; Bay, Niels; Fratini, Livan 2013-01-01 As the scale and complexity of products such as aircraft and cars increase, demand for new functional processes to join mechanical parts grows. The use of plastic deformation for joining parts potentially offers improved accuracy, reliability and environmental safety, as well as creating opportuni… 6. Hydrostatic pressure and temperature effects on the binding energy and optical absorption of a multilayered quantum dot with a parabolic confinement International Nuclear Information System (INIS) Ortakaya, Sami; Kirak, Muharrem 2016-01-01 The influence of hydrostatic pressure, temperature, and impurity on the electronic and optical properties of a spherical core/shell/well/shell (CSWS) nanostructure with a parabolic confinement potential is investigated theoretically. The energy levels and wave functions of the structure are calculated using the shooting method within the effective-mass approximation. The numerical results show that the ground-state donor binding energy as a function of layer thickness depends very sensitively on the magnitude of pressure and temperature. Also, we investigate the probability distributions to understand the electronic properties clearly.
The obtained results show that pressure and temperature have a great influence on the electronic and optical properties. (paper) 7. Experimental deformation of a mafic rock - interplay between fracturing, reaction and viscous deformation Science.gov (United States) Marti, Sina; Stünitz, Holger; Heilbronner, Renée; Plümper, Oliver; Drury, Martyn 2016-04-01 …accommodate strain via dissolution-precipitation creep. The transition from dominantly brittle to dominantly viscous deformation is determined by the onset of diffusive mass transport. In the transitional regime, reaction kinetics are strongly dependent on strain energy, and viscously deforming shear bands (SB) most likely form from an initial brittle stage in a dominantly brittle-behaving rock. Viscous deformation in our experiments takes place at comparatively low experimental T, providing a realistic phase assemblage and likely deformation mechanism for the lower crust. 8. Effective absorption correction for energy dispersive X-ray mapping in a scanning transmission electron microscope: analysing the local indium distribution in rough samples of InGaN alloy layers. Science.gov (United States) Wang, X; Chauvat, M-P; Ruterana, P; Walther, T 2017-12-01 We have applied our previous method of self-consistent k*-factors for absorption correction in energy-dispersive X-ray spectroscopy to quantify the indium content in X-ray maps of thick compound InGaN layers. The method allows us to quantify the indium concentration without measuring the sample thickness, density or beam current, and works even if there is a drastic local thickness change due to sample roughness or preferential thinning.
The method is shown to select, point-by-point in a two-dimensional spectrum image or map, the k*-factor from the local Ga K/L intensity ratio that is most appropriate for the corresponding sample geometry, demonstrating it is not the sample thickness measured along the electron beam direction but the optical path length the X-rays have to travel through the sample that is relevant for the absorption correction. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society. 9. Debris of potassium–magnesium silicate glass generated by femtosecond laser-induced ablation in air: An analysis by near edge X-ray absorption spectroscopy, micro Raman and energy dispersive X-ray spectroscopy International Nuclear Information System (INIS) Grehn, M.; Seuthe, T.; Reinhardt, F.; Höfner, M.; Griga, N.; Eberstein, M.; Bonse, J. 2014-01-01 The redeposited material (debris) resulting from ablation of a potassium–magnesium silicate glass upon scanning femtosecond laser pulse irradiation (130 fs, 800 nm) in air environment is investigated by means of three complementary surface analytical methods. Changes in the electronic band structure of the glass constituent Magnesium (Mg) were identified by X-ray Absorption Near Edge Structure spectroscopy (XANES) using synchrotron radiation. An up-shift of ≈0.8 eV of a specific Magnesium K-edge absorption peak in the spectrum of the redeposited material along with a significant change in its leading edge position was detected. In contrast, the surface left after laser ablation exhibits a downshift of the peak position by ≈0.9 eV. Both observations may be related to a change of the Mg coordinative state of the laser modified/redeposited glass material. The presence of carbon in the debris is revealed by micro Raman spectroscopy (μ-RS) and was confirmed by energy dispersive X-ray spectroscopy (EDX). These observations are attributed to structural changes and chemical reactions taking place during the ablation process. 10. 
Characteristics of Crushing Energy and Fractal of Magnetite Ore under Uniaxial Compression Science.gov (United States) Gao, F.; Gan, D. Q.; Zhang, Y. B. 2018-03-01 The crushing mechanism of magnetite ore is a critical theoretical problem for controlling energy dissipation and machine crushing quality in ore processing. Uniaxial crushing tests were carried out to study the deformation mechanism and the laws of energy evolution, on the basis of which the crushing mechanism of magnetite ore was explored. The compaction stage and the plasticity-and-damage stage are the two main compressive deformation stages; the main transitional forms from internal damage to fracture are plastic deformation and stick-slip. In the crushing process, the plasticity-and-damage stage is the key link in energy absorption, since the specimen approaches an energy-saturated state near the peak stress. The characteristics of specimen deformation and energy dissipation jointly reflect the state of pre-existing defects inside the raw magnetite ore and the damage process during loading. The rapid release of elastic energy, together with the work done by the press, thoroughly breaks the raw magnetite ore after the peak stress. Magnetite ore fragments show statistical self-similarity and a size threshold of fractal behavior under uniaxial crushing. The larger the ratio of releasable elastic energy to dissipated energy, and the faster the rate of energy change, the better the fractal properties and crushing quality of magnetite ore under uniaxial crushing. 11. Unveiling the excited state energy transfer pathways in peridinin-chlorophyll a-protein by ultrafast multi-pulse transient absorption spectroscopy.
Science.gov (United States) Redeckas, Kipras; Voiciuk, Vladislava; Zigmantas, Donatas; Hiller, Roger G; Vengris, Mikas 2017-04-01 Time-resolved multi-pulse methods were applied to investigate the excited state dynamics, the interstate couplings, and the excited state energy transfer pathways between the light-harvesting pigments in peridinin-chlorophyll a-protein (PCP). The utilized pump-dump-probe techniques are based on perturbation of the regular PCP energy transfer pathway. The PCP complexes were initially excited with an ultrashort pulse, resonant with the S0→S2 transition of the carotenoid peridinin. A portion of the peridinin-based emissive intramolecular charge transfer (ICT) state was then depopulated by applying an ultrashort NIR pulse that perturbed the interaction between the S1 and ICT states and the energy flow from the carotenoids to the chlorophylls. The presented data indicate that the peridinin S1 and ICT states are spectrally distinct and coexist in an excited state equilibrium in the PCP complex. Moreover, numeric analysis of the experimental data identifies ICT→Chl-a as the main energy transfer pathway in the photoexcited PCP systems. Copyright © 2017 Elsevier B.V. All rights reserved. 12. Calculation of absorption parameters for selected narcotic drugs in the energy range from 1 keV to 100 GeV Science.gov (United States) Akman, Ferdi; Kaçal, Mustafa Recep; Akdemir, Fatma; Araz, Aslı; Turhan, Mehmet Fatih; Durak, Rıdvan 2017-04-01 The total mass attenuation coefficients (μ/ρ), total molecular (σt,m), atomic (σt,a) and electronic (σt,e) cross sections, effective atomic numbers (Zeff) and electron densities (NE) were computed over the wide energy range from 1 keV to 100 GeV for selected narcotic drugs such as morphine, heroin, cocaine, ecstasy and cannabis. The variation of μ/ρ, σt,m, σt,a, σt,e, Zeff and NE with photon energy for total photon interaction shows the dominance of different interaction processes in different energy regions.
The variations of μ/ρ, σt,m, σt,a, σt,e, Zeff and NE depend on the atomic number, photon energy and chemical composition of the narcotic drugs. These parameters also change with the number of elements, the range of atomic numbers in the narcotic drugs and the total molecular weight. These data can be useful in forensic science and medical diagnostics. 13. Absorption spectra of AA-stacked graphite International Nuclear Information System (INIS) Chiu, C W; Lee, S H; Chen, S C; Lin, M F; Shyu, F L 2010-01-01 AA-stacked graphite shows strong anisotropy in its geometric structure and velocity matrix elements. However, the absorption spectra are isotropic for polarization vectors in the graphene plane. The spectra exhibit one prominent plateau at middle energy and one shoulder structure at lower energy. These structures directly reflect the unique geometric and band structures and provide sufficient information for experimental fitting of the intralayer and interlayer atomic interactions. On the other hand, monolayer graphene shows a sharp absorption peak but no shoulder structure; the absorption of AA-stacked bilayer graphene has two peaks at middle energy and vanishes abruptly at lower energy. Furthermore, the isotropic features are expected to exist in other graphene-related systems. The calculated results and the predicted atomic interactions could be verified by optical measurements. 14. Energy analysis of a diffusion absorption cooling system using lithium nitrate, sodium thiocyanate and water as absorbent substances and ammonia as the refrigerant International Nuclear Information System (INIS) Acuña, A.; Velázquez, N.; Cerezo, J. 2013-01-01 A diffusion absorption cooling system is analyzed to determine the appropriate working fluid for the unit, based on the coefficient of performance (COP) and operating conditions, by comparing lithium nitrate (LiNO3), sodium thiocyanate (NaSCN) and water (H2O) as absorbent substances, with ammonia (NH3) as the refrigerant.
The presence of crystallization in the system is analyzed as a function of the generator and absorber temperatures. Additionally, the effects on system efficiency of adding the inert gas helium (He) or hydrogen (H2) are studied. A mathematical model is developed and compared with experimental studies reported in the literature. At an evaporator temperature of −15 °C, a generator temperature of 120 °C and absorber and condenser temperatures of 40 °C, the results show that the best performance is achieved by the NH3–LiNO3–He mixture, with a COP of 0.48. This mixture performs 27–46% more efficiently than the NH3–NaSCN mixture. The NH3–H2O mixture is 52–69% less efficient than the NH3–LiNO3 mixture. However, when the evaporator runs at 7.5 °C, the NH3–H2O–He mixture achieves a higher COP than the NH3–LiNO3–He mixture, and the NH3–NaSCN–He and NH3–LiNO3–He mixtures achieve the same COP when the evaporator is at 10 °C. At temperatures below 7.5 °C, the NH3–NaSCN–He mixture achieves a higher COP than the NH3–H2O–He mixture. The NH3–LiNO3 mixture shows crystallization at higher generator temperatures than the NH3–NaSCN mixture. Moreover, at the same evaporator temperature, the NH3–LiNO3 mixture works at lower activation temperatures than the NH3–NaSCN mixture. -- Highlights: ► We studied a diffusion absorption cooling system with different working mixtures. ► The NH3–LiNO3 mixture was more efficient than the NH3–H2O and NH3–NaSCN mixtures. ► The generator and absorber temperature… 15. Solar absorption surface panel Science.gov (United States) Santala, Teuvo J. 1978-01-01 A composite metal of aluminum and nickel is used to form an economical solar absorption surface for a collector plate, wherein an intermetallic compound of the aluminum and nickel provides a surface morphology with high absorptance and relatively low infrared emittance, along with good durability. 16.
Nutrition and magnesium absorption NARCIS (Netherlands) Brink, E.J. 1992-01-01 The influence of various nutrients present in dairy products and soybean-based products on the absorption of magnesium has been investigated. The studies demonstrate that soybean protein versus casein lowers apparent magnesium absorption in rats through its phytate component. However, true… 17. Zeeman atomic absorption spectroscopy International Nuclear Information System (INIS) Loos-Vollebregt, M.T.C. de. 1980-01-01 A new method of background correction in atomic absorption spectroscopy has recently been introduced, based on the Zeeman splitting of spectral lines in a magnetic field. A theoretical analysis of the background correction capability observed in such instruments is presented. A Zeeman atomic absorption spectrometer utilizing a 50 Hz sine-wave modulated magnetic field is described. (Auth.) 18. Marginally Deformed Starobinsky Gravity DEFF Research Database (Denmark) Codello, A.; Joergensen, J.; Sannino, Francesco 2015-01-01 We show that quantum-induced marginal deformations of the Starobinsky gravitational action of the form $R^{2(1-\alpha)}$, with $R$ the Ricci scalar and $\alpha$ a positive parameter smaller than one half, can account for the recent experimental observations by BICEP2 of primordial tensor modes… 19. Advanced Curvature Deformable Mirrors Science.gov (United States) 2010-09-01 Christ Ftaclas, Aglae Kellerer and Mark Chun, Institute for Astronomy, University of Hawaii, 640 North A‘ohoku Place, #209, Hilo, HI 96720-2700 20.
Use of LS-DYNA® to Assess the Energy Absorption Performance of a Shell-Based Kevlar™/Epoxy Composite Honeycomb Science.gov (United States) Polanco, Michael 2010-01-01 The forward and vertical impact stability of a composite honeycomb Deployable Energy Absorber (DEA) was evaluated during a full-scale crash test of an MD-500 helicopter at NASA Langley's Landing and Impact Research Facility. The lower skin of the helicopter was retrofitted with DEA components to protect the airframe subfloor upon impact and to mitigate loads transmitted to Anthropomorphic Test Device (ATD) occupants. To facilitate the design of the DEA for this test, an analytical study was conducted using LS-DYNA® to evaluate the performance of a shell-based DEA incorporating different angular cell orientations as well as simultaneous vertical and forward impact conditions. By conducting this study, guidance was provided in obtaining an optimum design for the DEA that would dissipate the kinetic energy of the airframe while maintaining forward and vertical impact stability.
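Several of the photon-attenuation records above (for example, the narcotic-drug study, item 12) compute total mass attenuation coefficients (μ/ρ) for compounds. A minimal sketch of the standard mixture (Bragg additivity) rule such calculations rest on; the weight fractions and elemental μ/ρ values below are purely hypothetical placeholders, not tabulated data:

```python
# Mixture (Bragg additivity) rule: the mass attenuation coefficient of a
# compound is the weight-fraction-weighted sum of its elemental values,
#   (mu/rho)_compound = sum_i w_i * (mu/rho)_i.

def mass_attenuation(weight_fractions, elemental_mu_rho):
    """mu/rho of a mixture (cm^2/g) from elemental values and weight fractions."""
    # Weight fractions of a compound must sum to one.
    assert abs(sum(weight_fractions.values()) - 1.0) < 1e-9
    return sum(w * elemental_mu_rho[el] for el, w in weight_fractions.items())

# Hypothetical composition and elemental values (placeholders, not NIST data):
fractions = {"C": 0.70, "H": 0.08, "N": 0.05, "O": 0.17}
mu_rho = {"C": 0.150, "H": 0.294, "N": 0.150, "O": 0.151}

print(mass_attenuation(fractions, mu_rho))  # weighted sum over the elements
```

For real work the elemental coefficients would come from a tabulation such as NIST's XCOM database at the photon energy of interest; the rule itself is energy-by-energy, which is how quantities like μ/ρ vary across the 1 keV–100 GeV range discussed above.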
http://www.nag.com/numeric/cl/nagdoc_cl23/html/F16/f16dqc.html
f16 Chapter Contents   f16 Chapter Introduction   NAG C Library Manual

NAG Library Function Document: nag_iamax_val (f16dqc)

1 Purpose

nag_iamax_val (f16dqc) computes, with respect to absolute value, the largest component of an integer vector, along with the index of that component.

2 Specification

#include <nag.h>
#include <nagf16.h>

void nag_iamax_val (Integer n, const Integer x[], Integer incx, Integer *k, Integer *i, NagError *fail)

3 Description

nag_iamax_val (f16dqc) computes, with respect to absolute value, the largest component, $i$, of an $n$-element integer vector $x$, and determines the smallest index, $k$, such that

$$i = x_k = \max_j |x_j|.$$

4 References

Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001) Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard, University of Tennessee, Knoxville, Tennessee. http://www.netlib.org/blas/blast-forum/blas-report.pdf

5 Arguments

1: n - Integer - Input
On entry: n, the number of elements in x.
Constraint: n ≥ 0.

2: x[dim] - const Integer - Input
Note: the dimension, dim, of the array x must be at least max(1, 1 + (n−1) × |incx|).
On entry: the vector x. Element x_i is stored in x[(i−1) × |incx|], for i = 1, 2, …, n.

3: incx - Integer - Input
On entry: the increment in the subscripts of x between successive elements of x.
Constraint: incx ≠ 0.

4: k - Integer * - Output
On exit: k, the index, from the set {0, |incx|, …, (n−1) × |incx|}, of the largest component of x with respect to absolute value. If n = 0 on input then k is returned as −1.

5: i - Integer * - Output
On exit: i, the largest component of x with respect to absolute value. If n = 0 on input then i is returned as 0.
6: fail - NagError * - Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).

6 Error Indicators and Warnings

On entry, argument ⟨value⟩ had an illegal value.

NE_INT
On entry, incx = ⟨value⟩. Constraint: incx ≠ 0.
On entry, n = ⟨value⟩. Constraint: n ≥ 0.

7 Accuracy

Not applicable.

8 Further Comments

None.

9 Example

This example computes the largest component with respect to absolute value, and the index of that component, for the vector

$$x = (1, 10, 11, -2, 9)^T.$$

9.1 Program Text: Program Text (f16dqce.c)
9.2 Program Data: Program Data (f16dqce.d)
9.3 Program Results: Program Results (f16dqce.r)
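The semantics described above (including the n = 0 convention and the "smallest index" tie-break) can be mirrored in a few lines of Python. This is only a sketch of the documented behaviour for illustration, not the NAG C interface itself; the function name is invented here:

```python
# Sketch of the behaviour documented for nag_iamax_val (f16dqc): find the
# largest component of an integer vector with respect to absolute value,
# together with the smallest qualifying (zero-based, strided) index.

def iamax_val(n, x, incx):
    """Return (k, i): k is the index, from {0, |incx|, ..., (n-1)*|incx|},
    of the largest component by absolute value; i is that component.
    For n == 0, return (-1, 0) as the document specifies."""
    if incx == 0:
        raise ValueError("incx must be nonzero")
    if n == 0:
        return -1, 0
    indices = [j * abs(incx) for j in range(n)]
    # max() with key (abs value, negated index) picks the largest |x[idx]|,
    # and on ties the smallest index, matching "the smallest index k".
    k = max(indices, key=lambda idx: (abs(x[idx]), -idx))
    return k, x[k]

# The vector from Section 9 of the document: x = (1, 10, 11, -2, 9)^T
print(iamax_val(5, [1, 10, 11, -2, 9], 1))  # -> (2, 11)
```

Note that, as in the document, the returned index refers to the position in the stored array (stride included), not the logical element number.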
https://www.physicsforums.com/threads/exp-tanz-1-complex-analysis.174069/
# Exp(tan z) = 1, complex analysis

1. Jun 15, 2007

### malawi_glenn

1. The problem statement, all variables and given/known data

Find all solutions to: $$e^{\tan z} = 1, \quad z \in \mathbb{C}$$

2. Relevant equations

z = x + yi

$$\log z = \ln|z| + i\arg z + 2\pi h i, \quad h \in \mathbb{Z}$$

$$\log e^{z} = x + iy + 2\pi h i, \quad h \in \mathbb{Z}$$

$$\operatorname{Log} e^{z} = x + iy$$

3. The attempt at a solution

I do not really know how to approach this. I tried to begin by writing tan(z) as a lot of cos(x)sinh(y) terms etc. But can I take the Log(e^tan(z)) on the left side, then the log(1) on the right side? I mean, the "only" difference is that you get this $$2\pi h i, \quad h \in \mathbb{Z}$$ on both sides, so you can always reduce both these terms to one: $$2\pi U i, \quad U \in \mathbb{Z}$$ What do you think?

2. Jun 15, 2007

### Dick

Ok, taking logs you get tan(z) = 2*pi*i*n (yes, no need for separate 2*pi*i*n's on both sides). Why not write tan(z) in terms of complex exponentials, let t = e^(iz) and solve the resulting quadratic for t? Guess I'm not sure where you are having problems.

3. Jun 15, 2007

### malawi_glenn

So if I write $$\tan z = \dfrac{1}{i}\, \dfrac{t - t^{-1}}{t + t^{-1}} = \dfrac{1}{i}\, \dfrac{t^{2}-1}{t^{2}+1}$$ and then perform the Log(e^tan z) on the left side and the log(1) on the right side, is that a "legal" act? Or do I have to take log - log /// Log - Log?

Last edited: Jun 15, 2007

4. Jun 15, 2007

### malawi_glenn

Or do you mean AFTER taking logs I write tan z as that?

5. Jun 15, 2007

### malawi_glenn

Okay, this is as far as I can get:

$$e^{iz} = t$$

$$z = -i \ln|t| + \arg t$$

$$\tan z = \dfrac{1}{i}\, \dfrac{t^{2}-1}{t^{2}+1}; \quad t^{2} \neq -1 \Rightarrow t \neq \pm i$$

$$\tan z = \log 1 = 0 + 0i + 2\pi n i; \quad n \in \mathbb{Z}$$

$$\Rightarrow \dfrac{1}{i}\, \dfrac{t^{2}-1}{t^{2}+1} = 2\pi n i$$

$$\Rightarrow \dfrac{t^{2}-1}{t^{2}+1} = -2\pi n$$

$$\Rightarrow t^{2}-1 = -2\pi n (t^{2}+1)$$

...

$$\Rightarrow t = \pm \sqrt{\dfrac{2\pi n-1}{2\pi n+1}} \neq -1$$

$$\Rightarrow n \neq 0$$

$$t_{1} = +\sqrt{\cdots} \ \Rightarrow\ \arg(t_{1}) = 0 + 2\pi k, \quad k \in \mathbb{Z}$$

$$t_{2} = -\sqrt{\cdots} \ \Rightarrow\ \arg(t_{2}) = \pi + 2\pi k$$

It can be shown that t is always real: if n = -2, we get a negative numerator and a negative denominator, hence the number inside the square root is always positive.

$$z_{1} = -\dfrac{i}{2} \ln \left( \dfrac{2\pi n-1}{2\pi n+1} \right) + \pi + 2\pi k$$

$$z_{2} = -\dfrac{i}{2} \ln \left( \dfrac{2\pi n-1}{2\pi n+1} \right) + 2\pi k$$

The book's answer is:

$$z = k\pi$$

and:

$$z = \dfrac{i}{2} \ln \left( \dfrac{2\pi n-1}{2\pi n+1} \right) + \frac{\pi}{2} + 2\pi k$$

LOL I am soooo cloose!! =(

Last edited: Jun 15, 2007

6. Jun 15, 2007

### Dick

I get t^2 = (1-2*pi*n)/(1+2*pi*n). So I get a number that is always negative. So I agree with the pi/2 phase in the book answer. But I get +/-pi/2 + 2*pi*k, which is the same thing as pi/2 + pi*k. But I'm also getting -i/2 in front of the log. (But I seem to get the right answer with either sign - this is troubling me.)

7. Jun 15, 2007

### malawi_glenn

Yes, I did a really sucky thing in getting the t^2.. I should be ashamed! :) If n = 0 then it is positive, right? But n cannot be zero according to the answer, now why is that?

8. Jun 15, 2007

### Dick

Aggghhh. And get this, the sign in front of the log doesn't matter either. Take for example the cases n=1 and n=-1. The arguments of the logs are just reciprocals of each other. Must be fun to create obfuscated answers for these things.

9. Jun 15, 2007

### Dick

n=0 gives you the real solutions, the k*pi list. Not too hard to see working it as a separate case.

10. Jun 15, 2007

### malawi_glenn

I got it now, thanks again for all the help. I hope I did not make you indignant.

11. Jun 15, 2007

### Dick

I was working too hard to correct my own blunders to become indignant over yours.

12. Jun 15, 2007

### malawi_glenn

I don't understand why the answers must be so "clean".. it only confuses and does not help you learn more about the actual subject.

13.
Jun 16, 2007 ### Dick It can be helpful to numerically evaluate a few cases of a solution for small n,k etc and then compare with yours. It's pretty clear when they don't agree - and if they do it can help to make it clearer why. Similar Discussions: Exp(tanz) = 1, complex analysis
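Post #13's advice is easy to follow in Python. Below is my own quick numerical check, not code from the thread: the family $z = \pi/2 + \pi k - \frac{i}{2}\ln\frac{2\pi n-1}{2\pi n+1}$ (either sign of the log works, per post #8) and the real solutions $z = k\pi$ really do satisfy $e^{\tan z} = 1$.

```python
import cmath
from math import pi, log

def z_branch(n, k):
    """Non-real solution family: z = pi/2 + pi*k - (i/2) ln((2*pi*n-1)/(2*pi*n+1)).

    For any integer n != 0 the log argument is positive, since numerator and
    denominator have the same sign."""
    return pi / 2 + pi * k - 0.5j * log((2 * pi * n - 1) / (2 * pi * n + 1))

# e^(tan z) = 1  <=>  tan z = 2*pi*n*i for some integer n.
for n in (1, 2, -1, -3):
    for k in (0, 1, -2):
        z = z_branch(n, k)
        assert abs(cmath.exp(cmath.tan(z)) - 1) < 1e-9

# n = 0 gives the real solutions z = k*pi (tan z = 0, and e^0 = 1).
for k in (0, 1, 5, -3):
    assert abs(cmath.exp(cmath.tan(k * pi)) - 1) < 1e-9
```

If the asserts pass silently, the "obfuscated" book answer and the derivation in post #5 agree numerically.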
http://mathhelpforum.com/math-software/84746-log-scaling.html
1. ## Log Scaling

Hi guys, when I use the command semilogx(variable1,variable2) or semilogy(variable1,variable2) to plot a function where the Y-axis is in a logarithmic scale, I get only 50% of the required curve: the 1st half of the curve is a straight line and the other half is what I need. Can you help me? I want to get the full correct curve.

2. Originally Posted by Zamorano (quoted above)

What software?

CB

3. Matlab, and the command I used was semilogx(variablex,variabley); semilogy(variablex,variabley) and loglog(variablex,variabley) don't work either.

4. Originally Posted by Zamorano (quoted above)

What data are you trying to plot?

CB

5. CB, thank you very much for helping me. The big mistake I made was my loop increment: the loop was from 1 to 1*10^6, and the increment started from 1000, so the curve started from 0, then 1000, 2000, 3000, ...; for this reason the interval between 0 and 1000 was a straight line.
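The fix in post 5 can be reproduced outside MATLAB. This is my own Python sketch (the `logspace` helper is a hand-rolled stand-in for MATLAB's `logspace` or `numpy.logspace`): with a log-scaled x-axis you want sample points spaced evenly in the exponent, not in x, because linear steps of 1000 leave no samples at all inside the first three decades.

```python
def logspace(start_exp, stop_exp, num):
    """Points 10**start_exp .. 10**stop_exp, evenly spaced in the exponent
    (a stand-in for MATLAB's logspace(start_exp, stop_exp, num))."""
    step = (stop_exp - start_exp) / (num - 1)
    return [10.0 ** (start_exp + i * step) for i in range(num)]

# The loop from the thread: 0 to 1e6 in steps of 1000.
linear = list(range(0, 10**6 + 1, 1000))
# Not a single sample falls strictly between 0 and 1000, so on a log x-axis
# that whole region is drawn as one straight segment.
inside_first_decades = [x for x in linear if 0 < x < 1000]

# Log-spaced sampling puts the same density of points in every decade.
logx = logspace(0, 6, 61)                 # roughly 10 points per decade
log_inside = [x for x in logx if x < 1000]
```

Feeding `logx` (and the function evaluated on it) to semilogx would resolve the low end of the curve.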
https://nuit-blanche.blogspot.com/2016/09/l1-pca-online-sparse-pca-and.html
Friday, September 30, 2016

L1-PCA, Online Sparse PCA and Discretization and Minimization of the L1 Norm on Manifolds

Coming back to some of the themes around Matrix Factorizations, the L1 norm and phase transitions:

Iteratively Reweighted Least Squares Algorithms for L1-Norm Principal Component Analysis by Young Woong Park, Diego Klabjan

Principal component analysis (PCA) is often used to reduce the dimension of data by selecting a few orthonormal vectors that explain most of the variance structure of the data. L1 PCA uses the L1 norm to measure error, whereas the conventional PCA uses the L2 norm. For the L1 PCA problem minimizing the fitting error of the reconstructed data, we propose an exact reweighted and an approximate algorithm based on iteratively reweighted least squares. We provide convergence analyses, and compare their performance against benchmark algorithms in the literature. The computational experiment shows that the proposed algorithms consistently perform best.

Online Learning for Sparse PCA in High Dimensions: Exact Dynamics and Phase Transitions by Chuang Wang, Yue M. Lu

We study the dynamics of an online algorithm for learning a sparse leading eigenvector from samples generated from a spiked covariance model. This algorithm combines the classical Oja's method for online PCA with an element-wise nonlinearity at each iteration to promote sparsity. In the high-dimensional limit, the joint empirical measure of the underlying sparse eigenvector and its estimate provided by the algorithm is shown to converge weakly to a deterministic, measure-valued process. This scaling limit is characterized as the unique solution of a nonlinear PDE, and it provides exact information regarding the asymptotic performance of the algorithm. For example, performance metrics such as the cosine similarity and the misclassification rate in sparse support recovery can be obtained by examining the limiting dynamics.
A steady-state analysis of the nonlinear PDE also reveals an interesting phase transition phenomenon. Although our analysis is asymptotic in nature, numerical simulations show that the theoretical predictions are accurate for moderate signal dimensions.

Consistent Discretization and Minimization of the L1 Norm on Manifolds by Alex Bronstein, Yoni Choukroun, Ron Kimmel, Matan Sela

The L1 norm has been tremendously popular in signal and image processing in the past two decades due to its sparsity-promoting properties. More recently, its generalization to non-Euclidean domains has been found useful in shape analysis applications. For example, in conjunction with the minimization of the Dirichlet energy, it was shown to produce a compactly supported quasi-harmonic orthonormal basis, dubbed compressed manifold modes. The continuous L1 norm on the manifold is often replaced by the vector l1 norm applied to sampled functions. We show that such an approach is incorrect in the sense that it does not consistently discretize the continuous norm, and we warn against its sensitivity to the specific sampling. We propose two alternative discretizations resulting in an iteratively-reweighted l2 norm. We demonstrate the proposed strategy on the compressed modes problem, which reduces to a sequence of simple eigendecomposition problems not requiring non-convex optimization on Stiefel manifolds and producing more stable and accurate results.
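The "iteratively reweighted least squares" idea that runs through these abstracts can be shown in one dimension. This is my own toy sketch, not code from any of the papers: minimizing $\sum_i |x_i - c|$ by repeatedly solving a weighted L2 problem with weights $1/|x_i - c|$ drives $c$ to the median, the L1 analogue of the mean.

```python
def irls_l1_center(data, iters=100, eps=1e-8):
    """Minimize sum_i |x_i - c| over c by iteratively reweighted least squares.

    Each pass solves the weighted L2 problem  min_c sum_i w_i * (x_i - c)**2
    with w_i = 1 / max(|x_i - c|, eps), whose closed form is a weighted mean.
    The L1 minimizer is the median, which, unlike the plain mean, is robust
    to the outlier in the example below."""
    c = sum(data) / len(data)                 # start from the L2 answer (mean)
    for _ in range(iters):
        w = [1.0 / max(abs(x - c), eps) for x in data]
        c = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return c

data = [1.0, 2.0, 3.0, 4.0, 100.0]            # 100 is an outlier
center = irls_l1_center(data)                 # converges near the median, 3
```

The same reweighting trick, solving an easy L2 problem per iteration to attack an L1 objective, is what the exact/approximate L1-PCA algorithms and the manifold discretization above build on.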
https://www.physicsforums.com/threads/nice-little-proof-can-any-one-do-it.334920/
# Nice little proof, can any one do it?

1. Sep 5, 2009

### rusticle

A linear transformation F is said to be one-to-one if it satisfies the following condition: if F(u) = F(v) then u = v. Prove that F is one-to-one if and only if Ker(F) = {0}.

2. Sep 5, 2009

3. Sep 5, 2009

### arildno

a) IF one-to-one: you are to show that this implies the kernel of F contains just the 0 element. Now, you can prove that this must be true by way of contradiction: ASSUME that the i) linear transformation F is both ii) one-to-one AND has iii) non-zero vectors in its kernel. Show that i)+iii) implies that F is NOT one-to-one!

4. Sep 5, 2009

### rusticle

Hmmm, yeah, my friend posed this proof to me and asked me to try and find a good solution to it if I could. To be honest I have no idea! It's possible that I just lack the algebra to do so.

5. Sep 5, 2009

### snipez90

Um, this proof doesn't really require any algebra, just the basic properties that define a linear transformation. For the reverse direction, suppose F(u) = F(v), so that 0 = F(u) - F(v) = F(u-v). For the forward direction, you could also prove that F(0) = 0, which means that {0} is contained in Ker(F), so you just have to prove that Ker(F) is contained in {0}. Let v be in Ker(F), find out what this means, and remember that you should have proved 0 = F(0).

6. Sep 6, 2009

### rusticle

Ah thanks snipez90, pretty sure I've got it out!
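The contrapositive of arildno's direction, that a nonzero kernel vector destroys injectivity, can be illustrated numerically. A small sketch of mine (the matrix is just an example): if w is a nonzero vector in Ker(F), then u and u + w are distinct vectors with the same image.

```python
def apply(A, v):
    """Apply the linear map given by matrix A (a list of rows) to vector v."""
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

A = [[1, 2],
     [2, 4]]                     # rank 1, so Ker(A) is nontrivial
w = (2, -1)                      # a nonzero kernel vector: A maps it to (0, 0)
assert apply(A, w) == (0, 0)

u = (1, 1)
v = (u[0] + w[0], u[1] + w[1])   # u + w, distinct from u
# F(u) = F(u + w) because F(w) = 0 by linearity, so F is not one-to-one.
assert u != v and apply(A, u) == apply(A, v)
```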
http://www.chegg.com/homework-help/questions-and-answers/following-rearrangement-reaction-order-ch3nc-ch3cnin-table-kinetics-data-following-values--q1005084
## Activation Energy, Temperature, and Catalysts

The following rearrangement reaction is first order: CH3NC → CH3CN

In a table of kinetics data, we find the following values listed for this reaction: A = 3.98 x 10^13 s^-1, Ea = 160. kJ/mol.

(a) Calculate the value of the specific rate constant at room temperature, 25 degrees Celsius.

(b) Calculate the value of the specific rate constant at 115 degrees Celsius.
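Both parts follow from the Arrhenius equation k = A·exp(−Ea/(R·T)), with T in kelvin. The sketch below is my own worked check, not a posted solution; it just plugs in the given A and Ea.

```python
from math import exp

R = 8.314          # gas constant, J/(mol*K)
A = 3.98e13        # pre-exponential factor, 1/s
Ea = 160e3         # activation energy, J/mol (160. kJ/mol)

def arrhenius_k(T_celsius):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T)), T converted to K."""
    T = T_celsius + 273.15
    return A * exp(-Ea / (R * T))

k_25 = arrhenius_k(25.0)     # part (a): on the order of 1e-15 s^-1
k_115 = arrhenius_k(115.0)   # part (b): on the order of 1e-8 s^-1
```

The roughly seven-orders-of-magnitude jump between 25 and 115 degrees Celsius is the expected effect of a large activation energy in the exponential.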
http://math.stackexchange.com/questions/136808/transitive-subset-of-set-of-natural-numbers
# Transitive subset of the set of natural numbers

Let $A$ be a nonempty subset of $\omega$, the set of natural numbers. I want to prove this statement: If $\bigcup A=A$ then $n\in A \implies n^+\in A$. Help...

- When you're viewing the natural numbers as the sets of smaller natural numbers, it is good style to explicitly call them "finite ordinals" or something like that. Otherwise, readers are likely to think that you're only considering the arithmetic properties of the numbers (rather than one specific set-theoretic implementation of them). –  Henning Makholm Apr 25 '12 at 15:03
- Also closely related: this question by the OP. –  Asaf Karagila Apr 25 '12 at 16:18
- @asaf: I read your proof. You showed that A is fully ordered by membership assuming that $\omega$ is well ordered by membership, but is there another way to prove that $\bigcup A$ is an element of $\omega$ or is $\omega$, not assuming that $\omega$ is well ordered? –  Katlus Apr 25 '12 at 16:43
- The book I'm studying has not even defined what an ordinal is. Thus I wrote 'element of $\omega$ or $\omega$' rather than 'ordinal'. –  Katlus Apr 25 '12 at 16:45
- Katlus, I have no way of knowing what the book has taught you or hasn't taught you. At least tell us what book you are studying from, so people knowing it might have a better way of helping you. This extends to my next point: you have not supplied any definition of $\omega$ or otherwise of a natural number. There are several which are equivalent; I cannot know which one you were given as the basic definition and which one will come later. –  Asaf Karagila Apr 25 '12 at 17:30

$\bigcup A\subseteq A$ says that $A$ is transitive and is therefore an ordinal. Now if $A$ were a successor ordinal $\alpha+1 = \alpha\cup\{\alpha\}$, then $\alpha\in A$ but $\bigcup A = (\bigcup\alpha)\cup\alpha \not\ni \alpha$. Thus $A$ must be either $0$ or $\omega$.

Or more directly: Assume $n\in A$. Then $n\in\bigcup A$, that is, there exists $y$ such that $n\in y \in A$. Then $n^+ \le y$. In the case $n^+=y$ we have $n^+\in A$ directly.
Otherwise $n^+\in y\in A$, so $n^+\in\bigcup A$.

- Thanks! But I want to show that $A=\omega$ by showing that $A$ is a successor set. Your proof is really good, but it shows that $A=\omega$ first, and then shows that $A$ is a successor set. Can you help me show that $A$ is a successor set directly? By the way, I have proved that the empty set is an element of $A$. –  Katlus Apr 25 '12 at 15:25
- See edit. –  Henning Makholm Apr 25 '12 at 15:34
- This problem appears right after the introduction of elementary properties of natural numbers, while the result that '$\omega$ is well ordered by membership' comes three chapters later. –  Katlus Apr 25 '12 at 16:26
- This is the first time I'm studying set theory. It feels weird to study further, come back to the problem I couldn't prove, and prove it with a concept not in that chapter... –  Katlus Apr 25 '12 at 16:34

Oh yes, this can be pretty confusing! Here is how I think of it: $\bigcup A = A$ really means $\bigcup A \subseteq A$ and $\bigcup A \supseteq A$. Now assume $a \in A$. From $A \subseteq \bigcup A$, we get $a \in \bigcup A$, which, looking at the definition of $\bigcup A$, means that $a$ is an element of some element of $A$, say $b$. That is, $a \in b \in A$. But $b$ is a natural number, and since $a \in b$, $b$ is a *strictly bigger* natural than $a$. If $b = a^+$, we have $a^+ \in A$, and we are done. Otherwise, $a^+$ is an element of $b$. But then from $\bigcup A \subseteq A$, we get $a^+ \in A$ again. And looking at the proof, we see that $A$ must have actually been $\omega$ all along!
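The finite side of the argument can be played with concretely using the von Neumann encoding ($0=\varnothing$, $n^+ = n \cup \{n\}$) over frozensets. This is my own illustration, not from the thread: for every nonzero natural $n^+$ we get $\bigcup n^+ = n \neq n^+$, matching the claim that among subsets of $\omega$ only $0$ and $\omega$ itself can satisfy $\bigcup A = A$.

```python
def succ(n):
    """von Neumann successor: n+ = n ∪ {n}."""
    return n | frozenset({n})

def big_union(A):
    """⋃A = the set of elements of elements of A."""
    out = set()
    for x in A:
        out |= set(x)
    return frozenset(out)

zero = frozenset()
naturals = [zero]
for _ in range(6):                       # build 0, 1, 2, ..., 6
    naturals.append(succ(naturals[-1]))

# ⋃0 = 0, but ⋃(n+) = n for every natural, so ⋃A = A fails for A = 1..6:
assert big_union(zero) == zero
for n, n_plus in zip(naturals, naturals[1:]):
    assert big_union(n_plus) == n        # union of a successor drops one step
    assert big_union(n_plus) != n_plus
```

Of course no finite computation reaches the $A = \omega$ case; that is exactly where the argument in the answer takes over.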
https://hal-cea.archives-ouvertes.fr/cea-01564700
# non-BPS walls of marginal stability

* Corresponding author

Abstract: We explore the properties of non-BPS multi-centre extremal black holes in ungauged $N$=2 supergravity coupled to vector multiplets, as described by solutions to the composite non-BPS linear system. After setting up an explicit description that allows for arbitrary non-BPS charges to be realised at each centre, we study the structure of the resulting solutions. Using these results, we prove that the binding energy of the composite is always positive and we show explicitly the existence of walls of marginal stability for generic choices of charges. The two-centre solutions only exist on a hypersurface of dimension $n_v$+1 in moduli space, with an $n_v$-dimensional boundary, where the distance between the centres diverges and the binding energy vanishes.

Document type: Journal articles. Cited literature: [26 references]

https://hal-cea.archives-ouvertes.fr/cea-01564700

Contributor: Emmanuelle de Laborderie. Submitted on: Wednesday, July 19, 2017 - 10:23:22 AM. Last modification on: Tuesday, December 8, 2020 - 10:19:03 AM.

### File

1309.3236.pdf (files produced by the author(s))

### Citation

Guillaume Bossard, Stefanos Katmadas. non-BPS walls of marginal stability. Journal of High Energy Physics, Springer, 2013, 2013, pp. 179. ⟨10.1007/JHEP10(2013)179⟩. ⟨cea-01564700⟩
http://mathoverflow.net/questions/27413/when-are-ehrhart-functions-of-compact-convex-sets-polynomials?answertab=votes
# When are Ehrhart functions of compact convex sets polynomials?

Given a lattice $L$ and a subset $P\subset \mathbb R^d$, we define for each positive integer $t$ $$f_P(L,t)=|tP\cap L|,$$ the number of lattice points in $tP$. Let's say $P$ is nice if $f_P(L,t)$ is a polynomial. We know that if $P$ is a convex polytope with vertices in $L$ then $P$ is nice and $f_P(L,t)$ is its Ehrhart polynomial. My question is about some converse of this statement.

Are there some mild assumptions (for example convexity etc.) on $P$, under which if $f_P(L,t)$ is a polynomial with respect to at least some lattice $L$ then $P$ must be a convex polytope? Or a weaker question: Is any polynomial arising this way also the Ehrhart polynomial of some polytope?

P.S. I haven't thought much about this question so I apologize if it is well-known or it has an obvious negative answer. Also feel free to retag.

Richard Stanley suggested the following in the comments (edited to take into account a trivial family of counter-examples): Could the following be true? It seems more in line with the question. Let $P$ be a compact convex $n$-dimensional set in $\mathbb R^n$. Suppose that the Ehrhart function $f_P(t)$ is a polynomial for positive integers $t$. Then $P$ is a translation of a rational polytope.

Edit: I would also be interested in a slightly weaker statement: Suppose a convex set has positive curvature almost everywhere; must the Ehrhart function necessarily be non-polynomial? For example, given an arbitrary lattice, what would be the easiest way to see that a circle doesn't have a polynomial Ehrhart function?

- A bit of nitpicking: Ehrhart polynomial is defined for $\textit{convex}$ polytopes. –  Victor Protsak Jun 8 '10 at 1:26

I guess you know that the lattice point counting function of a rational polytope is in general a quasi-polynomial, i.e., possibly given by a different polynomial on each residue class modulo $N$ for some fixed positive integer $N$.
I did a little bit of work on this in the very special case of n-simplices in $\mathbb{R}^{n+}$. Even in this very simple case the question of how many distinct polynomials arise was (and is) not obvious to me. –  Pete L. Clark Jun 8 '10 at 1:27

On the other hand, a finite polyhedral complex in $\mathbb{R}^n$ whose cells are convex lattice polytopes need not be convex, but its lattice volume is still given by a polynomial, so the conclusion shouldn't be "P is a convex polytope". –  Victor Protsak Jun 8 '10 at 1:45

Could the following be true? It seems more in line with the question. Let $P$ be a compact convex $n$-dimensional set in $\mathbb{R}^n$. Suppose that the Ehrhart function $f_P(t)$ is a polynomial for positive integers $t$. Then $P$ is a rational polytope. –  Richard Stanley Jun 10 '10 at 0:22

The answer to the question in my previous comment is no, as shown by the 1-dimensional polytope with vertices $\alpha$ and $1+\alpha$, where $\alpha$ is irrational. However, it is possible that the only counterexamples are of a similar trivial nature. Thus one can change the conclusion to: $P$ is the translate of a rational polytope (and only very special rational polytopes, since in general the translate of a rational polytope does not have a quasipolynomial Ehrhart function). –  Richard Stanley Jun 19 '10 at 2:04

Just to remark that for a rational polytope whose vertices are not integral, the function $f_P(t)$ could still be a polynomial (and not just a quasipolynomial). A large class of examples is provided by degenerations of flag varieties $G/B$. There are many degenerations, each corresponding to an expression of the longest word $w\in W$ of the Weyl group as a shortest product of standard reflections. All of these correspond to rational polytopes. They all have the same Ehrhart function. Some of them are integral but others are not. For more details, see R. Chirivì, LS algebras and application to Schubert varieties, Transform. Groups 5 (2000), no. 3, 245–264, or Alexeev-Brion, Toric degenerations of spherical varieties.

- Also MR2096742, De Loera, Jesús A., McAllister, Tyrrell B., Vertices of Gelfand-Tsetlin polytopes, Discrete Comput. Geom. 32 (2004), no. 4, 459–470. The counting problem is weight multiplicity (Kostka number). According to the review, this was the first example of an infinite family of non-integral rational polytopes with polynomial Ehrhart functions. –  Victor Protsak Jun 8 '10 at 2:53

I believe that the strong form of the conjecture is false. In lieu of a simple counterexample, let me point you towards a centrally symmetric 10-gon $\hat P$ in arXiv:0801.2812, Figure 6. It is a bit of a mess to explain exactly what it is, but it has something to do with the Picard lattice of a toric DM stack. It need not be rational or a translate of a rational polytope. The key properties of $\hat P$ are:

(1) It is centrally symmetric.

(2) The midpoints of all the sides are lattice points. As a result, opposite sides are lattice translates of each other.

As a result, generic translates of $\hat P$ have the same number of lattice points. Indeed, as you move the polytope in a plane along a general curve, as soon as a point appears on one side of it, another point exits from the opposite side. This implies that the opposite sides of $\hat P$ glue together to give a "no-gaps" cover of the torus $\mathbb R^2/L$ (the preimage of a generic point has the same cardinality $k$). Then if one takes a $t$-multiple of it, one gets a "no-gaps" cover of $\mathbb R^2/tL$, and will thus have $kt^2$ points in $t(\hat P+ c)$ for a generic shift $c$.

I assume that this construction can be simplified to give something more explicit and palatable, so long as the property that the opposite sides are lattice translates of each other is satisfied. It clearly requires flat sides to be able to glue them together on the torus, so this idea is not going to work for the positive curvature problem.
- If you want to dive into some Ehrhart theory then I highly recommend you pick up Computing the Continuous Discretely: Integer-Point Enumeration in Polyhedra by Matthias Beck and Sinai Robins. Here is the website for the text with a free but nonprintable version: http://math.sfsu.edu/beck/ccd.html - Of course I have no desire to disrespect the authors' wishes, but I have to ask: what does "nonprintable version" mean? It seems like a regular pdf with regular formatting to me. –  Pietro KC Jun 11 '10 at 17:29 Nonprintable means that it has DRM built into it. Of course, it is a bit silly because it is up to the PDF software to decide if it wants to obey it. So for example, the print option is not there in acrobat, but it would be in xpdf. –  Steven Sam Aug 5 '11 at 5:28
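The definition $f_P(L,t) = |tP \cap L|$ is easy to experiment with for $L = \mathbb{Z}^2$. Below is my own brute-force check, not from the thread: for the lattice triangle with vertices $(0,0),(1,0),(0,1)$ the counts match the Ehrhart polynomial $(t+1)(t+2)/2$ exactly, while for the unit disk the deviation of $|tD \cap \mathbb{Z}^2|$ from $\pi t^2$ fluctuates irregularly (the Gauss circle problem), which is at least consistent with the question's suspicion that positive curvature rules out a polynomial Ehrhart function.

```python
import math

def triangle_count(t):
    """Lattice points in t * conv{(0,0),(1,0),(0,1)}: x, y >= 0, x + y <= t."""
    return sum(1 for x in range(t + 1) for y in range(t + 1) if x + y <= t)

def disk_count(t):
    """Lattice points in the t-th dilate of the closed unit disk."""
    return sum(1 for x in range(-t, t + 1) for y in range(-t, t + 1)
               if x * x + y * y <= t * t)

# Ehrhart polynomial of the lattice triangle: f(t) = (t+1)(t+2)/2.
for t in range(1, 30):
    assert triangle_count(t) == (t + 1) * (t + 2) // 2

# For the disk, N(t) - pi*t^2 wobbles with no obvious pattern.
errors = [disk_count(t) - math.pi * t * t for t in range(1, 10)]
```

A brute-force table of course proves nothing about non-polynomiality; it only makes the contrast with the triangle visible.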
http://math.stackexchange.com/questions/178502/writing-a-non-integer-power-in-terms-of-integer-powers
# Writing a non-integer power in terms of integer powers

I would like to write $x^{2.5}$ in terms of $x$ to the power of integers; is there any way to do this? Taylor series etc. don't work, since they depend on derivatives. If it is not possible, do you have or know a proof? Thanks.

### EDIT:

To clarify, I mean that I want to write $x^{2.5}$ as a series in $x^{\mathrm{integer}}$'s, for example: $1 + x^2 + x^{3}$. I tried to use Taylor series, but it depends on derivatives of $x^{2.5}$, and they do not have integer powers...

- What's wrong with Taylor series? Do you want a finite expression? –  tomasz Aug 3 '12 at 17:41
- What do you mean by "in terms of?" It's gonna be very hard to do this. In particular, integer powers are well-defined in the complex plane, but you get problems with non-integer powers, which need to be multi-valued. (The most obvious is the square root function, which can take +/- values, but if the exponent is irrational, you actually get an infinite number of possible values for exponentiation.) –  Thomas Andrews Aug 3 '12 at 17:42
- I think he means Taylor series about $0$, since at some point the derivatives become undefined at $x=0$. –  Thomas Andrews Aug 3 '12 at 17:48
- Yeah, I think that was the concern. A work-around to this is to use the Taylor series of $\sqrt{1+x}$ (or more generally, $\sqrt{c+x}$). –  Hugh Denoncourt Aug 3 '12 at 18:08
- What does "Taylor series etc. won't work when they depend on derivatives" mean? When doesn't Taylor depend on derivatives? Can you give an example to clarify that? –  draks ... Aug 3 '12 at 18:11

$x^{2.5}$ can't be represented by a series in integer powers of $x$, because it is not analytic (in fact not meromorphic) at $0$: instead, it has a branch point there.

- Could you provide a link to some information on this? –  John Echo Aug 3 '12 at 18:41
- @EoinMurray In general, non-integer powers are "multivalued" functions in a complex sense.
Branch points are points around which the function cannot be defined as a single-valued continuous function: going once around the point, the function "jumps" from one value to another. For example, $z^{1/2}$ is the basic example of a multivalued function. $z^{2.5} = z^2z^{0.5}$ is also therefore multivalued. The term "analytic function" means a function that can be written with a power series expansion. Analyticity can be shown to not hold if certain conditions on the magnitude of the derivatives do not hold. – Arkamis Aug 3 '12 at 18:49 Here are some helpful wiki links: en.wikipedia.org/wiki/… en.wikipedia.org/wiki/Branch_point – Arkamis Aug 3 '12 at 18:50 I'll quickly show in practice why the Taylor series doesn't work in this case as one would think. Taylor's theorem says that for a sufficiently smooth function we have $$f(x) = \sum_{k=0}^{n-1} \frac{\mathrm d^k}{\mathrm dx^k}f(a)\frac{(x-a)^k}{k!} +R_n$$ for any $n\in \mathbb N$ and some remainder $R_n$. Surely we can apply this theorem to $x^{2.5}$. It's $$\frac{\mathrm d^k}{\mathrm dx^k}x^{2.5} = 2.5^{\underline k}\, x^{2.5-k},$$ where $2.5^{\underline k}$ denotes the falling factorial. So for example, expanding about $a=1$ we get $$x^{2.5} = \sum_{k=0}^{n-1} \frac{2.5^{\underline k}}{k!}(x-1)^k + R_n.$$ However, we get into trouble if we think, well, let's take $n\to \infty$, forget about the remainder, and call $$(x+1)^{2.5} = \sum_{k=0}^{\infty} \frac{2.5^{\underline k}}{k!}x^k$$ the series we looked for. We have to be sure first that the remainder $R_n$ actually goes to $0$. By the Lagrange form, there is some $\xi$ between $1$ and $x$ such that $$R_n = \frac{2.5^{\underline n}}{n!}\xi^{2.5-n}(x-1)^n.$$ Now does this go to $0$? Only near the expansion point: since $|2.5^{\underline n}|/n!$ decays only like $n^{-7/2}$, the series has radius of convergence exactly $1$ — the distance from $a=1$ to the branch point at $0$. So the series does converge to $x^{2.5}$ on $(0,2)$, but it diverges for $|x-1|>1$, and no series in integer powers of $x$ can represent $x^{2.5}$ on any neighborhood of $0$. -
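As a quick numerical check of the convergence behaviour described above (a sketch, with made-up sample points, not from the thread): the partial sums of the series about $a=1$ converge inside $|x-1|<1$ and blow up outside.

```python
# Partial sums of the Taylor series of x**2.5 about x = 1.
# The coefficients c_k = (2.5 falling-factorial k) / k! are built with the
# recurrence c_{k+1} = c_k * (2.5 - k) / (k + 1), starting from c_0 = 1.

def taylor_x25(x, n_terms):
    """Partial sum of sum_k c_k * (x - 1)**k with n_terms terms."""
    total, c = 0.0, 1.0
    for k in range(n_terms):
        total += c * (x - 1.0) ** k
        c *= (2.5 - k) / (k + 1)
    return total

# Inside the radius of convergence (|x - 1| < 1) the series converges:
print(taylor_x25(1.5, 80), 1.5 ** 2.5)   # both ≈ 2.7557
# Outside it (here |x - 1| = 2) the partial sums blow up:
print(taylor_x25(3.0, 60))               # huge, nowhere near 3**2.5 ≈ 15.59
```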
http://math.stackexchange.com/questions/264731/one-more-step-for-showing-rx-y-cong-rxy/264732
# One more step for showing $R[x,y]\cong R[x][y]$ I'm trying to show $R[x,y]\cong R[x][y]$ using the following proposition: Let $\varphi:R\to R'$ be a ring homomorphism. Given $\alpha_1,\alpha_2,\cdots,\alpha_n\in R'$, there exists a unique homomorphism $\Phi:R[x_1,x_2,\cdots,x_n]\to R'$ such that $$\Phi|_{R}=\varphi, \quad \Phi(x_i)=\alpha_i,i=1,2,\cdots,n.$$ Consider the inclusion map $f:R\to R[x][y]$ which is a homomorphism. By the proposition above, there is a unique homomorphism $g:R[x,y]\to R[x][y]$ such that $$g|_R=f$$ and $g(x)=x$, $g(y)=y$. It suffices to show that $g$ is an isomorphism. Instead of showing that $g$ is 1-1 and onto, I'm trying to construct an inverse of $g$. Consider the inclusion map $h:R[x]\to R[x,y]$. Using the proposition again, we have a unique homomorphism $l:R[x][y]\to R[x,y]$ such that $$l|_{R[x]}=h$$ and $l(y)=y$. It follows that we have a homomorphism $$l\circ g:R[x,y]\to R[x,y]$$ such that $$l\circ g|_{R}=id_{R[x,y]}|_R$$ and $l\circ g(x)=x$, $l\circ g(y)=y$. We also have a similar argument with $$g\circ l: R[x][y]\to R[x][y].$$ I've only shown that $l\circ g$ agrees with the identity map $id_{R[x,y]}$ on $R$ and $\{x,y\}$. My question: How can I show that $l\circ g$ is the identity map $id_{R[x,y]}$? (Then similarly, I would be able to show $g\circ l=id_{R[x][y]}$.) [EDIT:] I saw a proof in Artin's Algebra, but I didn't understand how the underlined sentence works. - The maps $l$ and $g$ are both ring homomorphisms, so the map $l\circ g$ is a ring homomorphism. Thus as $R[x,y]$ is generated by $R$, $x$ and $y$, and $l\circ g$ fixes these, it is the identity. If you want to check this more carefully, you can write a general element of $R[x,y]$ in terms of $x$, $y$ and elements of $R$, apply $l\circ g$ to it, and then expand the expression using the properties of ring homomorphisms until the map $l\circ g$ is only being applied to $x$, $y$ and elements of $R$. - Hmm, this is much clearer than the proof in Artin's Algebra.
– Jack Dec 24 '12 at 21:04 Oh, I sort of like his better, having seen it - if I'd realised I'd have written my answer that way. It's a third application of the theorem you started with, this time using the "unique" part. The map $l\circ g$ extends the inclusion $R\to R[x,y]$ to a map $R[x,y]\to R[x,y]$ by mapping $x\mapsto x$ and $y\mapsto y$, but so does the identity. As this extension is unique, the two maps are the same. (This is what Artin means by "uniqueness of the substitution homomorphism".) – Matthew Pressland Dec 24 '12 at 21:07 Really this theorem is a universal property defining $R[x_1,\dotsc,x_n]$, and objects defined by a universal property are unique up to unique isomorphism. So what you're really doing is showing that $R[x,y]$ and $R[x][y]$ both have this universal property, so they must be isomorphic (and then reproducing a bit of the proof of this general fact in this case, where you show that the two maps you found are mutual inverses). Don't worry if that doesn't make sense to you; it's just some wider context. – Matthew Pressland Dec 24 '12 at 21:11 Ah, now I see how the "unique" part is applied. Thanks! What do you mean by the "universal property" defining $R[x_1,\cdots,x_n]$? – Jack Dec 24 '12 at 21:22 The details are a little too long for a comment, but I may go back and put it into the answer, once I've thought about exactly the right way to say it. Loosely though, one way to define $R[x_1,\dotsc,x_n]$ is to say that it's the unique object such that the theorem you quoted is true. If you've ever seen the universal property of free groups, it's a similar idea to that. If you haven't, you should probably understand that example before this one, so I wouldn't worry for now. – Matthew Pressland Dec 24 '12 at 21:36
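As a concrete illustration of the two maps (a sketch using a hypothetical dict encoding of polynomials, with $R=\mathbb{Z}$ — not from the thread):

```python
# A polynomial in R[x,y] is encoded as {(i, j): coeff}, meaning coeff*x^i*y^j.
# An element of R[x][y] is encoded as {j: {i: coeff}}: a polynomial in y whose
# coefficients are polynomials in x.

def g(p):
    """Regroup a polynomial in x and y by powers of y: R[x,y] -> R[x][y]."""
    out = {}
    for (i, j), c in p.items():
        out.setdefault(j, {})[i] = c
    return out

def l(q):
    """Flatten a polynomial in y with R[x] coefficients: R[x][y] -> R[x,y]."""
    return {(i, j): c for j, inner in q.items() for i, c in inner.items()}

p = {(2, 0): 3, (1, 1): -1, (0, 2): 5}   # 3x^2 - xy + 5y^2
assert l(g(p)) == p                       # l ∘ g = id on R[x,y]
```

The round trip in both directions mirrors the argument that $l\circ g$ and $g\circ l$ are the respective identities.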
http://openstudy.com/updates/5591815fe4b0a590d096ec00
## cassieforlife5 — one year ago: How to find final momentum of inelastic collision? I'm doing an activity for physics and I have to find the final momentum after two items collide in an inelastic collision. I have the two initial momentums and two final momentums, but the sheet is only looking for one final momentum. Do I add the two final momentums to get the final momentum? 1. IrishBoy123: yes — just remember momentum is a vector so it has direction 2. IrishBoy123: and it should equal the total initial momentum (or your experiment has discovered a new law)!!! 3. Michele_Laino: I think that you have to take the vector sum of the two final momentums. Please also check that this vector sum is equal to the vector sum of the two initial momentums. 4. asib1214: @Michele_Laino [1 attachment] 5. asib1214: how do I find the amplitude? 6. ybarrap: Convert this to a sine wave with amplitude on the y-axis, time on the x-axis: [drawing] $A$ is amplitude (in cm). $T$ is period (in seconds), the number of seconds per cycle. Frequency is $\frac{1}{T}$ Hz, the number of cycles per second. Peak-to-peak is 18 cm, so amplitude is half of this. It takes 6 seconds to move the block 3 cycles, so $3T=6~\text{seconds}$; solve for $T$. In one cycle the block has moved 18 cm; in 3 cycles it has moved three times that distance. Does this make sense? 7. anonymous: @asib1214 The amplitude in your problem is the distance the weight has moved from rest to the furthest stretched-out point of the spring. This is also the same distance from the block's resting point to the point where the spring is fully compressed. In your picture the distance between the points where the spring is fully compressed and fully stretched is 18 cm. ybarrap gave an excellent picture to help explain this idea mathematically. Halfway between where the spring is fully compressed and fully stretched is the natural "resting point" of the mass, and that's why you need to divide 18 by 2 for your answer. 8. IrishBoy123: hang on a minute!!! one thread has simply appeared inside another. magic? or maybe a bit rude? 9. asib1214: it's aight, I've created my own equation to find the amplitude... A = total distance × N (number of cycles) / total time... so it'll be 18 cm × 3 cycles / 6 seconds = 9 cm
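The two calculations discussed in the thread, sketched numerically (the momentum vectors are made-up placeholders; the spring numbers are the thread's 18 cm peak-to-peak and 3 cycles in 6 s):

```python
def vector_sum(p1, p2):
    """Component-wise sum of two momentum vectors."""
    return tuple(a + b for a, b in zip(p1, p2))

# Perfectly inelastic collision: the single final momentum is the vector
# sum of the two momenta (and must equal the total initial momentum).
p_total = vector_sum((4.0, 0.0), (-1.0, 2.0))   # kg·m/s, placeholder values

# Spring-block oscillation from the injected question:
peak_to_peak = 18.0           # cm, fully compressed to fully stretched
amplitude = peak_to_peak / 2  # cm, rest point to one extreme
period = 6.0 / 3              # s per cycle: 3 cycles took 6 s
frequency = 1 / period        # Hz

print(p_total, amplitude, period, frequency)   # (3.0, 2.0) 9.0 2.0 0.5
```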
http://www.gasmodel.com/background.htm
# Generalized Autoregressive Score models ## Background Generalized Autoregressive Score models were proposed in their full generality in Creal, Koopman and Lucas (2008) as developed at the time at VU University Amsterdam; see the GAS papers section of the site. Simultaneously, Harvey and Chakravarty (2008) in Cambridge developed a score driven model specifically for volatilities, called the Beta-t-(E)GARCH model, built on exactly the same philosophy. The Beta-t-(E)GARCH is a special case of a GAS model.[1] The idea is very simple. Consider a conditional observation density $$p(y_t | f_t)$$ for observations $$y_t$$ and a time varying parameter $$f_t$$. Assume the parameter $$f_t$$ follows the recursion $$f_{t+1} = \omega + \beta f_t + \alpha S(f_t) \left[\frac{\partial \log p(y_t | f_t)}{\partial f_t} \right],$$ where $$S(f_t)$$ is a scaling function for the score of the log observation density. The key novelty in this expression is the use of the scaled score to drive the time variation in the parameter $$f_t$$. It links the shape of the conditional observation density directly to the dynamics of $$f_t$$ itself. For example, if $$p$$ is the normal density and $$f_t$$ its variance, then by a convenient choice of scaling we obtain the familiar GARCH model. If $$p$$ is a Student's t density, however, we do NOT obtain the t-GARCH model! Instead, the score of the t distribution causes the volatility dynamics not to react too fiercely to large values of $$|y_t|$$. This makes sense: such large values might easily be due to the fat-tailed nature of the data, and should not be fully attributed to increases in the variance. An empirical example of this effect is given in the figure on this page. If $$p$$ is the exponential distribution with mean $$f_t$$, then we obtain the ACD model. In fact, many well-known models fall in the GAS framework, and many new interesting models with time varying parameters can now easily be devised.
Given the data up to time t, $$f_{t+1}$$ is known. The model is thus observation driven in the terminology of Cox (1981) and the likelihood is known in closed form through a standard prediction error decomposition. Additional lag structures and other dynamics can easily be added to the transition equation for $$f_{t+1}$$ above. More details can be found in the original 2008 working paper, a shorter version of which is published as Creal et al. (2013) in the Journal of Applied Econometrics. The Amsterdam and Cambridge groups have pursued the research agenda on score driven models and have repeatedly combined forces by organizing workshops on the theme. Contributions from other research teams across the world are warmly welcomed. Please let us know if we missed your paper on GAS or score driven models by sending an email to Andre Lucas ([email protected]) or Siem Jan Koopman ([email protected]). [1]: The GAS model also goes by the name of Dynamic Conditional Score (DCS) model, Score Driven (SD) model, or Dynamic Score (DySco) model.
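As an illustration, here is a minimal sketch of the recursion for the normal case, with $$f_t$$ the conditional variance. With inverse-Fisher scaling $$S(f_t) = 2f_t^2$$, the scaled score of $$\log N(y_t; 0, f_t)$$ reduces to $$y_t^2 - f_t$$, and the update becomes the familiar GARCH(1,1)-type recursion. The parameter values are illustrative, not estimated.

```python
import random

def gas_normal_variance(ys, omega=0.1, beta=0.9, alpha=0.05, f0=1.0):
    """GAS recursion f_{t+1} = omega + beta*f_t + alpha * scaled score,
    for the normal density with f_t the conditional variance."""
    fs = [f0]
    for y in ys:
        f = fs[-1]
        scaled_score = y * y - f          # S(f) * d log p(y|f) / d f
        fs.append(omega + beta * f + alpha * scaled_score)
    return fs

# Simulate data from the model itself, then filter it back:
random.seed(0)
ys, f = [], 1.0
for _ in range(500):
    y = random.gauss(0.0, f ** 0.5)
    ys.append(y)
    f = 0.1 + 0.9 * f + 0.05 * (y * y - f)

fs = gas_normal_variance(ys)
# With omega > 0 and beta > alpha > 0, the filtered variance stays positive:
assert all(v > 0 for v in fs)
```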
https://anhngq.wordpress.com/2013/01/01/the-harmonic-slicing/
# Ngô Quốc Anh ## January 1, 2013 ### The harmonic slicing Filed under: Uncategorized — Ngô Quốc Anh @ 17:46 As we have already seen in this post, the Einstein equations are essentially hyperbolic if one assumes that the harmonic condition holds, i.e. $\square_g x^\alpha=0$. This type of condition was first introduced by De Donder in 1921 and have played an important role in theoretical developments, notably in the Choquet-Bruhat work of the well-posedness of the Cauchy problem for $3+1$ Einstein equations. The harmonic slicing is defined by requiring that the harmonic condition holds only for the coordinate $x^0=t$, i.e. $\square_g t=0$, leaving freedom to choose any coordinate $(x^\alpha)$, $\alpha>0$ in each hypersurface $M$. Using the formula for the d’Alembertian operator, which is the Laplace-Beltrami operator in the Minkowski space, we obtain $\displaystyle\frac{1}{{\sqrt { - \det g} }}\frac{\partial }{{\partial {x^\mu }}} \Big(\sqrt { - \det g} {g^{\mu \nu }}\underbrace {\frac{{\partial t}}{{\partial {x^\nu }}}}_{\delta _\nu ^0}\Big) = 0,$ that is to say $\displaystyle\frac{\partial }{{\partial {x^\mu }}}(\sqrt { - \det g} {g^{\mu 0}}) = 0.$ Thanks to the identity $\sqrt { - \det g} = N\sqrt {\det \gamma }$, one can see that $\displaystyle\frac{\partial }{{\partial t}}(N\sqrt {\det \gamma } {g^{00}}) + \frac{\partial }{{\partial {x^i}}}(N\sqrt {\det \gamma } {g^{i0}}) = 0.$ Since $\displaystyle {g^{00}} = - \frac{1}{{{N^2}}}, \quad {g^{i0}} = \frac{{{\beta ^i}}}{{{N^2}}}$ we find that $\displaystyle - \frac{\partial }{{\partial t}}\Big(\frac{{\sqrt {\det \gamma } }}{N}\Big) + \frac{\partial }{{\partial {x^i}}}\Big(\frac{{\sqrt {\det \gamma } }}{N}{\beta ^i}\Big) = 0.$ By expanding, we obtain $\displaystyle\frac{{\partial N}}{{\partial t}} - {\beta ^i}\frac{{\partial N}}{{\partial {x^i}}} - N\left( {\frac{1}{{\sqrt {\det \gamma } }}\frac{\partial }{{\partial t}}(\sqrt {\det \gamma } ) - \frac{1}{{\sqrt {\det \gamma } }}\frac{\partial }{{\partial 
{x^i}}}(\sqrt {\det \gamma } {\beta ^i})} \right) = 0.$ In view of the divergence for vector field, there holds $\displaystyle\frac{1}{{\sqrt {\det \gamma } }}\frac{\partial }{{\partial {x^i}}}(\sqrt {\det \gamma } {\beta ^i}) = \text{div}_\gamma \beta = {\nabla _\gamma } \cdot \vec\beta .$ Thanks to the evolution equation of the spatial metric, i.e. $\displaystyle\frac{\partial }{{\partial t}}{\gamma _{ij}} = - 2N{K_{ij}} + {\mathcal L_{\vec \beta} }{\gamma _{ij}},$ we find that $\displaystyle {\gamma ^{ij}}\left( {\frac{{\partial {\gamma _{ij}}}}{{\partial t}} - {\mathcal L_{\vec \beta} }{\gamma _{ij}}} \right) = - 2N\underbrace {{\gamma ^{ij}}{K_{ij}}}_K.$ Since $\mathcal L_{\vec \beta}\gamma_{ij}=D_i\beta_j + D_j\beta_i$, we obtain $\displaystyle \text{div}_\gamma \vec\beta - NK = \frac{1}{2}{\gamma ^{ij}}\frac{{\partial {\gamma _{ij}}}}{{\partial t}},$ equivalently, $\displaystyle \text{div}_\gamma\vec\beta - NK = \frac{1}{{\sqrt {\det \gamma } }}\frac{\partial }{{\partial t}}(\sqrt {\det \gamma } ).$ By collecting all above formulas, we find that $\displaystyle\frac{{\partial N}}{{\partial t}} - {\beta ^i}\frac{{\partial N}}{{\partial {x^i}}} + K{N^2} = 0.$ Using the formula for the Lie derivative for scalar functions, we get that $\displaystyle\left( {\frac{\partial }{{\partial t}} - {\mathcal L_{\vec \beta} }} \right)N = - K{N^2}.$ Hence, we have got an evolution equation for the lapse function $N$. 
Thanks to $\displaystyle {K_{ij}} = - \frac{1}{{2N}}\left( {\frac{\partial }{{\partial t}} - {\mathcal L_{\vec\beta} }} \right){\gamma_{ij}},$ the above equation is nothing but $\displaystyle \frac{1}{N}\left( {\frac{\partial }{{\partial t}} - {\mathcal L_{\vec\beta} }} \right)N = \frac{1}{2}{\gamma ^{ij}}\left( {\frac{\partial }{{\partial t}} - {\mathcal L_{\vec\beta} }} \right){\gamma _{ij}} = \frac{1}{{\sqrt {\det \gamma } }}\left( {\frac{\partial }{{\partial t}} - {\mathcal L_{\vec\beta} }} \right)(\sqrt {\det \gamma } ),$ which is $\displaystyle\left( {\frac{\partial }{{\partial t}} - {\mathcal L_{\vec\beta} }} \right)\log \left( {\frac{N}{{\sqrt {\det \gamma } }}} \right) = 0.$ Thus the lapse function takes the form $\displaystyle N = \alpha \sqrt {\det \gamma },$ where $\alpha$ is any function satisfying $(\partial_t - \mathcal L_{\vec\beta})\alpha = 0$.
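The identity $\sqrt{-\det g} = N\sqrt{\det\gamma}$ used above can be sanity-checked numerically (a sketch with arbitrary made-up values of the lapse, shift, and spatial metric):

```python
# The 3+1 form of the spacetime metric is
#   g_00 = -N^2 + beta_i beta^i,  g_0i = beta_i,  g_ij = gamma_ij,
# and a block-determinant computation gives det g = -N^2 det gamma.
from math import sqrt

def det(m):
    """Determinant by Laplace expansion along the first row (fine for 3x3/4x4)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

N = 1.3
gamma = [[2.0, 0.3, 0.1], [0.3, 1.5, 0.2], [0.1, 0.2, 1.0]]
beta_up = [0.2, -0.1, 0.05]                          # beta^i
beta_dn = [sum(gamma[i][j] * beta_up[j] for j in range(3)) for i in range(3)]
bb = sum(beta_dn[i] * beta_up[i] for i in range(3))  # beta_i beta^i

g = [[-N * N + bb] + beta_dn] + [[beta_dn[i]] + gamma[i] for i in range(3)]

assert abs(sqrt(-det(g)) - N * sqrt(det(gamma))) < 1e-12
```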
https://en.wikipedia.org/wiki/Tournament_(graph_theory)
# Tournament (graph theory) *Infobox: a tournament on 4 vertices; a tournament on $n$ vertices has $\binom{n}{2}$ edges.* A tournament is a directed graph (digraph) obtained by assigning a direction for each edge in an undirected complete graph. That is, it is an orientation of a complete graph, or equivalently a directed graph in which every pair of distinct vertices is connected by a single directed edge. Many of the important properties of tournaments were first investigated by Landau in order to model dominance relations in flocks of chickens. Current applications of tournaments include the study of voting theory and social choice theory, among other things. The name tournament originates from such a graph's interpretation as the outcome of a round-robin tournament in which every player encounters every other player exactly once, and in which no draws occur. In the tournament digraph, the vertices correspond to the players. The edge between each pair of players is oriented from the winner to the loser. If player $a$ beats player $b$, then it is said that $a$ dominates $b$. ## Paths and cycles Any tournament on a finite number $n$ of vertices contains a Hamiltonian path, i.e., a directed path on all $n$ vertices (Rédei 1934). This is easily shown by induction on $n$: suppose that the statement holds for $n$, and consider any tournament $T$ on $n+1$ vertices. Choose a vertex $v_0$ of $T$ and consider a directed path $v_1,v_2,\ldots,v_n$ in $T\setminus \{v_0\}$. Now let $i \in \{0,\ldots,n\}$ be maximal such that for every $j \leq i$ there is a directed edge from $v_j$ to $v_0$. Then $v_1,\ldots,v_i,v_0,v_{i+1},\ldots,v_n$ is a directed path as desired. This argument also gives an algorithm for finding the Hamiltonian path. More efficient algorithms, which require examining only $\ O(n \log n)$ of the edges, are known.[1] Moreover, every strongly connected tournament has a Hamiltonian cycle (Camion 1959).
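The insertion argument in the proof translates directly into code (a sketch, not from the article; `beats` is a hypothetical predicate giving the orientation of each edge):

```python
def hamiltonian_path(vertices, beats):
    """Build a Hamiltonian path by inserting vertices one at a time.
    beats(u, v) is True iff the tournament has the edge u -> v."""
    path = []
    for v in vertices:
        for i, u in enumerate(path):
            if beats(v, u):          # first vertex on the path that v beats;
                path.insert(i, v)    # every vertex before position i beats v
                break
        else:
            path.append(v)           # v loses to every vertex on the path
    return path

# Example: the cyclic 3-tournament 0 -> 1 -> 2 -> 0.
beats = lambda u, v: (v - u) % 3 == 1
p = hamiltonian_path([0, 1, 2], beats)
assert all(beats(p[i], p[i + 1]) for i in range(2))
```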
More strongly, every strongly connected tournament is vertex pancyclic: for each vertex v, and each k in the range from three to the number of vertices in the tournament, there is a cycle of length k containing v.[2] Moreover, if the tournament is 4‑connected, each pair of vertices can be connected with a Hamiltonian path (Thomassen 1980). ## Transitivity *A transitive tournament on 8 vertices.* A tournament in which $(a \rightarrow b$ and $b \rightarrow c) \Rightarrow (a \rightarrow c)$ is called transitive. In a transitive tournament, the vertices may be totally ordered by reachability. ### Equivalent conditions The following statements are equivalent for a tournament T on n vertices: 1. T is transitive. 2. T is acyclic. 3. T does not contain a cycle of length 3. 4. The score sequence (sequence of outdegrees) of T is (0, 1, 2, ..., n − 1). 5. T has exactly one Hamiltonian path. ### Ramsey theory Transitive tournaments play a role in Ramsey theory analogous to that of cliques in undirected graphs. In particular, every tournament on n vertices contains a transitive subtournament on $1+\lfloor\log_2 n\rfloor$ vertices.[3] The proof is simple: choose any one vertex v to be part of this subtournament, and form the rest of the subtournament recursively on either the set of incoming neighbors of v or the set of outgoing neighbors of v, whichever is larger. For instance, every tournament on seven vertices contains a three-vertex transitive subtournament; the Paley tournament on seven vertices shows that this is the most that can be guaranteed (Erdős & Moser 1964). However, Reid & Parker (1970) showed that this bound is not tight for some larger values of n.
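The greedy recursion sketched in the proof can be written out directly (a sketch, not from the article; `beats` is a hypothetical edge-orientation predicate):

```python
def transitive_subtournament(vertices, beats):
    """Return a list v1, ..., vk with beats(vi, vj) for all i < j, of size
    at least 1 + floor(log2 n): recurse on the larger neighbor set."""
    if not vertices:
        return []
    v, rest = vertices[0], vertices[1:]
    outs = [u for u in rest if beats(v, u)]   # v beats all of these
    ins  = [u for u in rest if beats(u, v)]   # all of these beat v
    if len(outs) >= len(ins):
        return [v] + transitive_subtournament(outs, beats)
    return transitive_subtournament(ins, beats) + [v]

# On the Paley tournament on 7 vertices (u -> v iff v - u is a nonzero
# square mod 7) this finds a 3-vertex transitive subtournament, the maximum.
beats7 = lambda u, v: (v - u) % 7 in {1, 2, 4}
chain = transitive_subtournament(list(range(7)), beats7)
assert len(chain) >= 3
```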
Erdős & Moser (1964) proved that there are tournaments on n vertices without a transitive subtournament of size $2+2\lfloor\log_2 n\rfloor$. Their proof uses a counting argument: the number of ways that a k-element transitive tournament can occur as a subtournament of a larger tournament on n labeled vertices is $\binom{n}{k}k!2^{\binom{n}{2}-\binom{k}{2}},$ and when k is larger than $2+2\lfloor\log_2 n\rfloor$, this number is too small to allow for an occurrence of a transitive tournament within each of the $2^{\binom{n}{2}}$ different tournaments on the same set of n labeled vertices. ## Paradoxical tournaments A player who wins all games would naturally be the tournament's winner. However, as the existence of non-transitive tournaments shows, there may not be such a player. A tournament for which every player loses at least one game is called a 1-paradoxical tournament. More generally, a tournament T=(V,E) is called k-paradoxical if for every k-element subset S of V there is a vertex $v_0$ in $V\setminus S$ such that $v_0 \rightarrow v$ for all $v \in S$. By means of the probabilistic method, Paul Erdős showed that for any fixed value of k, if $|V| \geq k^2 2^k \ln(2 + o(1))$, then almost every tournament on V is k-paradoxical.[4] On the other hand, an easy argument shows that any k-paradoxical tournament must have at least $2^{k+1} - 1$ players, which was improved to $(k + 2)2^{k-1} - 1$ by Esther and George Szekeres (1965). There is an explicit construction of k-paradoxical tournaments with $k^2 4^{k-1}(1 + o(1))$ players by Graham and Spencer (1971), namely the Paley tournament. ### Condensation The condensation of any tournament is itself a transitive tournament. Thus, even for tournaments that are not transitive, the strongly connected components of the tournament may be totally ordered.[5] ## Score sequences and score sets The score sequence of a tournament is the nondecreasing sequence of outdegrees of the vertices of a tournament.
The score set of a tournament is the set of integers that are the outdegrees of vertices in that tournament. **Landau's Theorem (1953).** A nondecreasing sequence of integers $(s_1, s_2, \cdots, s_n)$ is a score sequence if and only if: 1. $0 \le s_1 \le s_2 \le \cdots \le s_n$ 2. $s_1 + s_2 + \cdots + s_i \ge {i \choose 2}, \mbox{for }i = 1, 2, \cdots, n - 1$ 3. $s_1 + s_2 + \cdots + s_n = {n \choose 2}.$ Let $s(n)$ be the number of different score sequences of size $n$. The sequence $s(n)$ (sequence A000571 in OEIS) starts as: 1, 1, 1, 2, 4, 9, 22, 59, 167, 490, 1486, 4639, 14805, 48107, ... Winston and Kleitman proved that for sufficiently large n: $s(n) > c_1 4^n n^{-{5 \over 2}},$ where $c_1 = 0.049.$ Takács later showed, using some reasonable but unproven assumptions, that $s(n) < c_2 4^n n^{-{5 \over 2}},$ where $c_2 < 4.858.$ Together these provide evidence that: $s(n) \in \Theta (4^n n^{-{5 \over 2}}).$ Here $\Theta$ signifies an asymptotically tight bound. Yao showed that every nonempty set of nonnegative integers is the score set for some tournament.
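Landau's conditions are easy to check directly (a sketch, not from the article):

```python
def is_score_sequence(s):
    """Landau's theorem: a nondecreasing integer sequence is a tournament
    score sequence iff every prefix sum is at least C(i, 2) and the total
    equals C(n, 2)."""
    s = sorted(s)
    total = 0
    for i, v in enumerate(s, start=1):
        total += v
        if total < i * (i - 1) // 2:
            return False
    n = len(s)
    return total == n * (n - 1) // 2

assert is_score_sequence([1, 1, 1])        # the 3-cycle
assert is_score_sequence([0, 1, 2, 3])     # transitive tournament on 4 vertices
assert not is_score_sequence([0, 0, 2, 4]) # prefix condition fails at i = 2
```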
http://youngscientistjournal.org/youngscientistjournal/article/analysis-of-danger-posed-by-near-earth-asteroid-1994-pn
# Analysis of Danger Posed by Near-Earth Asteroid 1994 PN ## ABSTRACT In space, many near-Earth asteroids are considered "killer asteroids" because of their possibility of colliding with the Earth. This paper details the methods of data collection and analysis, orbit modelling, and model optimization for the near-Earth asteroid (NEA) 1994 PN in order to determine its level of danger to the Earth. Telescopes at the Leitner Family Observatory (LFO) in New Haven, Connecticut; Mayhill, New Mexico; and the Sierra Nevada Mountains, California were operated both locally and remotely to take images and collect astrometric and photometric data on 1994 PN. The data was then filtered through the Method of Gauss to calculate the preliminary orbit of the asteroid. Data was then iterated through a genetic algorithm program to optimize the orbit model to achieve the best fit. Finally, a Rebound long-term integrator extrapolated the optimized orbit model to ascertain the orbit of 1994 PN in the distant future. Our study found that 1994 PN has osculating orbit elements of semi-major axis (a) 2.351 AU, eccentricity vector (e) <0.405, -0.169, -0.315>, inclination angle (i) 45.8°, longitude of the ascending node (Ω) 113.1°, and argument of the perihelion (ω) 234°. These elements indicate a moderately elliptical orbit significantly offset from the Earth's plane of orbit. The final iteration of the model optimization procedure yielded an uncertainty in our results of rmsRA = 0.033° and rmsDec = 0.039°. The simulation shows that there is a high possibility of collision between 1994 PN and Earth. ### INTRODUCTION. #### The Value of Studying Near-Earth Asteroids. Near-Earth asteroids (NEAs) are usually main-belt asteroids that, through gravitational perturbations, experience close encounters with the Earth. Due to their relative proximity, further gravitational disturbances can cause collisions.
Historically, NEAs and other Earth-crossing objects (such as comets) have produced Earth's approximately 120 meteorite impact craters, including the impact at the K/T boundary approximately 65 million years ago which resulted in a mass extinction of 90% of all species [1]. In addition, NEAs could be readily accessed for resource extraction. Metallic asteroids contain precious metals (e.g. gold, platinum, and platinum-group metals) and semiconductors (used in the manufacture of technology products) in concentrations higher than what is available on Earth [6]. NEAs also carry non-metallic materials, such as nitrogen, oxygen, water, clays, hydrated salts, and hydrocarbons. Water, believed to be found on approximately 50% of NEAs, is a particularly useful and readily extracted resource [6]. #### Current Tracking Methods. The first step in collecting data on and determining the orbit of a specific NEA is to be able to reliably track the asteroid through its orbit to effectively direct the telescope for imaging. One ongoing effort to track NEAs is the Near-Earth Asteroid Tracking (NEAT) system, which has operated autonomously at the Maui Space Surveillance Site since December 1995. NEAT has contributed to the data available on more than 1,500 minor planets and detected more than 26,000 main-belt asteroids. In addition, NEAT is an efficient detector of near-Earth asteroids larger than 1 km in diameter. The data NEAT has collected has enhanced the body of knowledge used to evaluate the hazard of near-Earth objects and calculate the target location of an object for future space missions [5]. NASA also has a joint effort with the Marshall Space Flight Center (MSFC) and Jet Propulsion Laboratory (JPL) to create the "NEA Scout" program, a concept that would use inexpensive CubeSats, a type of miniature or nanosatellite created for research use, to encounter NEAs.
The NEAScout program seeks to explore less-known small NEAs (diameters in the 1-100 m range) and provide information for the Asteroid Redirect Mission (ARM), a planetary defense program, as well as identify all asteroid threats for the Asteroid Grand Challenge [7]. Recently, new methods of tracking asteroids have arisen, such as the “synthetic tracking” methodology presented by Shao [7]. Synthetic tracking is a useful tool for documenting small, dim, and quickly moving NEAs. It relies on the use of high speed cameras and data processing by employing a sophisticated “shift and add” technique to synthetically layer images on top of one another to produce a simulated long-exposure shot of the NEA under observation [7]. #### Mechanics of Orbit Determination. Orbits of a celestial object can be determined from its initial heliocentric position vector, r, and its velocity vector, . The force of gravity acts between other celestial objects and the one of interest. For this project, the forces of gravity on the asteroid by objects other than the Sun, Earth, Jupiter, and Mars are negligible. The acceleration due to gravity is determined by Newton’s Gravitational Law: =  . Furthermore, total energy is conserved throughout the asteroid’s orbit, and the asteroid behaves according to Kepler’s Laws of Planetary Motion. The resulting motion follows the Conic Section Law with the Sun at one focus and perturbations by other objects [3]. As the position vector (r) of the asteroid changes, its net acceleration will change, thus affecting its velocity (). Computer integrators with short time steps can accurately approximate the asteroid’s orbit to a reasonable accuracy using these physical laws. To solve for r and , the fundamental vector triangle is considered. 
The distance from the observer to the asteroid is defined as  (the range); the unit vector pointing from the observer to the asteroid is defined as ; the vector pointing from the observer to the sun is defined as R; the vector pointing from the sun to the asteroid is defined as r. (Figure 1). By performing the Method of Gauss around a middle observation to solve for , the preliminary vector orbital elements in ecliptic coordinates (a coordinate system with the plane of earth’s equator as the x-y plane, and earth’s axis of rotation as the z-axis) are determined: the position vector r and the velocity vector . These two vectors are needed for initial orbit determination (defining the asteroid’s orbit by the classical orbital elements; Figure 2). ##### Figure 2. A depiction of the classical orbital elements With an initial orbit, the model was optimized (fitted) to our observed data using the genetic algorithm process.  With various mapping techniques and a standard to evaluate fitness, a genetic algorithm can be used to evolve a solution to match various demands, including optimization of a model [4]. The genetic algorithm program used here generated asteroid candidates in a Gaussian distribution of position and velocity vectors around the initial asteroid defined by r2 and 2from the Method of Gauss. For each of the candidates, coordinate positions Right Ascension (RA; a measure of celestial “longitude”) and Declination (Dec; a measure of celestial “latitude”) were interpolated to all of the observation dates. Each candidate’s set of predicted RA and Dec data was then compared with our observation data (the asteroid’s observed RA and Dec). By calculating the root mean square (rms; a measure of the predicted data’s deviation from the observed data), the best fit candidate with the lowest rms was found within this generation. Its values of r and become the center of the Gaussian distribution for the next generation of asteroid candidates. 
After several generations, the candidate with the minimum rms was found, and its values of r and became the basis for the “optimized” orbit model. ### MATERIAL AND METHODS. #### Data Collection. The local 16-inch telescope at the Leitner Family Observatory in New Haven, CT was used. In order to maximize data collection in a limited amount of time, remote telescopes (T21 and T24 in New Mexico and California, respectively, available on itelescope.net) were also employed. For both the local and New Mexico observations, exposure times of 60 seconds were used because these two telescopes have a smaller light-collecting surface area; four sets of nine images in alternating series of luminance and red filters were taken. The luminance filter images were used for astrometry (the process of calculating the object’s RA and Dec coordinates using reference stars) because the luminance filter allows more light passage than other filters, so the asteroid appears brighter and the asteroid’s centroid (pixel location on the image) more easily measured. The red filter images were used for photometry (the process of calculating the object’s brightness) in order to minimize the signal-to-noise ratio. For the California observations, exposure times of 30 seconds were utilized (the California telescope has a larger collecting lens, and longer exposure times would have saturated the pixels of the CCD camera); the same setup of four sets of nine images in alternating luminance and red filters was used. Eight total collections of images were taken in these three locations throughout the four weeks spanning July 10 and August 7, 2016. #### Astrometry. Images from each evening were first analyzed using Maxim DL (an astronomical image processing tool) in order to locate the asteroid. Each series of nine images from the same filter was first calibrated by subtracting the flat-field, then aligned, and finally combined in groups of three. 
Then, Maxim DL’s “blink” function was used to rapidly switch between the combined images at 0.2 to 0.5 seconds per frame and locate the asteroid. During the blink process, the asteroid would appear as a small Gaussian, moving relative to thebackground stars. The located asteroid was marked with a white ring using Maxim DL’s “annotate” function. Once the asteroid was found and marked, the combined images were uploaded to astrometry.net. This online software matched the reference stars in the images, along with their RA and Dec coordinates, to known coordinates from star catalogs. With these reference coordinates, the astrometry.net program then attached RA and Dec coordinates to each pixel value in the image. A new image was created with overlaid RA and Dec coordinates, and the image and its information were downloaded as a .fits file. Then, DS9 (an astronomical imaging and data visualization application) was used to find the ICRS (International Celestial Reference System; a standardized celestial coordinate system) RA and Dec of the centroid of the asteroid. #### Photometry. Each photograph was further analyzed using DS9’s optical catalog UCAC4 to calculate the asteroid’s apparent magnitude. Arbitrary stars with known red magnitudes (“rmag”; a measure of the apparent brightness of an object’s red wavelengths only, as seen from Earth) were selected as reference stars in the Maxim DL photometry program. Then, the red magnitude of the asteroid in one image was calculated within the Maxim DL program using the reference magnitudes. Several red magnitudes were averaged to produce a single apparent magnitude for our asteroid on a particular night of observation. This process was repeated for each of the remaining series of observations; the resulting apparent magnitudes, as well as the RA-Dec coordinates found in the above section “Astrometry,” comprise the observed data of the asteroid detailed in Supplement Table 1. #### Using Photometry to Find Size. 
From the asteroid’s apparent magnitude, the asteroid’s diameter can be determined. The equation  yields a magnitude “increment”. Adding the ∆M to the apparent magnitude yields the absolute magnitude. Using the calculated absolute magnitude, we can use the following equation to calculate the diameter of the asteroid:  , where D is diameter in km, p is the albedo (guessed due to the inability to measure it directly with the equipment available), and H is the apparent magnitude. #### Using Astrometry for Initial Orbit Determination. The RA and Dec coordinates of the asteroid at particular times was recorded according to the procedure outlined in the “Data Collection” and “Astrometry” sections. To determine the initial orbit (which would later be fitted to observed data using genetic algorithm model optimization techniques), it is necessary to find the topocentric range vector, ρ, the vector from the observer’s location to the asteroid. The most straightforward way to accomplish this is through parallax calculations conducted in significantly distant locations during overlapping times. However, due to weather conditions, we could not acquire simultaneous data. Therefore, the Method of Gauss was employed to find the heliocentric position (r) and velocity () vectors of the asteroid. Three data points were used in Method of Gauss calculations to solve for the position and velocity vectors at the time of the middle observation (T) of the set of three observations. For detailed description of Method of Gauss, please see supplement material. ### RESULTS. To validate our observation data, we compared our calculated (observed) values of RA and Dec and orbital elements on July 23, 2016 (JD 2457592.6545) to those of JPL’s ephemeris for the same time. The comparison showed that our observation data is accurate (Supplement Table 2). 
Thus, we consider our observed values acceptable for calculation and data analysis (within 0.1 to 0.01 of the ephemeris values, except for the measurements of apparent magnitude). The uncertainty in our measurements is the value of rms, which equals 0.0025. For the three sets of observation data required in the method of Gauss calculation, we used our data collected at time JD 2457586.7092, JD 2457592, and JD 2457597 (observations about 5 to 6 days apart from each other). We tried to maximize the time gap between these three observations to maximize the change in the asteroid’s position vector. After performing genetic algorithm model optimization, the optimum heliocentric position vector was found to be: r = <0.234, -1.402, -0.241> And the optimum heliocentric velocity vector was: = <0.592, 0.598, -0.504> With these vectors, we calculated the classical osculating orbital elements for JD 2457592.6773, summarized below in Table 1. a (AU) e (AU) i 2.351 <0.405, -0.169, -0.315> 45.8 113.1 234.4 ##### Table 1. Calculated results of classical osculating orbital elements for JD 2457592.6773. Long-term integration of the asteroid’s orbit using the Rebound computer integrator to predict future orbital paths showed periodically varying distances between 1994 PN and the Earth (Figure 3). However, as shown in Figure 4, the asteroid will come dangerously close to the Earth (on the order of hundredths of an AU) around year 4157. Although step-wise computerized integrators do accumulate some error, our results are justified because of the small (less than 0.01) value of the rms obtained after genetic algorithm model optimization and the accuracy of our calculated and observed data (Supplement Table 2). 
Furthermore, the periodicity of the asteroid’s distance from Earth and the dramatically close encounter in the year 4157 show overall trends in the orbit of 1994 PN—trends which would hold due to their intrinsic natures despite fluctuations in exact motion and imperfect orbital integration. We can expect with little uncertainty (based on the small rms) that the orbit of NEA 1994 PN will bring it close to Earth (less than 1.75 AU) about every 49 years and extremely close to Earth around the year 4147. ### DISCUSSION. #### Potential Damage of Collision. 1994 PN has a calculated diameter of 1.06 to 1.84 km, so it would do considerable damage if it crashed into the Earth. Using the calculator provided by Earth Impact Effect Program run by Purdue University and Imperial College London, we estimated the crater diameter caused by such an impact to be 13.6 km. However, the impact would not have enough momentum to cause a mass extinction or widespread death. #### Explanation of Brightness Fluctuations. In addition to periodic distance-to-Earth fluctuations between 2 AU and 4 AU, the data table of our observations also indicates changing brightness levels (see “AP Mag” column of Supplement Table 1). This occurrence can be most likely explained by asteroid tumbling during orbit. Apparent magnitude measurements that change over time reveal the way the asteroid is rotating (or “tumbling” through space), and thus, the shape of the asteroid. For example, if the apparent magnitude remains relatively constant for long periods of time, then the asteroid might be rotating on an approximate axis of symmetry. 
On the other hand, if the apparent magnitude fluctuates greatly so that the asteroid becomes dramatically brighter or dimmer as time passes, then the asteroid might be oblong-shaped and rotating so that we see the small face of the ovoid illuminated by the Sun (small reflective surface area; dim), and then the long face illuminated by the Sun (large reflective surface area; bright). Therefore, if the asteroid was rotating along the different ovoid faces, the observed apparent magnitude would fluctuate over time. #### Improvements Upon the Study. Given more time, we could capitalize on the changing magnitudes by observing the asteroid with radio waves and constructing a model of its shape. This project could also be extended by analyzing the asteroid’s reflection spectrum to determine its surface composition and economic viability for exploration and resource collection. To further improve upon the study, we would like to try parallax observations again for initial orbit determination. On the evening of July 25, 2016, simultaneous observations were attempted in collaboration with Blasy et. al., during which we would observe remotely from the New Mexico and Blasy et. al. would observe using the local Leitner Family Observatory telescope. However, due to inclement weather which delayed data collection at the local observation station, we were unable to gather images taken at coinciding times; the first image from the local observations was taken five minutes after the last image from the New Mexico observations. #### Implications of Composite Sinusoidal Behavior of Distance to Earth. From our data table of observations (Supplement Table 1), we ran a long-term integration that extends into the next 4,500 years. The integration showed the asteroid’s distance to Earth as an approximate composite sinusoidal function of time over the next several hundred years. 
This vacillating orbit result indicates external gravitational perturbations acting upon the asteroid by other planets. Closer at hand, though, we noted that within the first few hundred years, the distance generally ranges between 0.8 AU to 4 AU. Around the year 2310, however, the predicted distance between Earth and 1994 PN decreases to about 0.3 AU. Afterwards, close encounters (distance to Earth less than 1 AU) are predicted to show up frequently. The closest encounter we predicted was 0.02 AU away from the Earth in the year 4154 (Figure 4). Although an asteroid at 0.02 AU may not collide with Earth immediately, it is situated at a comparably close distance to Earth (in relation to the remainder of its periodic orbital patterns), and could be perturbed by nearby celestial objects outside the scope of this investigation onto a collision course with Earth. The integration program predicted minimum distances that gradually decrease for the first 300 years with each passing cycle of the asteroid’s sinusoidal distance behavior, culminating in the dangerously close encounter in 4147. Thus, it is reasonable to conclude that 1994 PN has a high possibility of colliding with the Earth in the future. Supplements. ### ACKNOWLEDGMENT. We would like to thank our Academic Director, Dr. Michael D. Faison, for mentoring our research this summer at YSPA, and Dr. Mary Loveless for helping us edit this paper. We would also like thank Teaching Fellow Jesse Feddersen for helping us with the Python programming and introducing us to data science techniques and Teaching Assistants Daksha Rajagopalan and Alex Thomas for supporting our research. ### REFERENCES. ##### M. Shao, et al, “Finding Very Small Near-Earth Astroird Using Synthetic Tracking,” The Astrophysical Journal.782 (2014). Posted by on Wednesday, May 24, 2017 in May 2017.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8014377951622009, "perplexity": 1730.0558522749648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703515235.25/warc/CC-MAIN-20210118185230-20210118215230-00067.warc.gz"}
https://physics.stackexchange.com/questions/596485/why-isnt-magnetic-flux-a-dimensionless-quantity
# Why isn't magnetic flux a dimensionless quantity? I know how magnetic flux is mathematically defined, and it clearly has dimensions of kg m^2 A^-1 S^-2, but I've read magnetic flux being described as "number of magnetic field lines passing through an area". Doesn't that imply magnetic flux is a pure number? And if this is incorrect, what would the intuitive definition (as opposed to B.A) for flux be? • it clearly has dimensions of kg A^-1 S^-2 No, it doesn’t. See Wikipedia. – G. Smith Nov 26 '20 at 20:49 • typo. Sorry about that – OVERWOOTCH Nov 26 '20 at 21:22 • number of magnetic field lines passing through an area There is a field line passing through each point. This means that the number of field lines through any finite area is infinite. – G. Smith Nov 26 '20 at 21:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8344690203666687, "perplexity": 560.0836161503091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153521.1/warc/CC-MAIN-20210728025548-20210728055548-00050.warc.gz"}
http://math.stackexchange.com/questions/67444/uniform-continuity-of-c1a-b-and-ca-b
# Uniform continuity of $C^1((a,b))$ and $C((a,b))$? If one considers $C^1([a,b])$, then immediately one has $f$ is uniformly continuous for all $f\in C^1([a,b])$ since $[a,b]$ is compact in ${\mathbb R}$. When it comes to an open interval, things may be different. For example $f(x)=1/x$, $f\in C^1((0,1))$ but $f$ is not uniformly continuous on $(0,1)$. One property of $f$ is that both $f$ and $f'$ are unbounded on $(0,1)$, which, for example, is different from that of $g(x)=\sqrt{x}$. Here comes my first question: • Is the following statement true? For $f\in C^1((a,b))$, $f$ is uniformly continuous if and only if $f$ is bounded on $(a,b)$. It seems that one may give the "only if" part from the answer to this question. For the "if" part, I can't come up with a counterexample. What's more, • if we consider $C((a,b))$, then what would be the relationship between uniform continuity and boundedness of these functions? - No. A counterexample is $x\mapsto \sin\frac{1}{x}$ on $(0,1)$. It is bounded and $C^\infty$ on this interval, but not uniformly continuous. You should be able to prove that $f$ is uniformly continuous on $(a,b)$ if and only if it is continuous and $\lim_{x\to a_+} f(x)$ and $\lim_{x\to b_-} f(x)$ exist (and are finite). with the hypothesis given above I'm tempted to argue like this: Let $\epsilon>0$. Since $f\in C'((a,b))$, there is a bound $M>0$ for $f'$. Take $\delta=\epsilon/M$ then by them MVT: $|f(x)-f(y)|=|f'(c)||x-y|\leq M|x-y|$. But your example clearly shows that the theorem don't holds. What is the mistake in my proof? –  leo Sep 25 '11 at 16:09 @leo: I don't think "there is necessarily a bound $M$ for $f'$" since the interval is not closed. –  Jack Sep 25 '11 at 16:15 @Jack: I understand that "a function $f$ of class $C^1$" as $f'$ exist and is continuous in the domain (here $(a,b)$), then continuity implies boundedness. Am I wrong? 
–  leo Sep 25 '11 at 16:18 @leo, continuity in an open interval does not imply boundedness, as Jack's example $x\mapsto 1/x$ on $(0,1)$ shows. –  Henning Makholm Sep 25 '11 at 16:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9830083250999451, "perplexity": 116.21413155829163}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986444.39/warc/CC-MAIN-20150728002306-00322-ip-10-236-191-2.ec2.internal.warc.gz"}
http://physical-thought.blogspot.com/2008/09/symmetries-conserved-quantities-and.html
## Wednesday, 24 September 2008 ### Symmetries, Conserved Quantities and Noether's Theorem Following on from my posts on Lagrangian and Hamiltonian Mechanics [1, 2, 3, 4], I'd like to discuss one of the most amazing topics in physics. The consideration of the symmetries of physical laws and how those symmetries relate to conserved quantities, the fundamentally beautiful mathematics lying beneath, and the extent to which we can develop theories of the world around us from such simple concepts; these things continually inspire me and provoke my interest. I'll start by showing that the formulation of Lagrangian and Hamiltonian Mechanics, thus far, allows us to determine several conservation laws. Consider, for example, the homogeneity of space. Space is homogeneous if the motion (or time-evolution) of a particle (or system thereof) is independent of absolute position. That is, the potential does not vary with absolute position (it can still vary with the vector distance between two particles, as an interaction potential, for example!) If we make a transformation \mathbf{r}\rightarrow\mathbf{r}+\delta\mathbf{r} , then the Lagrangian will also transform as L \rightarrow L+\delta L . For a single particle, we can Taylor expand as follows: L(\mathbf{r}+\delta\mathbf{r},\mathbf{v}) = L(\mathbf{r},\mathbf{v})+\frac{\partial L}{\partial x}\delta x+\frac{\partial L}{\partial y}\delta y+\frac{\partial L}{\partial z}\delta z Which we can use to write \delta L = \frac{\partial L}{\partial \mathbf{r}}\cdot\delta\mathbf{r} \frac{\partial L}{\partial \mathbf{r}} is a vector quantity; each component is the derivative of L with respect to the corresponding coordinate of r. For a single particle, then, \frac{\partial L}{\partial \mathbf{r}}=\nabla L . Homogeneity of space requires that \delta L = 0 . 
Since \delta \mathbf{r} is arbitrary (and therefore not necessarily zero), we have that \frac{\partial L}{\partial q_i} = 0 ~~~~~~ (\star) This holds only if L does not depend on absolute position, otherwise there would be a contribution \delta L from many of the possible choices of \delta\mathbf{r} . Spatial dependence of e.g. V(x) implies spatial variation of L, and momentum would not be conserved. The Euler-Lagrange Equation applies for each coordinate in the vector r. The sum of these Euler-Lagrange Equations (ELEs) means that (\star) requires that: \begin{multiline*}\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} = 0 \\\Rightarrow p_i = \frac{\partial L}{\partial \dot{q}_i} ~~\mathrm{remains~constant}\end{multiline*} We have, therefore, demonstrated the conservation of momentum as a result of requiring translational invariance. That is, any canonical momenta whose conjugate coordinates do not appear explicitly in the Lagrangian are conserved. Turning once again to time symmetries, let us re-derive the conservation of energy. If the Lagrangian is homogeneous in time, i.e. L(q,\dot{q})~~\mathrm{not}~~L(q,\dot{q},t) , then: \frac{dL}{dt} = \sum_i \frac{\partial L}{\partial q_i}\dot{q}_i + \sum_i\frac{\partial L}{\partial \dot{q}_i}\ddot{q}_i As L does not depend explicitly on time, there is no term \frac{\partial L}{\partial t} on the RHS. 
Sunstituting \frac{\partial L}{\partial q_i} from the ELE, \frac{dL}{dt} = \sum_i\dot{q}_i\frac{d}{dt}\frac{\partial L}{\partial\dot{q}_i}+\sum_i\frac{\partial L}{\partial\dot{q}_i}\ddot{q}_i = \sum_i\frac{d}{dt}\left(\dot{q}_i\frac{\partial L}{\partial\dot{q}_i}\right) \begin{multiline*}\Rightarrow \frac{d}{dt}\sum_i\left( \dot{q}_i\frac{\partial L}{\partial\dot{q}_i}-L \right) = 0 \\\Rightarrow H = \sum_i \dot{q}_i\frac{\partial L}{\partial \dot{q}_i} - L~~~\mathrm{remains~constant} The conservation of energy holds for any motion in a non-time-varying external field V(x) We turn now to the isotropy of space, and show that angular momentum is conserved due to rotational invariance of the Lagrangian. Consider rotation by an angle |\delta\theta| (with a direction given by \delta\theta ) about a vector. For small rotations, \mathbf{r}\rightarrow \mathbf{r}+\delta\mathbf{r} , with \delta\mathbf{r} = \delta\theta\times r . Each component of the velocity is also transformed by this rotation, \delta\mathbf{v}=\delta\theta\times\mathbf{v} . For a single body, we now impose the requirement that the Lagrangian be unchanged under such a rotation (i.e. we require space to be isotropic). 
\delta L = \sum_i\left( \frac{\partial L}{\partial q_i}\cdot\delta r_i + \frac{\partial L}{\partial\dot{q}_i}\cdot\delta v_i \right) = 0 We can replace \frac{\partial L}{\partial v_i} by the vector canonical momentum p_i , and \frac{\partial L}{\partial q_i} by \dot{p}_i , leaving: \begin{multiline*}\left(\dot{\mathbf{p}}\cdot\delta\mathbf{r} + \mathbf{p}\cdot\delta\mathbf{v}\right) = 0 \\\Rightarrow \dot{\mathbf{p}}\cdot\left(\delta\theta\times \mathbf{r}\right) + \mathbf{p}\cdot\left(\delta\theta\times\mathbf{v}\right) = 0\end{multiline*} Since \mathbf{a}\cdot\left(\mathbf{b}\times\mathbf{c}\right) = \mathbf{b}\cdot\left(\mathbf{c}\times\mathbf{a}\right) , \begin{multiline*}\delta\theta\cdot\left(\left[\mathbf{r}\times\dot{\mathbf{p}}\right]+\left[\mathbf{v}\times\mathbf{p}\right]\right)=0\\\Rightarrow\delta\theta\cdot\frac{d}{dt}\left(\mathbf{r}\times\mathbf{p}\right)=0\end{multiline*} Since \delta\theta is arbitrary, this requires that \mathbf{r}\times\mathbf{p} does not change in time, hence angular momentum is a conserved quantity. Hamilton's Equations Using the ideas presented above, I'm going to take a moment to derive Hamilton's Equations, which will prove useful later on. Consider changes in the Lagrangian L, according to dL = \sum_i\frac{\partial L}{\partial\dot{q}_i}\,d\dot{q}_i + \sum_i\frac{\partial L}{\partial q_i}\,dq_i This can be written: dL=\sum_ip_i\,d\dot{q}_i+\sum_i\dot{p}_i\,dq_i since \frac{\partial L}{\partial q_i}=\dot{p}_i and \frac{\partial L}{\partial\dot{q}_i}=p_i . 
Using, \sum_i p_i\,d\dot{q}_i = d\left(\sum_i p_i q_i\right) - \sum_i\dot{q}_i\,dp_i , d\left(\sum_i p_i\dot{q}_i - L\right) = -\sum_i\dot{p}_i\,dq_i+\sum_i\dot{q}_i\,dp_i The argument of the differential on the left is the Hamiltonian, H, H(q,p,t)=\sum_i p_i\dot{q}_i - L , therefore: dH = -\sum_i\dot{p}_i\,dq_i + \sum_i\dot{q}_i\,dp_i From here, we can obtain Hamilton's Equations: \begin{align*}\dot{q}_i &= \frac{\partial H}{\partial p_i}\\\dot{p_i} &= \frac{\partial H}{\partial q_i}\end{align*} For m coordinates (and m momenta), Hamilton's Equations form a system of 2m first-order differential equations, compared to the m second-order equations in the Lagrangian treatment. The total time derivative, \frac{dH}{dt}=\frac{\partial H}{\partial t}+\sum_i\frac{\partial H}{\partial q_i}\dot{q}_i+\sum_i\frac{\partial H}{\partial p_i}\dot{p}_i Substituting Hamilton's equations for \dot{q}_i, \dot{p}_i , the last two terms cancel, so \frac{dH}{dt} = \frac{\partial H}{\partial t} and if H does not depend explicitly on time, \frac{dH}{dt}=0 and energy is conserved! Noether's Theorem The three conserved quantities above were shown to be related to the invariance of the Lagrangian under some symmetry transformation: • Translational invariance (homogeneity of space) ==> Conservation of momentum • Rotational invariance (isotropy of space) ==> Conservation of angular momentum • Time invariance (homogeneity of time) ==> Conservation of energy Noether's Theorem states that any differentiable symmetry of the Action (integral of the Lagrangian) of a physical system has a corresponding conservation law. To every differentiable symmetry generated by local actions, there corresponds a conserved current. `Symmetry' here, refers to the covariance of the form of a physical law with respect to a Lie group of transformations; the conserved quantity is known as a charge and the flow carrying it as a current (c.f. electrodynamics). 
Noether's Theorem, which I will discuss in more detail at a later date, is another critical component used to build gauge theories. The key thing to remember right now is that a symmetry (invariance) of the Lagrangian corresponds to a conserved quantity; we can use this result to look for the underlying symmetry behind quantities we know to be conserved (for example, electric charge).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9905077815055847, "perplexity": 1788.328841604554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210243.28/warc/CC-MAIN-20180815161419-20180815181419-00488.warc.gz"}
http://mathoverflow.net/questions/81881/reference-request-introductions-to-current-mathematics-derived-from-related-t
# Reference request: Introductions to current mathematics derived from / related to gauge theories

I was searching for introductions to current mathematics related to gauge theories. Can someone suggest some good references? E.g. Topics in Physical Mathematics by K. Marathe.

- Since you are asking for "mathematics related to gauge theories" I am assuming that you mean "gauge theory" in the context of physics. In mathematics, the term "gauge theory" is a well-defined subfield of differential geometry: the study of connections on vector bundles, usually with special properties, and their associated moduli spaces. This is a huge area of current mathematical research. Some of it is physics inspired, but certainly not all of it. – Spiro Karigiannis Nov 25 '11 at 14:18

If I'm not mistaken, "gauge field" in physics is equivalent to "connections on vector bundles" in maths.

- In the context of your comment, I mean 'gauge theory' in the context of maths. – Sadiq Ahmed Nov 25 '11 at 14:28
- (i.e. current pure mathematics inspired by / from "gauge (field) theory" in physics) – Sadiq Ahmed Nov 25 '11 at 14:30
- This question is awfully broad. Are you looking for introductions to areas like Donaldson theory or Seiberg-Witten theory? – S. Carnahan Nov 26 '11 at 3:48
http://math.stackexchange.com/questions/215321/how-small-can-a-mathcalc-1-subgroup-of-psl-2q-containing-elements-of-ce
# How small can a $\mathcal{C}_1$ subgroup of $PSL_2(q)$ containing elements of certain prime orders be?

Let $q=p^f$ and let $r$ be a primitive prime divisor of $p^f-1$, i.e., $r\mid p^f-1$ but $r\nmid p^j-1$ for $j<f$. Let $G=Z_p^f:Z_\frac{q-1}{\gcd(2,q-1)}$ be the parabolic subgroup of $PSL_2(q)$, i.e., the subgroup stabilizing a $1$-dimensional subspace of $\mathbb{F}_q^2$. A subgroup of $G$ is called a $\mathcal{C}_1$ subgroup of $PSL_2(q)$ (the terminology comes from Aschbacher's description of the subgroups of classical groups). If a $\mathcal{C}_1$ subgroup $H$ of $PSL_2(q)$ contains elements of orders $p$ and $r$ (or, equivalently, $pr$ divides $|H|$), then how small can $H$ be? Does $H$ need to contain $Z_p^f:Z_r$?

- Your assumption that $r$ is a primitive prime divisor of $p^f-1$ ensures that $Z_p^f$ is irreducible as a $Z_rF$-module (where $F$ is the field of order $p$). So, if $pr$ divides $|H|$, then the intersection of $H$ with $Z_p^f$ is nontrivial and hence, by irreducibility of the action, must be the whole of $Z_p^f$. So yes, the smallest such $H$ is $Z_p^f:Z_r$. – Derek Holt Oct 17 '12 at 12:38
- Thanks! That's exactly the proof I'm looking for. – Binzhou Xia Oct 17 '12 at 15:34
https://kfa-juelich.de/SharedDocs/Termine/PGI/PGI-1/EN/Eigene/2020/2020-11-25Mook.html
# PGI-1 Talk: Dr. Alexander Mook (Univ. of Basel, Switzerland)

## Chern Insulator of Magnons: Skyrmions, Interactions, and Second-Order Topology

Begin: 25 Nov 2020, 11:30
Venue: BigBlueButton

Abstract: Condensed matter systems admit topological collective excitations above a trivial ground state, an example being Chern insulators formed by Dirac bosons with a gap at finite energies. However, in contrast to electrons, there is no particle-number conservation law for collective excitations. This gives rise to particle number-nonconserving many-body interactions whose influence on single-particle topology is an open issue of fundamental interest in the field of topological quantum materials. Herein, I concentrate on magnons, which are the elementary spin excitations of ferromagnets. A ferromagnet with Chern-insulating behavior of magnons exhibits a magnonic spectral gap hosting topologically protected chiral edge modes that unidirectionally revolve the sample. Since these chiral edge magnons may serve as directed information highways in next-generation technologies, a fundamental understanding of their formation and stability is at the very core of the topological magnonics paradigm. I present topological magnons in three different setups: (i) skyrmion crystals, (ii) saturated chiral magnets, and (iii) stacks of honeycomb-lattice van der Waals magnets. These setups respectively serve as platforms to study (i) quantum damping due to spontaneous quasiparticle decay, (ii) interaction-stabilized topological gaps in the magnon spectrum, and (iii) second-order topology in three-dimensional samples that admit chiral states along their hinges, where facets intersect.

## Contact

Prof. Dr. Stefan Blügel
Phone: +49 2461 61-4249
Fax: +49 2461 61-2850
Email: [email protected]
https://www.physicsforums.com/threads/positron-helix-magnetic-field-find-pitch-and-radius.540323/
# Homework Help: Positron Helix magnetic field, find pitch and radius

1. Oct 14, 2011

### JosephK

1. The problem statement, all variables and given/known data

A uniform magnetic field of magnitude 0.137 T is directed along the positive x axis. A positron moving at a speed of 5.40 × 10^6 m/s enters the field along a direction that makes an angle of θ = 85.0° with the x axis (see figure below). The motion of the particle is expected to be a helix.

(a) Calculate the pitch p of the trajectory as defined in the figure.
(b) Calculate the radius r of the trajectory as defined in the figure.

2. Relevant equations

vx = sqrt(vy^2 + vz^2)
R = (m vx) / qB
T = (2 pi r) / vx

3. The attempt at a solution

To find vx, multiply the velocity vector by sin(5 degrees). Plugging these values into the radius formula gives an answer that is significantly wrong. To find the pitch, p = (vx)T = 2 pi r.

2. Oct 15, 2011

### JosephK

I understand this problem now. Since the magnetic field is directed in the x direction, the magnetic force has no component in the x direction. Thus, the acceleration in the x direction is zero, and so the velocity in the x direction is constant. We obtain the velocity in the x direction by multiplying the speed by cos 85°.

We now find the period. The period is the circumference of the circle divided by the perpendicular speed of the particle. Replacing that speed using v = qBr/m, the period is equal to T = (2 pi m) / qB. Consequently, the pitch p is equal to the velocity in the x direction times T.

Now we solve part B. The component of the velocity perpendicular to the field (in the y-z plane) has magnitude equal to the speed times sin 85°. Then, by the equation r = (m v_perp) / qB, we find the radius.
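The recipe in the second post can be checked numerically. This sketch is not part of the thread; standard values for the positron mass and the elementary charge are assumed.

```python
import math

# Pitch and radius of a positron's helical path in a uniform field B
# along x, entering at angle theta to the x axis.
m = 9.109e-31      # positron mass, kg (same as the electron's)
q = 1.602e-19      # elementary charge, C
B = 0.137          # field magnitude, T
v = 5.40e6         # speed, m/s
theta = math.radians(85.0)

v_par = v * math.cos(theta)     # along B: unaffected by the magnetic force
v_perp = v * math.sin(theta)    # in the y-z plane: uniform circular motion

T = 2 * math.pi * m / (q * B)   # cyclotron period, independent of speed
pitch = v_par * T               # advance along x per revolution
radius = m * v_perp / (q * B)

print(pitch, radius)   # ~1.23e-4 m and ~2.23e-4 m
```

So the trajectory is a tight helix: a radius of about 0.22 mm advancing about 0.12 mm along the field per turn.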
http://reliawiki.org/index.php/Randomization_and_Blocking_in_DOE
# Chapter 7: Randomization and Blocking in DOE

# Randomization

The aspect of recording observations in an experiment in a random order is referred to as randomization. Specifically, randomization is the process of assigning the various levels of the investigated factors to the experimental units in a random fashion. An experiment is said to be completely randomized if the probability of an experimental unit being subjected to any level of a factor is equal for all the experimental units.

The importance of randomization can be illustrated with an example. Consider an experiment where the effect of the speed of a lathe machine on the surface finish of a product is being investigated. In order to save time, the experimenter records surface finish values by running the lathe machine continuously and recording observations in order of increasing speed. The analysis of the experiment data shows that an increase in lathe speed causes a decrease in the quality of the surface finish. However, the results of the experiment are disputed by the lathe operator, who claims that he has been able to obtain better surface finish quality by operating the lathe machine at higher speeds. It is later found that the faulty results were caused by overheating of the tool used in the machine. Since the lathe was run continuously in order of increasing speed, the observations were recorded in order of increasing tool temperature. This problem could have been avoided if the experimenter had randomized the experiment and taken readings at the various lathe speeds in a random fashion. This would require the experimenter to stop and restart the machine at every observation, thereby keeping the temperature of the tool within a reasonable range.
Randomization would have ensured that the effect of heating of the machine tool is not included in the experiment. # Blocking Many times a factorial experiment requires so many runs that not all of them can be completed under homogeneous conditions. This may lead to inclusion of the effects of nuisance factors into the investigation. Nuisance factors are factors that have an effect on the response but are not of primary interest to the investigator. For example, two replicates of a two factor factorial experiment require eight runs. If four runs require the duration of one day to be completed, then the total experiment will require two days to be completed. The difference in the conditions on the two days may introduce effects on the response that are not the result of the two factors being investigated. Therefore, the day is a nuisance factor for this experiment. Nuisance factors can be accounted for using blocking. In blocking, experimental runs are separated based on levels of the nuisance factor. For the case of the two factor factorial experiment (where the day is a nuisance factor), separation can be made into two groups or blocks: runs that are carried out on the first day belong to block 1, and runs that are carried out on the second day belong to block 2. Thus, within each block conditions are the same with respect to the nuisance factor. As a result, each block investigates the effects of the factors of interest, while the difference in the blocks measures the effect of the nuisance factor. For the example of the two factor factorial experiment, a possible assignment of runs to the blocks could be as follows: one replicate of the experiment is assigned to block 1 and the second replicate is assigned to block 2 (now each block contains all possible treatment combinations). Within each block, runs are subjected to randomization (i.e., randomization is now restricted to the runs within a block). 
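A randomized complete block design like the one just described can be sketched in a few lines of code. This illustration is not part of the original text: each block (day) receives one full replicate of the 3 × 2 treatment combinations, and the run order is shuffled only within the block.

```python
import random

# All treatment combinations: three speed levels x two additive types.
factors = [(speed, additive) for speed in (1, 2, 3) for additive in ("A", "B")]

random.seed(42)  # seed chosen only to make the illustration reproducible
design = {}
for block in ("day 1", "day 2"):
    runs = factors[:]       # each block contains every treatment once
    random.shuffle(runs)    # randomization restricted to the block
    design[block] = runs

for block, runs in design.items():
    print(block, runs)
```

Within each block the nuisance factor (the day) is held constant, so the shuffle only counters whatever unknown variability remains inside a day.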
Such a design, where each block contains one complete replicate and the treatments within a block are subjected to randomization, is called randomized complete block design. In summary, blocking should always be used to account for the effects of nuisance factors if it is not possible to hold the nuisance factor at a constant level through all of the experimental runs. Randomization should be used within each block to counter the effects of any unknown variability that may still be present. #### Example Consider the example discussed in General Full Factorial Design where the mileage of a sports utility vehicle was investigated for the effects of speed and fuel additive type. Now assume that the three replicates for this experiment were carried out on three different vehicles. To ensure that the variation from one vehicle to another does not have an effect on the analysis, each vehicle is considered as one block. See the experiment design in the following figure. For the purpose of the analysis, the block is considered as a main effect except that it is assumed that interactions between the block and the other main effects do not exist. Therefore, there is one block main effect (having three levels - block 1, block 2 and block 3), two main effects (speed -having three levels; and fuel additive type - having two levels) and one interaction effect (speed-fuel additive interaction) for this experiment. Let ${{\zeta }_{i}}\,\!$ represent the block effects. The hypothesis test on the block main effect checks if there is a significant variation from one vehicle to the other. The statements for the hypothesis test are: \begin{align} & {{H}_{0}}: & {{\zeta }_{1}}={{\zeta }_{2}}={{\zeta }_{3}}=0\text{ (no main effect of block)} \\ & {{H}_{1}}: & {{\zeta }_{i}}\ne 0\text{ for at least one }i \end{align}\,\! 
The test statistic for this test is: ${{F}_{0}}=\frac{M{{S}_{Block}}}{M{{S}_{E}}}\,\!$ where $M{{S}_{Block}}\,\!$ represents the mean square for the block main effect and $M{{S}_{E}}\,\!$ is the error mean square. The hypothesis statements and test statistics to test the significance of factors $A\,\!$ (speed), $B\,\!$ (fuel additive) and the interaction $AB\,\!$ (speed-fuel additive interaction) can be obtained as explained in the example. The ANOVA model for this example can be written as: ${{Y}_{ijk}}=\mu +{{\zeta }_{i}}+{{\tau }_{j}}+{{\delta }_{k}}+{{(\tau \delta )}_{jk}}+{{\epsilon }_{ijk}}\,\!$ where: • $\mu \,\!$ represents the overall mean effect • ${{\zeta }_{i}}\,\!$ is the effect of the $i\,\!$th level of the block ($i=1,2,3\,\!$) • ${{\tau }_{j}}\,\!$ is the effect of the $j\,\!$th level of factor $A\,\!$ ($j=1,2,3\,\!$) • ${{\delta }_{k}}\,\!$ is the effect of the $k\,\!$th level of factor $B\,\!$ ($k=1,2\,\!$) • ${{(\tau \delta )}_{jk}}\,\!$ represents the interaction effect between $A\,\!$ and $B\,\!$ • and ${{\epsilon }_{ijk}}\,\!$ represents the random error terms (which are assumed to be normally distributed with a mean of zero and variance of ${{\sigma }^{2}}\,\!$) In order to calculate the test statistics, it is convenient to express the ANOVA model of the equation given above in the form $y=X\beta +\epsilon \,\!$. This can be done as explained next. #### Expression of the ANOVA Model as y = ΧΒ + ε Since the effects ${{\zeta }_{i}}\,\!$, ${{\tau }_{j}}\,\!$, ${{\delta }_{k}}\,\!$, and ${{(\tau \delta )}_{jk}}\,\!$ are defined as deviations from the overall mean, the following constraints exist. Constraints on ${{\zeta }_{i}}\,\!$ are: \begin{align} & \underset{i=1}{\overset{3}{\mathop \sum }}\,{{\zeta }_{i}}= & 0 \\ & \text{or }{{\zeta }_{1}}+{{\zeta }_{2}}+{{\zeta }_{3}}= & 0 \end{align}\,\! Therefore, only two of the ${{\zeta }_{i}}\,\!$ effects are independent. 
Assuming that ${{\zeta }_{1}}\,\!$ and ${{\zeta }_{2}}\,\!$ are independent, ${{\zeta }_{3}}=-({{\zeta }_{1}}+{{\zeta }_{2}})\,\!$. (The null hypothesis to test the significance of the blocks can be rewritten using only the independent effects as ${{H}_{0}}:{{\zeta }_{1}}={{\zeta }_{2}}=0\,\!$.) In DOE folios, the independent block effects, ${{\zeta }_{1}}\,\!$ and ${{\zeta }_{2}}\,\!$, are displayed as Block[1] and Block[2], respectively. Constraints on ${{\tau }_{j}}\,\!$ are: \begin{align} & \underset{j=1}{\overset{3}{\mathop \sum }}\,{{\tau }_{j}}= & 0 \\ & \text{or }{{\tau }_{1}}+{{\tau }_{2}}+{{\tau }_{3}}= & 0 \end{align}\,\! Therefore, only two of the ${{\tau }_{j}}\,\!$ effects are independent. Assuming that ${{\tau }_{1}}\,\!$ and ${{\tau }_{2}}\,\!$ are independent, ${{\tau }_{3}}=-({{\tau }_{1}}+{{\tau }_{2}})\,\!$. The independent effects, ${{\tau }_{1}}\,\!$ and ${{\tau }_{2}}\,\!$, are displayed as A[1] and A[2], respectively. Constraints on ${{\delta }_{k}}\,\!$ are: \begin{align} & \underset{k=1}{\overset{2}{\mathop \sum }}\,{{\delta }_{k}}= & 0 \\ & \text{or }{{\delta }_{1}}+{{\delta }_{2}}= & 0 \end{align}\,\! Therefore, only one of the ${{\delta }_{k}}\,\!$ effects is independent. Assuming that ${{\delta }_{1}}\,\!$ is independent, ${{\delta }_{2}}=-{{\delta }_{1}}\,\!$. The independent effect, ${{\delta }_{1}}\,\!$, is displayed as B:B. Constraints on ${{(\tau \delta )}_{jk}}\,\!$ are: \begin{align} & \underset{j=1}{\overset{3}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= & 0 \\ & \text{and }\underset{k=1}{\overset{2}{\mathop \sum }}\,{{(\tau \delta )}_{jk}}= & 0 \\ & \text{or }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{21}}+{{(\tau \delta )}_{31}}= & 0 \\ & {{(\tau \delta )}_{12}}+{{(\tau \delta )}_{22}}+{{(\tau \delta )}_{32}}= & 0 \\ & \text{and }{{(\tau \delta )}_{11}}+{{(\tau \delta )}_{12}}= & 0 \\ & {{(\tau \delta )}_{21}}+{{(\tau \delta )}_{22}}= & 0 \\ & {{(\tau \delta )}_{31}}+{{(\tau \delta )}_{32}}= & 0 \end{align}\,\! 
The last five equations given above represent four constraints as only four of the five equations are independent. Therefore, only two out of the six ${{(\tau \delta )}_{jk}}\,\!$ effects are independent. Assuming that ${{(\tau \delta )}_{11}}\,\!$ and ${{(\tau \delta )}_{21}}\,\!$ are independent, we can express the other four effects in terms of these effects. The independent effects, ${{(\tau \delta )}_{11}}\,\!$ and ${{(\tau \delta )}_{21}}\,\!$, are displayed as A[1]B and A[2]B, respectively. The regression version of the ANOVA model can be obtained using indicator variables. Since the block has three levels, two indicator variables, ${{x}_{1}}\,\!$ and ${{x}_{2}}\,\!$, are required, which need to be coded as shown next: \begin{align} & \text{Block 1}: & {{x}_{1}}=1,\text{ }{{x}_{2}}=0\text{ } \\ & \text{Block 2}: & {{x}_{1}}=0,\text{ }{{x}_{2}}=1\text{ } \\ & \text{Block 3}: & {{x}_{1}}=-1,\text{ }{{x}_{2}}=-1\text{ } \end{align}\,\! Factor $A\,\!$ has three levels and two indicator variables, ${{x}_{3}}\,\!$ and ${{x}_{4}}\,\!$, are required: \begin{align} & \text{Treatment Effect }{{\tau }_{1}}: & {{x}_{3}}=1,\text{ }{{x}_{4}}=0 \\ & \text{Treatment Effect }{{\tau }_{2}}: & {{x}_{3}}=0,\text{ }{{x}_{4}}=1\text{ } \\ & \text{Treatment Effect }{{\tau }_{3}}: & {{x}_{3}}=-1,\text{ }{{x}_{4}}=-1\text{ } \end{align}\,\! Factor $B\,\!$ has two levels and can be represented using one indicator variable, ${{x}_{5}}\,\!$, as follows: \begin{align} & \text{Treatment Effect }{{\delta }_{1}}: & {{x}_{5}}=1 \\ & \text{Treatment Effect }{{\delta }_{2}}: & {{x}_{5}}=-1 \end{align}\,\! The $AB\,\!$ interaction will be represented by ${{x}_{3}}{{x}_{5}}\,\!$ and ${{x}_{4}}{{x}_{5}}\,\!$. 
The regression version of the ANOVA model can finally be obtained as: $Y=\mu +{{\zeta }_{1}}\cdot {{x}_{1}}+{{\zeta }_{2}}\cdot {{x}_{2}}+{{\tau }_{1}}\cdot {{x}_{3}}+{{\tau }_{2}}\cdot {{x}_{4}}+{{\delta }_{1}}\cdot {{x}_{5}}+{{(\tau \delta )}_{11}}\cdot {{x}_{3}}{{x}_{5}}+{{(\tau \delta )}_{21}}\cdot {{x}_{4}}{{x}_{5}}+\epsilon \,\!$ In matrix notation this model can be expressed as: $y=X\beta +\epsilon \,\!$ or: $\left[ \begin{matrix} 17.3 \\ 18.9 \\ 17.1 \\ 18.7 \\ 19.1 \\ 18.8 \\ 17.8 \\ 18.2 \\ . \\ . \\ 18.3 \\ \end{matrix} \right]=\left[ \begin{matrix} 1 & 1 & 0 & 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\ 1 & 1 & 0 & -1 & -1 & 1 & -1 & -1 \\ 1 & 1 & 0 & 1 & 0 & -1 & -1 & 0 \\ 1 & 1 & 0 & 0 & 1 & -1 & 0 & -1 \\ 1 & 1 & 0 & -1 & -1 & -1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 \\ . & . & . & . & . & . & . & . \\ . & . & . & . & . & . & . & . \\ 1 & -1 & -1 & -1 & -1 & -1 & 1 & 1 \\ \end{matrix} \right]\left[ \begin{matrix} \mu \\ {{\zeta }_{1}} \\ {{\zeta }_{2}} \\ {{\tau }_{1}} \\ {{\tau }_{2}} \\ {{\delta }_{1}} \\ {{(\tau \delta )}_{11}} \\ {{(\tau \delta )}_{21}} \\ \end{matrix} \right]+\left[ \begin{matrix} {{\epsilon }_{111}} \\ {{\epsilon }_{121}} \\ {{\epsilon }_{131}} \\ {{\epsilon }_{112}} \\ {{\epsilon }_{122}} \\ {{\epsilon }_{132}} \\ {{\epsilon }_{211}} \\ {{\epsilon }_{221}} \\ . \\ . \\ {{\epsilon }_{332}} \\ \end{matrix} \right]\,\!$ Knowing $y\,\!$, $X\,\!$ and $\beta \,\!$, the sum of squares for the ANOVA model and the extra sum of squares for each of the factors can be calculated. These are used to calculate the mean squares that are used to obtain the test statistics. #### Calculation of the Sum of Squares for the Model The model sum of squares, $S{{S}_{TR}}\,\!$, for the ANOVA model of this example can be obtained as: \begin{align} & S{{S}_{TR}}= & {{y}^{\prime }}[H-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ & = & {{y}^{\prime }}[H-(\frac{1}{18})J]y \\ & = & 9.9256 \end{align}\,\! 
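The indicator coding above can be sketched in code. This is an illustration, not part of the original article: it builds the rows of the $X\,\!$ matrix just shown from the sum-to-zero indicator variables for the block (3 levels), factor $A\,\!$ (3 levels), factor $B\,\!$ (2 levels) and the $AB\,\!$ interaction.

```python
def code3(level):
    # three-level factor -> two indicator columns (sum-to-zero coding)
    return {1: (1, 0), 2: (0, 1), 3: (-1, -1)}[level]

def code2(level):
    # two-level factor -> one indicator column
    return {1: (1,), 2: (-1,)}[level]

def row(block, a, b):
    x1, x2 = code3(block)
    x3, x4 = code3(a)
    (x5,) = code2(b)
    # intercept, block, A, B, and the interaction columns x3*x5, x4*x5
    return (1, x1, x2, x3, x4, x5, x3 * x5, x4 * x5)

# 3 blocks x 2 additive levels x 3 speed levels = 18 runs, in the same
# order as the X matrix displayed above.
X = [row(blk, a, b) for blk in (1, 2, 3) for b in (1, 2) for a in (1, 2, 3)]
print(X[0])  # (1, 1, 0, 1, 0, 1, 1, 0) -- the first row of X above
```

Because the coding sums to zero over the levels of each factor, every non-intercept column of $X\,\!$ sums to zero, which is exactly the constraint structure derived above.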
Since seven effect terms (${{\zeta }_{1}}\,\!$, ${{\zeta }_{2}}\,\!$, ${{\tau }_{1}}\,\!$, ${{\tau }_{2}}\,\!$, ${{\delta }_{1}}\,\!$, ${{(\tau \delta )}_{11}}\,\!$ and ${{(\tau \delta )}_{21}}\,\!$) are used in the model, the number of degrees of freedom associated with $S{{S}_{TR}}\,\!$ is seven ($dof(S{{S}_{TR}})=7\,\!$). The total sum of squares can be calculated as: \begin{align} & S{{S}_{T}}= & {{y}^{\prime }}[I-(\frac{1}{{{n}_{a}}\cdot {{n}_{b}}\cdot m})J]y \\ & = & {{y}^{\prime }}[I-(\frac{1}{18})J]y \\ & = & 10.7178 \end{align}\,\! Since there are 18 observed response values, the number of degrees of freedom associated with the total sum of squares is 17 ($dof(S{{S}_{T}})=17\,\!$). The error sum of squares can now be obtained: \begin{align} S{{S}_{E}}= & S{{S}_{T}}-S{{S}_{TR}} \\ = & 10.7178-9.9256 \\ = & 0.7922 \end{align}\,\! The number of degrees of freedom associated with the error sum of squares is: \begin{align} dof(S{{S}_{E}})= & dof(S{{S}_{T}})-dof(S{{S}_{TR}}) \\ = & 17-7 \\ = & 10 \end{align}\,\! Since there are no true replicates of the treatments (as can be seen from the design of the previous figure, where all of the treatments are seen to be run just once), all of the error sum of squares is the sum of squares due to lack of fit. The lack of fit arises because the model used is not a full model, since it is assumed that there are no interactions between blocks and the other effects. #### Calculation of the Extra Sum of Squares for the Factors The sequential sum of squares for the blocks can be calculated as: \begin{align} S{{S}_{Block}}= & S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}})-S{{S}_{TR}}(\mu ) \\ = & {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0 \end{align}\,\! 
where $J\,\!$ is the matrix of ones, ${{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!$ is the hat matrix, which is calculated using ${{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}={{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}{{(X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }{{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}})}^{-1}}X_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}^{\prime }\,\!$, and ${{X}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}\,\!$ is the matrix containing only the first three columns of the $X\,\!$ matrix. Thus \begin{align} S{{S}_{Block}}= & {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y-0 \\ = & 0.1944-0 \\ = & 0.1944 \end{align}\,\! Since there are two independent block effects, ${{\zeta }_{1}}\,\!$ and ${{\zeta }_{2}}\,\!$, the number of degrees of freedom associated with $S{{S}_{Blocks}}\,\!$ is two ($dof(S{{S}_{Blocks}})=2\,\!$). Similarly, the sequential sum of squares for factor $A\,\!$ can be calculated as: \begin{align} S{{S}_{A}}= & S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}})-S{{S}_{TR}}(\mu ,{{\zeta }_{1}},{{\zeta }_{2}}) \\ = & {{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}},{{\tau }_{1}},{{\tau }_{2}}}}-(\frac{1}{18})J]y-{{y}^{\prime }}[{{H}_{\mu ,{{\zeta }_{1}},{{\zeta }_{2}}}}-(\frac{1}{18})J]y \\ = & 4.7756-0.1944 \\ = & 4.5812 \end{align}\,\! The sequential sums of squares for the other effects are obtained as $S{{S}_{B}}=4.9089\,\!$ and $S{{S}_{AB}}=0.2411\,\!$. #### Calculation of the Test Statistics Knowing the sums of squares, the test statistics for each of the factors can be calculated. For example, the test statistic for the main effect of the blocks is: \begin{align} {{({{f}_{0}})}_{Block}}= & \frac{M{{S}_{Block}}}{M{{S}_{E}}} \\ = & \frac{S{{S}_{Block}}/dof(S{{S}_{Blocks}})}{S{{S}_{E}}/dof(S{{S}_{E}})} \\ = & \frac{0.1944/2}{0.7922/10} \\ = & 1.227 \end{align}\,\! 
The $p\,\!$ value corresponding to this statistic based on the $F\,\!$ distribution with 2 degrees of freedom in the numerator and 10 degrees of freedom in the denominator is: \begin{align} p\text{ }value= & 1-P(F\le {{({{f}_{0}})}_{Block}}) \\ = & 1-0.6663 \\ = & 0.3337 \end{align}\,\! Assuming that the desired significance level is 0.1, since $p\,\!$ value > 0.1, we fail to reject ${{H}_{0}}:{{\zeta }_{i}}=0\,\!$ and conclude that there is no significant variation in the mileage from one vehicle to the other. Statistics to test the significance of other factors can be calculated in a similar manner. The complete analysis results obtained from the DOE folio for this experiment are presented in the following figure.
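The arithmetic of the block test can be reproduced without a statistics package. This sketch is not part of the original article; it uses the fact that for 2 numerator degrees of freedom the F survival function has the closed form $P(F>x)=(1+2x/d_2)^{-d_2/2}$.

```python
# Block F-test for the example above: SS and dof taken from the text.
SS_block, df_block = 0.1944, 2
SS_E, df_E = 0.7922, 10

MS_block = SS_block / df_block
MS_E = SS_E / df_E
f0 = MS_block / MS_E                          # ~1.227

# Survival function of F(2, df_E): closed form, no scipy needed.
p_value = (1 + 2 * f0 / df_E) ** (-df_E / 2)  # ~0.3337
print(f0, p_value)
```

Since the p value exceeds the 0.1 significance level, the computation reproduces the article's conclusion that the block (vehicle) effect is not significant.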
http://kitchingroup.cheme.cmu.edu/blog/2014/02/08/Separating-code-blocks-from-results-in-org-mode/
## Separating code blocks from results in org-mode

| categories: org-mode | tags: |

I often put my code blocks right where I need them in my org documents. There is usually a section explaining what I want to do, then the code block that implements the idea, followed by the output. Sometimes the code blocks are long, however, and it might be desirable for that code to be in an appendix. 1

Org-mode enables this with #+CALL. For example, I have a function named circle-area in the appendix of this post that calculates the area of a circle given its radius. The function is "named" by a line like this:

#+name: function-name

I can use the function like this:

#+CALL: circle-area(1)

3.14159265359

That is pretty nice. You can separate the code out from the main document. You still have to put the #+CALL: line in, though.

It may be appropriate to put a call inline with your text. If you add the following sentence, put your cursor on the call_circle-area snippet and press C-c C-c, the output is put in verbatim markers right after it.

The area of a circle with unit radius is call_circle-area(1).

The area of a circle with unit radius is 3.14159265359.

Here is another interesting way to do it. We can specify a named results block. Let us consider another function named hello-block that prints output. We specify a named results block like this:

#+RESULTS: function-name

Now, whenever you execute that block, the results will get put where this line is, like this:

hello John

These could be useful approaches to making the "top" of your document cleaner, with less code in it. The code of course is still in the document, but at the end, in an appendix for example. This kind of separation might make it a little harder to find the code, and to reevaluate it,2 but it might improve the readability for others.
## 1 Appendix of code

### 1.1 Area of a circle

#+name: circle-area
#+BEGIN_SRC python :var r=1
import numpy as np
return np.pi * r**2
#+END_SRC

### 1.2 Hello function

#+name: hello-block
#+BEGIN_SRC python :var name="John" :results output
print 'hello ' + name
#+END_SRC

## Footnotes:

1 I know I can pretty conveniently collapse a code block by pressing tab on the header. Sometimes that is not enough.

2 It is not much harder; C-s will let you search for the named block. I do not know if there are nice convenient navigation commands for this.

org-mode source

Org-mode version = 8.2.5h
https://infoscience.epfl.ch/record/89532
## A General Characterization of Indulgence (Invited Paper)

An indulgent algorithm is a distributed algorithm that, besides tolerating process failures, also tolerates arbitrarily long periods of instability, with an unbounded number of timing and scheduling failures. In particular, no process can take any irrevocable action based on the operational status, correct or failed, of other processes. This paper presents an intuitive and general characterization of indulgence. The characterization can be viewed as a simple application of Murphy's law to partial runs of a distributed algorithm, in a computing model that encompasses various communication and resilience schemes. We use our characterization to establish several results about the inherent power and limitations of indulgent algorithms.

Published in: Proceedings of the Eighth International Symposium on Stabilization, Safety, and Security of Distributed Systems

Presented at: Eighth International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2006), Dallas, Texas, USA, November 17th-19th, 2006

Year: 2006
https://chem.libretexts.org/Courses/Brevard_College/CHE_202%3A_Organic_Chemistry_II/06%3A_Structural_Determination_II/6.01%3A_Nuclear_Magnetic_Resonance__Spectroscopy
# 6.1: Nuclear Magnetic Resonance Spectroscopy

Objectives

After completing this section, you should be able to

1. discuss the principles of NMR spectroscopy.
2. identify the two magnetic nuclei that are most important to an organic chemist.

Key Terms

Make certain that you can define, and use in context, the key term below.

• resonance

Study Notes

Notice that the word “resonance” has a different meaning when we are discussing nuclear magnetic resonance spectroscopy than it does when discussing molecular structures.

## Introduction

Some types of atomic nuclei act as though they spin on their axes, similar to the Earth. Since they are positively charged, they generate an electromagnetic field, just as the Earth does. So, in effect, they act as tiny bar magnets. Not all nuclei act this way, but fortunately both 1H and 13C have nuclear spins and will respond to this technique.

[Figure: NMR spectrometer]

In the absence of an external magnetic field, the directions of the nuclear spins are randomly oriented (see figure below, left). However, when a sample of these nuclei is placed in an external magnetic field, the nuclear spins adopt specific orientations, much as a compass needle responds to the Earth’s magnetic field and aligns with it. Two orientations are possible: with the external field (i.e., parallel to and in the same direction as the external field) or against the field (i.e., antiparallel to the external field). Nuclei aligned with the magnetic field have a slightly lower energy than those aligned against the magnetic field. The difference in energy (ΔE) depends on the intensity of the applied magnetic field B0 (see figure below, right).

Figure 1: (Left) Random nuclear spin without an external magnetic field.
(Right) Ordered nuclear spin in an external magnetic field.

If the ordered nuclei are now subjected to EM radiation of the proper frequency, the nuclei aligned with the field will absorb energy and "spin-flip" to align themselves against the field, a higher energy state. When this spin-flip occurs, the nuclei are said to be in "resonance" with the field, hence the name of the technique: Nuclear Magnetic Resonance, or NMR.

The amount of energy, and hence the exact frequency of EM radiation required for resonance to occur, depends on both the strength of the applied magnetic field and the type of nucleus being studied, but it is always located in the radio-wave region of the electromagnetic spectrum. As the strength of the magnetic field increases, the energy difference between the two spin states increases, and higher-frequency (more energetic) EM radiation must be applied to achieve a spin-flip (see image below).

Superconducting magnets can be used to produce very strong magnetic fields, on the order of 21 tesla (T). Lower field strengths, in the range of 4–7 T, can also be used. At these levels the energy required to bring the nuclei into resonance is in the MHz range and corresponds to radio wavelengths; i.e., at a field strength of 4.7 T, 200 MHz brings 1H nuclei into resonance and 50 MHz brings 13C into resonance. This is considerably less energy than is required for IR spectroscopy: ~10⁻⁴ kJ/mol versus ~5–50 kJ/mol.

1H and 13C are not unique in their ability to undergo NMR. All nuclei with an odd number of protons (1H, 2H, 14N, 19F, 31P, ...) or nuclei with an odd number of neutrons (e.g., 13C) show the magnetic properties required for NMR. Only nuclei with an even number of both protons and neutrons (such as 12C and 16O) lack the required magnetic properties.

## 13.1 Nuclear Magnetic Resonance Spectroscopy

### 13.1 Exercises

#### Questions

Q13.1.1

In a field strength of 4.7 T, 1H requires 200 MHz of energy to maintain resonance.
If atom X requires 150 MHz, calculate the amount of energy required to spin-flip atom X’s nucleus. Is this amount greater than the energy required for hydrogen?

Q13.1.2

Calculate the energy required to spin-flip at 400 MHz. Does changing the frequency to 500 MHz decrease or increase the energy required? What about 300 MHz?

#### Solutions

S13.1.1

E = hν

E = (6.62 × 10⁻³⁴ J·s)(150 × 10⁶ Hz)

E = 9.93 × 10⁻²⁶ J

The energy is equal to 9.93 × 10⁻²⁶ J. This value is smaller than the energy required for hydrogen (1.324 × 10⁻²⁵ J).

S13.1.2

E = hν

E = (6.62 × 10⁻³⁴ J·s)(400 × 10⁶ Hz)

E = 2.648 × 10⁻²⁵ J

The energy would increase if the frequency increased to 500 MHz, and decrease if the frequency decreased to 300 MHz.
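The arithmetic in these solutions is just Planck's relation E = hν; a quick sketch of the calculation, using the rounded h = 6.62 × 10⁻³⁴ J·s that the solutions above use:

```python
# Photon energy E = h * nu for the resonance frequencies in the exercises
h = 6.62e-34  # Planck's constant in J*s (rounded, as in the solutions above)

def photon_energy(freq_mhz):
    # Convert MHz to Hz, then apply E = h * nu
    return h * freq_mhz * 1e6

print(photon_energy(150))  # atom X: ~9.93e-26 J
print(photon_energy(200))  # 1H at 4.7 T: ~1.324e-25 J
print(photon_energy(400))  # ~2.648e-25 J
```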
https://blog.eitanadler.com/2011/07/blogging-my-way-through-clrs-section-31.html
## Sunday, July 10, 2011

### Blogging my way through CLRS section 3.1 [part 5]

Part 4 here. I wrote an entire blog post explaining the answers to 2.3, but Blogger decided to eat it. I don't want to redo those answers, so here is 3.1. From now on I will title my posts with the section number as well, to help Google.

Question 3.1-1: Let $f(n)$ and $g(n)$ be asymptotically non-negative functions. Using the basic definition of $\Theta$-notation, prove that $\max(f(n), g(n)) \in \Theta(f(n) + g(n))$.

CLRS defines $\Theta$ as $\Theta(g(n)) = \{ f(n) :$ there exist positive constants $c_1, c_2$, and $n_0$ such that $0 \leq c_1 g(n) \leq f(n) \leq c_2 g(n)$ for all $n \geq n_0 \}$.

Essentially we must prove that there exist some $c_1$ and $c_2$ such that $c_1 (f(n) + g(n)) \leq \max(f(n), g(n)) \leq c_2 (f(n) + g(n))$.

There are a variety of ways to do this, but I will choose the easiest way I could think of. Since $f(n)$ and $g(n)$ must both be non-negative, we know that $\max(f(n), g(n)) \leq f(n) + g(n)$, and we further know that $f(n) + g(n)$ can't be more than twice $\max(f(n), g(n))$. What we have then are the following inequalities: $$\frac{1}{2}(f(n) + g(n)) \leq \max(f(n), g(n)) \leq f(n) + g(n)$$ so $c_1 = \frac{1}{2}$ and $c_2 = 1$ work.

Question 3.1-2: Show for any real constants $a$ and $b$, where $b \gt 0$, that $(n+a)^b \in \Theta(n^b)$.

Because $a$ is a constant and the definition of $\Theta$ only needs to hold after some $n_0$, adding $a$ to $n$ does not affect the definition, and we simplify to $n^b \in \Theta(n^b)$, which is trivially true.

Question 3.1-3: Explain why the statement "The running time of $A$ is at least $O(n^2)$" is meaningless.

I'm a little uncertain of this answer, but I think this is what CLRS is getting at: when we say a function $f(n)$ has a running time of $O(g(n))$, what we really mean is that $f(n)$ has an asymptotic upper bound of $g(n)$.
This means that $f(n) \leq c \cdot g(n)$ after some $n_0$. To say a function has a running time of *at least* $O(g(n))$ is to use an upper bound as if it were a lower bound, which conveys no information: every running time is trivially "at least" as large as some function that is $O(n^2)$, since $O(n^2)$ includes functions as small as a constant.

Question 3.1-4: Is $2^{n+1} = O(2^n)$? Is $2^{2n} = O(2^n)$?

$2^{n+1} = 2 \times 2^n$, which means that $2^{n+1} \leq c \times 2^n$ with $c = 2$ for all $n$, so we have our answer that $2^{n+1} \in O(2^n)$. Alternatively, we could say that the two functions differ only by a constant coefficient, and therefore the answer is yes.

There is no constant $c$ such that $2^{2n} = 4^n \leq c \times 2^n$ for all large $n$, and therefore $2^{2n} \notin O(2^n)$.

Question 3.1-5: Prove that for any two functions $f(n)$ and $g(n)$, we have $f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n))$ and $f(n) \in \Omega(g(n))$.

This is an "if and only if" problem, so we must prove it in two parts.

Firstly, if $f(n) \in O(g(n))$ then there exist some $c_2$ and $n_0$ such that $f(n) \leq c_2 \times g(n)$ for all $n \geq n_0$. Further, if $f(n) \in \Omega(g(n))$ then there exist some $c_1$ and $n_0$ such that $f(n) \geq c_1 \times g(n)$ for all $n \geq n_0$. If we combine the above two statements (which come from the definitions of $\Omega$ and $O$, taking the larger of the two $n_0$ values), then we know that there exist some $c_1, c_2$, and $n_0$ such that $c_1 g(n) \leq f(n) \leq c_2 g(n)$ for all $n \geq n_0$, which is the definition of $\Theta$.

We can do the same thing backward for the other direction: if $f(n) \in \Theta(g(n))$ then we can split the above inequality and show that each of the individual statements is true.

Question 3.1-6: Prove that the running time of an algorithm is $\Theta(g(n))$ $\iff$ its worst-case running time is $O(g(n))$ and its best-case running time is $\Omega(g(n))$.

I'm going to try for an intuitive proof here instead of a mathematical one. The running time is asymptotically bounded above by $g(n)$ in the worst case and asymptotically bounded from below by $g(n)$ in the best case, which means that the running time is tightly bounded by $g(n)$ on both sides.
$f(n)$ never goes below some constant times $g(n)$ and never goes above some (possibly different) constant times $g(n)$. This is what we get from the above definition of $\Theta(g(n))$. A mathematical proof follows from question 3.1-5.

Question 3.1-7: Prove that $o(g(n)) \cap \omega(g(n)) = \varnothing$.

Little-o and little-omega are defined as follows: $o(g(n)) = \{ f(n) : \forall c > 0\ \exists n_0 \text{ such that } 0 \leq f(n) \lt c \times g(n)\ \forall n \geq n_0 \}$ and $\omega(g(n)) = \{ f(n) : \forall c > 0\ \exists n_0 \text{ such that } 0 \leq c \times g(n) \lt f(n)\ \forall n \geq n_0 \}$.

In other words, $$f(n) \in o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0$$ and $$f(n) \in \omega(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty$$ It is obvious that these cannot both be true at the same time. This would require that $0 = \infty$.
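Not a proof, of course, but the 3.1-1 bounds (1/2)(f(n)+g(n)) ≤ max(f(n), g(n)) ≤ f(n)+g(n) are easy to spot-check numerically. A sketch with two arbitrary non-negative functions of my own choosing:

```python
# Spot-check (1/2)(f(n)+g(n)) <= max(f(n), g(n)) <= f(n)+g(n)
# for arbitrary asymptotically non-negative example functions f and g.
f = lambda n: n ** 2        # example f(n), chosen for illustration
g = lambda n: 10 * n + 5    # example g(n), chosen for illustration

for n in range(1, 1000):
    fn, gn = f(n), g(n)
    assert 0.5 * (fn + gn) <= max(fn, gn) <= fn + gn

print("bounds hold for n = 1..999")
```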
http://www.ck12.org/algebra/Solving-Equations-with-Exponents/lesson/Exponential-Equations-Honors/r9/
# Solving Equations with Exponents

## Solve equations with variable exponents and variable bases

### Exponential Equations

The following exponential equation is one in which the variable appears in the exponent:

\begin{align*}9^{x+1}=\sqrt{27}\end{align*}

How can you solve this type of equation, where you can't isolate the variable?

### Guidance

When an equation has exponents, sometimes the variable will be in the exponent and sometimes it won't. There are different strategies for solving each type of equation.

• When the variable is in the exponent: Rewrite each side of the equation so that the bases of the exponents are the same. Then, create a new equation where you set the exponents equal to each other and solve (see Example A).

• When the variable is not in the exponent: Manipulate the equation so the exponent is no longer there (see Example B). Or, rewrite each side of the equation so that both sides have the same exponent. Then, create a new equation where you set the bases equal to each other and solve (see Example C).

#### Example A

Solve the following exponential equation: \begin{align*}25^{x-3}=\left(\frac{1}{5}\right)^{3x+18}\end{align*}

Solution: The variable appears in the exponent. Write both sides of the equation as a power of 5: \begin{align*}({\color{red}5^2})^{x-3}=({\color{red}5^{-1}})^{3x+18}\end{align*} Apply the law of exponents for raising a power to a power, then set the exponents equal: \begin{align*}2(x-3)=-(3x+18)\end{align*} so \begin{align*}5x=-12\end{align*} and \begin{align*}x=-\frac{12}{5}\end{align*}.

#### Example B

Solve the following exponential equation: \begin{align*}4(x-2)^{\frac{1}{2}}=16\end{align*}

Solution: The variable appears in the base.

#### Example C

Solve the following exponential equation: \begin{align*}(2x-4)^{\frac{2}{3}}=\sqrt[3]{9}\end{align*}

Solution: The variable appears in the base.
#### Concept Problem Revisited

\begin{align*}9^{x+1}=\sqrt{27}\end{align*}

To begin, write each side of the equation with a common base. Both 9 and 27 can be written as a power of 3. Therefore, \begin{align*}({\color{red}3^2})^{x+1}=\sqrt{{\color{red}3^3}}\end{align*}. Apply the law of exponents for raising a power to a power to the left side of the equation: \begin{align*}3^{{\color{red}2x+2}}=\sqrt{3^3}\end{align*} Express the right side of the equation in exponential form: \begin{align*}3^{2x+2}=3^{\frac{3}{2}}\end{align*} Now that the bases are the same, the exponents are equal quantities: \begin{align*}{\color{red}2x+2}={\color{red}\frac{3}{2}}\end{align*} Solve the equation. Multiply both sides of the equation by 2: \begin{align*}{\color{red}2}(2x+2)={\color{red}2} \left(\frac{3}{2}\right)\end{align*} Simplify and solve: \begin{align*}4x+4=3\end{align*}, so \begin{align*}x=-\frac{1}{4}\end{align*}.

### Vocabulary

Exponential Equation: An exponential equation is an equation in which the variable appears in either the exponent or the base. The equation is solved by applying the laws of exponents.

### Guided Practice

1. Use the laws of exponents to solve the following exponential equation: \begin{align*}27^{1-x}=\left(\frac{1}{9}\right)^{2-x}\end{align*}

2. Use the laws of exponents to solve the following exponential equation: \begin{align*}(x-3)^{\frac{1}{2}}=(25)^{\frac{1}{4}}\end{align*}

3. Use the laws of exponents to solve \begin{align*}\frac{(8^{x-4})(2^x)(4^{2x+3})}{32^x}=16\end{align*}.

### Practice

Use the laws of exponents to solve the following exponential equations:

1. \begin{align*}2^{3x-1}=\sqrt[3]{16}\end{align*}
2. \begin{align*}36^{x-2}=\left(\frac{1}{6}\right)^{2x+5}\end{align*}
3. \begin{align*}6(x-4)^{\frac{1}{3}}=18\end{align*}
4. \begin{align*}(3x-2)^{\frac{2}{5}}=4\end{align*}
5. \begin{align*}36^{x+1}=\sqrt{6}\end{align*}
6. \begin{align*}3^{5x-1}=\sqrt[3]{9}\end{align*}
7. \begin{align*}9^{2x-1}=\left(\sqrt[4]{27}\right)^x\end{align*}
8. \begin{align*}(3x-2)^{\frac{3}{2}}=8\end{align*}
9. \begin{align*}(x+1)^{-\frac{5}{2}}=32\end{align*}
10. \begin{align*}\left(\sqrt{3}\right)^{4x}=27^{x-3}\end{align*}
11. \begin{align*}4^{3x-1}=\sqrt[3]{32}\end{align*}
12. \begin{align*}(x+2)^{\frac{2}{3}}=(27)^{\frac{2}{9}}\end{align*}
13. \begin{align*}(2^{x-3})(8^x)=32\end{align*}
14. \begin{align*}(x-2)^{\frac{1}{2}}=9^{\frac{1}{4}}\end{align*}
15. \begin{align*}8^{x+12}=\left(\frac{1}{16}\right)^{2x-7}\end{align*}
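Answers to problems like these can be spot-checked numerically. For instance, Example A's equation 25^(x-3) = (1/5)^(3x+18) reduces (by equating exponents of base 5) to 2(x-3) = -(3x+18), giving x = -12/5; this sketch just confirms that both sides of the original equation agree at that x:

```python
# Numeric spot-check of Example A: 25**(x-3) == (1/5)**(3*x+18) at x = -12/5
x = -12 / 5

lhs = 25 ** (x - 3)
rhs = (1 / 5) ** (3 * x + 18)

# Both sides should agree to floating-point precision
assert abs(lhs - rhs) < 1e-12 * max(abs(lhs), abs(rhs))
print(lhs, rhs)
```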
https://byjus.com/question-answer/if-p-and-q-are-the-roots-of-2x-2-3x-4-0-them-p-1/
# Question

If p and q are the roots of 2x² + 3x + 4 = 0, then p²q + q²p = ___.

• -3
• -4
• 8
• 16

## Solution

The correct option is A: -3.

For a quadratic equation ax² + bx + c = 0, the sum of the roots is -b/a and the product of the roots is c/a.

Given the quadratic equation 2x² + 3x + 4 = 0, here a = 2, b = 3, c = 4.

Sum of the roots: p + q = -b/a = -3/2.

Product of the roots: pq = c/a = 4/2 = 2.

Therefore p²q + q²p = pq(p + q) = 2 × (-3/2) = -3.
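A quick numeric check of this result, computing the roots directly with the quadratic formula (the roots are complex here, since the discriminant 9 - 32 is negative, but the identity pq(p + q) still holds):

```python
import cmath

# Roots of 2x^2 + 3x + 4 = 0 via the quadratic formula (complex roots)
a, b, c = 2, 3, 4
disc = cmath.sqrt(b * b - 4 * a * c)
p = (-b + disc) / (2 * a)
q = (-b - disc) / (2 * a)

value = p * p * q + q * q * p  # p^2 q + q^2 p = pq(p + q)
print(value)  # ~ (-3+0j)
```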
https://www.physicsforums.com/threads/integration-on-manifolds.803315/
Integration on manifolds Tags: 1. Mar 15, 2015 "Don't panic!" In all the notes that I've found on differential geometry, when they introduce integration on manifolds it is always done with top forms with little or no explanation as to why (or any intuition). From what I've managed to glean from it, one has to use top forms to unambiguously define integration on a manifold (although I'm not quite sure why this is the case?!) and one can integrate lower dimensional forms via integration on a chain (through defining pullbacks). I'm really struggling to understand these notions, please could someone enlighten me on the subject? 2. Mar 16, 2015 Bacle2 Well, one issue is the independence of integration of forms from the choice of coordinates, which is not the case for functions. 3. Mar 16, 2015 lavinia What is your reference? Can you illustrate what you are talking about? k-forms can be integrated over smooth k-chains. These do not need to be top dimensional forms. In fact, the entire cohomology with real coefficients can in principle be computed from integrals of k-forms over smooth k-chains. 4. Mar 16, 2015 "Don't panic!" The notes that I've managed to find all introduce integration on manifolds with a phrase like "consider an n-form defined on an n-dimensional manifold. The integral of such an n-form is...". They also state things such as "n-forms are natural objects to integrate on a manifold as they do not require a metric". I was trying to find some motivation for these things and thought that it might have something to do with orientability issues, especially after reading this sentence on the Wiki page: "There is in general no meaningful way to integrate k-forms over subsets for $k < n$ because there is no consistent way to orient k-dimensional subsets". The issue arose as myself and a colleague were trying to figure out the following expression: $$V=\int_{0}^{R}dr\;4\pi r^{2}\sqrt{1-\left(\frac{dt}{dr}\right)^{2}}$$ where $t=t(r)$.
Which is apparently the volume enclosed by a 3-dimensional sphere (of radius $R$) in Minkowski spacetime?! We wondered why one couldn't just integrate over 3-dimensional space using a 3-D volume element and assumed that one had to integrate the 4-volume form for Minkowski space, i.e. $$\int dV= \int\sqrt{-g}\,dt\wedge dx\wedge dy \wedge dz$$ with the standard orientation $+(t,x,y,z)$, and introduce a pull-back map to constrain one of the degrees of freedom to obtain a 3-dimensional volume integral, but weren't sure as to why (or if our intuition was correct)?! Last edited: Mar 16, 2015 5. Mar 16, 2015 lavinia The reference starts with the assumption that you want to integrate something over the entire manifold and states that n-forms are the natural candidates. But to do this one needs to in principle express the manifold as an n-chain and then piece a global form together using partitions of unity subordinate to the smooth simplices in the n-chain. The fundamental object is the n-simplex. But one could have a k-simplex with k<n and integrate a k-form over it. The idea is exactly the same. I do not know relativity theory but the formula seems to be a simplification of the general volume integral for the special case of a sphere. What is meant by a sphere in Space-Time? Last edited: Mar 16, 2015 6. Mar 16, 2015 Ben Niehoff You can integrate k-forms over k-chains by pulling them back, at which point they become top forms that live on the k-chain itself. So you really only need to define integration of top forms, and pullbacks. My understanding of chains is that a chain is not just a set, but also has an orientation, so the integration of forms is well-defined. Calculating volumes and areas is trickier, though. Consider the simpler case of calculating arc lengths.
You know what the arc length functional is: $$\int_\gamma d\lambda \, \sqrt{ \pm g_{\mu\nu} \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} }$$ You can think of this as integrating the pullback along $\gamma$ of the form $$\sqrt{ \pm g_{\mu\nu} \, dx^\mu \, dx^\nu }$$ But this object is not a 1-form, because it is not a linear map from $T_xM \to \mathbb{R}$. In general, there is no linear form whose pullback along $\gamma$ gives the arc length functional for all possible $\gamma$. A similar situation applies to area functionals of any k-area for $k < n$. These area functionals cannot be linear forms, but are more complicated objects (essentially, square roots of various determinants). The exception is the top form on $M$, which is linear only because $\Lambda^n T_xM$ is 1-dimensional. Last edited by a moderator: Apr 19, 2017 7. Mar 16, 2015 "Don't panic!" So is it possible to directly integrate a k-form (with k<n) on an n-dimensional manifold? The Minkowski space-time is a 4-dimensional manifold with zero curvature (by space-time in general it is meant that one is considering a 4-dimensional Pseudo-Riemannian manifold, with 3 spatial coordinates and 1 temporal coordinate). Basically, I think they are integrating a 3-d sphere on such a manifold, but I don't see where the expression I gave above comes from?! 8. Mar 16, 2015 "Don't panic!" Ah, so is the reason that integration on manifolds seems to be introduced in terms of top forms because these are the natural objects to integrate on an n-dimensional manifold, and then by introducing the notion of a pull-back map one can always map a top-form to a lower dimensional form [k(<n) form] and integrate this on a submanifold? 9. Mar 16, 2015 lavinia Sure. What about a line integral around a closed curve in the plane.? This is a 1-form integrated over a 1 chain in a two dimensional manifold. 10. Mar 16, 2015 lavinia No. 
What Ben was saying was that a k-form is integrated over a smooth k-simplex, so with respect to the k-simplex it is automatically top dimensional. So you only need to define the integrals of forms in the top dimension. When one pulls a k-form back along a k-simplex one ends up integrating over the standard k-simplex in Euclidean space. This is a k-manifold with boundary. For a k-manifold without boundary, one must in principle express the manifold as a linear combination of k-simplices with boundary components cancelling. 11. Mar 16, 2015 "Don't panic!" So is the point that one defines integration for top forms on manifolds and then uses pullback maps to create chains to reduce it to a lower dimensional integral, e.g. for the example you gave would one pullback the 1-form defined on the two dimensional manifold to a 1-form defined on the one-dimensional sub-manifold? (for example, if $\omega\in\Omega^{1}(\mathbb{R}^{2})$ and $\phi :\mathbb{R}\rightarrow\mathbb{R}^{2}$, then $\phi^{\ast}\omega\in\Omega^{1}(\mathbb{R})$ such that $$\int_{\phi}\omega =\int_{\mathbb{R}}\phi^{\ast}\omega\;\;)$$ 12. Mar 16, 2015 lavinia No. ω is defined on the plane but its integrals are defined on 1-chains. One cannot integrate it over $\mathbb{R}^2$. 13. Mar 16, 2015 "Don't panic!" Sorry, I noticed my original error, but wasn't able to correct it quickly enough; is the subsequent correction I made correct? Is it correct to say that the chain is given via the map $\phi :\mathbb{R}\rightarrow\mathbb{R}^{2}$ such that $\omega$ is integrated over the chain $\phi$, i.e. $$\int_{\phi}\omega =\int_{\mathbb{R}}\phi^{\ast}\omega$$ 14. Mar 16, 2015 lavinia Yes, except that a 1-simplex is an oriented line segment with end points included. It is not all of R. 15. Mar 16, 2015 "Don't panic!" Can one extend this to arbitrary (but finite) dimension, i.e.
given an n-form $\omega\in\Omega^{n}(M)$ defined on some n-dimensional manifold $M$, then one can define a set of $k$-chains such that one can pull back the n-form $\omega$ to a k-form defined on some k-dimensional sub-manifold $N$ of $M$ and then integrate this over a subset of $N$. That is (something of the form) $$\int_{\phi (D)}\omega =\int_{D}\phi^{\ast}\omega$$ where $D\subset N$?! 16. Mar 16, 2015 lavinia The pullback of an n-form will be an n-form, not a k-form. The process of pulling back does not change the dimension. Last edited: Mar 16, 2015 17. Mar 16, 2015 "Don't panic!" Yes, sorry, you're right. It's the manifold that we are considering that changes dimension, right? 18. Mar 16, 2015 lavinia Not really. It is true that one starts with an n-manifold and ends up integrating on a k-manifold. But one is not integrating over the n-manifold itself, so you are not changing dimension. One is integrating over a k-dimensional simplex inside the manifold. 19. Mar 16, 2015 "Don't panic!" Excuse my ignorance, but what is a "simplex"? Are we integrating over a k-dimensional submanifold of the original manifold? 20. Mar 16, 2015 lavinia A simplex is a smooth mapping of the standard simplex in Euclidean space into the manifold. One integrates a form over it by pulling the form back to the standard simplex in Euclidean space. A simplex is not a submanifold. It is a mapping. I apologize because I feel that I have confused things by bringing in the idea of simplexes. Let's go back to the case of a line integral. Here one integrates along a parameterized curve. The parameterization is a mapping of a closed interval into the plane. If there is a 1-form defined on the plane one can pull it back and then integrate it over the interval. This is the same as integrating the function you get by evaluating the 1-form on the velocity vectors of the curve.
In higher dimensions you need to generalize the idea of an oriented interval, and usually this is done formally by taking a standard set of oriented triangles, tetrahedra and their higher dimensional analogues. These are called standard simplexes. If you like you could instead use oriented squares and cubes and hypercubes or something similar. In mathematics these are all used, but usually the formal definition is in terms of standard simplexes. Though I have seen papers where cubes are more convenient. The terminology is a little confusing because a standard simplex is an oriented geometric object in Euclidean space while a smooth simplex is a smooth mapping of the standard simplex into a manifold. Last edited: Mar 16, 2015
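The line-integral picture described in the last posts is easy to check numerically. The sketch below (my own example, not one from the thread) pulls the 1-form ω = -y dx + x dy back along the parameterized curve γ(t) = (cos t, sin t) and integrates over the interval [0, 2π]; the integrand is just ω evaluated on the curve's velocity vector, and the result is 2π:

```python
import math

# Pull back omega = -y dx + x dy along gamma(t) = (cos t, sin t), t in [0, 2*pi],
# and integrate over the interval. The pulled-back integrand is omega evaluated
# on the velocity vector:  -y(t)*x'(t) + x(t)*y'(t) = sin(t)**2 + cos(t)**2 = 1.
def integrate_pullback(n_steps=100_000):
    total, dt = 0.0, 2 * math.pi / n_steps
    for k in range(n_steps):
        t = (k + 0.5) * dt  # midpoint rule
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t), math.cos(t)  # velocity of the curve
        total += (-y * dx + x * dy) * dt
    return total

print(integrate_pullback())  # ~ 6.2831... = 2*pi
```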
https://v-apothecary.com/products/genestra-digest-dairy-plus-60-capsules
# Genestra Digest Dairy Plus 60 Capsules

Regular price $54.75

# Digest Dairy Plus

• Supports digestion of dairy products
• Digestive enzyme/lactase to assist in the digestion of foods containing lactose
• Helps prevent symptoms of lactose intolerance
• Digestive aid
• Helps to digest proteins

Digest Dairy Plus is designed to assist in the digestion of dairy products. With lactase, protease and lipase, all major nutrient types in dairy foods are targeted. The addition of rennin provides rapid coagulation of milk protein in the stomach. In in vitro laboratory tests, the effect of one Digest Dairy Plus capsule added to 568 ml of whole milk under intestinal conditions was investigated. Casein and non-casein proteins were digested to short peptides and amino acids. Fat triglycerides were broken down to monoglycerides and free fatty acids, and lactose was virtually undetectable. Each capsule of Digest Dairy Plus will digest the equivalent of 568 ml of whole milk in vitro within a 3 hour period.

Medicinal Ingredients (per capsule):
• Lactase (from Aspergillus oryzae): 250 mg / 16 250 LAU (1625 FCC ALU)
• Bacterial Protease (from Bacillus subtilis): 60 mg / 8.4 CPU (3750 FCC PC)
• Stem Bromelain (from Ananas comosus stem): 30 mg / 60 GDU (900 000 FCC PU)
• Triacylglycerol Lipase (from Rhizopus oryzae): 15 mg / 525 LU (2450 FCC LU)
• Acid Active Protease (from Aspergillus oryzae): 10 mg / 0.7 CPU (1980 FCC HUT)
• Alkaline Active Protease (from Aspergillus oryzae): 10 mg / 2 CPU (5100 FCC HUT)
• Microbial Rennet (from Rhizomucor miehei): 0.1 mg / 0.22 IMCU

Non-Medicinal Ingredients: Hypromellose, magnesium stearate
Contains: Milk, wheat
http://mathhelpforum.com/calculus/3234-riemann-integrable.html
# Math Help - Riemann integrable

1. ## Riemann integrable

Let f(x) = 1/x^2 when x is between [2,3)
= 2 when x is between [3,4]

a.) prove that f(x) is Riemann integrable on [2,4]. Given epsilon is greater than 0, find a partition P... ( i have no idea what this means. )

b.) find a continuous function, F(x), satisfying F(x) = integral of f(t)dt from 2 to x on [2,4]

2. Originally Posted by Nichelle14
Let f(x) = 1/x^2 when x is between [2,3) = 2 when x is between [3,4] a.) prove that f(x) is Riemann integrable on [2,4].

Because it is bounded on [2,4] and continuous except at the single point x = 3, so it is Riemann integrable.

Originally Posted by Nichelle14
Given epsilon is greater than 0, find a partition P...

I have no idea what you mean by that. Perhaps they ask to find a partition P with U(f,P) − L(f,P) < epsilon, which is the standard criterion for Riemann integrability?

Originally Posted by Nichelle14
b.) find a continuous function, F(x), satisfying F(x) = integral of f(t)dt from 2 to x on [2,4]

You have, for x in [2,3),
$F(x)=\int^x_2\frac{1}{t^2}dt$
Thus,
$\left. -\frac{1}{t} \right|^x_2=-\frac{1}{x}+\frac{1}{2}$
For x in [3,4], add the contribution of the constant piece: F(x) = 1/6 + 2(x − 3). Note that F is continuous at x = 3, since both pieces give 1/6 there.
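Both parts of the exercise can be checked numerically. This is a sketch of mine (the function names are not from the thread): the piecewise antiderivative F is compared against a midpoint Riemann sum of f.

```python
def f(x):
    """The piecewise integrand from the thread."""
    if 2 <= x < 3:
        return 1.0 / x**2
    if 3 <= x <= 4:
        return 2.0
    raise ValueError("x outside [2, 4]")

def F(x):
    """Continuous antiderivative F(x) = integral of f from 2 to x, done piecewise."""
    if x < 3:
        return 0.5 - 1.0 / x
    # Value at the junction is 1/2 - 1/3 = 1/6; add the constant piece beyond x = 3.
    return (0.5 - 1.0 / 3.0) + 2.0 * (x - 3.0)

def riemann_sum(x, n=200000):
    """Midpoint Riemann sum of f over [2, x], to check F numerically."""
    h = (x - 2.0) / n
    return sum(f(2.0 + (k + 0.5) * h) for k in range(n)) * h
```

The agreement of `riemann_sum(4)` with `F(4) = 13/6` illustrates part (a) in practice: the single jump at x = 3 does not prevent the Riemann sums from converging.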
https://readingfeynman.org/tag/string-theory/
# Electron and photon strings

Note: I have published a paper that is very coherent and fully explains what the idea of a photon might be. There is nothing stringy. Check it out: The Meaning of the Fine-Structure Constant. No ambiguity. No hocus-pocus. Jean Louis Van Belle, 23 December 2018

Original post:

In my previous posts, I've been playing with… Well… At the very least, a new didactic approach to understanding the quantum-mechanical wavefunction. I just boldly assumed the matter-wave is a gravitational wave. I did so by associating its components with the dimension of gravitational field strength: newton per kg, which is the dimension of acceleration (N/kg = m/s²). Why? When you remember the physical dimension of the electromagnetic field is N/C (force per unit charge), then that's kinda logical, right? 🙂 The math is beautiful. Key consequences include the following:

1. Schrodinger's equation becomes an energy diffusion equation.
2. Energy densities give us probabilities.
3. The elementary wavefunction for the electron gives us the electron radius.
4. Spin angular momentum can be interpreted as reflecting the right- or left-handedness of the wavefunction.
5. Finally, the mysterious boson-fermion dichotomy is no longer "deep down in relativistic quantum mechanics", as Feynman famously put it.

It's all great. Every day brings something new. 🙂 Today I want to focus on our weird electron model and how we get God's number (aka the fine-structure constant) out of it. Let's recall the basics of it. We had the elementary wavefunction:

ψ = a·e^(−i[E·t − p·x]/ħ) = a·cos(p·x/ħ − E∙t/ħ) + i·a·sin(p·x/ħ − E∙t/ħ)

In one-dimensional space (think of a particle traveling along some line), the vectors (p and x) become scalars, and so we simply write:

ψ = a·e^(−i[E·t − p∙x]/ħ) = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)

This wavefunction comes with constant probabilities |ψ|² = a², so we need to define a space outside of which ψ = 0.
Think of the particle-in-a-box model. This is obvious: oscillations pack energy, and the energy of our particle is finite. Hence, each particle – be it a photon or an electron – will pack a finite number of oscillations. It will, therefore, occupy a finite amount of space. Mathematically, this corresponds to the normalization condition: all probabilities have to add up to one, as illustrated below.

Now, all oscillations of the elementary wavefunction have the same amplitude: a. [Terminology is a bit confusing here because we use the term amplitude to refer to two very different things: we may say a is the amplitude of the (probability) amplitude ψ.] So how many oscillations do we have? What is the size of our box?

Let us assume our particle is an electron, and we will reduce its motion to a one-dimensional motion only: we're thinking of it as traveling along the x-axis. We can then use the y- and z-axes as mathematical axes only: they will show us the magnitude and direction of the real and imaginary component of ψ. The animation below (for which I have to credit Wikipedia) shows what it looks like.

Of course, we can have right- as well as left-handed particle waves because, while time physically goes by in one direction only (we can't reverse time), we can count it in two directions: 1, 2, 3, etcetera or −1, −2, −3, etcetera. In the latter case, think of time ticking away. 🙂 Of course, in our physical interpretation of the wavefunction, this should explain the (spin) angular momentum of the electron, which is – for some mysterious reason that we now understand 🙂 – always equal to ± ħ/2.

Now, because a is some constant here, we may think of our box as a cylinder along the x-axis. Now, the rest mass of an electron is about 0.510 MeV, so that's around 8.19×10⁻¹⁴ N∙m, so it will pack some 1.24×10²⁰ oscillations per second. So how long is our cylinder here? To answer that question, we need to calculate the phase velocity of our wave.
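As a quick sanity check on the oscillation figures quoted above, the count of oscillations per second is just f = E/h. This small sketch (my own, using CODATA constant values) reproduces the numbers:

```python
# Check the "oscillations per second" figures via the Planck-Einstein relation f = E/h.
h = 6.62607015e-34          # Planck constant, J*s
eV = 1.602176634e-19        # joule per electronvolt

def frequency(energy_joule):
    """Frequency of a quantum of the given energy, f = E/h."""
    return energy_joule / h

f_electron = frequency(0.511e6 * eV)   # electron rest energy, ~0.511 MeV -> ~1.24e20 Hz
f_photon = frequency(2.0 * eV)         # a visible-light photon of ~2 eV -> ~5e14 Hz
```

The electron's rest energy indeed corresponds to about 1.24×10²⁰ oscillations per second, while a ~2 eV photon sits around 5×10¹⁴ Hz, five to six orders of magnitude lower, as the post states.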
We'll come back to that in a moment. Just note how this compares to a photon: the energy of a photon will typically be a few electronvolt only (1 eV ≈ 1.6×10⁻¹⁹ N·m) and, therefore, it will pack like 10¹⁵ oscillations per second, so that's a density (in time) that is about 100,000 times less.

Back to the angular momentum. The classical formula for it is L = I·ω, so that's angular frequency times angular mass. What's the angular velocity here? That's easy: ω = E/ħ. What's the angular mass? If we think of our particle as a tiny cylinder, we may use the formula for its angular mass: I = m·r²/2. We have m: that's the electron mass, right? Right? So what is r? That should be the magnitude of the rotating vector, right? So that's a. Of course, the mass-energy equivalence relation tells us that E = mc², so we can write:

L = I·ω = (m·r²/2)·(E/ħ) = (1/2)·a²·m·(m·c²/ħ) = (1/2)·a²·m²·c²/ħ

Does it make sense? Maybe. Maybe not. You can check the physical dimensions on both sides of the equation, and that works out: we do get something that is expressed in N·m·s, so that's action or angular momentum units. Now, we know L must be equal to ± ħ/2. [As mentioned above, the plus or minus sign depends on the left- or right-handedness of our wavefunction, so don't worry about that.] How do we know that? Because of the Stern-Gerlach experiment, which has been repeated a zillion times, if not more. Now, if we equate L to ħ/2, then we get the following equation for a:

a = ħ/(m·c)

This is the formula for the radius of an electron. To be precise, it is the Compton scattering radius, so that's the effective radius of an electron as determined by scattering experiments. You can calculate it: it is about 3.8616×10⁻¹³ m, so that's the picometer scale, as we would expect. This is a rather spectacular result. As far as I am concerned, it is spectacular enough for me to actually believe my interpretation of the wavefunction makes sense. Let us now try to think about the length of our cylinder once again.
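Before moving on, the Compton radius just derived can be checked numerically. This is my own verification sketch, using CODATA values for the constants:

```python
# Verify a = hbar/(m*c), the (reduced) Compton radius, and the spin condition.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s

a = hbar / (m_e * c)     # ~3.8616e-13 m, the value quoted in the post

# Plugging a back into L = (1/2)*a^2*m^2*c^2/hbar should return hbar/2.
L_spin = 0.5 * a**2 * m_e**2 * c**2 / hbar
```

With `a = ħ/(m·c)`, the angular momentum expression collapses to ħ/2 identically, which is just the derivation above run in reverse.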
The period of our wave is equal to T = 1/f = 1/(ω/2π) = 1/[(E/ħ)·2π] = 1/(E/h) = h/E. Now, the phase velocity (vp) will be given by:

vp = λ·f = (2π/k)·(ω/2π) = ω/k = (E/ħ)/(p/ħ) = E/p = E/(m·vg) = (m·c²)/(m·vg) = c²/vg

This is very interesting, because it establishes an inverse proportionality between the group and the phase velocity of our wave, with c² as the coefficient of inverse proportionality. In fact, this equation looks better if we write it as vp·vg = c². Of course, the group velocity (vg) is the classical velocity of our electron. This equation shows us the idea of an electron at rest doesn't make sense: if vg = 0, then vp times zero must equal c², which cannot be the case: electrons must move in space. More generally speaking, matter-particles must move in space, with the photon as our limiting case: it moves at the speed of light. Hence, for a photon, we find that vp = vg = E/p = c.

How can we calculate the length of a photon or an electron? It is an interesting question. The mentioned orders of magnitude of the frequency (10¹⁵ or 10²⁰) give us the number of oscillations per second. But how many do we have in one photon, or in one electron?

Let's first think about photons, because we have more clues here. Photons are emitted by atomic oscillators: atoms going from one state (energy level) to another. We know how to calculate the Q of these atomic oscillators (see, for example, Feynman I-32-3): it is of the order of 10⁸, which means the wave train will last about 10⁻⁸ seconds (to be precise, that is the time it takes for the radiation to die out by a factor 1/e). Now, the frequency of sodium light, for example, is 0.5×10¹⁵ oscillations per second, and the decay time is about 3.2×10⁻⁸ seconds, so that makes for (0.5×10¹⁵)·(3.2×10⁻⁸) = 16 million oscillations. Now, the wavelength is 600 nanometer (600×10⁻⁹ m), so that gives us a wave train with a length of (600×10⁻⁹)·(16×10⁶) = 9.6 m.
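The sodium wave-train arithmetic above is easy to reproduce; this short sketch of mine just re-runs the numbers from the post:

```python
# Sodium light wave train: number of oscillations and spatial length.
f_sodium = 0.5e15        # frequency, Hz
t_decay = 3.2e-8         # 1/e decay time, s (from Q ~ 1e8)
wavelength = 600e-9      # wavelength, m

n_osc = f_sodium * t_decay          # oscillations in the wave train: 16 million
train_length = n_osc * wavelength   # spatial length of the wave train: 9.6 m
```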
These oscillations may or may not have the same amplitude and, hence, each of these oscillations may pack a different amount of energy. However, if the total energy of our sodium light photon (i.e. about 2 eV ≈ 3.3×10⁻¹⁹ J) is to be packed in those oscillations, then each oscillation would pack about 2×10⁻²⁶ J, on average, that is. We speculated in other posts on how we might imagine the actual wave pulse that atoms emit when going from one energy state to another, so we won't do that again here. However, the following illustration of how a transient signal dies out may be useful.

This calculation is interesting. It also gives us an interesting paradox: if a photon is a pointlike particle, how can we say its length is like 10 meter or more? Relativity theory saves us here. We need to distinguish the reference frame of the photon – riding along the wave as it is being emitted, so to speak – and our stationary reference frame, which is that of the emitting atom. Now, because the photon travels at the speed of light, relativistic length contraction will make it look like a pointlike particle.

What about the electron? Can we use similar assumptions? For the photon, we can use the decay time to calculate the effective number of oscillations. What can we use for an electron? We will need to make some assumption about the phase velocity or, what amounts to the same, the group velocity of the particle. What formulas can we use? The p = m·v formula is the relativistically correct formula for the momentum of an object if m = mᵥ, so that's the same m we use in the E = mc² formula. Of course, v here is, obviously, the group velocity (vg), so that's the classical velocity of our particle. Hence, we can write:

p = m·vg = (E/c²)·vg ⇔ vg = p/m = p·c²/E

This is just another way of writing that vg = c²/vp or vp = c²/vg, so it doesn't help, does it? Maybe. Maybe not.
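The inverse proportionality vp·vg = c² is easy to play with numerically. The example velocity below is a hypothetical choice of mine, just to show that a slow particle has a superluminal phase velocity while the product stays pinned at c²:

```python
c = 2.99792458e8   # speed of light, m/s

def phase_velocity(v_group):
    """Phase velocity from the inverse-proportionality relation vp * vg = c^2."""
    return c**2 / v_group

vg = 0.01 * c                 # an illustrative slow electron, 1% of light speed
vp = phase_velocity(vg)       # 100 times the speed of light
```

Note that a phase velocity above c carries no signal and so does not conflict with relativity; it is the group velocity that plays the role of the classical particle velocity.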
Let us substitute in our formula for the wavelength:

λ = vp/f = vp·T = vp⋅(h/E) = (c²/vg)·(h/E) = h/(m·vg) = h/p

This gives us the other de Broglie relation: λ = h/p. This doesn't help us much, although it is interesting to think about it. The f = E/h relation is somewhat intuitive: higher energy, higher frequency. In contrast, what the λ = h/p relation tells us is that we get an infinite wavelength if the momentum becomes really small. What does this tell us? I am not sure. Frankly, I've looked at the second de Broglie relation like a zillion times now, and I think it's rubbish. It's meant to be used for the group velocity, I feel. I am saying that because we get a nonsensical energy formula out of it. Look at this:

1. E = h·f and p = h/λ. Therefore, f = E/h and λ = h/p.
2. v = f·λ = (E/h)∙(h/p) = E/p
3. p = m·v. Therefore, E = v·p = m·v²

E = m·v²? This formula is only correct if v = c, in which case it becomes the E = mc² equation. So it then describes a photon, or a massless matter-particle which… Well… That's a contradictio in terminis. 🙂 In all other cases, we get nonsense.

Let's try something different. If our particle is at rest, then p = 0 and the p·x/ħ term in our wavefunction vanishes, so it's just:

ψ = a·e^(−i·E·t/ħ) = a·cos(E∙t/ħ) − i·a·sin(E∙t/ħ)

Hence, our wave doesn't travel. It has the same amplitude at every point in space at any point in time. Both the phase and group velocity become meaningless concepts. The amplitude varies – because of the sine and cosine – but the probability remains the same: |ψ|² = a². Hmm… So we need to find another way to define the size of our box. One of the formulas I jotted down in my paper in which I analyze the wavefunction as a gravitational wave was this one [the formula itself was an image in the original post]. It was a physical normalization condition: the energy contributions of the waves that make up a wave packet need to add up to the total energy of our wave.
Of course, for our elementary wavefunction here, the subscripts vanish and so the formula reduces to E = (E/c²)·a²·(E²/ħ²), out of which we get our formula for the scattering radius: a = ħ/(m·c). Now how do we pack that energy in our cylinder? Assuming that energy is distributed uniformly, we're tempted to write something like E = a²·l or, looking at the geometry of the situation: E = π·a²·l ⇔ l = E/(π·a²). It's just the formula for the volume of a cylinder. Using the value we got for the Compton scattering radius (a = 3.8616×10⁻¹³ m), we find an l that's equal to (8.19×10⁻¹⁴)/(π·14.9×10⁻²⁶) ≈ 0.175×10¹². Meter? Yes. 0.175×10¹² m is 175 million kilometer. That's – literally – astronomic. It corresponds to 583 light-seconds, or 9.7 light-minutes. So that's about 1.17 times the (average) distance between the Sun and the Earth. You can see that we do need to build a wave packet: that space is a bit too large to look for an electron, right? 🙂

Could we possibly get some less astronomic proportions? What if we impose that l should equal a? We find that m would have to be equal to m ≈ 1.11×10⁻³⁶ kg. That's tiny. In fact, it's equivalent to an energy of about 0.623 eV (which you'll see written as 623 milli-eV). This corresponds to light with a wavelength of about 2 micrometer (μm), so that's in the infrared spectrum. It's a funny formula: we find, basically, that the l/a ratio is proportional to m⁴. Hmm… What should we think of this? If you have any ideas, let me know!

Post scriptum (3 October 2017): The paper is going well. Getting lots of downloads, and the views on my blog are picking up too. But I have been vicious. Substituting B for (1/c)∙iE or for −(1/c)∙iE implies a very specific choice of reference frame. The imaginary unit is a two-dimensional concept: it only makes sense when giving it a plane view. Literally.
Indeed, my formulas assume the i (or −i) plane is perpendicular to the direction of propagation of the elementary quantum-mechanical wavefunction. So… Yes. The need for rotation matrices is obvious. But my physical interpretation of the wavefunction stands. 🙂

# Strings in classical and quantum physics

This post is not about string theory. The goal of this post is much more limited: it's to give you a better understanding of why the metaphor of the string is so appealing. Let's recapitulate the basics by seeing how it's used in classical as well as in quantum physics.

In my posts on music and math, or music and physics, I described how a simple single string always vibrates in various modes at the same time: every tone is a mixture of an infinite number of elementary waves. These elementary waves, which are referred to as harmonics (or as (normal) modes, indeed), are perfectly sinusoidal, and their amplitude determines their relative contribution to the composite waveform. So we can always write the waveform F(t) as the following sum:

F(t) = a₁sin(ωt) + a₂sin(2ωt) + a₃sin(3ωt) + … + aₙsin(nωt) + …

[If this is your first reading of my post, and the formula shies you away, please try again. I am writing most of my posts with teenage kids in mind, and especially this one. So I will not use anything else than simple arithmetic in this post: no integrals, no complex numbers, no logarithms. Just a bit of geometry. That's all. So, yes, you should go through the trouble of trying to understand this formula. The only thing that you may have some trouble with is ω, i.e. angular frequency: it's the frequency expressed in radians per time unit, rather than oscillations per second, so ω = 2π·f = 2π/T, with f the frequency as you know it (i.e. oscillations per second) and T the period of the wave.]

I also noted that the wavelength of these component waves (λ) is determined by the length of the string (L), and by its length only: λ₁ = 2L, λ₂ = L, λ₃ = (2/3)·L.
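The harmonic sum above can be synthesized directly. This is a sketch of mine, not from the post; the amplitude choices are arbitrary illustrations:

```python
import math

def waveform(t, amplitudes, omega=2.0 * math.pi):
    """F(t) = a1*sin(w*t) + a2*sin(2*w*t) + ... for a given list of amplitudes."""
    return sum(a_n * math.sin(n * omega * t)
               for n, a_n in enumerate(amplitudes, start=1))

# Sample one period of a mixture whose amplitudes fall off as 1/n
# (a hypothetical choice; this particular mixture approximates a sawtooth).
samples = [waveform(k / 1000.0, [1.0 / n for n in range(1, 9)])
           for k in range(1000)]
```

Since every harmonic completes a whole number of cycles over one period of the fundamental, the composite waveform repeats with the fundamental's period, whatever the amplitudes are.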
So these wavelengths do not depend on the material of the string, or its tension. At any point in time (so keeping t constant, rather than x, as we did in the equation above), the component waves look like sine curves fitting the string, with relative wavelengths 1, 1/2, 1/3, …, 1/n, etcetera [the original post illustrated the first few of these].

That the wavelengths of the harmonics of any actual string only depend on its length is an amazing result in light of the complexities behind: a simple wound guitar string, for example, is not simple at all (just click the link here for a quick introduction to guitar string construction). Simple piano wire isn't simple either: it's made of high-carbon steel, i.e. a very complex metallic alloy. In fact, you should never think any material is simple: even the simplest molecular structures are very complicated things. Hence, it's quite amazing that all these systems are actually linear systems and that, despite the underlying complexity, those wavelength ratios form a simple harmonic series, i.e. a simple reciprocal function y = 1/x, as illustrated below.

A simple harmonic series? Hmm… I can't resist noting that the harmonic series is, in fact, a mathematical beast. While its terms approach zero as x (or n) increases, the series itself is divergent. So it's not like 1 + 1/2 + 1/4 + 1/8 + … + 1/2ⁿ + …, which adds up to 2. Divergent series don't add up to any specific number. Even Leonhard Euler – the most famous mathematician of all times, perhaps – struggled with this. In fact, as late as in 1826, another famous mathematician, Niels Henrik Abel (in light of the fact he died at age 26 (!), his legacy is truly amazing), exclaimed that a series like this was "an invention of the devil", and that it should not be used in any mathematical proof. But then God intervened through Abel's contemporary Augustin-Louis Cauchy 🙂 who finally cracked the nut by rigorously defining the mathematical concept of both convergent as well as divergent series, and equally rigorously determining their possibilities and limits in mathematical proofs.
In fact, while medieval mathematicians had already grasped the essentials of modern calculus and, hence, had already given some kind of solution to Zeno's paradox of motion, Cauchy's work is the full and final solution to it. But I am getting distracted, so let me get back to the main story.

More remarkable than the wavelength series itself is its implication for the respective energy levels of all these modes. The material of the string, its diameter, its tension, etc. will determine the speed with which the wave travels up and down the string. [Yes, that's what it does: you may think the string oscillates up and down, and it does, but the waveform itself travels along the string. In fact, as I explained in my previous post, we've got two waves traveling simultaneously: one going one way and the other going the other.] For a specific string, that speed (i.e. the wave velocity) is some constant, which we'll denote by c. Now, c is, obviously, the product of the wavelength (i.e. the distance that the wave travels during one oscillation) and its frequency (i.e. the number of oscillations per time unit), so c = λ·f. Hence, f = c/λ and, therefore, f₁ = (1/2)·c/L, f₂ = (2/2)·c/L, f₃ = (3/2)·c/L, etcetera. More in general, we write fₙ = (n/2)·c/L. In short, the frequencies are equally spaced. To be precise, they are all (1/2)·c/L apart.

Now, the energy of a wave is directly proportional to its frequency, always, in classical as well as in quantum mechanics. For example, for photons, we have the Planck-Einstein relation: E = h·f = ħ·ω. So that relation states that the energy is proportional to the (light) frequency of the photon, with h (i.e. the Planck constant) as the constant of proportionality. [Note that ħ is not some different constant. It's just the 'angular equivalent' of h, so we have to use ħ = h/2π when frequencies are expressed as angular frequencies, i.e. radians per second rather than hertz.]
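The equal spacing of the mode frequencies fₙ = (n/2)·c/L is trivial to check in code. The wave speed and string length below are hypothetical example values of mine, not measurements from the post:

```python
def harmonic_frequencies(c_wave, length, n_modes):
    """Frequencies f_n = (n/2) * c / L of a string fixed at both ends."""
    return [(n / 2.0) * c_wave / length for n in range(1, n_modes + 1)]

# Illustrative numbers: wave speed 329.6 m/s on a 0.5 m string.
freqs = harmonic_frequencies(c_wave=329.6, length=0.5, n_modes=5)
spacings = [b - a for a, b in zip(freqs, freqs[1:])]
```

Every element of `spacings` equals the fundamental (1/2)·c/L, which is the "equally spaced" claim in the text.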
Because of that proportionality, the energy levels of our simple string are also equally spaced and, hence, inserting another proportionality constant, which I'll denote by a instead of h (because it's some other constant, obviously), we can write:

Eₙ = a·fₙ = (n/2)·a·c/L

Now, if we denote the fundamental frequency f₁ = (1/2)·c/L, quite simply, by f (and, likewise, its angular frequency as ω), then we can re-write this as:

Eₙ = n·a·f = n·ā·ω (ā = a/2π)

This formula is exactly the same as the formula used in quantum mechanics when describing atoms as atomic oscillators, and why and how they radiate light (think of the blackbody radiation problem, for example), as illustrated below: Eₙ = n·ħ·ω = n·h·f. The only difference between the formulas is the proportionality constant: instead of a, we have Planck's constant here: h, or ħ when the frequency is expressed as an angular frequency. This grand result – that the energy levels associated with the various states or modes of a system are equally spaced – is referred to as the equipartition theorem in physics, and it is what connects classical and quantum physics in a very deep and fundamental way.

In fact, because they're nothing but proportionality constants, the value of both a and h depends on our units. If we'd use the so-called natural units, i.e. equating ħ to 1, the energy formula becomes Eₙ = n·ω, and, hence, our unit of energy and our unit of frequency become one and the same. In fact, we can, of course, also re-define our time unit such that the fundamental frequency ω is one, i.e. one oscillation per (re-defined) time unit, so then we have the following remarkable formula:

Eₙ = n

Just think about it for a moment: what I am writing here is E₀ = 0, E₁ = 1, E₂ = 2, E₃ = 3, E₄ = 4, etcetera. Isn't that amazing? I am describing the structure of a system here – be it an atom emitting or absorbing photons, or a macro-thing like a guitar string – in terms of its basic components (i.e.
its modes), and it's as simple as counting: 0, 1, 2, 3, 4, etc. You may think I am not describing anything real here, but I am. We cannot do whatever we wanna do: some stuff is grounded in reality, and in reality only—not in the math. Indeed, the fundamental frequency of our guitar string – which we used as our energy unit – is a property of the string, so that's real: it's not just some mathematical construct: it depends on the string's length (which determines its wavelength), and it also depends on the propagation speed of the wave, which depends on other basic properties of the string, such as its material, its diameter, and its tension. Likewise, the fundamental frequency of our atomic oscillator is a property of the atomic oscillator or, to use a much grander term, a property of the Universe. That's why h is a fundamental physical constant. So it's not like π or e. [When reading physics as a freshman, it's always useful to clearly distinguish physical constants (like Avogadro's number, for example) from mathematical constants (like Euler's number).]

The theme that emerges here is what I've been saying a couple of times already: it's all about structure, and the structure is amazingly simple. It's really that equipartition theorem only: all you need to know is that the energy levels of the modes of a system – any system really: an atom, a molecular system, a string, or the Universe itself – are equally spaced, and that the space between the various energy levels depends on the fundamental frequency of the system. Moreover, if we use natural units, and also re-define our time unit so the fundamental frequency is equal to 1 (so the frequencies of the other modes are 2, 3, 4 etc), then the energy levels are just 0, 1, 2, 3, 4 etc. So, yes, God kept things extremely simple. 🙂 In order to not cause too much confusion, I should add that you should read what I am writing very carefully: I am talking about the modes of a system.
The system itself can have any energy level, of course, so there is no discreteness at the level of the system. I am not saying that we don't have a continuum there. We do. What I am saying is that its energy level can always be written as a (potentially infinite) sum of the energies of its components, i.e. its fundamental modes, and those energy levels are discrete. In quantum-mechanical systems, their spacing is h·f, so that's the product of Planck's constant and the fundamental frequency. For our guitar, the spacing is a·f (or, using angular frequency, ā·ω: it's the same amount). But that's it really. That's the structure of the Universe. 🙂

Let me conclude by saying something more about a. What information does it capture? Well… All of the specificities of the string (like its material or its tension) determine the fundamental frequency f and, hence, the energy levels of the basic modes of our string. So a has nothing to do with the particularities of our string, of our system in general. However, we can, of course, pluck our string very softly or, conversely, give it a big jolt. So our a coefficient is not related to the string as such, but to the total energy of our string. In other words, a is related to those amplitudes a₁, a₂, etc. in our F(t) = a₁sin(ωt) + a₂sin(2ωt) + a₃sin(3ωt) + … + aₙsin(nωt) + … wave equation. How exactly? Well… Based on the fact that the total energy of our wave is equal to the sum of the energies of all of its components, I could give you some formula. However, that formula does use an integral. It's an easy integral: energy is proportional to the square of the amplitude, and so we're integrating the square of the wave function over the length of the string. But then I said I would not have any integral in this post, and so I'll stick to that. In any case, even without the formula, you know enough now. For example, one of the things you should be able to reflect on is the relation between a and h.
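The integral the author alludes to (and deliberately keeps out of the post) can still be sketched numerically. This is my own illustration: by the orthogonality of the sine modes, integrating the square of F(x) = a₁sin(πx/L) + a₂sin(2πx/L) + … over [0, L] gives (L/2)·(a₁² + a₂² + …), so the "energy" of the wave separates into a sum over the modes. The amplitudes below are arbitrary example values.

```python
import math

def square_integral(amplitudes, length=1.0, n=20000):
    """Midpoint-rule integral of F(x)^2 over [0, L] for
    F(x) = a1*sin(pi*x/L) + a2*sin(2*pi*x/L) + ...
    By orthogonality the exact value is (L/2) * sum of squared amplitudes."""
    h = length / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        F = sum(a_m * math.sin(m * math.pi * x / length)
                for m, a_m in enumerate(amplitudes, start=1))
        total += F * F * h
    return total

amps = [1.0, 0.5, 0.25]                  # hypothetical mode amplitudes
exact = 0.5 * sum(a * a for a in amps)   # (L/2) * (a1^2 + a2^2 + a3^2), L = 1
```

The numerical integral agrees with the closed form: cross terms between different modes integrate to zero, which is exactly why the total energy is a sum of per-mode contributions.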
It's got to do with structure, of course. 🙂 But I'll let you think about that yourself. […] Let me help you. Think of the meaning of Planck's constant h. Let's suppose we'd have some elementary 'wavicle', like that elementary 'string' that string theorists are trying to define: the smallest 'thing' possible. It would have some energy, i.e. some frequency. Perhaps it's just one full oscillation. Just enough to define some wavelength and, hence, some frequency indeed. Then that thing would define the smallest time unit that makes sense: it would be the time corresponding to one oscillation. In turn, because of the E = h·f relation, it would define the smallest energy unit that makes sense. So, yes, h is the quantum (or fundamental unit) of energy. It's very small indeed (h = 6.626070040(81)×10⁻³⁴ J·s, so the first significant digit appears only after 33 zeroes behind the decimal point) but that's because we're living at the macro-scale and, hence, we're measuring stuff in huge units: the joule (J) for energy, and the second (s) for time. In natural units, h would be one. [To be precise, physicists prefer to equate ħ, rather than h, to one when talking natural units. That's because angular frequency is more 'natural' as well when discussing oscillations.] What's the conclusion? Well… Our a will be some integer multiple of h. Some incredibly large multiple, of course, but a multiple nevertheless. 🙂

Post scriptum: I didn't say anything about strings in this post or, let me qualify, about those elementary 'strings' that string theorists try to define. Do they exist? Feynman was quite skeptical about it. He was happy with the so-called Standard Model of physics, and he would have been very happy to know that the existence of the Higgs field has been confirmed experimentally (that discovery is what prompted my blog!), because that confirms the Standard Model. The Standard Model distinguishes two types of wavicles: fermions and bosons.
Fermions are matter particles, such as quarks and electrons. Bosons are force carriers, like photons and gluons. I don’t know anything about string theory, but my gut instinct tells me there must be more than just one mathematical description of reality. It’s the principle of duality: concepts, theorems or mathematical structures can be translated into other concepts, theorems or structures. But… Well… We’re not talking equivalent descriptions here: string theory is a different theory, it seems. For a brief but totally incomprehensible overview (for novices at least), click on the following link, provided by the C.N. Yang Institute for Theoretical Physics. If anything, it shows I’ve got a lot more to study as I am inching forward on the difficult Road to Reality. 🙂

# The Strange Theory of Light and Matter (II)

If we limit our attention to the interaction between light and matter (i.e. the behavior of photons and electrons only—so we’re not talking quarks and gluons here), then the ‘crazy ideas’ of quantum mechanics can be summarized as follows:

1. At the atomic or sub-atomic scale, we can no longer look at light as an electromagnetic wave. It consists of photons, and photons come in blobs. Hence, to some extent, photons are ‘particle-like’.
2. At the atomic or sub-atomic scale, electrons don’t behave like particles. For example, if we send them through a slit that’s small enough, we’ll observe a diffraction pattern. Hence, to some extent, electrons are ‘wave-like’.

In short, photons aren’t waves, but they aren’t particles either. Likewise, electrons aren’t particles, but they aren’t waves either. They are neither. The weirdest thing of all, perhaps, is that, while light and matter are two very different things in our daily experience – light and matter are opposite concepts, I’d say, just like particles and waves are opposite concepts – they look pretty much the same in quantum physics: they are both represented by a wavefunction.
Let me immediately make a little note on terminology here. The term ‘wavefunction’ is a bit ambiguous, in my view, because it makes one think of a real wave, like a water wave, or an electromagnetic wave. Real waves are described by real-valued wave functions representing, for example, the motion of a ball on a spring, or the displacement of a gas (e.g. air) as a sound wave propagates through it, or – in the case of an electromagnetic wave – the strength of the electric and magnetic field. You may have questions about the ‘reality’ of fields, but electromagnetic waves – i.e. the classical description of light – are quite ‘real’ too, even if:

1. Light doesn’t travel in a medium (like water or air: there is no aether), and
2. The magnitudes of the electric and magnetic field (they are usually denoted by E and B) depend on your reference frame: if you calculate the fields using a moving coordinate system, you will get a different mixture of E and B.

Therefore, E and B may not feel very ‘real’ when you look at them separately, but they are very real when we think of them as representing one physical phenomenon: the electromagnetic interaction between particles. So the E and B mix is, indeed, a dual representation of one reality. I won’t dwell on that, as I’ve done that in another post of mine.

How ‘real’ is the quantum-mechanical wavefunction?

The quantum-mechanical wavefunction is not like any of these real waves. In fact, I’d rather use the term ‘probability wave’ but, apparently, that’s used only by bloggers like me 🙂 and so it’s not very scientific. That’s for a good reason, because it’s not quite accurate either: the wavefunction in quantum mechanics represents probability amplitudes, not probabilities. So we should, perhaps, be consistent and term it a ‘probability amplitude wave’ – but then that’s too cumbersome obviously, so the term ‘probability wave’ may be confusing, but it’s not so bad, I think. Amplitudes and probabilities are related as follows: 1.
Probabilities are real numbers between 0 and 1: they represent the probability of something happening, e.g. a photon moves from point A to B, or a photon is absorbed (and emitted) by an electron (i.e. a ‘junction’ or ‘coupling’, as you know). 2. Amplitudes are complex numbers, or ‘arrows’ as Feynman calls them: they have a length (or magnitude) and a direction. 3. We get the probabilities by taking the (absolute) square of the amplitudes.

So photons aren’t waves, but they aren’t particles either. Likewise, electrons aren’t particles, but they aren’t waves either. They are neither. So what are they? We don’t have words to describe what they are. Some use the term ‘wavicle’ but that doesn’t answer the question, because who knows what a ‘wavicle’ is? So we don’t know what they are. But we do know how they behave. As Feynman puts it, when comparing the behavior of light and then of electrons in the double-slit experiment—struggling to find language to describe what’s going on: “There is one lucky break: electrons behave just like light.” He says so because of that wave function: the mathematical formalism is the same, for photons and for electrons. Exactly the same? […] But that’s such a weird thing to say, isn’t it? We can’t help thinking of light as waves, and of electrons as particles. They can’t be the same. They’re different, aren’t they? They are.

Scales and senses

To some extent, the weirdness can be explained because the scale of our world is not atomic or sub-atomic. Therefore, we ‘see’ things differently. Let me say a few words about the instrument we use to look at the world: our eye. Our eye is peculiar. The retina has two types of receptors: the so-called cones are used in bright light, and distinguish color, but when we are in a dark room, the so-called rods become sensitive, and it is believed that they actually can detect a single photon of light.
However, neural filters only allow a signal to pass to the brain when at least five photons arrive within less than a tenth of a second. A tenth of a second is, roughly, the averaging time of our eye. So, as Feynman puts it: “If we were evolved a little further so we could see ten times more sensitively, we wouldn’t have this discussion—we would all have seen very dim light of one color as a series of intermittent little flashes of equal intensity.” In other words, the ‘particle-like’ character of light would have been obvious to us. Let me make a few more remarks here, which you may or may not find useful. The sense of ‘color’ is not something ‘out there’: colors, like red or brown, are experiences in our eye and our brain. There are ‘pigments’ in the cones (cones are the receptors that work only if the intensity of the light is high enough) and these pigments absorb the light spectrum somewhat differently, as a result of which we ‘see’ color. Different animals see different things. For example, a bee can distinguish between white paper using zinc white versus lead white, because the two reflect light differently in the ultraviolet spectrum, which the bee can see but we can’t. Bees can also tell the direction of the sun without seeing the sun itself, because they are sensitive to polarized light, and the scattered light of the sky (i.e. the blue sky as we see it) is polarized. The bee can also notice flicker up to 200 oscillations per second, while we see it only up to 20, because our averaging time is like a tenth of a second, which is short for us, but the averaging time of the bee is much shorter still. So we cannot see the quick leg movements and/or wing vibrations of bees, but the bee can! Sometimes we can’t see any color. For example, we see the night sky in ‘black and white’ because the light intensity is very low, and so it’s our rods, not the cones, that process the signal, and these rods can’t ‘see’ color.
So those beautiful color pictures of nebulae are not artificial (although the pictures are often enhanced). It’s just that the camera that is used to take those pictures (film or, nowadays, digital) is much more sensitive than our eye. Regardless, color is a quality which we add to our experience of the outside world ourselves. What’s out there are electromagnetic waves with this or that wavelength (or, what amounts to the same, this or that frequency). So when critics of the exact sciences say so much is lost when looking at (visible) light as an electromagnetic wave in the range of 430 to 790 terahertz, they’re wrong. Those critics will say that physics reduces reality. That is not the case. What’s going on is that our senses process the signal that they are receiving, especially when it comes to vision. As Feynman puts it: “None of the other senses involves such a large amount of calculation, so to speak, before the signal gets into a nerve that one can make measurements on. The calculations for all the rest of the senses usually happen in the brain itself, where it is very difficult to get at specific places to make measurements, because there are so many interconnections. Here, with the visual sense, we have the light, three layers of cells making calculations, and the results of the calculations being transmitted through the optic nerve.” Hence, things like color and all of the other sensations that we have are the object of study of other sciences, including biochemistry and neurobiology, or physiology. For all we know, what’s ‘out there’ is, effectively, just ‘boring’ stuff, like electromagnetic radiation, energy and ‘elementary particles’—whatever they are. No colors. Just frequencies. 🙂

Light versus matter

If we accept the crazy ideas of quantum mechanics, then the what and the how become one and the same. Hence we can say that photons and electrons are a wavefunction somewhere in space.
Photons, of course, are always traveling, because they have energy but no rest mass. Hence, all their energy is in the movement: it’s kinetic, not potential. Electrons, on the other hand, usually stick around some nucleus. And, let’s not forget, they have an electric charge, so their energy is not only kinetic but also potential. But, otherwise, it’s the same type of ‘thing’ in quantum mechanics: a wavefunction, like those below. Why diagrams A and B? It’s just to emphasize the difference between a real-valued wave function and those ‘probability waves’ we’re looking at here (diagrams C to H). A and B represent a mass on a spring, oscillating at more or less the same frequency but with a different amplitude. The amplitude here means the displacement of the mass. The function describing the displacement of a mass on a spring (so that’s diagrams A and B) is an example of a real-valued wave function: it’s a simple sine or cosine function, as depicted below. [Note that a sine and a cosine are the same function really, except for a phase difference of 90°.] Let’s now go back to our ‘probability waves’. Photons and electrons, light and matter… The same wavefunction? Really? How can the sunlight that warms us up in the morning and makes trees grow be the same as our body, or the tree? The light-matter duality that we experience must be rooted in very different realities, mustn’t it? Well… Yes and no. If we’re looking at one photon or one electron only, it’s the same type of wavefunction indeed. The same type… OK, you’ll say. So they are the same family or genus perhaps, as they say in biology. Indeed, both of them are, obviously, being referred to as ‘elementary particles’ in the so-called Standard Model of physics. But so what makes an electron and a photon specific as a species? What are the differences? There are quite a few, obviously: 1. First, as mentioned above, a photon is a traveling wave function and, because it has no rest mass, it travels at the ultimate speed, i.e.
the speed of light (c). An electron usually sticks around or, if it travels through a wire, it travels at very low speeds. Indeed, you may find it hard to believe, but the drift velocity of the free electrons in a standard copper wire is measured in cm per hour, so that’s very slow indeed—and while the electrons in an electron microscope beam may be accelerated up to 70% of the speed of light, and close to c in those huge accelerators, you’re not likely to find an electron microscope or accelerator in Nature. In fact, you may want to remember that a simple thing like electricity going through copper wires in our houses is a relatively modern invention. 🙂 So, yes, those oscillating wave functions in those diagrams above are likely to represent some electron, rather than a photon. To be precise, the wave functions above are examples of standing (or stationary) waves, while a photon is a traveling wave: just extend that sine and cosine function in both directions if you’d want to visualize it or, even better, think of a sine and cosine function in an envelope traveling through space, such as the one depicted below. Indeed, while the wave function of our photon is traveling through space, it is likely to be limited in space because, when everything is said and done, our photon is not everywhere: it must be somewhere. At this point, it’s good to pause and think about what is traveling through space. It’s the oscillation. But what’s the oscillation? There is no medium here, and even if there were some medium (like water or air or something like aether—which, let me remind you, isn’t there!), the medium itself would not be moving, or – I should be precise here – it would only move up and down as the wave propagates through space, as illustrated below. To be fully complete, I should add that we also have longitudinal waves, like sound waves (pressure waves): in that case, the particles oscillate back and forth along the direction of wave propagation.
But you get the point: the medium does not travel with the wave. When talking electromagnetic waves, we have no medium. These E and B vectors oscillate, but it is very wrong to assume they use ‘some core of nearby space’, as Feynman puts it. They don’t. Those field vectors represent a condition at one specific point (admittedly, a point along the direction of travel) in space but, for all we know, an electromagnetic wave travels in a straight line and, hence, we can’t talk about its diameter or so. Still, as mentioned above, we can imagine, more or less, what E and B stand for (we can use field lines to visualize them, for instance), even if we have to take into account their relativity (calculating their values from a moving reference frame results in different mixtures of E and B). But what are those amplitudes? How should we visualize them? The honest answer is: we can’t. They are what they are: two mathematical quantities which, taken together, form a two-dimensional vector, which we square to find a value for a real-life probability, which is something that – unlike the amplitude concept – does make sense to us. Still, that representation of a photon above (i.e. the traveling envelope with a sine and cosine inside) may help us to ‘understand’ it somehow. Again, you absolutely have to get rid of the idea that these ‘oscillations’ would somehow occupy some physical space. They don’t. The wave itself has some definite length, for sure, but that’s a measurement in the direction of travel, which is often denoted as x when discussing uncertainty in its position, for example—as in the famous Uncertainty Principle (Δx·Δp > h). You’ll say: Oh!—but then, at the very least, we can talk about the ‘length’ of a photon, can’t we? So then a photon is one-dimensional at least, not zero-dimensional! The answer is yes and no. I’ve talked about this before and so I’ll be short(er) on it now. A photon is emitted by an atom when an electron jumps from one energy level to another.
It thereby emits a wave train that lasts about 10⁻⁸ seconds. That’s not very long but, taking into account the rather spectacular speed of light (3×10⁸ m/s), that still makes for a wave train with a length of not less than 3 meters. […] That’s quite a length, you’ll say. You’re right. But you forget that light travels at the speed of light and, hence, we will see this length as zero because of the relativistic length contraction effect. So… Well… Let me get back to the question: if photons and electrons are both represented by a wavefunction, what makes them different?

2. A more fundamental difference between photons and electrons is how they interact with each other. From what I’ve written above, you understand that probability amplitudes are complex numbers, or ‘arrows’, or ‘two-dimensional vectors’. [Note that all of these terms have precise mathematical definitions and so they’re actually not the same, but the difference is too subtle to matter here.] Now, there are two ways of combining amplitudes, which are referred to as ‘positive’ and ‘negative’ interference respectively. I should immediately note that there’s actually nothing ‘positive’ or ‘negative’ about the interference: we’re just putting two arrows together, and there are two ways to do that. That’s all. The diagrams below show you these two ways. You’ll say: there are four! However, remember that we square an arrow to get a probability. Hence, the direction of the final arrow doesn’t matter when we’re taking the square: we get the same probability. It’s the direction of the individual amplitudes that matters when combining them. So the square of A+B is the same as the square of –(A+B) = –A+(–B) = –A–B. Likewise, the square of A–B is the same as the square of –(A–B) = –A+B. These are the only two logical possibilities for combining arrows. I’ve written ad nauseam about this elsewhere: see my post on amplitudes and statistics, and so I won’t go into too much detail here.
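The arrow algebra above is easy to verify with a few lines of code. A minimal sketch (the two arrows A and B are made-up numbers): flipping the final arrow, or rotating both arrows by the same angle, leaves the squared length – i.e. the probability – unchanged; only the angle between the arrows matters.

```python
import cmath

# Two made-up 'arrows' (probability amplitudes): a length and an angle each
A = cmath.rect(0.6, 0.3)   # length 0.6, angle 0.3 radians
B = cmath.rect(0.8, 1.1)   # length 0.8, angle 1.1 radians

p_plus  = abs(A + B) ** 2  # combine one way ... then square the final arrow
p_minus = abs(A - B) ** 2  # ... or combine the other way

# Flipping the FINAL arrow doesn't change the probability:
assert abs(-(A + B)) ** 2 == p_plus
# ... and neither does rotating BOTH arrows by the same angle: only the
# angle BETWEEN them (the phase difference) matters
rot = cmath.exp(1j * 0.7)
p_rotated = abs(rot * A + rot * B) ** 2
print(p_plus, p_minus, p_rotated)   # p_rotated equals p_plus (up to rounding)
```

So there really are only two distinct outcomes: the ‘plus’ combination and the ‘minus’ combination.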
Or, in case you’d want something less than a full mathematical treatment, I can refer you to my previous post also, where I talked about the ‘stopwatch’ and the ‘phase’: the convention for the stopwatch is to have its hand turn clockwise (obviously!) while, in quantum physics, the phase of a wave function will turn counterclockwise. But so that’s just convention and it doesn’t matter, because it’s the phase difference between two amplitudes that counts. To use plain language: it’s the difference in the angles of the arrows, and so that difference is just the same if we reverse the direction of both arrows (which is equivalent to putting a minus sign in front of the final arrow). OK. Let me get back to the lesson. The point is: this logical or mathematical dichotomy distinguishes bosons (i.e. force-carrying ‘particles’, like photons, which carry the electromagnetic force) from fermions (i.e. ‘matter-particles’, such as electrons and quarks, which make up protons and neutrons). Indeed, the so-called ‘positive’ and ‘negative’ interference leads to two very different behaviors:

1. The probability of getting a boson where there are already n present is n+1 times stronger than it would be if there were none before.
2. In contrast, the probability of getting two electrons into exactly the same state is zero.

The behavior of photons makes lasers possible: we can pile zillions of photons on top of each other, and then release all of them in one powerful burst. [The ‘flickering’ of a laser beam is due to the quick succession of such light bursts. If you want to know how it works in detail, check my post on lasers.] The behavior of electrons is referred to as Pauli’s exclusion principle: it is only because real-life electrons can have one of two spin polarizations (i.e.
two opposite directions of angular momentum, which are referred to as ‘up’ or ‘down’, but they might as well have been referred to as ‘left’ or ‘right’) that we find two electrons (instead of just one) in any atomic or molecular orbital. So, yes, while both photons and electrons can be described by a similar-looking wave function, their behavior is fundamentally different indeed. How is that possible? Adding and subtracting ‘arrows’ is a very similar operation, isn’t it? It is and it isn’t. From a mathematical point of view, I’d say: yes. From a physics point of view, it’s obviously not very ‘similar’, as it does lead to these two very different behaviors: the behavior of photons allows for laser shows, while the behavior of electrons explains (almost) all the peculiarities of the material world, including us walking into doors. 🙂 If you want to check it out for yourself, just check Feynman’s Lectures for more details on this or, else, re-read my posts on it indeed.

3. Of course, there are even more differences between photons and electrons than the two key differences I mentioned above. Indeed, I’ve simplified a lot when I wrote what I wrote above. The wavefunctions of electrons in orbit around a nucleus can take very weird shapes, as shown in the illustration below—and please do google a few others if you’re not convinced. As mentioned above, they’re so-called standing waves, because they occupy a well-defined position in space only, but standing waves can look very weird. In contrast, traveling plane waves, or envelope curves like the one above, are much simpler. In short: yes, the mathematical representation of photons and electrons (i.e. the wavefunction) is very similar, but photons and electrons are very different animals indeed.
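The add-versus-subtract dichotomy can be made concrete with a toy two-particle calculation (the single-particle amplitudes below are made-up numbers, and this is a sketch of the standard symmetrization argument, not of any specific experiment): for identical bosons we add the ‘direct’ and ‘exchanged’ arrows, for identical fermions we subtract them.

```python
import cmath

# Two identical particles heading for two detectors, X and Y. The amplitude
# for either particle to reach a given detector depends only on the detector
# (i.e. both would end up in the same state), so the two ways of pairing
# particles with detectors give the same product of arrows:
amp_X = cmath.rect(0.5, 0.2)   # made-up amplitude to arrive at detector X
amp_Y = cmath.rect(0.5, 0.9)   # made-up amplitude to arrive at detector Y

direct    = amp_X * amp_Y      # particle 1 -> X and particle 2 -> Y
exchanged = amp_Y * amp_X      # the swapped alternative: 1 -> Y and 2 -> X

p_boson   = abs(direct + exchanged) ** 2   # bosons: add the two arrows
p_fermion = abs(direct - exchanged) ** 2   # fermions: subtract them

# Indistinguishable alternatives make direct == exchanged, so the boson
# probability is enhanced (4x one single way) and the fermion one vanishes
print(p_boson, p_fermion)      # p_fermion is exactly 0.0
```

That vanishing difference is the exclusion principle in miniature, and the factor-four enhancement is the piling-up behavior that makes lasers possible.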
Potentiality and interconnectedness

I guess that, by now, you agree that quantum theory is weird but, as you know, quantum theory does explain all of the stuff that couldn’t be explained before: “It works like a charm”, as Feynman puts it. In fact, he’s often quoted as having said the following: “It is often stated that of all the theories proposed in this century, the silliest is quantum theory. Some say the only thing that quantum theory has going for it, in fact, is that it is unquestionably correct.” Silly? Crazy? Uncommon-sensy? Truth be told, you do get used to thinking in terms of amplitudes after a while. And, when you get used to them, those ‘complex’ numbers are no longer complicated. 🙂 Most importantly, when one thinks long and hard enough about it (as I am trying to do), it somehow all starts making sense. For example, we’ve done away with dualism by adopting a unified mathematical framework, but the distinction between bosons and fermions still stands: an ‘elementary particle’ is either this or that. There are no ‘split personalities’ here. So the dualism just pops up at a different level of description, I’d say. In fact, I’d go one step further and say it pops up at a deeper level of understanding. But what about the other assumptions in quantum mechanics? Some of them don’t make sense, do they? Well… I struggled for quite a while with the assumption that, in quantum mechanics, anything is possible really. For example, a photon (or an electron) can take any path in space, and it can travel at any speed (including speeds that are lower or higher than light). The probability may be extremely low, but it’s possible. Now that is a very weird assumption. Why? Well… Think about it. If you enjoy watching soccer, you’ll agree that flying objects (I am talking about the soccer ball here) can have amazing trajectories.
Spin, lift, drag, whatever—the result is a weird trajectory, like the one below: But, frankly, a photon taking the ‘southern’ route in the illustration below? What are the ‘wheels and gears’ there? There’s nothing sensible about that route, is there? In fact, there are at least three issues here:

1. First, you should note that strange curved paths in the real world (such as the trajectories of billiard or soccer balls) are possible only because there’s friction involved—between the felt of the pool table cloth and the ball, or between the balls, or, in the case of soccer, between the ball and the air. There’s no friction in the vacuum. Hence, in empty space, all things should go in a straight line only.
2. While it’s quite amazing what’s possible, in the real world that is, in terms of ‘weird trajectories’, even the weirdest trajectories of a billiard or soccer ball can be described by a ‘nice’ mathematical function. We obviously can’t say the same of that ‘southern route’ which a photon could follow, in theory that is. Indeed, you’ll agree the function describing that trajectory cannot be ‘nice’. So even if we’d allow all kinds of ‘weird’ trajectories, shouldn’t we limit ourselves to ‘nice’ trajectories only? I mean: it doesn’t make sense to allow the photons traveling from your computer screen to your retina to take some trajectory to the Sun and back, does it?
3. Finally, and most fundamentally perhaps, even if we would assume that there’s some mechanism combining (a) internal ‘wheels and gears’ (such as spin or angular momentum) with (b) felt or air or whatever medium to push against, what would be the mechanism determining the choice of the photon in regard to these various paths? In Feynman’s words: How does the photon ‘make up its mind’?

Feynman answers these questions, fully or partially (I’ll let you judge), when discussing the double-slit experiment with photons: “Saying that a photon goes this or that way is false.
I still catch myself saying, “Well, it goes either this way or that way,” but when I say that, I have to keep in mind that I mean in the sense of adding amplitudes: the photon has an amplitude to go one way, and an amplitude to go the other way. If the amplitudes oppose each other, the light won’t get there—even though both holes are open.” It’s probably worth recalling the results of that experiment here—if only to help you judge whether or not Feynman fully answers those questions above! The set-up is shown below. We have a source S, two slits (A and B), and a detector D. The source sends photons out, one by one. In addition, we have two special detectors near the slits, which may or may not detect a photon, depending on whether or not they’re switched on as well as on their accuracy. First, we close one of the slits, and we find that 1% of the photons goes through the other (so that’s one photon for every 100 photons that leave S). Now, we open both slits to study interference. You know the results already:

1. If we switch the detectors off (so we have no way of knowing where the photon went), we get interference. The interference pattern depends on the distance between A and B and varies from 0% to 4%, as shown in diagram (a) below. That’s pretty standard. As you know, classical theory can explain that too, assuming light is an electromagnetic wave. But so we have blobs of energy – photons – traveling one by one. So it’s really that double-slit experiment with electrons, or whatever other microscopic particles (as you know, they’ve done these interference experiments with large molecules as well—and they get the same result!). We get the interference pattern by using those quantum-mechanical rules to calculate probabilities: we first add the amplitudes, and it’s only when we’re finished adding those amplitudes that we square the resulting arrow to get the final probability.

2. If we switch those special detectors on, and if they are 100% reliable (i.e.
all photons going through are being detected), then our photon suddenly behaves like a particle, instead of as a wave: it will go through one of the slits only, i.e. either through A, or, alternatively, through B. So the two special detectors never go off together. Hence, as Feynman puts it: we shouldn’t think there is some “sneaky way that the photon divides in two and then comes back together again.” It’s one way or the other, and there’s no interference: the detector at D goes off 2% of the time, which is the simple sum of the probabilities for A and B (i.e. 1% + 1%).

3. When the special detectors near A and B are not 100% reliable (and, hence, do not detect all photons going through), we have three possible final conditions: (i) A and D go off, (ii) B and D go off, and (iii) D goes off alone (none of the special detectors went off). In that case, we have a final curve that’s a mixture, as shown in diagrams (c) and (d) below. We get it using the same quantum-mechanical rules: we add amplitudes first, and then we square to get the probabilities.

Now, I think you’ll agree with me that Feynman doesn’t answer my (our) question in regard to the ‘weird paths’. In fact, all of the diagrams he uses assume straight or nearby paths. Let me re-insert two of those diagrams below, to show you what I mean. So where are all the strange non-linear paths here? Let me, in order to make sure you get what I am saying here, insert that illustration with the three crazy routes once again. What we’ve got above (Figures 33 and 34) is not like that. Not at all: we’ve got only straight lines there! Why? The answer to that question is easy: the crazy paths don’t matter because their amplitudes cancel each other out, and so that allows Feynman to simplify the whole situation and show all the relevant paths as straight lines only. Now, I struggled with that for quite a while. Not because I can’t see the math or the geometry involved. No.
Feynman does a great job showing why those amplitudes cancel each other out indeed (if you want a summary, see my previous post once again). My ‘problem’ is something else. It’s hard to phrase it, but let me try: why would we even allow for the logical or mathematical possibility of ‘weird paths’ (and let me again insert that stupid diagram below) if our ‘set of rules’ ensures that the truly ‘weird’ paths (like that photon traveling from your computer screen to your eye doing a detour taking it to the Sun and back) cancel each other out anyway? Does that respect Occam’s Razor? Can’t we devise some theory including ‘sensible’ paths only? Of course, I am just an autodidact with limited time, and I know hundreds (if not thousands) of the best scientists have thought long and hard about this question and, hence, I readily accept the answer is quite simply: no. There is no better theory. I accept that answer, ungrudgingly, not only because I think I am not as smart as those scientists but also because, as I pointed out above, one can’t explain any path that deviates from a straight line really, as there is no medium, so there are no ‘wheels and gears’. The only path that makes sense is the straight line, and that’s only because… Well… Thinking about it… We think the straight path makes sense because we have no good theory for any of the other paths. Hmm… So, from a logical point of view, assuming that the straight line is the only reasonable path is actually pretty random too. When push comes to shove, we have no good theory for the straight line either! You’ll say I’ve just gone crazy. […] Well… Perhaps you’re right. 🙂 But… Somehow, it starts to make sense to me.
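My ‘weeding out’ intuition can actually be simulated. Here’s a toy sum-over-paths (all numbers are made up): each path from source to detector, with a kink at height y on an intermediate line, contributes a unit arrow whose angle is proportional to the path length. The arrows for paths near the straight line all point roughly the same way, while the arrows for the big detours spin around so fast that they cancel in pairs.

```python
import numpy as np

# Toy sum over paths: light goes from (-1, 0) to (1, 0) via a kink at (0, y).
# Each path contributes a unit 'arrow' whose angle grows with the path length.
k = 2000.0                                  # made-up wavenumber (sets the 'clock speed')
y, dy = np.linspace(-1.0, 1.0, 200_001, retstep=True)
length = 2.0 * np.sqrt(1.0 + y**2)          # length of the kinked path through (0, y)
arrows = np.exp(1j * k * length)

def contribution(lo, hi):
    """Add up the arrows for all paths whose kink lies between lo and hi."""
    m = (y >= lo) & (y <= hi)
    return np.sum(arrows[m]) * dy

near = contribution(-0.1, 0.1)   # paths close to the straight line
far  = contribution(0.6, 0.8)    # an equally wide band of big detours

# Near the straight line the arrows point roughly the same way and add up;
# far from it they turn so fast that they cancel almost completely.
print(abs(near), abs(far))
```

So the ‘small core of nearby space’ falls out of the arithmetic: the crazy paths are allowed, but they contribute next to nothing.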
We allow for everything and then weed out the crazy paths using our interference theory, and so we end up with what we end up with: some kind of vague idea of “light not really traveling in a straight line but ‘smelling’ all of the neighboring paths around it and, hence, using a small core of nearby space”—as Feynman puts it. Hmm… It brings me back to Richard Feynman’s introduction to his wonderful little book, in which he says we should just be happy to know how Nature works and not aspire to know why it works that way. In fact, he’s basically saying that, when it comes to quantum mechanics, the ‘how’ and the ‘why’ are one and the same, so asking ‘why’ doesn’t make sense, because we know ‘how’. He compares quantum theory with the system of calculation used by the Maya priests, which was based on a system of bars and dots, which helped them to do complex multiplications and divisions, for example. He writes the following about it: “The rules were tricky, but they were a much more efficient way of getting an answer to complicated questions (such as when Venus would rise again) than by counting beans.” When I first read this, I thought the comparison was flawed: if a common Maya Indian did not want to use the ‘tricky’ rules of multiplication and what have you (or, more likely, if he didn’t understand them), he or she could still resort to counting beans. But how do we count beans in quantum mechanics? We have no ‘simpler’ rules than those weird rules about adding amplitudes and taking the (absolute) square of complex numbers so… Well… We actually are counting beans here then:

1. We allow for any possibility—any path: straight, curved or crooked. Anything is possible.
2. But all those possibilities are inter-connected. Also note that every path has a mirror image: for every route ‘south’, there is a similar route ‘north’, so to say, except for the straight line, which is a mirror image of itself.
3. And then we have some clock ticking. Time goes by.
It ensures that the paths that are too far removed from the straight line cancel each other. [Of course, you’ll ask: what is too far? But I answered that question – convincingly, I hope – in my previous post: it’s not about the ‘number of arrows’ (as suggested in the caption under that Figure 34 above), but about the frequency and, hence, the ‘wavelength’ of our photon.] 4. And so… Finally, what’s left is a limited number of possibilities that interfere with each other, which results in what we ‘see’: light seems to use a small core of space indeed–a limited number of nearby paths. You’ll say… Well… That still doesn’t ‘explain’ why the interference pattern disappears with those special detectors or – what amounts to the same – why the special detectors at the slits never click simultaneously. You’re right. How do we make sense of that? I don’t know. You should try to imagine what happens for yourself. Everyone has his or her own way of ‘conceptualizing’ stuff, I’d say, and you may well be content to just accept all of the above without trying to ‘imagine’ what’s really happening when a ‘photon’ goes through one or both of those slits. In fact, that’s the most sensible thing to do: you should not try to imagine what happens and just follow the crazy calculus rules. However, when I think about it, I do have some image in my head. The image is of one of those ‘touch-me-not’ weeds. I quickly googled one of these images, but I couldn’t quite find what I am looking for: it would be more like something that, when you touch it, curls up in a little ball. In any case… You know what I mean, I hope. You’ll shake your head now and solemnly confirm that I’ve gone mad. Touch-me-not weeds? What’s that got to do with photons? Well… It’s obvious you and I cannot really imagine what a photon looks like. But I think of it as a blob of energy indeed, which is inseparable, and which effectively occupies some space (in three dimensions that is).
I also think that, whatever it is, it actually does travel through both slits, because, as it interferes with itself, the interference pattern depends on the space between the two slits as well as on the width of those slits. In short, the whole ‘geometry’ of the situation matters, and so the ‘interaction’ is some kind of ‘spatial’ thing. [Sorry for my awfully imprecise language here.] Having said that, I think it’s detected by one detector only because only one of them can sort of ‘hook’ it, somehow. Indeed, because it’s interconnected and inseparable, it’s the whole blob that gets hooked, not just one part of it. [You may or may not imagine that the detector that’s got the best hold of it gets it, but I think that’s pushing the description too much.] In any case, the point is that a photon is surely not like a lizard dropping its tail while trying to escape. Perhaps it’s some kind of unbreakable ‘string’ indeed – and sorry for summarizing string theory so unscientifically here – but then a string oscillating in dimensions we can’t imagine (or in some dimension we can’t observe, as the Kaluza-Klein theory suggests). It’s something, for sure, and something that stores energy in some kind of oscillation, I think. What it is, exactly, we can’t imagine, and we’ll probably never find out—unless we accept that the how of quantum mechanics is not only the why, but also the what. 🙂 Does this make sense? Probably not but, if anything, I hope it fired your imagination at least. 🙂

# Planck’s constant (II)

My previous post was tough. Tough for you–if you’ve read it. But tough for me too. 🙂 The blackbody radiation problem is complicated but, when everything is said and done, what the analysis says is that the ‘equipartition theorem’ in the kinetic theory of gases (or the ‘theorem concerning the average energy of the center-of-mass motion’, as Feynman terms it) is not correct.
That equipartition theorem basically states that, in thermal equilibrium, energy is shared equally among all of its various forms. For example, the average kinetic energy per degree of freedom in the translational motion of a molecule should equal that of its rotational motions. The equipartition theorem is also quite precise: it states that the mean energy, for each atom or molecule, for each degree of freedom, is kT/2. Hence, that’s the (average) energy the 19th century scientists also assigned to the atomic oscillators in a gas. However, the discrepancy between the theoretical and the empirical results shows that adding atomic oscillators–as radiators and absorbers of light–to the system (a box of gas that’s being heated) is not just a matter of adding additional ‘degrees of freedom’ to the system. It can’t be analyzed in ‘classical’ terms: the actual spectrum of blackbody radiation shows that these atomic oscillators do not absorb, on average, an amount of energy equal to kT/2. Hence, they are not just another ‘independent direction of motion’. So what are they then? Well… Who knows? I don’t. But, as I didn’t quite go through the full story in my previous post, the least I can do is to try to do that here. It should be worth the effort. In Feynman’s words: “This was the first quantum-mechanical formula ever known, or discussed, and it was the beautiful culmination of decades of puzzlement.” And it does not involve complex numbers or wave functions, so that’s another reason why looking at the detail is kind of nice. 🙂

Discrete energy levels and the nature of h

To solve the blackbody radiation problem, Planck assumed that the permitted energy levels of the atomic harmonic oscillator were equally spaced, at ‘distances’ ħω0 apart from each other. That’s what’s illustrated below. Now, I don’t want to make too many digressions from the main story, but this En = nħω0 formula obviously deserves some attention.
First note that it immediately shows why the dimension of ħ is expressed in joule-seconds (J·s), or electronvolt-seconds (eV·s): we’re multiplying it with a frequency, so that’s something expressed per second (hence, its dimension is s–1), in order to get a measure of energy: joules or, because of the atomic scale, electronvolts. [The eV is just a (much) smaller measure than the joule, but it amounts to the same: 1 eV ≈ 1.6×10–19 J.] One thing to note is that the equal spacing consists of distances equal to ħω0, not of ħ. Hence, while h, or ħ (ħ is the constant to be used when the frequency is expressed in radians per second, rather than oscillations per second, so ħ = h/2π), is now being referred to as the quantum of action (das elementare Wirkungsquantum in German), Planck referred to it as a Hilfsgrösse only (that’s why he chose the h as a symbol, it seems), i.e. an auxiliary constant only: the actual quantum of action is, of course, ΔE, i.e. the difference between the various energy levels, which is the product of ħ and ω0 (or of h and ν0, if we express frequency in oscillations per second rather than as an angular frequency). Hence, Planck (and later Einstein) did not assume that an atomic oscillator emits or absorbs packets of energy as tiny as ħ or h, but packets of energy as big as ħω0 or, what amounts to the same (ħω0 = (h/2π)(2πν0) = hν0), hν0. Just to give an example, the frequency of sodium light (ν) is 500×1012 Hz, and so its energy is E = hν. That’s not a lot–about 2 eV only–but it still packs 500×1012 ‘quanta of action’! Another thing to note is that ω (or ν) is a continuous variable: hence, the assumption of equally spaced energy levels does not imply that energy itself is a discrete variable: light can have any frequency and, hence, we can also imagine photons with any energy level: the only thing we’re saying is that the energy of a photon of a specific color (i.e. a specific frequency ν) will be a multiple of hν.
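The sodium-light example above is easy to check numerically. A quick sketch (rounded values for h and the eV; the 500 THz figure is the one used in the text):

```python
# Energy of a sodium-light photon, E = h * nu.
h = 6.626e-34          # Planck's constant, in J*s (rounded)
eV = 1.602e-19         # one electronvolt, in joules (rounded)
nu = 500e12            # frequency of sodium light, in Hz (value used above)

E_joule = h * nu       # energy of one photon, in joules
E_eV = E_joule / eV    # the same energy, in electronvolts

print(round(E_eV, 2))  # -> 2.07, i.e. the "about 2 eV" quoted above
```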
Probability assumptions

The second key assumption Planck made as he worked towards a solution of the blackbody radiation problem was that the probability (P) of occupying a level of energy E is P(E) ∝ e^(–E/kT). OK… Why not? But what is this assumption really? You’ll think of some ‘bell curve’, of course. But… No. That wouldn’t make sense. Remember that the energy has to be positive. The general shape of this P(E) curve is shown below. The highest probability density is near E = 0, and then it goes down as E gets larger, with kT determining the slope of the curve (just take the derivative). In short, this assumption basically states that higher energy levels are not so likely, and that very high energy levels are very unlikely. Indeed, this formula implies that the relative chance, i.e. the probability of being in state E1 relative to the chance of being in state E0, is P1/P0 = e^(–(E1–E0)/kT) = e^(–ΔE/kT). Now, P1 is n1/N and P0 is n0/N and, hence, we find that n1 must be equal to n0e^(–ΔE/kT). What this means is that the atomic oscillator is less likely to be in a higher energy state than in a lower one. That makes sense, doesn’t it? I mean… I don’t want to criticize those 19th century scientists but… What were they thinking? Did they really imagine that infinite energy levels were as likely as… Well… More down-to-earth energy levels? I mean… A mechanical spring will break when you overload it. Hence, I’d think it’s pretty obvious those atomic oscillators cannot be loaded with just about anything, can they? Garbage in, garbage out: of course, that theoretical spectrum of blackbody radiation didn’t make sense! Let me copy Feynman now, as the rest of the story is pretty straightforward: Now, we have a lot of oscillators here, and each is a vibrator of frequency ω0. Some of these vibrators will be in the bottom quantum state, some will be in the next one, and so forth. What we would like to know is the average energy of all these oscillators.
To find out, let us calculate the total energy of all the oscillators and divide by the number of oscillators. That will be the average energy per oscillator in thermal equilibrium, and will also be the energy that is in equilibrium with the blackbody radiation and that should go in the equation for the intensity of the radiation as a function of the frequency, instead of kT. [See my previous post: that equation is I(ω) = (ω²kT)/(π²c²).] Thus we let N0 be the number of oscillators that are in the ground state (the lowest energy state); N1 the number of oscillators in the state E1; N2 the number that are in state E2; and so on. According to the hypothesis (which we have not proved) that in quantum mechanics the law that replaced the probability e^(–P.E./kT) or e^(–K.E./kT) in classical mechanics is that the probability goes down as e^(–ΔE/kT), where ΔE is the excess energy, we shall assume that the number N1 that are in the first state will be the number N0 that are in the ground state, times e^(–ħω/kT). Similarly, N2, the number of oscillators in the second state, is N2 = N0e^(–2ħω/kT). To simplify the algebra, let us call e^(–ħω/kT) = x. Then we simply have N1 = N0x, N2 = N0x², …, Nn = N0xⁿ. The total energy of all the oscillators must first be worked out. If an oscillator is in the ground state, there is no energy. If it is in the first state, the energy is ħω, and there are N1 of them. So N1ħω, or ħωN0x, is how much energy we get from those. Those that are in the second state have 2ħω, and there are N2 of them, so N2·2ħω = 2ħωN0x² is how much energy we get, and so on. Then we add it all together to get Etot = N0ħω(0+x+2x²+3x³+…). And now, how many oscillators are there? Of course, N0 is the number that are in the ground state, N1 in the first state, and so on, and we add them together: Ntot = N0(1+x+x²+x³+…). Thus the average energy is the ratio Etot/Ntot = ħω(0+x+2x²+3x³+…)/(1+x+x²+x³+…). Now the two sums which appear here we shall leave for the reader to play with and have some fun with.
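Those two sums are indeed easy to ‘play with’ numerically. A quick sketch (energies measured in units of kT, so the numbers are dimensionless; `n_max` just truncates the infinite series at a point where the remaining terms are negligible):

```python
import math

# Planck's average oscillator energy, computed directly from the sums:
# levels E_n = n*hbar_omega, occupation N_n = N0 * x**n, with x = exp(-hbar_omega/kT).
def average_energy(hbar_omega, kT, n_max=5000):
    x = math.exp(-hbar_omega / kT)
    total_energy = sum(n * hbar_omega * x**n for n in range(n_max))
    total_count = sum(x**n for n in range(n_max))
    return total_energy / total_count

# The closed form the sums lead to: hbar_omega / (exp(hbar_omega/kT) - 1).
def planck_average(hbar_omega, kT):
    return hbar_omega / (math.exp(hbar_omega / kT) - 1)

# The truncated sums reproduce the closed form:
print(abs(average_energy(1.0, 1.0) - planck_average(1.0, 1.0)) < 1e-9)  # -> True
# And the classical limit: as hbar_omega -> 0, the average approaches kT:
print(abs(average_energy(0.01, 1.0) - 1.0) < 0.01)                      # -> True
```

The second check is exactly Feynman’s remark quoted further down: the expression approaches kT as ω → 0.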
When we are all finished summing and substituting for x in the sum, we should get—if we make no mistakes in the sum—the average energy ⟨E⟩ = ħω/(e^(ħω/kT) – 1). Feynman concludes as follows: “This, then, was the first quantum-mechanical formula ever known, or ever discussed, and it was the beautiful culmination of decades of puzzlement. Maxwell knew that there was something wrong, and the problem was, what was right? Here is the quantitative answer of what is right instead of kT. This expression should, of course, approach kT as ω → 0 or as T → ∞.” It does, of course. And so Planck’s analysis does result in a theoretical I(ω) curve that matches the observed I(ω) curve as a function of both temperature (T) and frequency (ω). But what is it, then? What’s the equation describing the dotted curves? It’s Planck’s radiation law: I(ω) = ħω³/[π²c²(e^(ħω/kT) – 1)]. I’ll just quote Feynman once again to explain the shape of those dotted curves: “We see that for a large ω, even though we have ω³ in the numerator, there is an e raised to a tremendous power in the denominator, so the curve comes down again and does not “blow up”—we do not get ultraviolet light and x-rays where we do not expect them!”

Is the analysis necessarily discrete?

One question I can’t answer, because I just am not strong enough in math, is whether or not there would be any other way to derive the actual blackbody spectrum. I mean… This analysis obviously makes sense and, hence, provides a theory that’s consistent and in accordance with experiment. However, the question whether or not it would be possible to develop another theory, without having recourse to the assumption that energy levels in atomic oscillators are discrete and equally spaced, with the ‘distance’ between them equal to hν0, is not easy to answer. I surely can’t, as I am just a novice, but I can imagine smarter people than me have thought about this question. The answer must be negative, because I don’t know of any other theory: quantum mechanics obviously prevailed.
Still… I’d be interested to see the alternatives that must have been considered.

Post scriptum: The “playing with the sums” is a bit confusing. The key to the formula above is the substitution of (0+x+2x²+3x³+…)/(1+x+x²+x³+…) by 1/[(1/x)–1] = 1/[e^(ħω/kT)–1]. Now, the denominator 1+x+x²+x³+… is the Maclaurin series for 1/(1–x), so dividing by it amounts to multiplying by (1–x). So we have: (0+x+2x²+3x³+…)/(1+x+x²+x³+…) = (0+x+2x²+3x³+…)(1–x) = x+2x²+3x³+… –x²–2x³–3x⁴–… = x+x²+x³+x⁴+… = –1+(1+x+x²+x³+…) = –1 + 1/(1–x) = x/(1–x). Note the tricky bit: if x = e^(–ħω/kT), then e^(ħω/kT) is x^(–1) = 1/x, and so we have (1/x)–1 in the denominator of that (mean) energy formula, not x–1. Now 1/[(1/x)–1] = 1/[(1–x)/x] = x/(1–x), indeed, and so the formula comes out alright.

# Photons as strings

In my previous post, I explored, somewhat jokingly, the grey area between classical physics and quantum mechanics: light as a wave versus light as a particle. I did so by trying to picture a photon as an electromagnetic transient traveling through space, as illustrated below. While actual physicists would probably deride my attempt to think of a photon as an electromagnetic transient traveling through space, the idea illustrates the wave-particle duality quite well, I feel. Understanding light is the key to understanding physics. Light is a wave, as Thomas Young proved to the Royal Society of London in 1803, thereby demolishing Newton’s corpuscular theory. But its constituents, photons, behave like particles. According to modern-day physics, both were right. Just to put things in perspective: the thickness of the note card which Young used to split the light – ordinary sunlight entering his room through a pinhole in a window shutter – was 1/30 of an inch, or approximately 0.85 mm. Hence, in essence, this is a double-slit experiment with the two slits being separated by a distance of almost 1 millimeter.
That’s enormous as compared to modern-day engineering tolerance standards: what was thin then, is obviously not considered to be thin now. Scale matters. I’ll come back to this. Young’s experiment (from www.physicsclassroom.com) The table below shows that the ‘particle character’ of electromagnetic radiation becomes apparent when its frequency is a few hundred terahertz, like the sodium light example I used in my previous post: sodium light, as emitted by sodium lamps, has a frequency of 500×1012 oscillations per second and, therefore (the relation between frequency and wavelength is very straightforward: their product is the velocity of the wave, so for light we have the simple λf = c equation), a wavelength of 600 nanometer (600×10–9 meter). However, whether something behaves like a particle or a wave also depends on our measurement scale: 0.85 mm was thin in Young’s time, and so it was a delicate experiment then but now, it’s a standard classroom experiment indeed. The theory of light as a wave would hold until more delicate equipment refuted it. Such equipment came with another sense of scale. It’s good to remind oneself that Einstein’s “discovery of the law of the photoelectric effect”, which explained the photoelectric effect as the result of light energy being carried in discrete quantized packets of energy, now referred to as photons, goes back to 1905 only, and that the experimental apparatus which could measure it was not much older. So waves behave like particles if we look at them close enough. Conversely, particles behave like waves if we look at them close enough. So there is this zone where they are neither, the zone for which we invoke the mathematical formalism of quantum mechanics or, to put it more precisely, the formalism of quantum electrodynamics: that “strange theory of light and Matter”, as Feynman calls it. Let’s have a look at how particles became waves. 
It should not surprise us that the experimental apparatuses needed to confirm that electrons–or matter in general–can actually behave like a wave are more recent than the 19th century apparatuses which led Einstein to develop his ‘corpuscular’ theory of light (i.e. the theory of light as photons). The engineering tolerances involved are daunting. Let me be precise here. To be sure, the phenomenon of electron diffraction (i.e. electrons going through one slit and producing a diffraction pattern on the other side) had already been confirmed experimentally in 1927, in the famous Davisson-Germer experiment. I say ‘famous’ for three reasons. First, because electron diffraction was a weird thing to contemplate at the time. Second, because it confirmed the de Broglie hypothesis only a few years after Louis de Broglie had advanced it. And, third, because Davisson and Germer had never intended to set it up to detect diffraction: it was pure coincidence. In fact, the observed diffraction pattern was the result of a laboratory accident, and Davisson and Germer weren’t aware of other, conscious, attempts to prove the de Broglie hypothesis. 🙂 […] OK. I am digressing. Sorry. Back to the lesson. The nanotechnology that was needed to confirm Feynman’s 1965 thought experiment on electron interference – i.e. electrons going through two slits and interfering with each other (rather than producing some diffraction pattern as they go through one slit only) and, equally significant as an experimental result, interfering with themselves as they go through the slit(s) one by one! – was only developed over the past decades. In fact, it was only in 2008 (and again in 2012) that the experiment was carried out exactly the way Feynman describes it in his Lectures. It is useful to think of what such experiments entail from a technical point of view. Have a look at the illustration below, which shows the set-up.
The insert in the upper-left corner shows the two slits which were used in the 2012 experiment: they are each 62 nanometer wide – that’s 62×10–9 m! – and the distance between them is 272 nanometer, or 0.272 micrometer. [Just to be complete: they are 4 micrometer tall (4×10–6 m), and the thing in the middle of the slits is just a little support (150 nm) to make sure the slit width doesn’t vary.] The second inset (in the upper-right corner) shows the mask that can be moved to close one or both slits partially or completely. The mask is 4.5 µm wide × 20 µm tall. Please do take a few seconds to contemplate the technology behind this feat: a nanometer is a millionth of a millimeter, so that’s a billionth of a meter, and a micrometer is a millionth of a meter. To imagine how small a nanometer is, you should imagine dividing one millimeter in ten, and then one of these tenths in ten again, and again, and once again, again, and again. In fact, you actually cannot imagine that, because we live in the world we live in and, hence, our mind is used only to addition (and subtraction) when it comes to comparing sizes and – to a much more limited extent – to multiplication (and division): our brain is, quite simply, not wired to deal with exponentials and, hence, it can’t really ‘imagine’ these incredible (negative) powers. So don’t think you can imagine it really, because one can’t: in our mind, these scales exist only as mathematical constructs. They don’t correspond to anything we can actually make a mental picture of. The electron beam consisted of electrons with an (average) energy of 600 eV. That’s not an awful lot: 8.5 times more than the energy of an electron in orbit in an atom, whose energy would be some 70 eV, so the acceleration before they went through the slits was relatively modest. I’ve calculated the corresponding de Broglie wavelength of these electrons in another post (Re-Visiting the Matter-Wave, April 2014), using the de Broglie equations: f = E/h and λ = h/p.
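That wavelength is easy to verify. A quick sketch (rounded constants; it uses the relativistic relation (pc)² = E² – (mc²)², so the electron’s rest energy is included, and λ = h/p is written as hc/(pc)):

```python
import math

# De Broglie wavelength of the 600 eV electrons used in the 2012 experiment.
# Relativistic route: (pc)^2 = (mc^2 + K)^2 - (mc^2)^2, lambda = h/p = hc/(pc).
mc2 = 511e3        # electron rest energy, in eV (rounded)
K = 600.0          # kinetic energy of the beam electrons, in eV
hc = 1239.84       # h*c, in eV*nm (rounded)

pc = math.sqrt((mc2 + K)**2 - mc2**2)   # momentum times c, in eV
wavelength_nm = hc / pc                 # de Broglie wavelength, in nm

print(round(wavelength_nm * 1000, 2))   # -> 50.05, i.e. the 50 picometer quoted
```

At 600 eV the kinetic energy is tiny compared to the 511 keV rest energy, so the non-relativistic λ = h/√(2mK) would give virtually the same number.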
And, of course, you could just google the article on the experiment and read about it, but it’s a good exercise, and actually quite simple: just note that you’ll need to express the energy in joule (not in eV) to get it right. Also note that you need to include the rest mass of the electron in the energy. I’ll let you try it (or else just go to that post of mine). You should find a de Broglie wavelength of 50 picometer for these electrons, so that’s 50×10–12 m. While that wavelength is less than a thousandth of the slit width (62 nm), and about 5,500 times smaller than the space between the two slits (272 nm), the interference effect was unambiguous in the experiment. I advise you to google the results yourself (or read that April 2014 post of mine if you want a summary): the experiment was done at the University of Nebraska-Lincoln in 2012.

Electrons and X-rays

To put everything in perspective: 50 picometer is like the wavelength of X-rays, and you can google similar double-slit experiments for X-rays: they also lose their ‘particle behavior’ when we look at them at this tiny scale. In short, scale matters, and the boundary between ‘classical physics’ (electromagnetics) and quantum physics (wave mechanics) is not clear-cut. If anything, it depends on our perspective, i.e. what we can measure, and we seem to be shifting that boundary constantly. In what direction? Downwards, obviously: we’re devising instruments that measure stuff at smaller and smaller scales, and what’s happening is that we can ‘see’ typical ‘particles’, including hard radiation such as gamma rays, as local wave trains. Indeed, the next step is clear-cut evidence for interference between gamma rays.

Energy levels of photons

We would not associate low-frequency electromagnetic waves, such as radio or radar waves, with photons. But light in the visible spectrum, yes. Obviously. […] Isn’t that an odd dichotomy?
If we see that, on a smaller scale, particles start to look like waves, why would the reverse not be true? Why wouldn’t we analyze radio or radar waves, on a much larger scale, as a stream of very (I should say extremely) low-energy photons? I know the idea sounds ridiculous, because the energies involved would be ridiculously low indeed. Think about it. The energy of a photon is given by the Planck relation: E = hν = hc/λ. For visible light, with wavelengths ranging from 800 nm (red) to 400 nm (violet or indigo), the photon energies range between 1.5 and 3 eV. Now, the shortest wavelengths for radar waves are in the so-called millimeter band, i.e. they range from 1 mm to 1 cm. A wavelength of 1 mm corresponds to a photon energy of 0.00124 eV. That’s close to nothing, of course, and surely not the kind of energy level that we can currently detect. But you get the idea: there is a grey area between classical physics and quantum mechanics, and it’s our equipment–notably the scale of our measurements–that determines where that grey area begins and where it ends, and that grey area seems to become larger and larger as the sensitivity of our equipment improves. What do I want to get at? Nothing much. Just some awareness of scale, as an introduction to the actual topic of this post, and that’s some thoughts on a rather primitive string theory of photons. What!? Yes. Purely speculative, of course. 🙂

Photons as strings

I think my calculations in the previous post, as primitive as they were, actually provide quite some food for thought. If we’d treat a photon in the sodium light band (i.e. the light emitted by sodium, from a sodium lamp for instance) just like any other electromagnetic pulse, we would find it’s a pulse of some 10 meter long.
We also made sense of this incredibly long distance by noting that, if we’d look at it as a particle (which is what we do when analyzing it as a photon), it should have zero size, because it moves at the speed of light and, hence, the relativistic length contraction effect ensures we (or any observer in whatever reference frame really, because light always moves at the speed of light, regardless of the reference frame) will see it as a zero-size particle. Having said that, and knowing damn well that we have to treat the photon as an elementary particle, I would think it’s very tempting to think of it as a vibrating string. Huh? Yes. Let me copy that graph again. The assumption I started with is a standard one in physics, and not something that you’d want to argue with: photons are emitted when an electron jumps from a higher to a lower energy level and, for all practical purposes, this emission can be analyzed as the emission of an electromagnetic pulse by an atomic oscillator. I’ll refer you to my previous post – as silly as it is – for details on these basics: the atomic oscillator has a Q, and so there’s damping involved and, hence, the assumption that the electromagnetic pulse resembles a transient should not sound ridiculous. Because the electric field as a function in space is the ‘reversed’ image of the oscillation in time, there is nothing blasphemous about the suggested shape. Just go along with it for a while. First, we need to remind ourselves that what’s vibrating here is nothing physical: it’s an oscillating electromagnetic field. That being said, in my previous post, I toyed with the idea that the oscillation could actually also represent the photon’s wave function, provided we use a unit for the electric field that ensures that the area under the squared curve adds up to one, so as to normalize the probability amplitudes.
Hence, I suggested that the field strength over the length of this string could actually represent the probability amplitudes, provided we choose an appropriate unit to measure the electric field. But then I was joking, right? Well… No. Why not consider it? An electromagnetic oscillation packs energy, and the energy is proportional to the square of the amplitude of the oscillation. Now, the probability of detecting a particle is related to its energy, and such probability is calculated by taking the (absolute) square of probability amplitudes. Hence, mathematically, this makes perfect sense. It’s quite interesting to think through the consequences, and I hope I will (a) understand enough of physics and (b) find enough time for this—one day! One interesting thing is that the field strength (i.e. the magnitude of the electric field vector) is a real number. Hence, if we equate these magnitudes with probability amplitudes, we’d have real probability amplitudes, instead of complex-valued ones. That’s not a very fundamental issue. It probably indicates we should also take into account the fact that the E vector also oscillates in the other direction that’s normal to the direction of propagation, i.e. the y-direction (assuming that the z-axis is the direction of propagation). To put it differently, we should take the polarization of the light into account. The figure below–which I took from Wikipedia again (by far the most convenient place to shop for images and animations: what would I do without it?)–shows how the electric field vector moves in the xy-plane indeed, as the wave travels along the z-axis. So… Well… I still have to figure it all out, but the idea surely makes sense. Another interesting thing to think about is how the collapse of the wave function would come about. If we think of a photon as a string, it must have some ‘hooks’ which could cause it to ‘stick’ or ‘collapse’ into a ‘lump’ as it hits a detector. What kind of hook?
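For what it’s worth, that normalization idea is easy to sketch numerically. The damped sinusoid below is a made-up stand-in for the field profile of such a transient (decay rate and wavelength chosen arbitrarily, purely for illustration), not a real photon profile; the point is just the rescaling step:

```python
import math

# Sketch of the normalization suggested above: rescale a field profile E(z) so
# that the area under the squared curve is one, as a probability density requires.
# The damped sinusoid is an arbitrary stand-in transient, not a real photon profile.
N = 20000
dz = 10.0 / N                      # a 10-metre-long pulse, as estimated in the post
z = [i * dz for i in range(N)]
field = [math.exp(-0.3 * zi) * math.sin(20 * zi) for zi in z]

norm_sq = sum(f * f for f in field) * dz     # integral of E(z)^2 over the pulse
psi = [f / math.sqrt(norm_sq) for f in field]

total_probability = sum(p * p for p in psi) * dz
print(round(total_probability, 6))           # -> 1.0: the squared curve now integrates to one
```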
What force would come into play? Well… The interaction between the photon and the photodetector is electromagnetic, but we’re looking for some other kind of ‘hook’ here. What could it be? I have no idea. Having said that, we know that the weakest of all fundamental forces—gravity—becomes much stronger—very much stronger—as the distance becomes smaller and smaller. In fact, it is said that, if we go to the Planck scale, the strength of the force of gravity becomes quite comparable with the other forces. So… Perhaps it’s, quite simply, the equivalent mass of the energy involved that gets ‘hooked’, somehow, as it starts interacting with the photon detector. Hence, when thinking about a photon as an oscillating string of energy, we should also think of that string as having some inseparable (equivalent) mass that, once it’s ‘hooked’, has no other option than to ‘collapse into itself’. [You may note there’s no quantum theory for gravity as yet. I am not sure how, but I’ve got a gut instinct that tells me that may help to explain why a photon consists of one single ‘unbreakable’ lump, although I obviously need to elaborate this argument.] You must be laughing aloud now. A new string theory–really? I know… I know… I haven’t reached sophomore level and I am already wildly speculating… Well… Yes. What I am talking about here has probably nothing to do with current string theories, although my proposed string would also replace the point-like photon by a one-dimensional ‘string’. However, ‘my’ string is, quite simply, an electromagnetic pulse (a transient actually, for reasons I explained in my previous post). Naive? Perhaps. However, I note that the earliest version of string theory is referred to as bosonic string theory, because it only incorporated bosons, which is what photons are. So what? Well… Nothing… I am sure others have thought of this too, and I’ll look into it. It’s surely an idea which I’ll keep in the back of my head as I continue to explore physics.
The idea is just too simple and beautiful to disregard, even if I am sure it must be pretty naive indeed. Photons as ten-meter long strings? Let’s just forget about it. 🙂 Onwards !!! 🙂 Post Scriptum: The key to ‘closing’ this discussion is, obviously, to be found in a full-blown analysis of the relativity of fields. So, yes, I have not done all of the required ‘homework’ on this and the previous post. I apologize for that. If anything, I hope it helped you to also try to think somewhat beyond the obvious. I realize I wasted a lot of time trying to understand the pre-cooked ready-made stuff that’s ‘on the market’, so to say. I still am, actually. Perhaps I should first thoroughly digest Feynman’s Lectures. In fact, I think that’s what I’ll try to do in the next year or so. Sorry for any inconvenience caused. 🙂
http://tex.stackexchange.com/questions/42556/so-style-markups-for-references-in-the-bottom-of-the-page-in-bibtex-ed-document
# SO-style Markups for References in the Bottom of the Page in Bibtex-ed Document?

I often have a long bibliography, and later I find I need a minor notice, like this buying list in a project (not important enough to go into the BibTeX). It feels like overkill to document trivia like a buying-list notice next to more important entries in BibTeX; I just need SO-style references. How can I add the references nicely to the bottom of the page, instead of at the back with author/howpublished/title BibTeX clutter? I am not against BibTeX, but I am looking for more casual referencing besides it.

\begin{itemize}
  \item 4pcs [1] (24USD) % Here [1] and [2] are references to
  \item 2pcs [2] (25USD) % the below thing with the urls.
  \item ...
\end{itemize}

<<echo=FALSE>>
% I am not sure about the parameters here, but they
% could be things like fonts etc.
%
% I want here a Sweave-like tool
% for easy mark-up references that will
% automatically appear at the bottom of the page.
% Then just latexmk to handle compiling?

- This is not clear at all, since both answers below provide something in terms of your first edit. After your edit you want a sweave-like tool? Moreover, please elaborate on what is meant by "SO-style references". –  Werner Jan 28 '12 at 5:30

@Werner: SO = StackOverflow (actually valid on this site for referencing), I am trying to reuse the references on this site in my LaTeX document without needing to reinvent the wheel by always changing from one markup into another. –  hhh Jan 28 '12 at 5:40

I think footnotes would work, but I'm not sure if I'm missing something:

\documentclass[a5paper,12pt]{article} % paper and font chosen to make screenshot smaller
\usepackage{hyperref}
\begin{document}
\begin{itemize}
  \item 4pcs\footnote{\url{http://www.something.com}} (24USD) % illustrative item
\end{itemize}
\end{document}

- I like this because it is succinct +1.
It would be cool if I could somehow hide those "\footnote{\url{...}}" with simply [1] for referencing and [1]: www.something.com for the target, so I could use the same SO-style referencing as on this site, possible? Perhaps some macro (I used the term Sweave-style in trying to clarify this, hopefully not distracting; trying to point out the different environment for parsing). Small usability thing to help writing like in SO. –  hhh Jan 28 '12 at 5:37

I wonder if you might be better off using MarkDown or MultiMarkDown for the document, inserting LaTeX as needed for equations and such. I can't look at the syntax at the moment, but that may solve the problem. –  Mike Renfro Jan 28 '12 at 13:46

Here is an implementation that uses its own counter shorturl to typeset these per-page references as part of the footnotes on the page:

\documentclass{article}
\usepackage[paperheight=200pt]{geometry}% http://ctan.org/pkg/geometry
% The above paperheight configuration is just for this example
\usepackage{url}% http://ctan.org/pkg/url
\usepackage{perpage}% http://ctan.org/pkg/perpage
\MakePerPage{shorturl}% shorturl counter will reset every page
\newcounter{shorturl}\renewcommand{\theshorturl}{\arabic{shorturl}}%
\makeatletter
\newcommand{\shorturl}[1]{%
  \refstepcounter{shorturl}%
  \hbox{\normalfont[\theshorturl]}%
  \insert\footins{%
    \reset@font\footnotesize
    \interlinepenalty\interfootnotelinepenalty
    \splittopskip\footnotesep
    \splitmaxdepth \dp\strutbox
    \floatingpenalty \@MM
    \hsize\columnwidth \@parboxrestore
    \protected@edef\@currentlabel{%
      \csname p@shorturl\endcsname\theshorturl
    }%
    \color@begingroup
    \par\makebox[1.8em][r]{[\theshorturl]:\ }%
    \rule\z@\footnotesep\ignorespaces#1\@finalstrut\strutbox%
    \color@endgroup}%
}
\makeatother
\begin{document}
\begin{itemize}
  \item Some information\footnote{Here is a footnote.}
  \item Some item\shorturl{\url{http://www.something.com}} % illustrative item
\end{itemize}
\end{document}

Loading perpage makes the \shorturl references restart every page. Remove this if it is of no concern.
It would be possible to modify the layout of the URL in the footnote. At the moment, the \shorturl mark within the footnote is right-aligned in a 1.8em box (similar to that of the regular \footnote). geometry was loaded just to make the text and the footnotes visible and close together.
https://www.physicsforums.com/threads/complex-logarithm-rules.268828/
# Complex logarithm rules

1. Nov 2, 2008
### daudaudaudau
Hi. I know that for real numbers log(z) = -log(1/z). Is this also true in general for complex numbers?

2. Nov 2, 2008
### rbj
yup. no promises about the log of 0 or $\infty$.

3. Nov 2, 2008
### jostpuur
But one might need to add $2\pi i$ somewhere sometimes because of some branch-choosing issues.

4. Nov 2, 2008
### jostpuur
If you choose to use a branch $$\log(z) = \log(|z|) + i\textrm{arg}(z),\quad 0\leq \textrm{arg}(z) < 2\pi$$ then for example $$\log(-1+i) = \log(\sqrt{2}) + \frac{3\pi i}{4}$$ and $$\log(\frac{1}{-1+i}) = \log(-\frac{1}{2}(1+i)) = \log(\frac{1}{\sqrt{2}}) + \frac{5\pi i}{4}.$$ So you've got $$\log(-1+i) + \log(\frac{1}{-1+i}) = 2\pi i,$$ in contradiction with your equation. But if you choose the branch so that $$-\pi < \textrm{arg}(z) \leq \pi,$$ then you've got $$\log(-1+i) + \log(\frac{1}{-1+i}) = 0,$$ as your equation stated. Even with this choice of branch still, for example, $$\log(-1) + \log(\frac{1}{-1}) = 2\pi i,$$ so actually the identity is only guaranteed for positive real numbers!

5. Nov 2, 2008
### mathman
The essential point is that ln(1) = $2n\pi i$ with n being any integer.

6. Nov 2, 2008
### daudaudaudau
Thank you for the answers and examples. I understand it much better now.

7. Nov 2, 2008
### jostpuur
You probably remember the trick where one does something like this: $$1 = \sqrt{1} = \cdots = -1$$ with imaginary units. The examples I gave are very similar in nature. Most of the time, a blind use of familiar calculation rules might seem to work, but you never know when something tricky surprises you, if you are not careful.
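The branch dependence above is easy to check numerically. Python's `cmath.log` uses the principal branch with $-\pi < \arg z \leq \pi$, so (a sketch, not from the thread):

```python
import cmath

# With the principal branch, log(z) = -log(1/z) holds off the negative real axis:
z = -1 + 1j
assert abs(cmath.log(z) + cmath.log(1 / z)) < 1e-12

# On the negative real axis it fails: log(-1) = i*pi and 1/(-1) = -1,
# so the sum is 2*pi*i rather than 0 -- exactly jostpuur's last example.
residue = cmath.log(-1) + cmath.log(1 / -1)
print(residue)  # 6.283185307179586j
```

The same caveat applies to any blind use of log identities on complex inputs: the result can be off by a multiple of $2\pi i$ depending on which branch the library picks.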
https://reciprocal.systems/phpBB3/viewtopic.php?f=6&t=392
## Larson's units

This forum is dedicated to the student just starting out with the concepts of the Reciprocal System, or RS2. Questions and clarifications for the RS/RS2 concepts go here; please place new ideas and commentary in the appropriate RS2 fora.

rossum
Posts: 28
Joined: Thu Jan 17, 2013 1:36 am

### Larson's units

Larson somehow (through a long series of inductions) arrived at the conclusion that mass is m = t^3/s^3. I would like to know if there is some way to deduce it from experience, like any other physical law in a deductive science. The metric system (not just SI but also cgs and other conventional systems) can be reduced to a minimum of four independent irreducible units of space, time, mass and current (or their combinations). RS reduces the metric system by two more units - mass and current. Current can be got rid of through the similarity between voltage and force:

F = dp/dt
U = dP/dt

where F is force, dp is the change of momentum, dt the change of time, U is voltage, dP the change of magnetic flux. In order to have the same units, in the second relation current must be speed times some constant. However, for mass I didn't find any such similarity (and I doubt there is one - the playground with relations containing only time and space is quite small). I wonder - can the relation between space, time and mass be somehow shown without "theory contamination"? (E.g. the relations above describe only the experience; they don't represent any particular theory. On the other hand, 'electron' is a result of a theory used to explain certain experience. 'An electron was accelerated by 5V' is a "theory-contaminated" description. It is not necessarily right or wrong, but you have to use some theory to understand the statement.)

Jan

bperet
Posts: 1490
Joined: Thu Jul 22, 2004 1:43 am
Location: 7.5.3.84.70.24.606
Contact:

### mass = dp/dv

Not sure if this is what you are looking for…
From Gustav LeBon, circa 1907, mass is dp/dv, where dp is the change in momentum (called "weight" in those days) and dv is the change in velocity. In electronics, inductance replaces mass with the same units.

Every dogma has its day...

Horace
Posts: 256
Joined: Sat Apr 15, 2006 3:40 pm

### From Gustav LeBon, circa 1907

From Gustav LeBon, circa 1907, mass is dp/dv, where dp is the change in momentum (called "weight" in those days) and dv is change in velocity.

But the cubed exponents s^3/t^3 are not evident from this relation and cannot be gathered from mere observation. His point is how to arrive at that empirically without making theoretical assumptions.

silvio.caggia
Posts: 16
Joined: Sun Sep 07, 2014 5:58 pm

### m=t^3/s^3 from E=mc^2

If I have understood your problem well, you can derive m = t^3/s^3 from Einstein's formula E = mc^2:

m = E/c^2
[m] = (t/s) / (s^2/t^2) = t^3/s^3

Hope it helps

bperet
Posts: 1490
Joined: Thu Jul 22, 2004 1:43 am
Location: 7.5.3.84.70.24.606
Contact:

### Only space is observable; time changes space

But the cubed exponents s^3/t^3 are not evident from this relation and cannot be gathered from mere observation. His point is how to arrive at that empirically without making theoretical assumptions.

Jan gave dp as empirical; velocity is also empirical. Mass is just defining the relationship between the two. (You have your ratio inverted.)

$Mass \left( \frac{t^3}{s^3} \right) = \frac{dp \left( \frac{t^2}{s^2} \right)}{dv \left( \frac{s}{t} \right)}$

In my opinion, mass isn't an empirical quantity; it is one that was invented by scientists as a device to explain equivalent space, a rotational space (imaginary numbers) that cannot be described with linear mathematics, but is still there in observation. Space is linear, time (as a clock) is linear, current (speed, the ratio of space to time) is linear--the only non-linear component is mass, which is why it was considered an irreducible unit.
In space, you start with 0 dimensions and work up to 3, which is why linear space (1D, first step) is predominant in our thought. It is the baseline from which geometry is built. Rotation, however, occurs in time, which is the reciprocal of space. That means it works backwards--you start with 3 dimensions (everything, the inverse of nothing), and work down to 0. In the equivalent space represented by time, 3D is analogous to the spatial 0D--unmanifest. Therefore, the first manifestation we see with rotational spaces is a 2D rotation, momentum (Larson's magnetic rotation), the most predominant. Then you get a 1D rotation (electric) as the next step. These two rotations occur separately in time, but because we only observe the net effect here in space, we see it as a single, 3D rotation that is called "mass." Observation and experience are based solely on changes in space. Mass is in time, therefore inherently unobservable with no direct experience. The only thing "observable" is how time changes space. None of the rotational systems (mass, magnetic, electric) can therefore be empirical from a linear, spatial perspective. Every dogma has its day... Gopi Posts: 146 Joined: Wed Jan 05, 2005 1:58 am ### Units of mass Mass units can be derived in a different way from Larson's: Consider movement - s/t. In the RS where "1" is the datum, the only expression that can oppose or resist a motion is t/s, because (s/t)*(t/s) = 1. Hence, t/s is geometrically polar to s/t, rotational instead of linear, temporal instead of spatial, indirectly observable instead of directly empirical. Since the resistance is independent of spatial direction, it can hold in all three dimensions, giving (t/s)3 or t3/s3. Hence, it can be considered a sort of "friction" to the movement, that casts a shadow for the light. 
1-dimensional "resistance": t/s (ENERGY)
2-dimensional "resistance": t^2/s^2 (MOMENTUM)
3-dimensional "resistance": t^3/s^3 (MASS)

Conventional physics does not have a unit datum, but only a zero datum. So once they had speed s/t, the only "resistance" they could consider was "change in speed magnitude/direction", i.e. dv/dt or acceleration. This means they also opened the door to an unlimited series of possible motions:

s/t, s/t^2, s/t^3, s/t^4 ... s/t^n

or:

v, a, a', a'' ... a^(n)

So this led to an infinite series, which Newton and his followers used as wiggle room to apply to all of mechanics, instead of tackling rotation directly. It also led to an indigestible precipitate, mass, and thereby to F=ma. That is why the force ideas are so vague and non-intuitive, and leave several paths untouched. One can ask, for example, why does no one consider all the possible motions:

mv, ma, ma', ma'', ma''', ... ma^(n)

Why stop with just P = mv and F = ma? What about the rest? No answer is given. Besides, what about the opposite series; why isn't the following series popular:

s/t, s^2/t, s^3/t ... etc.?

Because the wiggle room is limited to three dimensions. Besides, no one measures area or volume directly, so you have limited application. But Larson spotted this, and s^3/t especially is nothing but his expanding balloon. Voila! We have the first hint of Scalar motion:

s/t, s/t^2, s/t^3 -> Vectorial Motion
s/t, s^2/t, s^3/t -> Scalar Motion

This helped him get at the reciprocity of space and time, and their consequences. Hope that helps.

rossum
Posts: 28
Joined: Thu Jan 17, 2013 1:36 am

### Relations as description

The beauty of the descriptive approach is in the fact that you don't have to "believe" in mass, momentum, pressure etc., you only have to define them properly: if it gets to e.g. calculating the heat needed to push a piston, you still use the same equation. It is only a means to describe your experience through mathematics, so whether they are "real" or not doesn't make a difference.
Horace is right; I would like to arrive at the dimensions of mass, energy, momentum, moment of inertia, action or any other quantity that has time with a positive exponent, without theoretical assumptions. All the other relations are based on the definition of the concept in question. If we didn't give these concepts names, we could write them down systematically (as Gopi showed). The problem is, if I were to argue that mass is some other combination of space and time (e.g. t^2/s), there is no way to experimentally disprove that. The only difference between science and pseudoscience is that a scientific theory can be disproven by an experiment. (Note that e.g. string theory is in fact pseudoscience.)

Observation and experience are based solely on changes in space. Mass is in time, therefore inherently unobservable, with no direct experience. The only thing "observable" is how time changes space. None of the rotational systems (mass, magnetic, electric) can therefore be empirical from a linear, spatial perspective.

Perhaps not for masses and properties of physical objects, but rotations of real physical objects should have the same properties as the rotations that make up matter. There are some experiments with forced precession that clearly show loss of weight. I guess the answer could be somewhere there. So there might be a way to show it without theoretical assumptions after all...

Jan

rossum
Posts: 28
Joined: Thu Jan 17, 2013 1:36 am

### Summarizing...

Considering how much time I spent thinking about the units problem, I wrote a short article on this - the units of measurement, i.e. what they actually mean and how one can get from regular units to the ones used in RS. Someone might find it helpful... Anyway, is there a review process here? I'll post it right here. If someone has any comment (how to change it, to be more informative) please let me know. If someone wants to add, change, modify or use some part of it I can send the original TEX file...
Jan

Attachments
shortnote_units.pdf
Short note on units of measurement

bperet
Posts: 1490
Joined: Thu Jul 22, 2004 1:43 am
Location: 7.5.3.84.70.24.606
Contact:

### Corrections / suggestions

Typo on page 3: "mapping is quite indirect as, as we saw merging two units"

Page 3: "Consequently, the charge must be equivalent to the distance." Larson, in Basic Properties of Matter, points out that conventional physics' use of "charge" has two different units associated with it, the energy of charge (t/s) and the quantity of charge (s). This arises because they do not have an uncharged version of the electron, as the RS does. The uncharged electron has units of space, and therefore appears as a quantity (number of electrons), versus the charged electron (t/s), where one is using the magnitude of the charge, not the quantity of electrons.

Page 4: typo: "only “displacemes” from the unit level."; "originates from Philip Porter’s speach)", Phillip has 2 "L"s.

What is "Appendix A" for? Perhaps you should include the corresponding natural units there? If you haven't already, read Larson's paper: The Dimensions of Motion. Larson goes over the derivation of many of his natural units there.

Every dogma has its day...

rossum
Posts: 28
Joined: Thu Jan 17, 2013 1:36 am

### Thanks and next version

Thanks for the comments and corrections. Here is the corrected version. The appendix is there to show how differently complex the same concepts are under different mappings.

Jan

Attachments
shortnote_dimensions.pdf
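The exponent bookkeeping used throughout this thread can be mechanized. A minimal sketch (my own illustration, not from the thread or the attached note): represent each quantity as a pair of (space, time) exponents and check that m = E/c^2 lands on t^3/s^3.

```python
# Dimensions as (space_exponent, time_exponent): s/t is (1, -1), t/s is (-1, 1).
def mul(a, b):
    return (a[0] + b[0], a[1] + b[1])

def div(a, b):
    return (a[0] - b[0], a[1] - b[1])

speed = (1, -1)               # s/t
energy = (-1, 1)              # t/s
c_squared = mul(speed, speed) # s^2/t^2

# m = E/c^2  =>  [m] = (t/s) / (s^2/t^2) = t^3/s^3
mass = div(energy, c_squared)
print(mass)                   # (-3, 3), i.e. t^3/s^3

# Momentum = mass * velocity lands on t^2/s^2, matching Gopi's table.
momentum = mul(mass, speed)
print(momentum)               # (-2, 2), i.e. t^2/s^2
```

This only checks consistency between the proposed mappings; as the thread discusses, it cannot by itself decide empirically which mapping is right.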
http://physics.stackexchange.com/questions/33079/rotating-mirror-foucaults-measurement-of-light-speed
Rotating mirror - Foucault's measurement of light speed

Some time ago I came across a secondary web source on the measurement of light speed in water made by Foucault around 1850. I append its redrawn scheme below (light is reflected from the rotating mirror to the fixed one and from the latter again to the rotating one, which slightly rotates in the meantime, so the light ends up in the direction labeled air rather than back in the light source). When Foucault inserted a container filled with water between the mirrors, light was reflected back at a larger angle (water), because light is slower in water. Which velocity exactly is measured in this experiment? Phase velocity (the light source is continuous, similar to an infinite monochromatic wave), group velocity (which usually applies to a wave packet - is such a packet somehow created by the rotation of the mirror?) or another one?

Edit (based on the answer by miceterminator): How would the result (the angle) change in the case of negative group velocity (which, as far as I know, is possible)?

-

It measures the group velocity. With every rotating-mirror experiment you always measure the group velocity. You may have a continuous light beam from the source; however, there is usually an aperture/blind (I don't know the correct term in English) between the rotating mirror and the fixed mirror to prevent light that is not exactly straight from passing through (if the fixed mirror is small enough you don't even need it). So the beam is split in time because it only passes through when the rotating mirror is aligned at the appropriate angle (45° in this case). Then the beam is reflected back, and with a constant angular velocity of the rotating mirror you can see one point slightly away from the beam source. (Depending on the size of your experiment and the angular velocity of the rotating mirror.) Without the blind you would just see a line, which is indeed very useless.
The experiment is also not able to measure the speed of light in air and water simultaneously. You would have to measure it in air first and then put a tank of water in (with sufficiently thin walls).

-

I agree that there must be some sort of highlighting of the measured light beam (say the air one), since the rotating mirror reflects the light coming from the light source to all angles, but I am not sure if an aperture would work. –  Leos Ondra Jul 30 '12 at 10:15

I did this experiment once, so I am pretty sure it works. The smaller the aperture, the more accurate your reading, because your reflected beam gets smaller. However, then you cannot see it as well. That's why one usually uses a laser as the light source. Why do you think that the aperture (essentially a small hole) would not work? –  miceterminator Jul 30 '12 at 11:37

"Why do you think that the aperture (essentially a small hole) would not work?" Because light from the light source can reflect in the air or water direction (and all directions around them) simply by a single reflection from the rotating mirror. How can you distinguish that from the light which went via the fixed mirror and has been reflected from the rotating one twice? –  Leos Ondra Jul 30 '12 at 16:55

Good point. Well, basically you would still get double the intensity on the one point. I was wrong to put a blind in there; it would only double the intensity. In the experiment we put a lens between the rotating mirror and the reflecting mirror, with the focal point of the lens in the rotation center of the rotating mirror. That way the point is a lot brighter, because for a 20° rotation of the rotating mirror (depending on the size of the lens) the light is reflected back. Foucault used a cog with a frequency coupled to the rotating mirror to stop the other light from coming through. –  miceterminator Jul 31 '12 at 6:30

It can be maddening, trying to understand this experiment from many of the "educational" discussions floating around!
They may seem plausible, but if you really think about it, you realize something's missing. The experiment can't possibly work without an aperture, or beam mask, or something to keep the return beam a sharp point instead of a wide smear of light. What's missing is the fact that the fixed mirror should be a spherical mirror, with a radius of curvature (approximately) equal to the distance between the two mirrors. With a curved mirror, it doesn't matter where the rotating mirror happens to point: if the beam manages to hit anywhere on the spherical (fixed) mirror, it always gets reflected back to the same exact point on the rotating mirror. The return beam's impact point changes by only the amount the mirror rotates after it sends the beam to the fixed mirror. No mask is necessary, and the "packet", or "light-bullet", is whatever light hits the fixed mirror during each revolution.

-
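For completeness, the kinematics the answers rely on reduce to one formula: during the round trip 2L/c the mirror turns by ω·2L/c, and the returned beam is deflected by twice the mirror rotation, giving θ = 4ωL/c. A sketch with made-up numbers (500 rev/s and 10 m are illustrative values of my own, not Foucault's):

```python
import math

def light_speed(omega, length, theta):
    """Rotating-mirror estimate: beam deflection theta = 4 * omega * L / c."""
    return 4.0 * omega * length / theta

omega = 2 * math.pi * 500.0           # mirror angular velocity, rad/s
length = 10.0                         # distance between the mirrors, m
c_true = 2.998e8                      # m/s, used only to fake a measurement
theta = 4 * omega * length / c_true   # deflection we'd observe, ~0.42 mrad

print(light_speed(omega, length, theta))  # recovers ~2.998e8 m/s
```

The tiny deflection angle is why the spherical fixed mirror (or some other way of keeping the return spot sharp) matters so much in practice.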
https://quiz.jagranjosh.com/josh/quiz/index.php?attempt_id=7718564&page=1
# Progression

Are you preparing for campus placements, Banking, SSC, IAS, Insurance, Defence or other competitive exams? Then make sure to take some time to practice these Progression questions and answers in Quantitative Aptitude. Only questions that are relevant and likely to be asked in a competitive exam are included. So take these questions and answers, brush up your skills, and practice to stay fully prepared for your exam.

• Q8. Find the 5th term of a GP, if it is given that are in GP.

• Q9. If $\sum _{\text{p}=1}^{\text{n}}{\text{t}}_{\text{p}}=\frac{\text{n}\left(\text{n}+1\right)\left(\text{n}+2\right)\left(\text{n}+3\right)}{8}$, where ${t}_{p}$ denotes the pth term of a series, then $\underset{\text{n}\to \infty }{\mathrm{lim}}\sum _{\text{p}=1}^{\text{n}}\frac{1}{{\text{t}}_{\text{p}}}$ is

• Q10. If then the minimum value of $\frac{1}{{\text{t}}_{1}}+\frac{1}{{\text{t}}_{2}}+\frac{1}{{\text{t}}_{3}}+\dots +\frac{1}{{\text{t}}_{50}}$ is equal to

• Q11. If $1+\text{k}+{\text{k}}^{2}+\dots +{\text{k}}^{\text{p}}=\left(1+\text{k}\right)\left(1+{\text{k}}^{2}\right)\left(1+{\text{k}}^{4}\right)\left(1+{\text{k}}^{8}\right)\left(1+{\text{k}}^{16}\right),$ then the value of p is

• Q12. The sum of k terms of the series $\frac{1}{1·2·3·4}+\frac{1}{2·3·4·5}+\frac{1}{3·4·5·6}+\dots$ is

• Q13. The sum of infinite terms of the series $\frac{9}{{5}^{2}.2.1}+\frac{13}{{5}^{3}.3.2}+\frac{17}{{5}^{4}.4.3}+\dots$ is:

• Q14. If it is given that $\frac{1}{1+\sqrt{\text{x}}},\frac{1}{1-\text{x}},\frac{1}{1-\sqrt{\text{x}}}$ are in AP then the value of x will be
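Q12 is a standard telescoping sum: 1/(n(n+1)(n+2)(n+3)) = (1/3)[1/(n(n+1)(n+2)) − 1/((n+1)(n+2)(n+3))], so the k-term sum collapses to (1/3)[1/6 − 1/((k+1)(k+2)(k+3))]. A quick exact check (my own sketch, not part of the quiz):

```python
from fractions import Fraction

def partial_sum(k):
    """Direct k-term sum of the Q12 series, computed exactly."""
    return sum(Fraction(1, n * (n + 1) * (n + 2) * (n + 3)) for n in range(1, k + 1))

def closed_form(k):
    """Telescoped closed form: (1/3) * (1/6 - 1/((k+1)(k+2)(k+3)))."""
    return Fraction(1, 3) * (Fraction(1, 6) - Fraction(1, (k + 1) * (k + 2) * (k + 3)))

for k in (1, 5, 50):
    assert partial_sum(k) == closed_form(k)
print(closed_form(3))  # 19/360
```

As k grows, the closed form shows the infinite series converges to 1/18.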
https://www.nature.com/articles/ncomms2367?error=cookies_not_supported&code=e56b5f35-aa0c-45eb-a86a-5892c3a2e570
## Introduction Electronic and vibrational coupling in the nanoscale governs many important physical processes, such as energy transfer in photosynthesis1, charge separation in heterojunctions2 and thermal transport through interfaces3. Quantum coupling between nanostructures is also crucial for understanding emerging properties in van der Waals-coupled superstructures, such as multilayer of graphene and superlattice of nanoparticles. Double-walled carbon nanotubes (DWNTs), a coaxial composite of two single-walled carbon nanotubes (SWNTs), provide a unique model system for quantitative study of such nanoscale coupling, because the structural and physical properties of DWNTs are precisely defined individually, but richly varied among different species4,5,6,7,8,9,10,11,12. Specifically, a SWNT is uniquely defined by its chiral indices (n, m). A DWNT, composed of two SWNTs, is fully defined by chiral indices (no, mo)/(ni, mi) of the coaxial outer-/inner-walls10,11,12. Its electronic and vibrational properties, however, can differ profoundly from those of the constituent SWNTs owing to tube–tube interactions. Radial-breathing mode (RBM) oscillation is a signature vibration of one-dimensional nanostructures. It is widely used for characterizing carbon nanotubes13,14,15,16,17, and can be sensitively measured at the single-nanotube level using resonance Raman spectroscopy18,19,20. In SWNTs, the RBM oscillation frequency (ωRBM) uniquely determines the nanotube diameter (D) following a simple scaling law ωRBM=228/D (nm cm−1)21. The RBM behaviour in DWNTs, however, is drastically different owing to strong inter-tube coupling. The coupling effects in DWNTs have been actively investigated for over one decade4,5,6,7,8,9, but they are still poorly understood. 
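The scaling law quoted above makes the link between structure and spectrum concrete. A sketch (the graphene lattice constant a ≈ 0.246 nm is assumed; the diameter formula D = a√(n² + nm + m²)/π is the standard one for chiral indices):

```python
import math

A_LATTICE = 0.246  # graphene lattice constant, nm (assumed)

def diameter(n, m):
    """SWNT diameter in nm from chiral indices (n, m)."""
    return A_LATTICE * math.sqrt(n * n + n * m + m * m) / math.pi

def rbm_frequency(n, m):
    """RBM frequency in cm^-1 from the SWNT scaling law w_RBM = 228 / D."""
    return 228.0 / diameter(n, m)

print(round(diameter(6, 5), 3))       # 0.747 nm
print(round(rbm_frequency(27, 5), 1)) # ~97.7 cm^-1 for a (27, 5) wall
```

Note this single-tube law is exactly what breaks down in DWNTs, where inter-wall coupling shifts both RBM frequencies.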
In this letter, we systematically study quantum-coupled RBM oscillations in chirality-defined DWNTs by simultaneously determining the structural, electronic and vibrational properties of individual nanotubes with combined single-tube electron diffraction11,12, Rayleigh scattering22,23 and Raman scattering18,19,20 techniques. We show that mechanical coupling between the walls leads to collective oscillation modes with concerted inner- and outer-wall vibrations in DWNTs. Oscillation frequencies of the coupled RBM modes can be quantitatively described by a coupled oscillator model, where the coupling originates from the van der Waals force between the inner and outer tubes. Different DWNTs, which have different inner- and outer-wall separations, allow us to probe the inter-tube van der Waals interactions at an effective pressure that can be both positive and negative, at the gigapascal level. The coupled RBM oscillations also exhibit unusual electron–phonon interactions. They always appear in pairs in resonance Raman spectra, because each RBM mode couples to electronic resonances in both walls. Furthermore, Raman amplitudes from inner- and outer-wall excitation channels can interfere quantum mechanically. The relative interference phase between the two Raman channels enabled us to examine the displaced excited-state potential energy surface in SWNTs and its dependence on the nanotube chirality.

## Results

### Experimental design

Figure 1 shows the schematic illustration of our experimental design. We utilized a SiO2/Si wafer with an open slit as our base substrate, on which suspended carbon nanotubes are directly grown (see Methods for more details). The transmission electron microscope (TEM) beam and laser beam can both go through the slit. This design enables the combination of TEM electron diffraction, Rayleigh scattering and Raman scattering techniques to probe the structural, electronic and vibrational properties of the same individual nanotubes.
In addition, the use of as-grown suspended nanotubes excludes substrate effects and ensures that we probe the intrinsic physical properties of the carbon nanotubes.

### Coupled RBM vibrations in DWNTs

Figure 2 displays electron diffraction patterns, Rayleigh scattering and Raman scattering spectra for two representative individual DWNTs. The electron diffraction patterns (Fig. 2a) unambiguously determine the DWNT chiral indices (no, mo)/(ni, mi) to be (27, 5)/(18, 5) and (31, 4)/(15, 13). Electronic transitions of the two DWNTs are obtained from the Rayleigh scattering spectra (insets in Fig. 2b), in which prominent optical resonances can be observed. By comparing the optical resonances in the Rayleigh spectra of the DWNTs to those of the constituent SWNTs, we can assign the resonance peaks to inter-subband electronic transitions of either the inner- or the outer-wall nanotube, because tube–tube interactions in DWNTs produce only a small shift (tens of meV) of the electronic transition energies24.

We measure the RBM oscillations of DWNTs using resonant Raman scattering. Two well-defined RBM peaks are observed in the resonant Raman spectra of the (27, 5)/(18, 5) and (31, 4)/(15, 13) DWNTs (Fig. 2b). The laser excitation energy (1.96 eV) is marked by a dashed line in the figure insets showing the Rayleigh scattering spectra. We have studied 35 DWNTs with well-defined Rayleigh scattering resonances (assignable to each constituent tube) and TEM diffraction data. Of these DWNTs, 13 have one or more electronic transitions in resonance with our Raman excitation lasers. RBM Raman peaks are observed in, and only in, these 13 tubes. Table 1 summarizes their chirality and Raman data. Our DWNTs have relatively large diameters, with inner-wall diameters (Di) >1.5 nm in all the nanotubes. There are two striking features in the RBM Raman data of these (relatively large diameter) DWNTs.
(1) The frequencies of the two RBM oscillations in DWNTs, denoted ωL and ωH, are respectively much higher than the RBM frequencies of the constituent outer-wall (ωo) and inner-wall (ωi) SWNTs (for example, Fig. 2b). If this blue shift were attributed to a simple increase of the 'effective' restoring force, that force would have to increase by as much as 35%. (2) The resonant peaks almost always appear in pairs in the resonant Raman spectra: there are either no RBM peaks (in 21 tubes) or two RBM peaks (in 13 tubes). Only one nanotube has just one RBM peak. This behaviour is quite surprising, because the conventional picture suggests that resonance enhancement of the inner- and outer-wall RBM oscillations is largely independent5,7, so most nanotubes should show a single RBM peak when excited resonantly.

To understand the unusual RBM behaviour in DWNTs, we have to realize that tube–tube interactions in these DWNTs are not small perturbations. Instead, they qualitatively change the DWNT RBM oscillations by strongly mixing the inner- and outer-wall vibrations: the observed lower-frequency (ωL) and higher-frequency (ωH) RBM modes are collective oscillations with concerted in-phase and out-of-phase motion of the two walls.

### Inter-wall van der Waals interactions in DWNTs

The collective RBM oscillations of a unit-length DWNT can be modelled by a coupled mechanical oscillator with five parameters: mi, ki, mo, ko and kc (Fig. 3a)25. The unit-length masses of the inner and outer walls (mi and mo) can be obtained directly from the tube chiral indices, and the unit-length intrinsic force constants of the two walls (ki and ko) can be obtained from the RBM frequencies of isolated SWNTs (Supplementary Note S1). The only unknown parameter is the coupling force constant kc, which characterizes the van der Waals interaction between the two walls of the DWNT. Using a single value of kc for any given DWNT, we can simultaneously reproduce the in-phase (ωL) and out-of-phase (ωH) RBM vibration frequencies (Fig.
3b). For different DWNTs, the coupling constant kc varies significantly. The tube-dependent kc provides a unique way to examine how the van der Waals interaction varies with the separation between the inner- and outer-wall tubes.

The unit-length coupling force constant kc can be expressed approximately through the average unit-area inter-tube van der Waals potential Uvdw as kc ≈ πD̄·∂²Uvdw/∂r², where the mean diameter D̄ = (Di + Do)/2. In our as-grown DWNTs, the separation between the inner and outer tubes, δr = (Do − Di)/2, varies from 0.34 to 0.37 nm and is uniquely determined by the tube chiral indices. Figure 3d displays the data for the unit-area force constant ∂²Uvdw/∂r², which changes by more than a factor of two over the range of inter-tube separations present in our DWNTs. Our results agree well with the van der Waals interaction between unit-area graphene sheets under pressure obtained from compressibility measurements of graphite (solid line) and with its theoretical extrapolation into the negative pressure range (dashed line)26. This comparison shows that the effective pressure between the walls of as-grown DWNTs reaches gigapascals owing to variations in the tube–tube separation26,27. It is interesting to note that DWNTs allow us to readily access negative pressures in the gigapascal range, which is difficult to achieve using conventional approaches. The force constant deduced here is related to the C33 modulus of graphite, and its value is about one order of magnitude larger than that obtained from the shear mode of few-layer graphene, which is related to the C44 modulus of graphite28,29.

### Quantum interference in the resonance Raman process in DWNTs

Unlike separate inner- and outer-SWNT RBM excitations (illustrated in Fig. 4a), the collective DWNT oscillations ωL and ωH contain both inner- and outer-wall motion, and they couple simultaneously to electronic transitions in both walls (Fig. 4b).
Therefore, both coupled RBM oscillations will be resonantly excited if an electronic transition of either wall matches the excitation photon energy. This leads to the unusual behaviour that we observed experimentally: the RBM Raman modes are mostly either not observable (in 21 tubes) or appear in pairs (in 13 tubes) in the resonant Raman scattering spectra of DWNTs.

The collective RBM oscillations in DWNTs couple to the electronic excitations quantum mechanically, which leads to interesting quantum interference between the inner- and outer-wall excitation pathways. With such quantum interference even the non-resonant contribution can become important, and the Raman intensities of the coupled RBM oscillations in DWNTs are described by the superposition of Raman amplitudes from the inner- and outer-wall excitation pathways as IL(H) ∝ |‹ωi|ωL(H)›RiMi + ‹ωo|ωL(H)›RoMo|^2. Here, ‹ωi(o)|ωL(H)› is the inner(outer)-wall component of the coupled RBM mode ωL (ωH), and its value can be calculated directly from the quantized coupled mechanical oscillator model illustrated in Fig. 3a (Supplementary Note S2). Ri(o) and Mi(o) denote, respectively, the electronic resonance factor and the Raman matrix element of the inner(outer)-wall SWNT excitations. The resonance factor Ri(o) can be obtained from the Rayleigh scattering spectra of the DWNTs, which probe directly the optical resonances of both the inner- and outer-wall nanotubes (Supplementary Note S3). The Raman matrix element Mi(o) characterizes how the phonon couples to the photo-excited exciton in the inner(outer)-wall nanotube. For simplicity, we assume that Mi and Mo have the same magnitude and focus on their relative sign, s = sgn(Mi/Mo). This simplification emphasizes the importance of the phase factor, which determines whether the quantum interference is constructive or destructive for the coupled RBM oscillations. To compare with the experiment quantitatively, we use the Raman intensity ratio IL/IH because it does not depend on an absolute determination of the Raman cross-section.
In our approximation, the ratio takes the form IL/IH = |s‹ωi|ωL›Ri + ‹ωo|ωL›Ro|^2 / |s‹ωi|ωH›Ri + ‹ωo|ωH›Ro|^2. Table 1 lists the measured Raman intensity ratio (IL/IH)exp together with calculated results including the quantum interference, (IL/IH)I, and neglecting the interference, (IL/IH)N−I. Good agreement between theory and experiment is achieved when, and only when, the quantum interference effect is fully included using the correct s parameter.

We take a closer look at the Raman quantum interference using the (40, 1)/(22, 14) DWNT as an example (Fig. 4c). The s parameter, which characterizes the relative sign of the Raman matrix elements Mi and Mo, depends sensitively on the chirality of the inner- and outer-wall SWNTs. Its value is related to the relative direction of motion of the inner- and outer-wall nanotubes immediately after optical excitation. A physical picture of this process in the outer- and inner-wall SWNTs is provided by recalling Franck–Condon effects (Fig. 4e): the excited state has a displaced potential energy surface compared with that of the ground state. The displacement is extremely small (~1 fm), but is essential for the exciton–phonon coupling. Upon optical excitation, the SWNT lattice, initially at the ground-state equilibrium configuration, relaxes on the excited-state potential energy surface and sets off the RBM vibration. In the (40, 1)/(22, 14) DWNT, the potential energy surface displacements of the inner and outer walls are opposite, corresponding to s = −1 (refs 30,31). To determine the RBM oscillation interference, we also need to include the resonance phase factor. For the (40, 1)/(22, 14) DWNT, the phase factors are nearly opposite for the (non-resonant) inner-wall and (resonant) outer-wall pathways with 2.33 eV photon excitation (Fig. 4d). The opposite resonance phase factors, together with s = −1, determine that the quantum interference suppresses the Raman intensity of ωH while enhancing that of ωL.
This quantum interference increases the calculated IL/IH ratio from 5.8 to 20; the experimental value of IL/IH is 19 (Table 1). We further plot the calculated RBM Raman intensity of the (40, 1)/(22, 14) DWNT at different excitation energies without (Fig. 4f) and with the quantum interference (Fig. 4g), which displays the variation of the quantum interference effects with the excitation laser energy.

## Discussion

Our observed s parameter shows that the relative displacement of the excited-state potential energy surface varies with the SWNT chirality, and it can be described by a simple family pattern: the excited-state potential energy surface is displaced inwards for transitions on one side of the zigzag cut through the K point of the graphene Brillouin zone, which includes even transitions of mod(n−m, 3)=1 semiconducting, odd transitions of mod(n−m, 3)=2 semiconducting and higher-branch transitions of non-armchair metallic nanotubes. The displacement is outwards for all other transitions. This is the first observation of family behaviour in the excited-state potential energy surface displacement, which had previously been predicted theoretically by ab initio and tight-binding calculations30,31.

The ability to combine structural, electronic and vibrational characterization on the same individual DWNTs allows us to systematically investigate quantum-coupled mechanical oscillations and their unusual interactions with electronic excitations for the first time. We show that the van der Waals interaction, usually treated as a weak perturbation, can actually produce quantum phenomena in coupled nanostructures that were previously observed only in covalently bonded materials. With exact structural determination, we are able to determine the separation-dependent van der Waals coupling between nanotube walls and the excited-state potential energy surfaces of the constituent nanotubes. This approach can also be used to explore other composite nanostructures.
It will enable quantitative understanding of van der Waals interactions and of electronic and vibrational couplings at the nanoscale.

## Methods

In our experiment, we use suspended DWNTs that are free of substrate effects and compatible with both TEM and single-tube optical spectroscopy techniques. Long suspended nanotubes were grown by chemical vapour deposition (CVD) across open slit structures (~30 × 500 μm) fabricated on silicon substrates32. We use Fe nanoparticles as catalysts and methane in hydrogen (CH4:H2 = 1:2) as the gas feedstock, and control the size of the catalyst particles to achieve a selective abundance of DWNTs33. We determine the chiral structures of DWNTs using electron diffraction with nano-focused 80 keV electron beams in a JEOL 2100 TEM11,12. Using the slit edges as markers, we can identify the same individual nanotubes in an optical microscopy setup. To determine the electronic transitions of these individual nanotubes, we use Rayleigh scattering spectroscopy with a fibre-laser-based supercontinuum light source covering the spectral range from 450 to 900 nm (refs 22,23). To determine the RBM vibrations and electron–phonon coupling in these individual nanotubes, we use resonant Raman scattering spectroscopy with laser excitation at a photon energy of 1.96 or 2.33 eV (refs 18,19,20). Our instrumental resolution for the RBM Raman frequency is ~2 cm−1.
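The coupled-oscillator model and the two-pathway Raman interference described above can be sketched numerically. The following illustration uses invented parameter values (including the complex resonance factors), not data from this work:

```python
import numpy as np

def coupled_rbm(m_i, k_i, m_o, k_o, k_c):
    """Frequencies and mode vectors of two mass-spring oscillators
    (inner and outer wall, per unit tube length) coupled by k_c:
        m_i * x_i'' = -k_i * x_i - k_c * (x_i - x_o)
        m_o * x_o'' = -k_o * x_o - k_c * (x_o - x_i)
    Returns frequencies sorted ascending and mass-weighted mode vectors.
    """
    K = np.array([[k_i + k_c, -k_c],
                  [-k_c,      k_o + k_c]], dtype=float)
    S = np.diag([m_i ** -0.5, m_o ** -0.5])
    w2, modes = np.linalg.eigh(S @ K @ S)   # symmetric dynamical matrix
    return np.sqrt(w2), modes               # columns = (inner, outer) weights

# Invented parameters in arbitrary units; the uncoupled frequencies would be
# sqrt(k_i/m_i) = 2.000 and sqrt(k_o/m_o) ~ 1.519.
freqs, modes = coupled_rbm(m_i=1.0, k_i=4.0, m_o=1.3, k_o=3.0, k_c=1.5)
w_L, w_H = freqs                            # both shift above the uncoupled values
print(round(w_L, 3), round(w_H, 3))         # -> 1.678 2.479

# Raman intensity of each coupled mode as a coherent two-pathway sum,
# I ~ |<i|mode>*R_i*M_i + <o|mode>*R_o*M_o|^2, with hypothetical complex
# resonance factors (outer wall resonant, inner wall non-resonant) and s = -1:
R_i, R_o = -2.3 + 0.3j, 20j
s = -1
I_L = abs(s * modes[0, 0] * R_i + modes[1, 0] * R_o) ** 2
I_H = abs(s * modes[0, 1] * R_i + modes[1, 1] * R_o) ** 2
```

The cross term between the two pathways is what distinguishes the coherent ratio IL/IH from the incoherent sum of the inner- and outer-wall contributions, which is the comparison made in Table 1.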
# Casimir trick in e+e->H->ffbar

1. Jun 16, 2012

### dingo_d

1. The problem statement, all variables and given/known data

I have the process: $e^+e^-\to H\to f\bar{f}$. I have calculated the amplitude and its conjugate, and now I want to find the averaged, unpolarized square of the invariant amplitude $\langle|M|^2\rangle$. I average over the initial spins and sum over the final ones, and in simple processes such as Møller scattering I would use the Casimir trick and traces. But here I have:

$\langle|M|^2\rangle=\frac{1}{2}\frac{1}{2}\left( \frac{g_w^2}{4m_w^2} m_e m_f\right)^2\sum_{spins} \bar{u}_4v_2\bar{v}_1u_3\bar{v}_2u_4\bar{u}_3v_1$

where $\bar{v}_1$ is the incoming positron with momentum $p_1$ and spin $s_1$, $u_3$ is the incoming electron, $v_2$ is the outgoing antifermion, and $\bar{u}_4$ is the outgoing fermion. If I look at the spinor components, I can arrange them into pairs and use the relations:

$\sum_{s_1}u_{1\delta}\bar{u}_{1\alpha}=({\not} p_1+m_1)_{\delta\alpha}$ and $\sum_{s_2}v_{2\beta}\bar{v}_{2\gamma}=({\not} p_2-m_2)_{\beta\gamma}$

But I'm not getting any trace out of this :\ What am I doing wrong?
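Since the scalar Higgs vertex puts no gamma matrices between the spinors, each spin-summed bilinear pair closes into its own trace, for example $\sum_{spins}\bar{v}_1u_3\bar{u}_3v_1 = \mathrm{Tr}[({\not}p_3+m_e)({\not}p_1-m_e)] = 4(p_1\cdot p_3 - m_e^2)$, so the whole sum becomes a product of two traces instead of one. Below is a quick numerical check of that trace identity with explicit Dirac matrices (only a sanity check of the algebra, not the full calculation):

```python
import numpy as np

# Dirac matrices in the Dirac representation, metric (+,-,-,-):
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gvec = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

def slash(p):
    """p-slash = gamma^mu p_mu for a four-vector p = (E, px, py, pz)."""
    return p[0] * g0 - p[1] * gvec[0] - p[2] * gvec[1] - p[3] * gvec[2]

def mdot(p, q):
    """Minkowski product p.q with metric (+,-,-,-)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

m = 0.511e-3                           # electron mass in GeV (illustrative)
p1 = np.array([5.0, 1.0, 2.0, 3.0])    # arbitrary four-vectors: the identity
p3 = np.array([4.0, -2.0, 0.5, 1.0])   # holds whether or not they are on-shell

I4 = np.eye(4, dtype=complex)
lhs = np.trace((slash(p3) + m*I4) @ (slash(p1) - m*I4)).real
rhs = 4 * (mdot(p1, p3) - m**2)
assert abs(lhs - rhs) < 1e-9           # Tr[(p3+m)(p1-m)] = 4(p1.p3 - m^2)
```

The same collapse applied to the outgoing pair gives $\mathrm{Tr}[({\not}p_4+m_f)({\not}p_2-m_f)]$, and the two traces multiply.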
FAQ 000522

# The surface results are output in a 500 mm grid. However, the governing value is not considered. How can I change the grid distance?

#### Answer

The tabular results are displayed at the grid points of each surface. The printout report also uses this grid. Double-click the surface to open the "Edit Surface" dialog box, and go to the "Grid" tab. Specify distances b and h that are adjusted to the model geometry. With the "Customize" check box, you can also specify individual settings (grid origin, rotation).
#### Sentence Examples

• The equations of motion can be established in a similar way by considering the rate of increase of momentum in a fixed direction of the fluid inside the surface, and equating it to the momentum generated by the force acting throughout the space 5, and by the pressure acting over the surface S.
• = 0, we find that, eliminating x, the resultant is a homogeneous function of y and z of degree mn; equating this to zero and solving for the ratio of y to z we obtain mn solutions; if values of y and z, given by any solution, be substituted in each of the two equations, they will possess a common factor which gives a value of x which, combined with the chosen values of y and z, yields a system of values which satisfies both equations.
• X (1 +PD1+12D2+...+�8D8+...) fm, and now expanding and equating coefficients of like powers of /t D 1 f - Z(Difi)f2f3.
• This can be verified by equating to zero the five coefficients of the Hessian (ab) 2 axb2.
• For instance, by equating coefficients of x^r in the expansions of (1+x)^(m+n) and of (1+x)^m.
# $G$ finite abelian. $G/H$, $G/K$ primary cyclic. $H\cap K=1$. $H\neq 1\neq K$. Is $HK=G$?

Given a finite abelian group $G$, let $H$ and $K$ be subgroups of $G$ such that $G/H$ and $G/K$ are primary cyclic, $H$ and $K$ intersect trivially ($H\cap K=1$), and $H\neq 1$, $K\neq 1$. Is it then true that $HK=G$?

My attempt so far: It's easy to show that $[G:HK] \mid ([G:H], [G:K])$. So if $G/H$ and $G/K$ are primary cyclic for different primes, then we're done. But I don't know what to do if they're not. Thank you!

- +1 for showing the attempt. – Aryabhata Nov 15 '10 at 17:18
- primary cyclic means? – anonymous Nov 15 '10 at 17:22
- primary cyclic means cyclic of order p^k for some prime p and integer k>=0. – user3533 Nov 15 '10 at 17:25
- HK/H is isomorphic to K/(H∩K) = K, and HK/K is isomorphic to H. Then if HK = G, we have that H and K must also be primary cyclic. Of course, we also have that |HK| = |H||K| since they have trivial intersection, so we need |H||K| = |G|. – Gabe Cunningham Nov 15 '10 at 19:00
- What does the / notation mean? – Tomas Lycken Nov 15 '10 at 22:35

If $a\in G$ is of order $p^n$ for some prime $p$ and $a \notin H$, then $a+H$ is of order $p^d$ for some $d\leq n$, so $G/H$ must be a $p$-group. With this argument we also know that $H$ must contain all elements that are not of order $p^m$ for some $m$. So if $|G|$ has at least two different prime divisors, then $G/H$ and $G/K$ must be a $p$-group and a $q$-group for different primes (because $H\cap K= 1$). Therefore every Sylow subgroup is contained in at least one of $H$ or $K$, so $HK=G$.

We are now left with the case where $|G|=p^n$. I tried to prove this case (for too much time), but it appears that it is not true. For example, take $G=\mathbb{Z}_2 \times \mathbb{Z}_4$ and let $H$ be the subgroup $\{ (1,0), (0,0) \}$ and $K=\{ (1,2), (0,0) \}$; then $G/H\cong G/K \cong \mathbb{Z}_4$ and $|HK|=4 < |G|=8$. - Hmmm...
For some reason I didn't see the "New answer has been posted" while I was busy writing, and then erasing, an attempt at proof and finally what I think is a counterexample (closely related to yours). – Arturo Magidin Nov 15 '10 at 21:58 Great answer! Thank you. – user3533 Nov 16 '10 at 9:34 I think this is a counterexample (assuming I haven't messed up something; I spent a fair amount of time trying to prove the result is true, reducing and reducing and reducing, and then the distillation suggested this; I really hope I didn't mess it up): Take $G=C_4\times C_{16}$, with $C_4$ generated by $x$ and $C_{16}$ generated by $y$. Take $H=\langle (x,y^2)\rangle$ and $K=\langle(x,y^4)\rangle$. Then $G/H$ is generated by the images of $(x,1)$ and $(1,y)$. But $(x,1)H = (1,y^{-2})H=(1,y)^{-2}H$, so $G/H$ is generated by $(1,y)H$ and hence is cyclic (and since $G$ is a $p$-group, necessarily primary cyclic). Similarly, $(x,1)K = (1,y^{-4})K$, so $G/K$ is generated by $(1,y)K$ and hence is cyclic. Suppose that $(x,y^2)^r = (x,y^4)^s$ for some integers $r$ and $s$; then $r\equiv s\pmod{4}$ and $2r\equiv 4s\pmod{16}$. But if $2r\equiv 4s\pmod{16}$, then $r\equiv 2s\pmod{8}$, hence $s\equiv r\equiv 2s\pmod {4}$. The only possibility is $s\equiv 0 \pmod{4}$, hence $x^s=1$. But we also now have $4s\equiv 0 \pmod{16}$, so $y^{4s}=1$. Therefore, $(x,y^4)^s = (x^s,y^{4s}) = (1,1)$. Therefore, $H\cap K=\{(1,1)\}$. However, $H$ and $K$ are both contained in $\langle(x,1),(1,y^2)\rangle$, so $HK$ is a proper subgroup of $G$. Added: Some comments: your hypothesis imply in any case that $G$ is a product of two primary cyclic groups. To see this, note that if $G/H$ is $p$-primary, and $G/K$ is $q$-primary, then $H$ must contain all $p'$-parts of $G$, and $K$ must contain all $q'$-primary parts of $G$, so $H\cap K=1$ implies that there is no prime which is different to both $p$ and $q$. 
If $p\neq q$, then the $p$-part of $G$ is a direct sum of $p$-primary cyclic groups, and the $q$-part a sum of $q$-primary cyclic groups. Say there are two $p$-primary cyclic summands; then $H$ must contain the entire $q$-part as a subgroup and have a nontrivial intersection with the $p$-part; since $K$ contains the $p$-part, $H\cap K$ would be nontrivial. Symmetrically with $q$. Thus, if $p\neq q$, then $G=C_{p^a}\times C_{q^b}$, and $H=C_{q^b}$, $K=C_{p^a}$.

If $p=q$, then $G$ is a $p$-group. Then $K \cong K/(H\cap K) = HK/H$, which is a subgroup of $G/H$, so $K$ is cyclic. Symmetrically, $H$ is cyclic. Thus, $G$ can be generated by $2$ elements, so the decomposition of $G$ into a direct sum of cyclic groups has at most two factors. But $G$ cannot be both cyclic and a $p$-group, because then any two nontrivial subgroups would have nontrivial intersection, contradicting the hypothesis. Thus, $G=C_{p^a}\times C_{p^b}$ in this case as well.

Thus, the conditions require that $G=C_{p^a}\times C_{q^b}$ with $p$ and $q$ primes, possibly equal, and $a$ and $b$ positive. -
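Both counterexamples are small enough to verify by brute force. Here is a quick check of the second one, with $G=C_4\times C_{16}$ written additively as pairs mod (4, 16), $H$ generated by (1, 2) (that is, $(x,y^2)$) and $K$ by (1, 4):

```python
from itertools import product

MODS = (4, 16)                              # G = C_4 x C_16, written additively

def add(a, b):
    return tuple((x + y) % m for x, y, m in zip(a, b, MODS))

def gen_subgroup(g):
    """Cyclic subgroup generated by g."""
    sub, x = {(0, 0)}, g
    while x not in sub:
        sub.add(x)
        x = add(x, g)
    return sub

G = set(product(range(4), range(16)))
H = gen_subgroup((1, 2))                    # corresponds to <(x, y^2)>
K = gen_subgroup((1, 4))                    # corresponds to <(x, y^4)>
HK = {add(h, k) for h in H for k in K}

def coset_order(g, sub):
    """Order of the coset g + sub in the quotient G/sub."""
    n, x = 1, g
    while x not in sub:
        x, n = add(x, g), n + 1
    return n

assert H & K == {(0, 0)}                    # trivial intersection
assert len(HK) == 32 < len(G) == 64         # HK is a proper subgroup of G
assert coset_order((0, 1), H) == len(G) // len(H)   # G/H cyclic of order 8
assert coset_order((0, 1), K) == len(G) // len(K)   # G/K cyclic of order 16
```

The last two assertions confirm that the image of $(1,y)$ alone generates each quotient, i.e. $G/H$ and $G/K$ are primary cyclic, while $HK\neq G$.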
Pharmacometrics

Journal of Pharmacokinetics and Biopharmaceutics, Volume 24, Issue 4, pp 389-402

# Volumes of distribution and mean residence time of drugs with linear tissue distribution and binding and nonlinear protein binding

Haiyung Cheng (Department of Drug Metabolism, Merck Research Laboratories) and William R. Gillespie (Food and Drug Administration)

## Abstract

Based on a generalized model, equations for calculating the mean residence time in the body after a single dose ($MRT$) and at steady state ($MRT_{ss}$), the apparent steady-state volume of distribution ($\hat V_{ss}$) and the steady-state volume of distribution ($V_{ss}$) are derived for a drug exhibiting nonlinear protein binding. Interrelationships between $\hat V_{ss}$ and $V_{ss}$, as well as between $MRT$ and $MRT_{ss}$, are also discussed and illustrated with simulated data. In addition, a method is proposed, and illustrated with both simulated and published data, for estimating the central volume of distribution of the bound drug and the sum of the central volume of distribution of the unbound drug and the area under the first moment curve of the distribution function for drugs with nonlinear protein binding.

### Key words

volumes of distribution; mean residence time; nonlinear protein binding
## Introduction

In general, the forces acting in the belt must not exceed certain limits, since otherwise the belt will be damaged and will either deform unacceptably or even tear. Therefore, certain stress limits apply which, depending on the belt material, may not be exceeded. Whether these requirements are met depends on the forces acting on the cross-section of the belt during operation. Besides the quasi-static belt span forces (relevant for power transmission), centrifugal belt forces also act during operation. The resulting stresses σ are obtained by dividing the forces by the cross-sectional area A of the belt:

\begin{align} \sigma_t &= \frac{F_t}{A}  &&~~~\text{tight side tension}  \\[5px] \sigma_s &= \frac{F_s}{A}  &&~~~\text{slack side tension}  \\[5px] \sigma_{cf} &= \frac{F_{cf}}{A} &&~~~\text{centrifugal tension} \\[5px] \end{align}

## Centrifugal tension

For the centrifugal force in the belt, the formula derived in the article Centrifugal forces applies, where m' is the specific mass of the belt ("mass per unit length"):

\begin{align} F_{cf}= m' \cdot v^2 ~~~~~\text{and} ~~~~~ m' = \frac{m}{L} \end{align}

The centrifugal stress acting in the belt is therefore:

\begin{align} &\sigma_{cf} = \frac{F_{cf}}{A} = \frac{m' \cdot v^2}{A} = \frac{m\cdot v^2}{\underbrace{L \cdot A}_{\text{volume } V}} = \frac{m\cdot v^2}{V} = \underbrace{\frac{m}{V}}_{\text{density } \rho} \cdot v^2 = \rho \cdot v^2  \\[5px] &\boxed{\sigma_{cf} = \rho \cdot v^2}  \\[5px] \end{align}

Note that the term L⋅A is the belt volume V, and the term m/V corresponds to the density ρ of the belt material. Thus, the centrifugal stress in the belt depends only on the belt density and the belt speed.

## Bending stress

In addition to the above-mentioned stresses, bending stresses σb must also be taken into account as the belt bends around the pulleys.
The belt is stretched in the outer areas and compressed in the inner areas; the neutral axis runs in between and is neither stretched nor compressed. The strains lead to tensile stresses in the outer and to compressive stresses in the inner belt area. For a belt drive, only the tensile stresses are decisive, since they act in addition to the span forces and the centrifugal forces; the belt is thus subjected to the maximum stress in the outer areas. Starting from the neutral axis, the tensile stress caused by bending increases with the elongation and reaches its maximum at the outer edge of the belt. This maximum stress is referred to as the bending stress σb. The stronger the belt is bent (i.e. the smaller the pulley) and the thicker the belt, the greater the bending stresses.

According to linear-elastic bending theory, the bending stress σb at a given bending radius or bending diameter d (= diameter of the pulley) and a given thickness s of the belt can be determined using the so-called bending modulus Eb (flexural modulus) of the belt material (the bending modulus should not be confused with the Young's modulus from the tensile test!):

\begin{align} \boxed{\sigma_b = E_b \cdot \frac{s}{d + s}} ~~~\text{bending stress} \\[5px] \end{align}

Note that the drawings above are not to scale. Especially with flat belts, the belt thickness is considerably smaller than the belt width. The same applies to the ratio of pulley diameter to belt thickness, which is on the order of 50 to 100, i.e. the pulley diameter is 50 to 100 times larger than the belt thickness. For this reason, the belt thickness s can often be neglected compared with the pulley diameter d. In these cases, the bending stress can also be determined using the simplified formula below; the actual bending stresses are then lower than the calculated values, so the simplification is conservative.
\begin{align} \sigma_b = E_b \cdot \frac{s}{d + \underbrace{s}_{\ll d}} \approx E_b \cdot \frac{s}{d} \\[5px] \end{align} It can now be seen that the bending stress depends directly on the ratio of belt thickness to pulley diameter. The smaller the pulley diameter and the thicker the belt, the greater the bending stress. This means that the greatest bending stress occurs when the belt rotates around the smaller of the two pulleys (usually the drive pulley)! The largest bending stresses occur when the belt rotates around the smaller of the two pulleys! ## Distribution of the belt tension The figure below schematically shows the distribution of the belt tension. The effective tension used for power transmission is denoted by σc (circumferential force Fc related to the belt cross-section A). This effective tension results from the difference between the tight side tension σt and the slack side tension σs (see also the article Power transmission of a belt drive): \begin{align} \sigma_c = \frac{F_c}{A} = \frac{F_t - F_s}{A} = \frac{F_t}{A} - \frac{F_s}{A} = \sigma_t - \sigma_s \\[5px] \end{align} In comparison to the figure above, the figure below shows the distribution of the belt tension in the load-free idle state. The total preload is divided into a part that ensures the force-transmitting contact pressure in later operation (dynamic preload) and a part that compensates for centrifugal forces (centrifugal preload). ## Maximum belt tension While the centrifugal tension σcf acts equally throughout the belt during operation, and the quasi-static tensions σt and σs are present only in the corresponding belt sections, the bending stress acts only during rotation around the pulleys, with the largest bending stress acting on the smaller drive pulley (see figure above).
The maximum belt stress therefore occurs on the slack side, where the belt runs onto the smaller of the two pulleys (provided that the smaller of the two pulleys is the drive pulley; otherwise the greatest bending stress would be present when the belt runs off). The maximum belt stress σmax results from the sum of the tight side stress σt, the centrifugal stress σcf and the bending stress σb. The dimensioning of the belt depends on this maximum stress. Always ensure that the maximum permissible belt stress σper is not exceeded. \begin{align} \label{zulaessige} \boxed{\sigma_{max} = \sigma_t + \sigma_{cf}+ \sigma_b} \le \sigma_{per}\\[5px] \end{align} ## Bending frequency A bending process takes place in the belt during every revolution around a pulley, regardless of whether it is a drive pulley, a driven pulley, or a tensioner, guide or idler pulley. With every bending process the belt is deformed, which requires strain energy. The smaller the pulley, the greater the strain energy and the more the belt heats up. The strain energy to be applied not only reduces the efficiency but also stresses the belt both mechanically and thermally. The more bending processes a belt goes through per unit time, i.e. the higher the bending frequency fb, the shorter its service life will be. Therefore, depending on the type and manufacturer, the belt must not exceed a certain permissible bending frequency fb,per. The bending frequency fb itself is determined by the belt length L, the belt speed v and the number of pulleys z: \begin{align} \boxed{f_b = \frac{z \cdot v }{L} }\le f_{b,per} ~~~~~[f_b] = \frac{1}{\text{s}} = \text{Hz}\\[5px] \end{align}
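The boxed relations above are easy to sketch in code. A minimal example; the numeric values (bending modulus, belt thickness, pulley diameter, belt speed, belt length, pulley count) are illustrative assumptions, not manufacturer data:

```python
def bending_stress(E_b, s, d):
    """sigma_b = E_b * s / (d + s), per linear-elastic bending theory."""
    return E_b * s / (d + s)

def bending_frequency(z, v, L):
    """f_b = z * v / L in Hz, for z pulleys, belt speed v (m/s), belt length L (m)."""
    return z * v / L

# Assumed example values: E_b = 100 N/mm^2, s = 2 mm, d = 100 mm
sigma_b = bending_stress(100.0, 2.0, 100.0)   # exact form, ~1.96 N/mm^2
sigma_b_simple = 100.0 * 2.0 / 100.0          # simplified s << d form: 2.0 N/mm^2

# Assumed example values: z = 2 pulleys, v = 20 m/s, L = 1.6 m
f_b = bending_frequency(2, 20.0, 1.6)         # to be compared with f_b,per

print(sigma_b, sigma_b_simple, f_b)
```

Note that the simplified value is larger than the exact one, which illustrates why the simplification is conservative.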
http://math.stackexchange.com/questions/36430/every-map-can-be-replaced-by-a-weakly-equivalent-fibration/36434
# Every map can be replaced by a weakly equivalent fibration

What is the meaning of the statement "every map can be replaced by a weakly equivalent fibration"?

It means that given any $f:A \to B$ we can find a space $E_f$ containing $A$ that is homotopy equivalent to $A$, and a fibration $p:E_f \to B$, such that $f = p \circ i$ where $i:A \to E_f$ is the inclusion. See Hatcher p. 407 (http://www.math.cornell.edu/~hatcher/AT/ATpage.html) for more details.
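Concretely, the standard construction (the mapping path space, as in Hatcher) takes $E_f$ to be the space of pairs consisting of a point of $A$ together with a path in $B$ starting at its image:

```latex
E_f = \{\, (a,\gamma) \in A \times B^{I} \;\mid\; \gamma(0) = f(a) \,\}, \qquad
p(a,\gamma) = \gamma(1), \qquad
i(a) = \bigl(a,\ \text{constant path at } f(a)\bigr).
```

Then $p$ is a fibration, $i$ embeds $A$ as a deformation retract of $E_f$ (so $i$ is a homotopy equivalence), and $p \circ i = f$.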
http://math.stackexchange.com/questions/586141/limit-calculation
# Limit calculation $\lim_{n\to\infty}((\frac94)^n+(1+\frac1n)^{n^2})^{\frac1n}$

Here's what I did: $\lim_{n\to\infty}((\frac94)^n+(1+\frac1n)^{n^2})^{\frac1n}\\ =(\lim(\frac94)^n+\lim((1+\frac1n)^{n})^n)^{\frac1n}\\ =(\lim(\frac94)^n+\lim e^n)^{\frac1n}\\$ Any hints on how to continue? PS: no logs/integration/derivation because we haven't covered it.

You can't push the limit inside. –  egreg Nov 29 '13 at 21:57

I think the first step might actually be to recombine: $$1+\frac 1n=\frac {1+n}n$$ then the next step is $$\left(\frac 94\right)^n+\left(\frac{n+1}n\right)^{n^2}=\frac {(9n^n)^n+(n+1)^{n^2}}{(4n^n)^n}$$ –  abiessu Nov 29 '13 at 22:00

@abiessu, shouldn't that be: $$\frac {(9n^n)^n+4^n(n+1)^{n^2}}{(4n^n)^n}$$ ? Also, I can't find how that helps... –  GinKin Nov 29 '13 at 22:21

This question begs for logarithms and derivatives... Why restrict yourself? –  Chris K Nov 29 '13 at 22:29

Well... you haven't done l'Hopital's Rule? Or logarithms? –  Chris K Nov 29 '13 at 22:31

\begin{align} &\lim_{n\to\infty}\left(\left(\frac94\right)^n+\left(1+\frac1n\right)^{n^2}\right)^{1/n}\tag{1}\\ &=\lim_{n\to\infty}\left(1+\frac1n\right)^n\lim_{n\to\infty}\left(\left(\frac94\left(1+\frac1n\right)^{-n}\right)^n+1\right)^{1/n}\tag{2}\\ &=e\lim_{n\to\infty}\left(\left(\frac94e^{-1}\right)^n+1\right)^{1/n}\tag{3}\\[9pt] &=e\lim_{n\to\infty}\left(0+1\right)^{1/n}\tag{4}\\[18pt] &=e\tag{5} \end{align}

Explanation: $(1)$: original expression; $(2)$: bring a factor of $\left(1+\frac1n\right)^n$ outside the parentheses; $(3)$: evaluate $\lim\limits_{n\to\infty}\left(1+\frac1n\right)^n=e$; $(4)$: evaluate $\lim\limits_{n\to\infty}\left(\frac94e^{-1}\right)^n=0$; $(5)$: evaluate $\lim\limits_{n\to\infty}1^{1/n}=1$

Please explain what you did here and how: $\lim_{n\to\infty}(1+\frac1n)^n \lim_{n\to\infty} (\left(\frac94(1+\frac1n)^{-n}\right)^n+1)^{1/n}$ –  GinKin Nov 29 '13 at 23:11

@GinKin: factored $\left(1+\frac1n\right)^n$ out of the expression and put it on the left.
That means dividing everything on the inside of the parentheses by $\left(1+\frac1n\right)^{n^2}$. –  robjohn Nov 29 '13 at 23:22

It's basically like Matik Ken's answer. What makes me confused is PEMDAS: would you still be able to factor out even if it wasn't the $n$th root, i.e. if it was to the power of $n$ instead? –  GinKin Nov 29 '13 at 23:28

@GinKin: step $(2)$ is only algebra. Even if the limits were not there, step $(2)$ would be valid. –  robjohn Nov 29 '13 at 23:32

@ParamanandSingh: We can apply that if the family $\{f_n\}$ is equicontinuous at $x_\infty=\lim\limits_{n\to\infty}x_n$, then $\lim\limits_{n\to\infty}f_n(x_n)=\lim\limits_{n\to\infty}f_n(x_\infty)$. $$\left(\left(\frac94x\right)^n+1\right)^{1/n}\text{ is equicontinuous at }x=e^{-1}\text{ for }(3)$$ and $$(x+1)^{1/n}\text{ is equicontinuous at }x=0\text{ for }(4)$$ –  robjohn Nov 30 '13 at 4:38

We know that the sequence $(a_n)$ defined by $$a_n=\left(1+\frac1n\right)^n$$ converges and its limit is $e$. Notice that $$\left[\left(\frac94\right)^n+\left(1+\frac1n\right)^{n^2}\right]^{1/n}=\left[\left(\frac94\right)^n+a_n^n\right]^{1/n}=a_n\left[1+\left(\frac{9}{4a_n}\right)^n\right]^{1/n}.$$ Since $$\lim_{n\to\infty}\frac{9}{4a_n}=\frac{9}{4e}<1,$$ it follows that $$\lim_{n\to\infty}\left[\left(\frac94\right)^n+\left(1+\frac1n\right)^{n^2}\right]^{1/n}=\lim_{n\to\infty}a_n\left[1+\left(\frac{9}{4a_n}\right)^n\right]^{1/n}=e(1+0)^0=e.$$

The limit is indeed $e$. Let $a(n) = \frac94$ and $b(n) = (1+\frac1n)^n$ for $n \ge 1$. Since $\lim b(n) = e > 9/4 = 2.25$ (versus $2.71828\ldots$), there is an $N$ such that if $n \ge N$ then $b(n) > 9/4$. Thus $x(n)= \frac{a(n)}{b(n)} < 1$ for $n \ge N$. Let $S(n)$ denote the original expression; then: $S(n) = (a(n)^n + b(n)^n)^{\frac1n} = b(n)(1+(\frac{a(n)}{b(n)})^n)^{\frac1n} = b(n)r(n)$, where $r(n) = (1+(\frac{a(n)}{b(n)})^n)^{\frac1n}$. We now estimate $r(n)$. It is easy to see that $1 < r(n) < 2^{\frac1n}$ for $n \ge N$.
This means $\lim r(n) = 1$ and so $\lim S(n) = e$ as claimed.

For positive $n$, we have that $0 < \left(1 + \frac{1}{n}\right)^{n^2} = \left(\left(1 + \frac{1}{n}\right)^n\right)^n < e^n$. Thus, we can write $$\left(1 + \frac{1}{n}\right)^{n^2} < \left(\frac{9}{4}\right)^n + \left(1 + \frac{1}{n}\right)^{n^2} < \left(\frac{9}{4}\right)^n + e^n < 2 e^n$$ $$\left(\left(1 + \frac{1}{n}\right)^{n^2}\right)^{1/n} < \left(\left(\frac{9}{4}\right)^n + \left(1 + \frac{1}{n}\right)^{n^2}\right)^{1/n} < \left(2e^n\right)^{1/n}.$$ We can use the squeeze theorem on the last inequality to obtain the limit.

Um, correct me if I'm wrong, but using the squeeze theorem on this inequality will yield that the limit is 9/4, but from W|A I know that the limit is $e$. –  GinKin Nov 29 '13 at 22:19

You're right: for some reason, when I was writing the answer, I was convinced $\frac{9}{4} = 2.75 > e$. I fixed that, so now it should be right. –  Strants Nov 29 '13 at 22:36

Well, it seems like you just arbitrarily chose one of the two expressions and applied the squeeze theorem to it. What would have happened if the expressions were more complex, so you wouldn't know which one was bigger? Also, how do you know that what you did last time was wrong without checking with W|A? Sorry if I don't understand enough, but it seems like guesswork. –  GinKin Nov 29 '13 at 22:41

Well, first off, I decided I would try the squeeze theorem, because if $a(n) < b(n) < c(n)$, then $(a(n))^{\frac{1}{n}} < (b(n))^{\frac{1}{n}} < (c(n))^{\frac{1}{n}}$, so I can just ignore the $\frac{1}{n}$ exponent for a while, then add it back in at the end. From there, I knew that $\left(1 + \frac{1}{n}\right)^n < e$, so $\left(1 + \frac{1}{n}\right)^{n^2} < e^n$. Now, in the expression $\left(\frac{9}{4}\right)^n + e^n$, $e^n$ will dominate for large $n$ (since $e \approx 2.71 > 2.25 = \frac{9}{4}$), so I decided to eliminate the less important $\left(\frac{9}{4}\right)^n$ term. –  Strants Nov 30 '13 at 0:02
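As a numerical sanity check on the accepted answer, one can evaluate the expression for large $n$. Since $(9/4)^n$ overflows a float quickly, the computation is done in log space (a small sketch):

```python
import math

def s(n):
    """Evaluate ((9/4)^n + (1+1/n)^(n^2))^(1/n) via logarithms to avoid overflow."""
    a = n * math.log(9 / 4)            # log of (9/4)^n
    b = n * n * math.log(1 + 1 / n)    # log of (1+1/n)^(n^2)
    m = max(a, b)                      # log-sum-exp trick
    log_sum = m + math.log(math.exp(a - m) + math.exp(b - m))
    return math.exp(log_sum / n)

print(s(10), s(100), s(1000))  # approaches e = 2.71828...
```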
http://scitation.aip.org/content/aip/journal/jap/108/1/10.1063/1.3455874
# Radiation efficiency of heavily doped bulk -InP semiconductor

Department of Electrical and Computer Engineering, State University of New York at Stony Brook, Stony Brook, New York 11794-2350, USA

J. Appl. Phys. 108, 013101 (2010); doi: 10.1063/1.3455874

## Figures

FIG. 1. PMT signals of InP(S) transmission luminescence: (a) normalized, for a wafer of fixed carrier concentration at different temperatures (K), and (b) for wafers of various doping concentrations at a fixed excitation wavelength (the time scale is expanded for better resolution of the short signals in higher-concentration samples).

FIG. 2. Luminescence spectra for cw excitation [(a)–(c)] at 640 nm (1.94 eV) and time-integrated OPO excitation (d). (a) Reflection (amplitude decreases with concentration) and transmission (smaller curves in the same order) spectra for different carrier concentrations; (b) evolution of the reflection spectra with temperature [amplitude decreases with increasing temperature (K)]; (c) normalized reflection and transmission (T) spectra (solid curves) at 78 and 300 K, together with the corresponding transmission factors (dashed curves); (d) reflection spectra for different excitation photon energies (eV) at room temperature (OPO excitation).

FIG. 3. Absorption coefficient for different photon energies and carrier concentrations. (a) Experimental absorption spectra (solid curves) with their interpolations to higher energies (dash-dot curves) from Ref. 22; see details in the text.
The wafer was thinned down, enabling measurements at higher absorption values. Triangles correspond to an undoped sample (from Ref. 17). (b) Concentration dependence of the absorption coefficient at a fixed photon energy.

FIG. 4. Luminescence decay rates measured by kinetic experiments and approximated by a superposition of two power functions of concentration and temperature [Eq. (3)]. (a) Temperature dependence of the luminescence decay rate for different doping concentrations; dashed lines represent fits to the experimental data as in Eq. (3). (b) Concentration dependence of the fit coefficients, with approximations by quadratic polynomials. Triangles correspond to the data of Ref. 4.

FIG. 5. Concentration dependence of the recycling factor (squares) and quantum efficiency (dots) obtained from the experimental data. The dashed curves are analytical approximations.

## Tables

Table I. Photon escape factor out of the detection area, free-carrier absorption factor, and the effective recycling factor.
http://slideplayer.com/slide/4108416/
Presentation transcript: Decision Analysis (Professor Ahmadi)

Slide 1: Decision Analysis. Professor Ahmadi.

Slide 2: Chapter Outline
- Structuring the Decision Problem
- Decision Making Without Probabilities
- Decision Making with Probabilities
- Expected Value of Perfect Information

Slide 3: Structuring the Decision Problem
- A decision problem is characterized by decision alternatives, states of nature, and resulting payoffs.
- The decision alternatives are the different possible strategies the decision maker can employ.
- The states of nature refer to future events, not under the control of the decision maker, which may occur. States of nature should be defined so that they are mutually exclusive and collectively exhaustive.
- For each decision alternative and state of nature, there is an outcome.
- These outcomes are often represented in a matrix called a payoff table.

Slide 4: Decision Trees
- A decision tree is a chronological representation of the decision problem.
- Each decision tree has two types of nodes: round nodes correspond to the states of nature, while square nodes correspond to the decision alternatives.
- The branches leaving each round node represent the different states of nature, while the branches leaving each square node represent the different decision alternatives.
- At the end of each limb of a tree are the payoffs attained from the series of branches making up that limb.

Slide 5: Decision Making Without Probabilities
- If the decision maker does not know with certainty which state of nature will occur, then he is said to be doing decision making under uncertainty.
- Three commonly used criteria for decision making under uncertainty, when probability information regarding the likelihood of the states of nature is unavailable, are the optimistic approach, the conservative approach, and the minimax regret approach.

Slide 6: Optimistic Approach
- The optimistic approach would be used by an optimistic decision maker.
- The decision with the largest possible payoff is chosen.
- If the payoff table were in terms of costs, the decision with the lowest cost would be chosen.

Slide 7: Conservative Approach
- The conservative approach would be used by a conservative decision maker.
- For each decision the minimum payoff is listed, and then the decision corresponding to the maximum of these minimum payoffs is selected. (Hence, the minimum possible payoff is maximized.)
- If the payoffs were in terms of costs, the maximum cost would be determined for each decision, and then the decision corresponding to the minimum of these maximum costs would be selected. (Hence, the maximum possible cost is minimized.)

Slide 8: Minimax Regret Approach
- The minimax regret approach requires the construction of a regret table, or opportunity loss table.
- This is done by calculating, for each state of nature, the difference between each payoff and the largest payoff for that state of nature.
- Then, using this regret table, the maximum regret for each possible decision is listed.
- The decision chosen is the one corresponding to the minimum of the maximum regrets.
Slide 9: Example: Marketing Strategy
Consider the following problem with two decision alternatives (d1 and d2) and two states of nature, S1 (Market Receptive) and S2 (Market Unfavorable), with the following payoff table representing profits (in \$1000):

                 s1   s2
    d1           20    6
    d2           25    3

Slide 10: Optimistic Approach
An optimistic decision maker would use the optimistic approach. All we really need to do is choose the decision that has the largest single value in the payoff table. This largest value is 25, and hence the optimal decision is d2.

    Decision   Maximum Payoff
    d1         20
    d2         25   <- maximum; choose d2

Slide 11: Conservative Approach
A conservative decision maker would use the conservative approach. List the minimum payoff for each decision; choose the decision with the maximum of these minimum payoffs.

    Decision   Minimum Payoff
    d1          6   <- maximum; choose d1
    d2          3

Slide 12: Minimax Regret Approach
For the minimax regret approach, first compute a regret table by subtracting each payoff in a column from the largest payoff in that column. The resulting regret table is:

                 s1   s2
    d1            5    0
    d2            0    3

Slide 13: Minimax Regret Approach (continued)
For each decision, list the maximum regret. Choose the decision with the minimum of these values.

    Decision   Maximum Regret
    d1         5
    d2         3   <- minimum; choose d2

Slide 14: Decision Making with Probabilities
- Expected Value Approach: If probabilistic information regarding the states of nature is available, one may use the expected monetary value (EMV) approach (also known as Expected Value or EV).
- Here the expected return for each decision is calculated by summing the products of the payoff under each state of nature and the probability of the respective state of nature occurring.
- The decision yielding the best expected return is chosen.

Slide 15: Expected Value of a Decision Alternative
- The expected value of a decision alternative is the sum of weighted payoffs for the decision alternative.
- The expected value (EV) of decision alternative d_i is defined as EV(d_i) = sum over j = 1, ..., N of P(s_j) V_ij, where: N = the number of states of nature, P(s_j) = the probability of state of nature s_j, and V_ij = the payoff corresponding to decision alternative d_i and state of nature s_j.

Slide 16: Example: Marketing Strategy
- Expected Value Approach: Refer to the previous problem. Assume the probability of the market being receptive is known to be 0.75. Use the expected monetary value criterion to determine the optimal decision.

Slide 17: Expected Value of Perfect Information
- Frequently information is available that can improve the probability estimates for the states of nature.
- The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur.
- The EVPI provides an upper bound on the expected value of any sample or survey information.

Slide 18: EVPI Calculation
- Step 1: Determine the optimal return corresponding to each state of nature.
- Step 2: Compute the expected value of these optimal returns.
- Step 3: Subtract the EV of the optimal decision from the amount determined in Step 2.

Slide 19: Example: Marketing Strategy
- Expected Value of Perfect Information: Calculate the expected value of the best action for each state of nature and subtract the EV of the optimal decision:
  EVPI = 0.75(25,000) + 0.25(6,000) - 19,500 = \$750

Slide 20: The End of Chapter
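The criteria on these slides can be reproduced in a few lines. A sketch for the marketing-strategy payoff table (values taken from the slides; profits in $1000):

```python
payoffs = {"d1": {"s1": 20, "s2": 6},
           "d2": {"s1": 25, "s2": 3}}

# Optimistic approach: choose the decision with the largest single payoff.
optimistic = max(payoffs, key=lambda d: max(payoffs[d].values()))

# Conservative approach (maximin): maximize the minimum payoff.
conservative = max(payoffs, key=lambda d: min(payoffs[d].values()))

# Minimax regret: regret = (best payoff in column) - (payoff); minimize the max regret.
best = {s: max(payoffs[d][s] for d in payoffs) for s in ("s1", "s2")}
regret = {d: max(best[s] - payoffs[d][s] for s in best) for d in payoffs}
minimax_regret = min(regret, key=regret.get)

# Expected value with P(s1) = 0.75, and EVPI.
p = {"s1": 0.75, "s2": 0.25}
ev = {d: sum(p[s] * payoffs[d][s] for s in p) for d in payoffs}
evpi = sum(p[s] * best[s] for s in p) - max(ev.values())

print(optimistic, conservative, minimax_regret, ev, evpi)
```

This reproduces the slides' conclusions: d2 under the optimistic and minimax-regret criteria, d1 under the conservative criterion, EV(d2) = 19.5 as the best expected value, and EVPI = 0.75 (i.e. $750).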
https://blog.zilin.one/21-259-fall-2013/calculus-potluck-the-ultimate-buffet/
# Calculus Potluck – The Ultimate Buffet

Although the last Calculus Potluck was an epic fail, I do not want to give up this project. And this time, I am going to do something slightly different. As indicated by the project name, I am going to keep posting practice problems until no one wants to solve them. Here is how it works.

1. I will post 3 practice problems at a time.
2. Send me an email once you figure them out.
3. I will post solutions so that you can check your work.
4. Repeat if I receive emails from at least 3 students.

Practice Problems:

1. (a) Find the equation in the form $Ax+By+Cz = D$ of the plane $P$ which contains the line $L$ given by $x = 1-t, y = 1+2t, z = 2-3t$ and the point $(-1, 1, 2)$. (b) Let $Q$ be the plane $2x+y+z = 4$. Find the component of a unit normal vector for $Q$ projected on a unit direction vector for the line $L$ of part (a).

2. Let $L$ denote the line which passes through $(0,0,1)$ and is parallel to the line in the $xy$-plane given by $y = 2x$. (a) Sketch $L$ and give its equation in vector-parametric form. (b) Let $P$ be the plane which passes through $(0,0,1)$ and is perpendicular to the line $L$ of part (a). Sketch in $P$ (above) and give its equation in point-normal form.

3. Let $r(t) = \langle \cos(e^t), \sin(e^t), e^t\rangle$. (a) Compute and simplify the unit tangent vector $T(t) = r'(t) / | r'(t) |$. (b) Compute $T'(t)$.

4. Consider the function $F(x,y,z)=z\sqrt{x^2+y}+2y/z$. (a) The point $P_0: (1,3,2)$ lies on the surface $F(x,y,z)=7$. Find the equation of the tangent plane to the surface $F(x,y,z)=7$ at $P_0$. (b) If, starting at $P_0$, a small change were to be made in only one of the variables, which one would produce the largest change (in absolute value) in $F$? If the change of this variable were of size $0.1$, approximately how large would the change in $F$ be?
(c) What distance from $P_0$ in the direction $\pm(-2,2,-1)$ will produce an approximate change in $F$ of size $0.1$ units, according to the linearization of $F$?

5. Let $f(x,y)=x+4y+\frac{2}{xy}$. (a) Find the critical points of $f(x,y)$. (b) Use the second-derivative test to test the critical points found in part (a).

6. Let $P$ be the plane with equation $Ax+By+Cz=D$ and $P_0=(x_0, y_0, z_0)$ be a point which is not on $P$. Use the Lagrange multiplier method to set up the equations satisfied by the point $(x,y,z)$ on $P$ which is closest to $P_0$. (Do not solve.)

7. Let $F(x, y, z)$ be a smooth function of three variables for which $\nabla F(1, -1, \sqrt{2})=(1, 2, -2)$. Use the Chain Rule to evaluate $\partial F/\partial \phi$ at $(\rho,\phi,\theta) = (2, \pi/4,-\pi/4)$.

8. Suppose $\iint_R fdA = \int_0^2\int_{x^2}^{2\sqrt{2x}} f(x,y) dy dx$. (a) Sketch the region $R$. (b) Rewrite the double integral as an iterated integral with the order interchanged.

9. Let $G$ be the solid 3-D cone bounded by the lateral surface given by $z = 2 \sqrt{x^2 + y^2}$ and by the plane $z = 2$. The problem is to compute $\bar{z}$, the $z$-coordinate of the center of mass of $G$, in the case where the density is equal to the height above the $xy$-plane. (a) Find the mass of $G$ using cylindrical coordinates. (b) Set up the calculation for $\bar{z}$ using cylindrical coordinates. (c) Set up the calculation for $\bar{z}$ using spherical coordinates.

10. $F(x,y,z)=(y+y^2z)i+(x-z+2xyz)j+(-y+xy^2)k$. (a) Show that $F(x,y,z)$ is a gradient field using the derivative conditions. (b) Find a potential function $f(x,y,z)$ for $F(x,y,z)$. (c) Find $\int_C F\cdot dr$, where $C$ is the straight line joining the points $(2,2,1)$ and $(1,-1,2)$.

11. In this problem $S$ is the surface given by the quarter of the right-circular cylinder centered on the $z$-axis, of radius 2 and height 4, which lies in the first octant. The field $F(x,y,z) = xi$. (a) Sketch the surface $S$ and the field $F$.
(b) Compute the flux integral (use the normal which points 'outward' from $S$, i.e. on the side away from the $z$-axis). (c) Let $G$ be the 3D solid in the first octant given by the interior of the quarter cylinder defined above. Use the divergence theorem to compute the flux of the field $F = xi$ out of the region $G$.

12. $F(x, y, z) = (yz) i + (-xz) j + k$. Let $S$ be the portion of the surface of the paraboloid $z=4-x^2-y^2$ which lies above the first octant; and let $C$ be the closed curve $C = C_1 + C_2 + C_3$, where the curves $C_1$, $C_2$ and $C_3$ are the three curves formed by intersecting $S$ with the $xy$, $yz$ and $xz$ planes respectively (so that $C$ is the boundary of $S$). Orient $C$ so that it is traversed counter-clockwise when seen from above in the first octant. (a) Use Stokes' Theorem to compute the loop integral $\oint_C F\cdot dr$ by using the surface integral over the capping surface $S$. (b) Set up and evaluate the loop integral $\oint_C F\cdot dr$ directly by parametrizing each piece of the curve $C$ and then adding up the three line integrals.

Solutions:

1. (a) The plane goes through $(1,1,2)$ and $(-1,1,2)$ and contains the direction of $L$, $(-1,2,-3)$. Therefore the normal vector of the plane is $(2,0,0)\times(-1,2,-3)=(0,6,4)$. Hence the equation of the plane is $6y+4z=6\times 1+4\times 2=14$, which boils down to $3y+2z=7$. (b) A unit normal vector for $Q$ is $u=(2,1,1)/\sqrt{6}$ (or alternatively $u=(-2,-1,-1)/\sqrt{6}$). On the other hand, a unit direction vector for the line $L$ is $v=(-1,2,-3)/\sqrt{14}$ (or alternatively $v=(1,-2,3)/\sqrt{14}$). Therefore the component of $u$ along $v$ is $u\cdot v=\frac{-3}{2\sqrt{21}}$.

2. (a) A direction vector for $L$ is $v=(1,2,0)$, because the line in the $xy$-plane given by $y=2x$ can be parametrized as $x=t, y=2t, z=0$. The vector-parametric form of $L$ is $r=(t,2t,1)$. (b) A normal vector for $P$ is $(1,2,0)$, since the plane $P$ is perpendicular to the line $L$.
The point-normal form of $P$ is $1(x-0)+2(y-0)+0(z-1)=0$, or $x+2y=0$.

3. (a) $r'(t) = (-\sin(e^t)e^t, \cos(e^t)e^t, e^t)$ implies $|r'(t)|=e^t\sqrt{2}$. Therefore $T(t)=\frac{1}{\sqrt{2}}(-\sin(e^t),\cos(e^t), 1)$.

(b) $T'(t)=\frac{-e^t}{\sqrt{2}}(\cos(e^t), \sin(e^t), 0)$.

4. (a) $F_x=\frac{xz}{\sqrt{x^2+y}}$ implies $F_x(1,3,2)=1$. $F_y=\frac{z}{2\sqrt{x^2+y}}+\frac{2}{z}$ implies $F_y(1,3,2)=3/2$. $F_z=\sqrt{x^2+y}-\frac{2y}{z^2}$ implies $F_z(1,3,2)=1/2$. Therefore the normal vector is $(1,3/2,1/2)$ and the equation of the tangent plane is $1(x-1)+\frac{3}{2}(y-3)+\frac{1}{2}(z-2)=0$, or $2x+3y+z=13$.

(b) At $P_0$ we have $|F_y|>|F_x|,|F_z|$, so a change in $y$ produces the largest change in $F$, and $\Delta F\approx F_y\Delta y=\frac{3}{2}\times(3.1-3)=0.15$.

(c) The directional derivative of $F$ in the direction $s=(-2,2,-1)$ is $\frac{dF}{ds}=\frac{s}{|s|}\cdot \nabla F=\frac{(-2,2,-1)}{3}\cdot (1,3/2,1/2)=1/6$. Since we want $\Delta F = 0.1$ and we know $\Delta F \approx \frac{dF}{ds}\Delta s=\Delta s/6$, we get $\Delta s = 0.6$.

5. (a) Solve $f_x=1-\frac{2}{x^2y}=0$ and $f_y=4-\frac{2}{xy^2}=0$ to get $x=2, y=1/2$. There is one critical point, at $(2,1/2)$.

(b) $f_{xx}=\frac{4}{x^3y}$, $f_{xy}=\frac{2}{x^2y^2}$, $f_{yy}=\frac{4}{xy^3}$, so $A=f_{xx}(2,1/2)=1$, $B=f_{xy}(2,1/2)=2$, $C=f_{yy}(2,1/2)=16$. Therefore $AC-B^2=12>0$ and $A>0$, so $f$ has a local minimum at $(2,1/2)$.

6. The objective is to minimize $f(x,y,z)=(x-x_0)^2+(y-y_0)^2+(z-z_0)^2$ subject to $g(x,y,z)=Ax+By+Cz=D$. The equations given by the Lagrange multiplier method are $2(x-x_0)=\lambda A$, $2(y-y_0)=\lambda B$, $2(z-z_0)=\lambda C$, $Ax+By+Cz=D$.

7. Recall $x=\rho\sin\phi\cos\theta$, $y=\rho\sin\phi\sin\theta$, $z=\rho\cos\phi$, so $x_\phi=\rho\cos\phi\cos\theta$, $y_\phi=\rho\cos\phi\sin\theta$, $z_\phi=-\rho\sin\phi$. Note that $(\rho, \phi, \theta)=(2,\pi/4,-\pi/4)$ represents the point $(1,-1,\sqrt{2})$.
By the chain rule, \begin{aligned} \frac{\partial F}{\partial\phi}(2,\pi/4,-\pi/4) &= \frac{\partial F}{\partial x}(1,-1,\sqrt{2})\frac{\partial x}{\partial \phi}(2,\pi/4,-\pi/4) \\ &\quad + \frac{\partial F}{\partial y}(1,-1,\sqrt{2})\frac{\partial y}{\partial \phi}(2,\pi/4,-\pi/4) \\ &\quad + \frac{\partial F}{\partial z}(1,-1,\sqrt{2})\frac{\partial z}{\partial \phi}(2,\pi/4,-\pi/4) \\ &= 1\times 1+2\times (-1)+(-2)\times (-\sqrt{2})=2\sqrt{2}-1.\end{aligned}

8. (a) The region $R$ is enclosed by $y=x^2$ and $y=2\sqrt{2x}$.

(b) $R$ is also the region described by $y^2/8\leq x\leq \sqrt{y}$ with $0\leq y\leq 4$, so the double integral can be rewritten as $\int_0^4\int_{y^2/8}^{\sqrt{y}}f(x,y)\,dx\,dy$.

9. (a) Since the density of $G$ at the point $(x,y,z)$ is $z$, the mass of $G$ is $M=\iiint_G z \,dV = \int_0^{2\pi}\int_0^1\int_{2r}^2 z\,dz\,r\,dr\,d\theta=\int_0^{2\pi}\int_0^1 2(1-r^2)\,r\,dr\,d\theta=\pi$.

(b) $\bar{z}=\frac{1}{M}\iiint_G z^2 \,dV=\frac{1}{\pi}\int_0^{2\pi}\int_0^1\int_{2r}^2 z^2\,dz\,r\,dr\,d\theta$.

(c) In spherical coordinates, $z=2$ becomes $\rho\cos\phi=2$, i.e. $\rho=2\sec\phi$. The limits describing $G$ in spherical coordinates are $0\leq\rho\leq 2\sec\phi$, $0\leq\phi\leq\arctan(1/2)$, $0\leq\theta\leq 2\pi$. Therefore $\bar{z}=\frac{1}{M}\iiint_G z^2 \,dV=\frac{1}{\pi}\int_0^{2\pi}\int_0^{\arctan(1/2)}\int_0^{2\sec\phi}(\rho\cos\phi)^2\rho^2\sin\phi \,d\rho\, d\phi\, d\theta$.

10. (a) We have $F=(P,Q,R)$, where $P=y+y^2z$, $Q=x-z+2xyz$, $R=-y+xy^2$. Check: $\partial P/\partial z=y^2=\partial R/\partial x$; $\partial Q/\partial z=-1+2xy=\partial R/\partial y$; $\partial P/\partial y=1+2yz=\partial Q/\partial x$.

(b) Suppose $f(x,y,z)$ is the potential function. Then $f_x = P = y+y^2z$ gives $f = xy+xy^2z+a(y,z)$. Then $f_y=x+2xyz+a_y=x-z+2xyz$, so $a_y=-z$, which gives $a=-yz+b(z)$ and hence $f=xy+xy^2z-yz+b(z)$. Finally, $f_z=xy^2-y+b'(z)=-y+xy^2$ implies $b(z)$ is a constant. So $f(x,y,z)=xy-yz+xy^2z+C$.
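As a sanity check on part (b) (a sketch of my own, not part of the original solutions), a quick central-difference computation can confirm that the gradient of the potential agrees with $F$:

```python
# Sanity-check sketch (not part of the original solutions): verify that the
# potential f(x,y,z) = xy - yz + x*y^2*z from 10(b) satisfies grad f = F,
# using central differences at an arbitrary sample point.

def f(x, y, z):
    return x*y - y*z + x*y**2*z

def F(x, y, z):
    # The field from problem 10.
    return (y + y**2*z, x - z + 2*x*y*z, -y + x*y**2)

def grad_f(x, y, z, h=1e-6):
    """Central-difference approximation of the gradient of f."""
    return ((f(x+h, y, z) - f(x-h, y, z)) / (2*h),
            (f(x, y+h, z) - f(x, y-h, z)) / (2*h),
            (f(x, y, z+h) - f(x, y, z-h)) / (2*h))

point = (1.3, -0.7, 2.1)
numeric = grad_f(*point)
exact = F(*point)
error = max(abs(a - b) for a, b in zip(numeric, exact))  # should be tiny
```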
(c) The work is the difference of the potential function at the endpoints, i.e., $f(1,-1,2)-f(2,2,1)=3-10=-7$.

11. (b) We know $\hat{n}=\frac{1}{2}(x,y,0)$ and $F=(x,0,0)$. Thus $F\cdot \hat{n}=x^2/2$, and in cylindrical coordinates $dS=2\,dz\,d\theta$. On the surface, $x=2\cos\theta$, and the limits of integration are $0\leq z\leq 4$, $0\leq\theta\leq\pi/2$. So $\iint_S F\cdot\hat{n}\,dS=\iint_S \frac{x^2}{2}\, dS=\int_0^{\pi/2}\int_0^4\frac{1}{2}(2\cos\theta)^2\, 2\,dz\,d\theta=4\pi$.

(c) $\operatorname{div}(F)=\nabla\cdot F=1$ implies $\iiint_G\nabla\cdot F \,dV=\iiint_G 1\,dV=\operatorname{vol}(G)=4\pi$.

12. (a) $\nabla\times F=(x,y,-2z)$ and $\hat{n}\,dS=(-z_x,-z_y,1)\,dA=(2x,2y,1)\,dA$. Therefore $(\nabla\times F)\cdot\hat{n}\,dS=(2x^2+2y^2-2z)\,dA=4(x^2+y^2-2)\,dA$. The limits of integration on $R$ (the quarter disk under $S$) are $0\leq r\leq 2$, $0\leq\theta\leq\pi/2$. So $\iint_S(\nabla\times F)\cdot\hat{n}\,dS=\iint_R 4(x^2+y^2-2)\,dA=\int_0^{\pi/2}\int_0^2 4(r^2-2)\,r\,dr\,d\theta=0$.

(b) $F=(yz,-xz,1)$ implies $\int_C F\cdot dr=\int_C yz\,dx-xz\,dy+dz$. On $C_1$: $x=0$, $y=t$, $z=4-t^2$, where $t$ goes from 2 to 0, so $\int_{C_1}F\cdot dr=\int_2^0(-2t)\,dt=4$. On $C_2$: $x=t$, $y=0$, $z=4-t^2$, where $t$ goes from 0 to 2, so $\int_{C_2}F\cdot dr=\int_0^2(-2t)\,dt=-4$. On $C_3$, $z=0$ and $dz=0$, so $\int_{C_3}F\cdot dr=0$. Thus $\oint_C F\cdot dr=4-4+0=0$.
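The three line integrals in 12(b) can also be cross-checked numerically. The midpoint-rule sketch below is my own addition (the quarter-circle parametrization of $C_3$ is one convenient choice of curve in the $z=0$ plane); it should reproduce the values $4$, $-4$ and $0$:

```python
# Midpoint-rule cross-check sketch (my own addition, not part of the original
# solutions) of the three line integrals in 12(b) for F = (yz, -xz, 1).

import math

def F(x, y, z):
    return (y * z, -x * z, 1.0)

def line_integral(r, rprime, t0, t1, n=20000):
    """Approximate the integral of F . dr along r(t) for t from t0 to t1."""
    total, dt = 0.0, (t1 - t0) / n
    for i in range(n):
        t = t0 + (i + 0.5) * dt
        Fx, Fy, Fz = F(*r(t))
        dx, dy, dz = rprime(t)
        total += (Fx * dx + Fy * dy + Fz * dz) * dt
    return total

# C1: x=0, y=t, z=4-t^2, t from 2 to 0 (expect 4)
I1 = line_integral(lambda t: (0.0, t, 4 - t * t),
                   lambda t: (0.0, 1.0, -2 * t), 2.0, 0.0)
# C2: x=t, y=0, z=4-t^2, t from 0 to 2 (expect -4)
I2 = line_integral(lambda t: (t, 0.0, 4 - t * t),
                   lambda t: (1.0, 0.0, -2 * t), 0.0, 2.0)
# C3: quarter circle x=2cos t, y=2sin t, z=0, t from 0 to pi/2 (expect 0)
I3 = line_integral(lambda t: (2 * math.cos(t), 2 * math.sin(t), 0.0),
                   lambda t: (-2 * math.sin(t), 2 * math.cos(t), 0.0),
                   0.0, math.pi / 2)

loop = I1 + I2 + I3   # Stokes' Theorem predicts 0
```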
http://www.dailycamera.com/letters/ci_25132738/quentin-mckenna-energy-storage-let-nature-guide-us?source=rss
Here’s an idea! The big stumbling block for renewable energy is energy storage. We need energy all the time, not just when the sun is shining or the wind is blowing. Fuel cells with hydrogen have been looked at as one solution, but the irony here is that a liter of gasoline actually contains 64 percent more hydrogen (116 grams) than there is in a liter of pure liquid hydrogen (71 grams). A further problem is that liquid hydrogen requires cryogenic storage, needing additional resources to properly contain it. The final kicker is that the cheapest source of hydrogen turns out to be natural gas. So why not just use the natural gas directly? At this point, the solution for having our cake and eating it too should be obvious: take our cues from nature. Renewable energy research should henceforth be directed toward producing our own fossil fuels from scratch, applying our wind and solar renewable energy to convert our carbon dioxide back into a useful fossil-fuel form. Fossil fuels indeed turn out to be the most efficient form of energy storage we have, and nature has already shown us the way. Quentin Mckenna, Boulder
http://unapologetic.wordpress.com/2010/07/09/the-radon-nikodym-derivative/?like=1&source=post_flair&_wpnonce=c1c9c39881
# The Unapologetic Mathematician

## Mathematics for the interested outsider

Okay, so the Radon-Nikodym theorem and its analogue for signed measures tell us that if we have two $\sigma$-finite signed measures $\mu$ and $\nu$ with $\nu\ll\mu$, then there's some function $f$ so that

$\displaystyle\nu(E)=\int\limits_Ef\,d\mu$

But we also know that by definition

$\displaystyle\nu(E)=\int\limits_E\,d\nu$

If both of these integrals were taken with respect to the same measure, we would know that the equality

$\displaystyle\int\limits_Ef\,d\mu=\int\limits_Eg\,d\mu$

for all measurable $E$ implies that $f=g$ $\mu$-almost everywhere. The same thing can't quite be said here, but it motivates us to say that in some sense we have equality of "differential measures" $d\nu=f\,d\mu$. In and of itself this doesn't really make sense, but we define the symbol

$\displaystyle\frac{d\nu}{d\mu}=f$

and call it the "Radon-Nikodym derivative" of $\nu$ by $\mu$. Now we can write

$\displaystyle\int\limits_E\,d\nu=\int\limits_Ef\,d\mu=\int\limits_E\frac{d\nu}{d\mu}\,d\mu$

The left equality is the Radon-Nikodym theorem, and the right equality is just the substitution of the new symbol for $f$. Of course, this function — and the symbol $\frac{d\nu}{d\mu}$ — is only defined uniquely $\mu$-almost everywhere.

The notation and name are obviously suggestive of differentiation, and indeed the usual laws of derivatives hold. We'll start today with the easy property of linearity. That is, if $\nu_1$ and $\nu_2$ are both $\sigma$-finite signed measures, and if $a_1$ and $a_2$ are real numbers, then $a_1\nu_1+a_2\nu_2$ is clearly another $\sigma$-finite signed measure. Further, it's not hard to see that if $\nu_i\ll\mu$ then $a_1\nu_1+a_2\nu_2\ll\mu$ as well. By the Radon-Nikodym theorem we have functions $f_1$ and $f_2$ so that

\displaystyle\begin{aligned}\nu_1(E)&=\int\limits_Ef_1\,d\mu\\\nu_2(E)&=\int\limits_Ef_2\,d\mu\end{aligned}

for all measurable sets $E$.
Then it's clear that

\displaystyle\begin{aligned}\left[a_1\nu_1+a_2\nu_2\right](E)&=a_1\nu_1(E)+a_2\nu_2(E)\\&=a_1\int\limits_Ef_1\,d\mu+a_2\int\limits_Ef_2\,d\mu\\&=\int\limits_E\left(a_1f_1+a_2f_2\right)\,d\mu\end{aligned}

That is, $a_1f_1+a_2f_2$ can serve as the Radon-Nikodym derivative of $a_1\nu_1+a_2\nu_2$ with respect to $\mu$. We can also write this in our suggestive notation as

$\displaystyle\frac{d(a_1\nu_1+a_2\nu_2)}{d\mu}=a_1\frac{d\nu_1}{d\mu}+a_2\frac{d\nu_2}{d\mu}$

which equation holds $\mu$-almost everywhere.

July 9, 2010 - Posted by | Analysis, Measure Theory
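To make the linearity concrete, here is a toy sketch of my own (not from the post) on a finite measure space, where the Radon-Nikodym derivative is literally a ratio of point masses and linearity is a pointwise identity:

```python
# Toy sketch on a finite measure space (my own illustration, not from the
# post): when mu gives positive mass to each point, nu << mu automatically,
# and the Radon-Nikodym derivative at x is simply nu({x}) / mu({x}).

points = [0, 1, 2, 3]
mu = {x: 1.0 for x in points}            # reference measure, all masses > 0
nu1 = {0: 0.5, 1: 2.0, 2: 0.0, 3: 1.0}  # two measures with nu_i << mu
nu2 = {0: 1.0, 1: 0.0, 2: 3.0, 3: 0.5}

def rn_derivative(nu, mu):
    """d(nu)/d(mu) as a function on points, i.e. a dict of ratios."""
    return {x: nu[x] / mu[x] for x in mu}

a1, a2 = 2.0, -3.0
combo = {x: a1 * nu1[x] + a2 * nu2[x] for x in points}

d1, d2 = rn_derivative(nu1, mu), rn_derivative(nu2, mu)
lhs = rn_derivative(combo, mu)                       # d(a1*nu1 + a2*nu2)/dmu
rhs = {x: a1 * d1[x] + a2 * d2[x] for x in points}   # a1*dnu1/dmu + a2*dnu2/dmu
# lhs and rhs agree pointwise
```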
http://web2.0calc.com/questions/quadratic_79243
what are the steps for finding the roots of this equation using quadratics: x^2+5x+6=0

Guest Mar 3, 2017

#1 Use the calculator to find out

Davis Mar 3, 2017

#4 The fastest way is with factoring.

x^2 + 5x + 6 = 0

(x+2)(x+3) = 0

Set each factor equal to zero.

x + 2 = 0 and x + 3 = 0

x = -2 and x = -3

You can also use the quadratic formula.

$$ax^{2}+bx+c=0 \\ x = {-b \pm \sqrt{b^2-4ac} \over 2a} \\ a=1, b=5, c=6 \\ x = {-5 \pm \sqrt{5^2-4(1)(6)} \over 2(1)} \\ x = {-5 \pm \sqrt{1} \over 2} \\ x = \frac{-5+1}{2} \text{ and } x = \frac{-5-1}{2} \\ x=-2 \text{ and } x=-3$$

A third method is called completing the square, here's an explanation of that method:

hectictar Mar 3, 2017
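For readers who want to automate the quadratic-formula method above, here is a small Python sketch (the function name is my own choice, not from the thread):

```python
# A small sketch of the quadratic-formula method shown above.

import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0; raises if the discriminant < 0."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real roots")
    root = math.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

roots = quadratic_roots(1, 5, 6)   # the example above: x^2 + 5x + 6 = 0
```

For $x^2+5x+6$ this returns the same roots found by factoring, $-2$ and $-3$.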
http://mathonline.wikidot.com/line-integrals-on-piecewise-smooth-curves
# Line Integrals on Piecewise Smooth Curves

Recall from the Line Integrals page that if $z = f(x, y)$ is a two-variable real-valued function and if the smooth curve $C$ is given parametrically by $x = x(t)$ and $y = y(t)$ for $a \leq t \leq b$, then the line integral of $f$ along $C$ is given by:

(1)
\begin{align} \quad \int_C f(x, y) \: ds = \int_a^b f(x(t), y(t)) \sqrt{ \left ( \frac{dx}{dt} \right)^2 + \left ( \frac{dy}{dt} \right )^2} \: dt \end{align}

Now suppose instead that the curve $C$ is not smooth. Geometrically, a curve fails to be smooth where it has a sharp point. Suppose that $C$ is a piecewise smooth curve, that is, $C$ is the union of a finite number $n$ of smooth curves $C_1$, $C_2$, …, $C_n$. Then we can still compute the line integral of $f$ along $C$ as the sum of the line integrals of $f$ along $C_1$, $C_2$, …, $C_n$, that is:

(2)
\begin{align} \quad \int_C f(x, y) \: ds &= \int_{C_1} f(x, y) \: ds + \int_{C_2} f(x, y) \: ds + ... + \int_{C_n} f(x, y) \: ds \\ &= \sum_{i=1}^{n} \int_{C_i} f(x, y) \: ds \end{align}
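As a concrete illustration of equation (2) (my own example, not from the page), the following Python sketch computes $\int_C (x+y)\,ds$ over the boundary of the unit square as a sum of four smooth-segment integrals; the exact value is $\frac{1}{2}+\frac{3}{2}+\frac{3}{2}+\frac{1}{2}=4$:

```python
# Sketch (my own example): the scalar line integral of f(x, y) = x + y over
# the boundary of the unit square, computed segment by segment per eq. (2).

import math

def segment_integral(f, p, q, n=2000):
    """Midpoint-rule approximation of the integral of f ds along segment p->q."""
    length = math.hypot(q[0] - p[0], q[1] - p[1])
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        total += f(x, y)
    return total * length / n

corners = [(0, 0), (1, 0), (1, 1), (0, 1)]
f = lambda x, y: x + y
value = sum(segment_integral(f, corners[i], corners[(i + 1) % 4])
            for i in range(4))   # expect 4.0
```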
http://www.ams.org/joursearch/servlet/PubSearch?f1=msc&onejrnl=tran&pubname=one&v1=22E45&startRec=31
# American Mathematical Society

AMS eContent Search Results

Matches for: msc=(22E45) AND publication=(tran)

Sort order: Date. Format: Standard display. Results: 31 to 60 of 70 found.

[31] Birgit Speh. Indecomposable representations of semisimple Lie groups. Trans. Amer. Math. Soc. 265 (1981) 1-34. MR 607104.

[32] Robert P. Martin. Tensor products of principal series for the De Sitter group. Trans. Amer. Math. Soc. 265 (1981) 121-135. MR 607111.

[33] Jonathan Rosenberg. Realization of square-integrable representations of unimodular Lie groups on $L^2$-cohomology spaces. Trans. Amer. Math. Soc. 261 (1980) 1-32. MR 576861.

[34] Bent Ørsted. Composition series for analytic continuations of holomorphic discrete series representations of ${\rm SU}(n,n)$. Trans. Amer. Math. Soc. 260 (1980) 563-573. MR 574799.

[35] Nolan R. Wallach. The analytic continuation of the discrete series. II. Trans. Amer. Math. Soc. 251 (1979) 19-37.

[36] Nolan R. Wallach. The analytic continuation of the discrete series. I. Trans. Amer. Math. Soc. 251 (1979) 1-17. MR 531967.

[37] Hans Plesner Jakobsen. Intertwining differential operators for ${\rm Mp}(n,{\bf R})$ and ${\rm SU}(n,n)$. Trans. Amer. Math. Soc. 246 (1978) 311-337. MR 515541.

[38] Frederick W. Keene. Square integrable representations and a Plancherel theorem for parabolic subgroups. Trans. Amer. Math. Soc. 243 (1978) 61-73. MR 0498983.

[39] Robert P. Martin. Tensor products for ${\rm SL}(2,k)$. Trans. Amer. Math. Soc. 239 (1978) 197-211. MR 487045.

[40] Yang Hua. On a degenerate principal series of representations of ${\rm U}(2,2)$. Trans. Amer. Math. Soc. 238 (1978) 229-252. MR 0466417.

[41] Shlomo Sternberg and Joseph A. Wolf. Hermitian Lie algebras and metaplectic representations. I. Trans. Amer. Math. Soc. 238 (1978) 1-43. MR 0486325.

[42] Floyd L. Williams. Topological irreducibility of nonunitary representations of group extensions. Trans. Amer. Math. Soc. 233 (1977) 69-84. MR 0463364.

[43] Tuong Ton-That. Symplectic Stiefel harmonics and holomorphic representations of symplectic groups. Trans. Amer. Math. Soc. 232 (1977) 265-277. MR 0476926.

[44] Kenneth D. Johnson and Nolan R. Wallach. Composition series and intertwining operators for the spherical principal series. I. Trans. Amer. Math. Soc. 229 (1977) 137-173. MR 0447483.

[45] E. James Funderburk. Module structure of certain induced representations of compact Lie groups. Trans. Amer. Math. Soc. 228 (1977) 269-285. MR 0439992.

[46] Robert S. Strichartz. Bochner identities for Fourier transforms. Trans. Amer. Math. Soc. 228 (1977) 307-327. MR 0433147.

[47] R. Penney. Spherical distributions on Lie groups and $C^\infty$ vectors. Trans. Amer. Math. Soc. 223 (1976) 367-384. MR 0457632.

[48] Hrvoje Kraljević. On representations of the group $SU(n,1)$. Trans. Amer. Math. Soc. 221 (1976) 433-448. MR 0409725.

[49] Johan F. Aarnes. Differentiable representations. I. Induced representations and Frobenius reciprocity. Trans. Amer. Math. Soc. 220 (1976) 1-35. MR 0417336.

[50] Tuong Ton-That. Lie group representations and harmonic polynomials of a matrix variable. Trans. Amer. Math. Soc. 216 (1976) 1-46. MR 0399366.

[51] Kenneth D. Johnson. Composition series and intertwining operators for the spherical principal series. II. Trans. Amer. Math. Soc. 215 (1976) 269-283. MR 0385012.

[52] Shlomo Sternberg. Symplectic homogeneous spaces. Trans. Amer. Math. Soc. 212 (1975) 113-130. MR 0379759.

[53] J. Lepowsky. On the Harish-Chandra homomorphism. Trans. Amer. Math. Soc. 208 (1975) 193-218. MR 0376792.

[54] Robert Paul Martin. On the decomposition of tensor products of principal series representations for real-rank one semisimple groups. Trans. Amer. Math. Soc. 201 (1975) 177-211. MR 0374341.

[55] Richard Tolimieri. Solvable groups and quadratic forms. Trans. Amer. Math. Soc. 201 (1975) 329-345. MR 0354552.

[56] Richard Penney. Entire vectors and holomorphic extension of representations. Trans. Amer. Math. Soc. 198 (1974) 107-121. MR 0364556.

[57] Floyd L. Williams. Laplace operators and the $\mathfrak{h}$ module structure of certain cohomology groups. Trans. Amer. Math. Soc. 197 (1974) 1-57. MR 0379761.

[58] George F. Leger and Eugene M. Luks. Cohomology of nilradicals of Borel subalgebras. Trans. Amer. Math. Soc. 195 (1974) 305-316. MR 0364554.

[59] Stephen S. Gelbart. A theory of Stiefel harmonics. Trans. Amer. Math. Soc. 192 (1974) 29-50. MR 0425519.

[60] Richard Penney. Entire vectors and holomorphic extension of representations. II. Trans. Amer. Math. Soc. 191 (1974) 195-207. MR 0364556.
http://mathhelpforum.com/algebra/60105-determine-whether-equation-linear-equation-help.html
# Math Help - Determine whether the equation is a linear equation..Help!

1. ## Determine whether the equation is a linear equation..Help!

1. $\frac{x}{2} = 10 + \frac{2y}{3}$

2. $7n - 8m = 4 - 2m$

2. Originally Posted by Phresh

1. $\frac{x}{2} = 10 + \frac{2y}{3}$

2. $7n - 8m = 4 - 2m$

I guess both are linear, because

$\frac{x}{2} = 10 + \frac{2y}{3}$

$y = \frac{3}{2}\left(\frac{x}{2}-10\right)$

and

$7n - 8m = 4 - 2m$

$6m = 7n - 4$

$m = \frac{7n-4}{6}$

This is a straight line, too.
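A quick numerical sketch (my own addition, not from the thread) confirms that each equation defines a straight line; note that solving the second equation for $m$ gives $m=\frac{7n-4}{6}$:

```python
# Numerical sketch (my own check): sample three solution points of each
# equation and confirm they satisfy the equation and are collinear.

def collinear(p, q, r, tol=1e-9):
    """Cross-product test for collinearity of three points in the plane."""
    return abs((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])) < tol

# Equation 1: x/2 = 10 + 2y/3, solved for x: x = 20 + 4y/3
pts1 = [(20 + 4*y/3, y) for y in (0.0, 3.0, 6.0)]
ok1 = all(abs(x/2 - (10 + 2*y/3)) < 1e-9 for x, y in pts1)

# Equation 2: 7n - 8m = 4 - 2m, solved for m: m = (7n - 4)/6
pts2 = [(n, (7*n - 4)/6) for n in (0.0, 2.0, 4.0)]
ok2 = all(abs((7*n - 8*m) - (4 - 2*m)) < 1e-9 for n, m in pts2)

lines_check = ok1 and ok2 and collinear(*pts1) and collinear(*pts2)
```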
http://mathhelpforum.com/advanced-statistics/218318-brownian-motion.html
# Math Help - Brownian Motion

1. ## Brownian Motion

Hi! I hope someone can help me with the following exercise...

Let $n\geq 1$, $0=t_0<t_1<\cdots<t_n$, and $a_1,a_2,\ldots,a_n \in \mathbb{R}$. Show that the random variable $a_1 B(t_1)+\cdots+a_n B(t_n)$ is normally distributed, and find its mean value and variance.

2. ## Re: Brownian Motion

Hey mathman.

Hint: Use the property of Brownian motion that increments over disjoint time intervals are independent.

3. ## Re: Brownian Motion

How can I find the mean value $E(a_1 B(t_1)+\cdots+a_n B(t_n))$ and the variance $Var(a_1 B(t_1)+\cdots+a_n B(t_n))$? Using the property of Brownian motion that $E(B(t)-B(s))=0$ and $Var(B(t)-B(s))=t-s$ for $0\leq s<t$?

4. ## Re: Brownian Motion

Hint: If two variables are independent then $E[XY] = E[X]E[Y]$.
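A small simulation sketch (my own addition; the times and coefficients are my own choices) illustrates the answer: since $\mathrm{Cov}(B(s),B(t))=\min(s,t)$, the linear combination has mean $0$ and variance $\sum_{i,j} a_i a_j \min(t_i,t_j)$:

```python
# Simulation sketch (times and coefficients are my own choice, not from the
# thread): X = a1*B(t1) + a2*B(t2) + a3*B(t3) should have mean 0 and
# variance sum_{i,j} a_i a_j min(t_i, t_j).

import random

random.seed(0)
ts = [0.5, 1.0, 2.0]
a = [1.0, -2.0, 0.5]

theory_var = sum(a[i] * a[j] * min(ts[i], ts[j])
                 for i in range(3) for j in range(3))  # = 1.5 for these values

def sample_X():
    """Build B(t1), B(t2), B(t3) from independent Gaussian increments."""
    b, t_prev, values = 0.0, 0.0, []
    for t in ts:
        b += random.gauss(0.0, (t - t_prev) ** 0.5)  # increment B(t) - B(t_prev)
        values.append(b)
        t_prev = t
    return sum(ai * bi for ai, bi in zip(a, values))

N = 100_000
xs = [sample_X() for _ in range(N)]
mean = sum(xs) / N                                   # should be near 0
var = sum((x - mean) ** 2 for x in xs) / (N - 1)     # should be near 1.5
```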
https://www.jobilize.com/online/course/2-3-machine-learning-lecture-4-course-notes-by-openstax?qcr=www.quizover.com&page=3
2.3 Machine learning lecture 4 course notes (Page 4/5)

$\hat{\epsilon}(h_i)=\frac{1}{m}\sum_{j=1}^{m} Z_j .$

Thus, $\hat{\epsilon}(h_i)$ is exactly the mean of the $m$ random variables $Z_j$ that are drawn iid from a Bernoulli distribution with mean $\epsilon(h_i)$. Hence, we can apply the Hoeffding inequality, and obtain

$P\left(|\epsilon(h_i)-\hat{\epsilon}(h_i)|>\gamma\right)\le 2\exp(-2\gamma^2 m).$

This shows that, for our particular $h_i$, training error will be close to generalization error with high probability, assuming $m$ is large. But we don't just want to guarantee that $\epsilon(h_i)$ will be close to $\hat{\epsilon}(h_i)$ (with high probability) for just one particular $h_i$. We want to prove that this will be true simultaneously for all $h\in \mathcal{H}$. To do so, let $A_i$ denote the event that $|\epsilon(h_i)-\hat{\epsilon}(h_i)|>\gamma$. We've already shown that, for any particular $A_i$, it holds that $P(A_i)\le 2\exp(-2\gamma^2 m)$.
Thus, using the union bound, we have that

$\begin{array}{rcl} P\left(\exists\, h\in \mathcal{H}.\ |\epsilon(h_i)-\hat{\epsilon}(h_i)|>\gamma\right) &=& P(A_1\cup\cdots\cup A_k) \\ &\le& \sum_{i=1}^{k} P(A_i) \\ &\le& \sum_{i=1}^{k} 2\exp(-2\gamma^2 m) \\ &=& 2k\exp(-2\gamma^2 m) \end{array}$

If we subtract both sides from 1, we find that

$\begin{array}{rcl} P\left(\neg\exists\, h\in \mathcal{H}.\ |\epsilon(h_i)-\hat{\epsilon}(h_i)|>\gamma\right) &=& P\left(\forall h\in \mathcal{H}.\ |\epsilon(h_i)-\hat{\epsilon}(h_i)|\le\gamma\right) \\ &\ge& 1-2k\exp(-2\gamma^2 m) \end{array}$

(The "$\neg$" symbol means "not.") So, with probability at least $1-2k\exp(-2\gamma^2 m)$, we have that $\epsilon(h)$ will be within $\gamma$ of $\hat{\epsilon}(h)$ for all $h\in\mathcal{H}$. This is called a uniform convergence result, because this is a bound that holds simultaneously for all (as opposed to just one) $h\in\mathcal{H}$.

In the discussion above, what we did was, for particular values of $m$ and $\gamma$, give a bound on the probability that for some $h\in\mathcal{H}$, $|\epsilon(h)-\hat{\epsilon}(h)|>\gamma$. There are three quantities of interest here: $m$, $\gamma$, and the probability of error; we can bound any one of them in terms of the other two. For instance, we can ask the following question: Given $\gamma$ and some $\delta>0$, how large must $m$ be before we can guarantee that with probability at least $1-\delta$, training error will be within $\gamma$ of generalization error?
By setting $\delta = 2k\exp(-2\gamma^2 m)$ and solving for $m$ [you should convince yourself this is the right thing to do!], we find that if

$m \ge \frac{1}{2\gamma^2}\log\frac{2k}{\delta},$

then with probability at least $1-\delta$, we have that $|\epsilon(h)-\hat{\epsilon}(h)|\le\gamma$ for all $h\in\mathcal{H}$. (Equivalently, this shows that the probability that $|\epsilon(h)-\hat{\epsilon}(h)|>\gamma$ for some $h\in\mathcal{H}$ is at most $\delta$.) This bound tells us how many training examples we need in order to make a guarantee. The training set size $m$ that a certain method or algorithm requires in order to achieve a certain level of performance is also called the algorithm's sample complexity. The key property of the bound above is that the number of training examples needed to make this guarantee is only logarithmic in $k$, the number of hypotheses in $\mathcal{H}$. This will be important later.

Similarly, we can also hold $m$ and $\delta$ fixed and solve for $\gamma$ in the previous equation, and show [again, convince yourself that this is right!] that with probability $1-\delta$, we have that for all $h\in\mathcal{H}$,

$|\hat{\epsilon}(h)-\epsilon(h)| \le \sqrt{\frac{1}{2m}\log\frac{2k}{\delta}}.$

Now, let's assume that uniform convergence holds, i.e., that $|\epsilon(h)-\hat{\epsilon}(h)|\le\gamma$ for all $h\in\mathcal{H}$. What can we prove about the generalization of our learning algorithm that picked $\hat{h}=\arg\min_{h\in\mathcal{H}}\hat{\epsilon}(h)$?

Define $h^{*}=\arg\min_{h\in\mathcal{H}}\epsilon(h)$ to be the best possible hypothesis in $\mathcal{H}$. Note that $h^{*}$ is the best that we could possibly do given that we are using $\mathcal{H}$, so it makes sense to compare our performance to that of $h^{*}$.
We have:

$\begin{array}{rcl} \epsilon(\hat{h}) &\le& \hat{\epsilon}(\hat{h})+\gamma \\ &\le& \hat{\epsilon}(h^{*})+\gamma \\ &\le& \epsilon(h^{*})+2\gamma \end{array}$

The first line used the fact that $|\epsilon(\hat{h})-\hat{\epsilon}(\hat{h})|\le\gamma$ (by our uniform convergence assumption). The second used the fact that $\hat{h}$ was chosen to minimize $\hat{\epsilon}(h)$, and hence $\hat{\epsilon}(\hat{h})\le\hat{\epsilon}(h)$ for all $h$, and in particular $\hat{\epsilon}(\hat{h})\le\hat{\epsilon}(h^{*})$. The third line used the uniform convergence assumption again, to show that $\hat{\epsilon}(h^{*})\le\epsilon(h^{*})+\gamma$. So, what we've shown is the following: If uniform convergence occurs, then the generalization error of $\hat{h}$ is at most $2\gamma$ worse than that of the best possible hypothesis in $\mathcal{H}$!
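The two bounds above are straightforward to evaluate numerically. Here is a small sketch (plain Python; the function names are mine, and nothing is assumed beyond the formulas $m \ge \frac{1}{2\gamma^2}\log\frac{2k}{\delta}$ and $\gamma = \sqrt{\frac{1}{2m}\log\frac{2k}{\delta}}$):

```python
import math

def sample_complexity(gamma, delta, k):
    # Smallest integer m with m >= (1 / (2 gamma^2)) * log(2k / delta).
    return math.ceil(math.log(2 * k / delta) / (2 * gamma**2))

def error_radius(m, delta, k):
    # gamma such that, with probability >= 1 - delta,
    # |eps(h) - eps_hat(h)| <= gamma for all k hypotheses at once.
    return math.sqrt(math.log(2 * k / delta) / (2 * m))

m = sample_complexity(0.05, 0.01, 1000)           # 2442 examples
# Doubling the number of hypotheses barely moves the requirement:
extra = sample_complexity(0.05, 0.01, 2000) - m   # only 138 more
```

Note how `extra` is tiny relative to `m`; this is the logarithmic dependence on $k$ highlighted above.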
https://crypto.stackexchange.com/questions/33434/how-to-compute-the-discrete-logarithm-of-diffie-hellman-with-a-composite-modulus
# How to compute the discrete logarithm of Diffie-Hellman with a composite modulus?

Imagine $n = pq$ with $p-1 = 2 p_1 p_2$ and $q-1 = 2 q_1 q_2$. I can compute the discrete logarithm of $y = g^x \pmod{n}$ by computing the discrete logarithm of $y$ modulo $p$ and $q$. But then how do I recombine them?

What I'm doing right now is recombining them modulo $2p_1p_2q_1q_2 = \frac{(p-1)(q-1)}{2}$ with the CRT (these moduli all happen to be co-prime). But we're still not working modulo $(p-1)(q-1)$ (it lacks the last factor of $2$). I usually have a solution, though, and I can get the other solution by adding $(p-1)(q-1)/2$ to it. I figured I would find the exact $x$ this way, but instead I find two other solutions; I can't seem to understand why. Also, here I'm lucky: most of the factors (except one of the $2$s) were co-prime. What about when $\operatorname{lcm}(p-1, q-1)$ is way smaller than $(p-1)(q-1)$?

• You should specify more precisely how your variables are defined. Are $p$ and $q$ prime numbers? Same remark for $p_1$, $p_2$, $q_1$ and $q_2$. Mar 6 '16 at 21:46
• Yes, all of them are. Mar 7 '16 at 14:38
• So $\gcd(p, q) = 1$ and you can directly recompute it modulo $n$ without considering $p_1$, $p_2$, $q_1$, $q_2$. Mar 7 '16 at 14:42
• The discrete logs of $g^x \bmod p$ and $\bmod q$ are defined modulo $p-1$ and $q-1$, so I cannot recombine them modulo $pq$, but rather modulo $(p-1)(q-1)$, which is the order of $\mathbb{Z}^\ast_n$. It is the discrete log that I'm trying to recombine, not $y = g^x$, which I already know. Mar 7 '16 at 14:55

I think the problem is that you are trying to recombine the result modulo $(p-1)(q-1)$ instead of modulo $n$, and that is why you don't find the exact $x$.

> I can compute the discrete logarithm of $y=g^x \bmod n$ by computing the discrete logarithm of $y$ modulo $p$ and $q$. But then how to recompute it?

As I understand, you want to find the discrete logarithm of $y$ modulo $n$, not modulo $(p-1)(q-1)$. Let $x_p$ be the discrete logarithm modulo $p$ and $x_q$ the discrete logarithm modulo $q$.
So to recombine the result we have to solve the system

$\left\{ \begin{array}{l} x \equiv x_p \mod p \\ x \equiv x_q \mod q \end{array} \right.$

If $\gcd(p,q) = 1$ then you can apply the Chinese remainder theorem using the Gauss algorithm (check also the Garner algorithm, which is more efficient) without having to consider $p_1$, $p_2$, $q_1$, $q_2$, and you will find a unique solution.

> What about when $\operatorname{lcm}(p-1,q-1)$ is way smaller than $(p-1)(q-1)$?

If $\gcd(p,q) \neq 1$ but $\gcd(p,q) \mid x_p - x_q$, the system can still be solved, but there exist several solutions. Otherwise, you will not be able to recombine the solutions modulo $p$ and $q$ into a solution modulo $n$.

• The discrete logarithms are modulo $p-1$ and $q-1$, in the multiplicative groups $\mathbb{Z}^\ast_p$ and $\mathbb{Z}^\ast_q$, so I guess no Garner. I know that there exist different solutions; from my calculations I reduced the solutions to two, and they are still different from the real solution. But I guess this is because I'm trying to do the CRT in a different group (not mod $pq$). Mar 7 '16 at 14:44
• Also, I didn't know about the Garner algorithm! It seems way faster than the usual CRT algorithm, but the CRT is so fast anyway that it doesn't really matter in my case. Mar 7 '16 at 14:53
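Since the recombination step is pure CRT, here is a sketch in Python (names are mine; the non-coprime branch implements exactly the solvability condition from the answer, that the gcd of the moduli must divide the difference of the residues):

```python
def ext_gcd(a, b):
    # Returns (g, s, t) with a*s + b*t == g == gcd(a, b).
    if b == 0:
        return (a, 1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def crt(r1, m1, r2, m2):
    # Solve x = r1 (mod m1), x = r2 (mod m2).
    # Returns (x, lcm(m1, m2)), or None when gcd(m1, m2) does not
    # divide r2 - r1, in which case no solution exists.
    g, s, _ = ext_gcd(m1, m2)
    if (r2 - r1) % g != 0:
        return None
    l = m1 // g * m2
    x = (r1 + (r2 - r1) // g * s % (m2 // g) * m1) % l
    return (x, l)

# Coprime moduli: unique solution modulo the product.
# Non-coprime moduli (e.g. p-1 and q-1, both even): the solution is
# only unique modulo lcm(m1, m2), matching the "two solutions" above.
```

On the question's puzzle: with $m_1 = p-1$ and $m_2 = q-1$ sharing a factor of $2$, the exponent is determined only modulo $\operatorname{lcm}(p-1,q-1) = (p-1)(q-1)/2$, and both lifts to $(p-1)(q-1)$ are legitimate discrete logarithms of $y$: since the order of $g$ divides the lcm, adding the lcm to $x$ does not change $g^x \bmod n$.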
https://www.physicsforums.com/threads/a-few-questions-from-introduction-to-sr-by-rindler.163445/
# A few questions from Introduction to SR by Rindler

1. Mar 31, 2007

### MathematicalPhysicist

We have two inertial frames, S and S', where S' is moving with speed v along the x axis. Here are a few questions about these frames:
1. If two events occur at the same point in some inertial frame S, prove that their temporal order is the same in all inertial frames, and that the least time separation is assigned to them in S.
2. If two events occur at the same time in some inertial frame S, prove that there is no limit on the time separations assigned to these events in other frames, but that their space separation varies from infinity down to a minimum which is measured in S.
3. In the inertial frame S' the standard lattice clocks all emit a 'flash' at noon. Prove that in S this flash occurs on a plane orthogonal to the x-axis and travelling in the positive x direction at speed c^2/v.
Well, I'm not sure what to do in 1 or 2. But in 3, the beam of light from S' obviously travels at speed c, and according to Einstein's postulate the speed of light is constant for all observers, so shouldn't the flash travel at c? Anyway, I know that the flash should travel a distance of vt, where v is the speed of S' and t is the time in S, so we should have ct' = vt where t' is the time measured in S', but is this correct? I would like you to advise me how to solve 1 and 2.

2. Mar 31, 2007

### robphy

Are you familiar with 4-vectors? Spacetime diagrams?

3. Mar 31, 2007

### MathematicalPhysicist

I'm familiar with spacetime diagrams, where t is a function of x, but I haven't yet used 4-vectors. Anyway, isn't there a mathematical way to prove these questions?

4. Mar 31, 2007

### robphy

For 1, draw the two events on a spacetime diagram, with the past event at the origin. Note that any proper Lorentz boost will slide the future event along a [future] hyperbola centered at the origin, asymptotes along the light-cone. All events on that hyperbola have the same interval with the origin: $t^2 - x^2 = \text{constant} > 0$.
Note that the time-difference [difference in t-coordinates] of any event on that hyperbola is always positive [so the causal order is preserved]... in fact, the smallest value occurs when x = 0.

5. Mar 31, 2007

### bernhard.rothenstein

Lorentz transformation exercise:
1. The events you define are in S, E(1)[x,t(1)] and E(2)[x,t(2)]. Perform the Lorentz transformations to S', reckon the corresponding time intervals and space separations, and you recover the anticipated results.
2. The events you define are in S, E(1)[x(1),t] and E(2)[x(2),t]. Do the same thing as above. Consider the numbers as indexes.
Use soft words and hard arguments.

6. Mar 31, 2007

### pmb_phy

In general t is not a function of x. That happens on certain occasions, such as a particle moving at constant velocity. But consider a particle which increases its speed from 0 at x = 0, later decreases in speed, turns around, and finally reaches x = 0 again. In this case t is not a true function of x, since a function must be single-valued, and in the example I gave you t has two values for which x = 0. Thus t(x) is multivalued.

Pete

7. Apr 2, 2007

Thanks guys.

8. Apr 2, 2007

### bernhard.rothenstein

Question 3: I think you should state it with more details.

9. Apr 2, 2007

### MathematicalPhysicist

What isn't clear there?

10. Apr 2, 2007

### bernhard.rothenstein

Question 3: "In the inertial frame S' the standard lattice clocks all emit a 'flash' at noon. Prove that in S this flash occurs on a plane orthogonal to the x-axis and travelling in the positive x direction at speed c^2/v."
a. In which direction are the light signals emitted (supposed simultaneously in S')?
b. On or in the plane?

11. Apr 3, 2007

### George Jones

Staff Emeritus

Have you written down the question exactly as it appears in the book? If not, please do so. Suppose that noon is taken as t' = 0 in the primed frame. Then, in the primed frame, the coordinates of an arbitrary flash are (t', x', y', z') = (0, A, B, C).
What are the unprimed coordinates of an arbitrary flash?

12. Apr 3, 2007

### MathematicalPhysicist

Well, this is exactly what is written in the book. I guess my only other option is to ask Rindler via email what he meant in this question.

13. Apr 3, 2007

### pervect

Staff Emeritus

Try looking at http://en.wikipedia.org/wiki/Relativity_of_simultaneity#Lorentz_transformations, the diagram of the "line of simultaneity". Specifically http://en.wikipedia.org/wiki/Image:Relativity_of_simultaneity.png
The way I interpret the question, Rindler is talking about the set of events that are simultaneous in S (he says "at noon", I read "simultaneous"), and how they appear in frame S'. The Wiki article addresses the same question with two of the spatial dimensions suppressed.

14. Apr 3, 2007

### George Jones

Staff Emeritus

I'm trying to lead you to the answer; I just wondered whether the question was phrased a little differently in the book. Can you answer the question I posed in my previous post?

15. Apr 4, 2007

### MathematicalPhysicist

Well, in this case we only need to use this equation:
$$t'=\gamma t\left(1-\frac{vU}{c^2}\right)$$
where U is the velocity of the flash and v is the velocity of S'. At t' = 0 we would have that U = c^2/v, but how would I prove that the flash occurs on a plane orthogonal to the x-axis? Well, if it were to occur on a plane not orthogonal to the x axis of S, then it would not be orthogonal in the S' system.

16. Apr 5, 2007

### George Jones

Staff Emeritus

The question is not looking for the speed at which light spreads from a flash point; this is what's confusing about the question. As pervect noted, the question is about simultaneity. Consider a bunch of cameras, one at each point in space (not spacetime) for S'. At t' = 0, all the camera flashes go off simultaneously for S'. The collection of events that represents the camera flashes going off is then
$$F = \{(t', x', y', z') = (0, A, B, C) \},$$
where $A$, $B$, and $C$ are arbitrary real numbers.
What does this collection of events look like in the frame of S? Assume that S and S' are related by a Lorentz transformation along the x-axis in the usual way. Applying a Lorentz transformation to the collection of events that represents the flashes going off gives
$$F = \left\{ \left( t, x, y, z \right) = \left( \frac{v}{c^2} \gamma A, \gamma A, B, C \right) \right\}.$$
Using $t = \frac{v}{c^2} \gamma A$ gives
$$F = \left\{ \left( t, \frac{c^2}{v} t, B, C \right) \right\}.$$
This indicates that all the flashes that occur simultaneously in S at time $t$ occur in space at fixed $x = (c^2/v) t$ and at arbitrary $y$ and $z$. For S, this is a spatial plane orthogonal to the x-axis.

Now, consider two times, $t_1$ and $t_2$, for S, with $t_1 < t_2$. At time $t_1$, a bunch of flashes go off simultaneously in the spatial plane $x = (c^2/v) t_1$; at time $t_2$, a bunch of flashes go off simultaneously in the spatial plane $x = (c^2/v) t_2$. The spatial distance between the planes divided by the difference in times gives that the "plane of flashes" propagates with speed $c^2/v$.

Last edited: Apr 5, 2007

17. Apr 5, 2007

### bernhard.rothenstein

Events simultaneous in S': Thanks for bringing light into the statement of the problem. I think that we could state it as follows (the introduction of the light signals is confusing); g stands for gamma and b for beta. Consider the events E'(0,r',p') in a two-space-dimensions approach using polar coordinates. Using the LT we obtain that one of those events is defined in S by the polar coordinates (r,p):
r = r'g[1-bb(sin p')^2]^(1/2) (1)
tg p = tg p'/g (2)
Equation (2) shows that if the events E' are located in S' on a normal to the O'X' axis, the same events detected from S are located on a normal to the same axis. Consider that the events E' take place in S' on a given curve, say on the circle r' = R(0). Detected from S, they take place on the curve
r = R(0)g[1-bb(sin p')^2]^(1/2). (3)
The problem can be extended.
Thanks for giving me the opportunity to spend some pleasant time on an interesting problem. Am I correct?

Regards, Bernhard

18. Apr 5, 2007

### MathematicalPhysicist

Thanks for the help. I have just another query: we have two photons which travel along the x axis of S with constant distance L between them. Prove that in S' the distance between them is $L\sqrt{(c+v)/(c-v)}$.

Now, the way I solved it is as follows: $(x_1,0)$ and $(x_2,0)$ are the coordinates in S, where $x_2 > x_1$; then we have $x_1 = ct - L$, $x_2 = ct$, and in S' it will be
$$x_1'=\gamma(ct-L-vt)$$
$$x_2'=\gamma(ct-vt)$$
whence
$$x_2'-x_1'=\gamma L,$$
but this is of course not the same as what I need to prove. Where did I go wrong?

19. Apr 5, 2007

### bernhard.rothenstein

Solving the proposed problem: I have found in a paper I studied a long time ago that in order to solve a problem it is advisable to start in the reference frame where it is simplest and to find the significant events there, then transform them via the LT to another inertial reference frame. So we start in S, where the events generated by the two photons (light signals) are E(1)[a, a/c] and E(2)[a+L, (a+L)/c]. Detected from S', the space coordinates of the two events are x'(1) = ga(1-b) and x'(2) = g(a+L)(1-b), and so the distance between the two events is
x'(2) - x'(1) = L sqrt[(1-b)/(1+b)],
probably the desired result? Do you see some analogy with the formula which accounts for the Doppler effect? Please give me the exact statement of the problem proposed by Rindler and its quotation, because my edition is quite old.

20. Apr 6, 2007

### MathematicalPhysicist

OK, I can see that you wrote down what t is specifically, but still my question is why my approach yields a different answer than the expected one.

P.S. Why do you folks think that I'm not already giving you the full question from the text, i.e. a quoted question?
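On the final query: the two events used must be simultaneous in S', and two events taken at the same $t$ in S are not simultaneous in S' (that is where the $\gamma L$ comes from). A numerical sketch of the stated answer $L\sqrt{(c+v)/(c-v)}$, with $c = 1$, using the fact that $x' - t'$ is constant along a rightward photon worldline, so the difference of these constants is the photons' separation at any fixed $t'$:

```python
import math

def boost(t, x, beta):
    # Standard Lorentz boost along x with c = 1.
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return g * (t - beta * x), g * (x - beta * t)

beta, L = 0.6, 1.0
# Worldlines in S: leading photon x = t, trailing photon x = t - L.
t1, x1 = boost(0.0, -L, beta)   # trailing photon's event at t = 0
t2, x2 = boost(0.0, 0.0, beta)  # leading photon's event at t = 0
sep = (x2 - t2) - (x1 - t1)     # separation at any fixed t' in S'
expected = L * math.sqrt((1 + beta) / (1 - beta))
```

With $\beta = 0.6$ both `sep` and `expected` come out to $2L$: the separation is stretched, consistent with the Doppler redshift for an observer receding in the direction the light travels.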
https://www.bowaggoner.com/blog/2019/02-03-sub-gamma-concentration/index.html
# The Tiger's Stripes

A technical blog on math, computer science, and game theory. Author: Bo Waggoner

# Sub-Gamma Variables and Concentration

Posted: 2019-02-03.

For variables that are usually small, subgaussian concentration bounds can often be loose. We can get better Bernstein bounds using the "sub-gamma" approach. The general approach is very similar to subgaussians:

1. Define $(v,c)$-sub-gamma variables (two parameters in this case).
2. Show that sub-gamma variables satisfy (Bernstein) concentration bounds.
3. Show that summing and scaling (up) sub-gamma variables give sub-gamma variables.
4. Show that common variables (especially $[0,1]$-bounded) are actually sub-gamma.

## Example

Say $Y = Y_1 + \dots + Y_n$ and each $Y_i$ is a Bernoulli. An approach based on subgaussian random variables can guarantee you that the standard deviation of $Y$ is at most $O(\sqrt{n})$ and that $Y$ is within this distance of its mean with high probability. But what if each $Y_i$ is quite unlikely to be 1? What if $\mathbb{E} Y \ll \sqrt{n}$, for example? In this case better bounds are true, and we can use sub-gamma variables to prove them! In particular, this post will show how to upper-bound $Y$ by a bound depending on its mean, with high probability.

## Sub-Gammaness

Recall that the main tool we have for proving concentration bounds is applying Markov's inequality to something like $e^{f(Y)}$. So it is not surprising that, like with subgaussian variables, we impose a condition on the moment generating function $\mathbb{E} e^{\lambda Y}$.

Let the random variable $Y$ have mean $\mu$. We say $Y$ is $(v,c)$-sub-gamma (on the right) if, for all $\lambda$ with $0 \lt \lambda \lt 1/c$, $\mathbb{E} e^{\lambda(Y-\mu)} \leq \exp\left( \frac{v ~ \lambda^2}{2(1-c\lambda)} \right) .$

Note that this is one-sided: it only works for $\lambda \gt 0$, whereas the subgaussian condition was symmetric. So we are only going to be able to prove upper bounds using this definition.
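Before moving on, the definition can be probed numerically. A sketch in Python (function names are mine), using the fact, proved in the Examples section below, that a Bernoulli with mean $p$ is $(p,1)$-sub-gamma:

```python
import math

def bernoulli_mgf_centered(p, lam):
    # E[exp(lam * (Y - p))] for Y ~ Bernoulli(p), computed exactly.
    return math.exp(-lam * p) * (p * math.exp(lam) + 1 - p)

def subgamma_rhs(v, c, lam):
    # exp(v lam^2 / (2 (1 - c lam))), valid for 0 < lam < 1/c.
    return math.exp(v * lam**2 / (2 * (1 - c * lam)))

# The (p, 1)-sub-gamma condition should hold on a grid of lambdas
# in (0, 1); gap is the worst-case slack, which stays positive.
p = 0.3
gap = min(subgamma_rhs(p, 1.0, k / 100) - bernoulli_mgf_centered(p, k / 100)
          for k in range(1, 100))
```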
For intuition in the following bound, (spoiler alert) a sum of independent Bernoullis is $(v,1)$-sub-gamma where $v$ is its mean.

Theorem. If $Y$ is $(v,c)$-sub-gamma (on the right), then for all $t \gt 0$, $\Pr[Y - \mathbb{E}Y \geq \sqrt{2vt} + ct] \leq e^{-t} .$

Proof. Start with the standard "Chernoff" approach: for all $\theta \gt 0$ and $\lambda \in (0,1/c)$, by transformations and Markov's, \begin{align*} \Pr[Y-\mathbb{E}Y \geq \theta] &\leq e^{-\lambda \theta}\, \mathbb{E} e^{\lambda (Y-\mathbb{E}Y)} \\ &\leq \exp\left(-\lambda \theta + \frac{v \lambda^2}{2(1-c\lambda)} \right) \end{align*} by definition of sub-gamma. Now, I don't have a non-magical way to continue, so here's a magical "guess-and-check" proof. Let $t = \lambda \theta - \frac{v\lambda^2}{2(1-c\lambda)}$. Then the above expression says $\Pr[Y - \mathbb{E}Y \geq \theta] \leq \exp(-t)$. Now, we're going to find a choice of $\lambda$ such that $\theta = \sqrt{2vt} + ct$. This will prove the theorem. (We'll also have that for any $t \gt 0$, there is such a choice of $\lambda$.)

Okay, here's some magic. By definition, we have $\lambda\theta = t + \frac{v\lambda^2}{2(1-c\lambda)}$. Now we subtract $tc\lambda + \sqrt{2tv}\lambda$ from both sides of this equality: \begin{align*} \lambda \theta - tc\lambda - \sqrt{2tv}\lambda &= t - tc\lambda -\sqrt{2tv}\lambda + \frac{v\lambda^2}{2(1-c\lambda)} \\ &= \left(\sqrt{t(1-c\lambda)} - \lambda\sqrt{\frac{v}{2(1-c\lambda)}}\right)^2 . \end{align*} Now we choose $\lambda$ such that $t = \frac{\lambda^2v}{2(1-c\lambda)^2}$, which some calculus shows we can do for any positive $t$ using some $\lambda \in (0,1/c)$. With this choice, the square above equals zero, so $\lambda \theta - tc\lambda - \sqrt{2tv}\lambda = 0$; dividing through by $\lambda \gt 0$ gives $\theta = \sqrt{2tv} + ct$, which completes the proof.

This theorem basically says there are two regimes.
If $v$ (roughly the mean) is medium-small, then the $\sqrt{2vt}$ term promises that, for instance, the probability of exceeding the mean by a factor $\sqrt{\ln(1/\delta)}$ is at most $\delta$. If $v$ is really small or $c$ is large (which corresponds to variables with large ranges or heavy tails), then the second term $ct$ kicks in.

## Calculus of Sub-Gammas

Sub-gammas wouldn't be useful unless we could easily show that new variables are sub-gamma. Here are the facts that let us do that.

Theorem. If $Y$ is $(v,c)$-sub-gamma, then $Y + r$ is $(v,c)$-sub-gamma for any constant $r$.

Proof. Immediate; $Y- \mathbb{E}Y = Y+r - \mathbb{E}[Y+r]$.

Theorem. If $Y$ is $(v,c)$-sub-gamma, then for any $a \geq 1/c$, $aY$ is $(a^2v, ac)$-sub-gamma.

Proof. Exercise (straightforward).

Theorem. If $Y_1$ and $Y_2$ are independent, $Y_1$ is $(v_1,c_1)$-sub-gamma, and $Y_2$ is $(v_2,c_2)$-sub-gamma, then $Y_1 + Y_2$ is $(v_1+v_2,\max\{c_1,c_2\})$-sub-gamma.

Proof. Exercise (straightforward). The key step is, letting $\bar{c} = \max\{c_1,c_2\}$, to use the substitutions $c_1 \leq \bar{c}$ and $c_2 \leq \bar{c}$.

So for example, we get that a sum of independent $(a_i,c)$-sub-gamma variables is $(\sum_i a_i, c)$-sub-gamma. But note sub-gamma variables don't give us quite as much power as Gaussians: $-Y$ is not necessarily sub-gamma, since the definition isn't symmetric, and we also can't generally scale the second parameter $c$ below $1$. (One could define two-sided sub-gamma variables, but it's not clear that interesting variables satisfy this. The applications I've seen focus on e.g. Bernoullis of small mean, where the left-tail restriction would only get in the way.)

## Examples

Another way sub-gammas wouldn't be useful is if no variables were ever sub-gamma. By pure luck, this doesn't happen (otherwise I'd be really embarrassed at this point in the blog post).

Theorem. Suppose $Y_i$ is in $[0,1]$ with mean $p$; then $Y_i$ is $(p,1)$-sub-gamma.

Proof. We're going to prove three steps.

1.
We show $\mathbb{E} e^{\lambda Y_i} \leq p e^{\lambda} + (1-p)$. 2. Using this, we show $\mathbb{E} e^{\lambda(Y_i-p)} \leq \exp\left[ p\left( e^{\lambda} - 1 - \lambda\right)\right] .$ 3. We show that for $0 \lt \lambda \lt 1$, $e^{\lambda} - 1 - \lambda \leq \frac{\lambda^2}{2(1-\lambda)}$. Combining the last two proves the theorem.

(1) Given any outcome $y \in [0,1]$, we have $e^{\lambda y} \leq y e^{\lambda} + (1-y)$. (This follows by considering a Bernoulli $X$ with mean $y$ and applying Jensen's inequality to $\mathbb{E} e^{\lambda X}$.) Okay, so \begin{align*} \mathbb{E} e^{\lambda Y_i} &\leq \mathbb{E}\left[ Y_i e^{\lambda} + (1-Y_i) \right] \\ &= p e^{\lambda} + (1-p). \end{align*}

(2) Using the previous fact and the ever-useful inequality $1+x \leq e^x$, \begin{align*} \mathbb{E} e^{\lambda(Y_i-p)} &= e^{-\lambda p} \mathbb{E} e^{\lambda Y_i} \\ &\leq e^{-\lambda p} \left(p e^{\lambda} + (1-p)\right) \\ &\leq e^{-\lambda p} e^{p(e^{\lambda}-1)} \\ &= \exp\left[p\left(e^{\lambda} - 1 - \lambda\right)\right] . \end{align*}

(3) Notice $1+\lambda$ are the first two terms of the Taylor series of $e^{\lambda}$: \begin{align*} e^{\lambda} - 1 - \lambda &= \frac{\lambda^2}{2} + \frac{\lambda^3}{6} + \cdots \\ &\leq \frac{\lambda^2}{2}\left(1 + \lambda + \lambda^2 + \cdots\right) \\ &= \frac{\lambda^2}{2(1-\lambda)} \end{align*} where the geometric series converges for $\lambda \lt 1$.

This gives us strong tail bounds for sums of bounded variables.

Corollary. Let $Y = \sum_i a_i Y_i$ where $Y_i \in [0,1]$ with mean $p_i$ and each $a_i \geq 1$. Let $v = \sum_i a_i^2 p_i$; then $Y$ is $(v, \max_i a_i)$-sub-gamma. In particular, with probability at least $1-\delta$, we have $Y - \mathbb{E} Y \leq \sqrt{2 v \ln\frac{1}{\delta}} + \max_i a_i \ln\frac{1}{\delta} .$ In particular, if $Y$ is the sum of $[0,a]$-bounded variables for $a \geq 1$, then with probability $1-\delta$, $Y - \mathbb{E} Y \leq \sqrt{2 a \mathbb{E}Y \ln\frac{1}{\delta}} + a \ln \frac{1}{\delta} .$

Notes.
The interested reader can work out, for example, that the exponential distribution is sub-gamma, as are more general gamma distributions. Interestingly, to my mind, sub-gammas aren't "tight" the way the definition of subgaussian is tight for Gaussians. I think this is because sub-gamma variables are sort of capturing several limits or tail behaviors at once, depending on how $\lambda$ and $c$ relate. For example, in the above proof for $[0,1]$-bounded variables, we obtained $\mathbb{E} e^{\lambda X} \leq e^{p(e^{\lambda}-1)}$ when $\mathbb{E}X = p$, which is tight for a Poisson variable with mean $p$. But to get the definition of sub-gamma, we relaxed this bound a bit further. Why? Poissons are stable in one sense, in that sums of Poissons are Poisson, so a definition of "sub-Poisson" variables seems very natural and would apply to $[0,1]$ variables with mean $p$. The problem is that Poissons don't seem to behave so nicely under scaling (e.g. scaled Poissons are not Poissons). Scaling is the power we gained with sub-gammas, but we may have given up some mathematical elegance as compared to subgaussians.

## References

Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. 2013.
https://www.openchannels.org/blog/mikehay/bayesian-statistics-common-sense-formalized
Bayesian statistics: Common sense formalized In the last posts (first, second), I outlined a number of common errors in the usage and interpretation of P-values. Due to the base-rate fallacy or the multiple comparisons problem, the significance level $\alpha$ of a null-hypothesis significance test can easily be an order of magnitude lower than the true false positive rate. For example, under $p<0.05$, we could easily have a 50% error rate. These issues are one of the primary causes of the replication crises currently shaking psychology and medical science, where replication studies have found that the majority of significant results examined are insignificant upon replication. Fisheries and marine policy have many of the same risk factors driving the unreliability of scientific findings in those other fields, but no one has actually attempted to do any replication studies yet. Other common techniques such as confidence intervals and likelihood-ratio tests ameliorate some of the shortcomings of P-values and null-hypothesis significance tests. However, these techniques still suffer from most of the really severe problems, like the base-rate fallacy. The reason behind these deficiencies is that all of these methods follow the frequentist statistical paradigm. In the frequentist paradigm, the probability of some event is considered to be the proportion of times, after repeating the experiment an infinite number of times, that the event occurs. Suppose we wish to estimate the mean rockfish length in an MPA. In typical frequentist inference, the results of an experiment (here, the observed fish lengths) are assumed to be random. The probability of observing some data is the proportion of times that data occurs after an infinite number of trials. However, the hypotheses (e.g., the true mean fish length being a particular value) that we are trying to infer with our experiment are considered unknown but not random. 
This is why frequentist methods cannot give you the probabilities of competing hypotheses: the hypotheses aren't considered to be random in the first place. For the same reason, frequentist techniques make it very hard to use prior information, which is the underlying cause of the multiple-comparisons problem and the base-rate fallacy. Fortunately, Bayesian statistical methods offer not only a powerful solution to these problems, but also a different, more intuitive way of thinking about probability and evidence than the frequentist paradigm. Bayesian methods are the only way of coherently combining past information with new information, which allows them to more easily avoid issues like this one: (Don't take this cartoon too seriously. It is a hilariously bad example of the base-rate fallacy, not an indictment of frequentist inference in general. xkcd.com; Randall Munroe, CC Attribution-NonCommercial License 2.5) In the Bayesian philosophy, probability represents degrees of belief. While this is ultimately just a philosophical difference, the two paradigms do tend to result in very different methods and interpretations in practice. Bayesian statistics considers the possible hypotheses (here, the true mean fish length) to be random because we don't know which one is true. Before the experiment, we have prior knowledge about the mean fish length (maybe we know that mean fish lengths in most areas are between 75 and 85 cm), encoded in a prior probability distribution of the mean. We then observe a sample of fish. This gives us more information, which we combine with our already-known prior distribution using Bayes' theorem (more about that in a second) to form the posterior distribution encompassing all of our knowledge after doing the experiment. So, after doing our experiment, we don't have a single point estimate of the mean, but a probability distribution (since we still don't know it exactly).
Bayes' theorem is the following: Suppose that we wish to find the probability of some event or hypothesis $A$ (for example, the mean length of rockfish in an MPA being a certain value) based on prior information giving the prior probability $P(A)$ of $A$, along with new information: We observed some event $B$ (say, the sample mean length of a sample of fish), and we know the likelihood $P(B|A)$ of observing $B$ given that $A$ is true. This likelihood is what frequentist inference gives us, but with Bayes' theorem, we can reverse $P(B|A)$ to get the probability of $A$ given that $B$ occurred (this is what we actually want): $\mathrm{P}(A|B) = \frac{P(B|A)P(A)}{P(B)}$. The quantity $\mathrm{P}(B)$ is the marginal probability of observing $B$ independent of $A$. It just acts as a normalizing constant so the probabilities all sum to one. It can be found by summing $P(B|A)P(A)$ over all possible $A$ (or integrating if there are infinitely many possible $A$, like if it is a population mean that could be any positive value). The important part is the numerator: We are simply multiplying the prior probability $\mathrm{P}(A)$ by the likelihood $\mathrm{P}(B|A)$ of observing the data $B$ given that the hypothesis $A$ is true. So if we consider the event $A$ to be a priori improbable, we need a high likelihood $\mathrm{P}(B|A)$ from our experiment to overcome our prior beliefs about $A$. For example, if we knew that the average adult rockfish in most areas is nearly always around 90 cm, it might take a very large sample to convince us that it is more like around 110 cm. Bayes' theorem is actually just a generalization of the common-sense logical rule of contraposition (if $A$ implies $B$, then "not $B$" implies "not $A$") to the case where we are uncertain about $A$ and $B$. Bayes' theorem is something you qualitatively employ every day without realizing it. Bayesian statistics just formalizes it.

A simple example

Let's go back to rockfish lengths.
Suppose we wish to estimate the mean rockfish length in an MPA. With Bayesian inference, we start with a model that describes our problem. Let's do that. First, suppose we know that adult rockfish lengths in any population are approximately normally distributed (this isn't true, but it makes for a good simple example). Given this, we know that the population inside the MPA is normal with some unknown mean $\mu$ and standard deviation $\sigma$. For simplicity in this example, let's assume that we actually know the standard deviation $\sigma$ of rockfish lengths in all populations is exactly 30 cm, but we don't know the mean. Unlike frequentist hypothesis testing, we are not framing it as a choice between a null hypothesis and an alternative hypothesis. Hypothesis testing (whether frequentist or Bayesian) doesn't make very much sense for this problem, since there are more than two alternatives. The first task is to quantify our prior knowledge of the distribution of fish lengths in the MPA. We already know the standard deviation exactly (30 cm), so we just need to figure out how to represent our uncertainty about the mean fish length. We represent this uncertainty as a prior distribution over the mean. I emphasize that the prior is a distribution of the mean parameter $\mu$ of fish lengths, not a distribution of the fish lengths themselves. The normal distribution is actually a good choice for the prior distribution of the mean, because of the central limit theorem. So, we need a prior mean $\mu_0$ and a prior standard deviation $\sigma_0$ to encode our prior knowledge of the mean in a normal distribution. Let's say we know that similar MPAs tend to have average lengths around 80 cm, with a standard deviation of around 2 cm. In other words, about ⅔ of similar MPAs have average lengths between 78 and 82 cm.
From this, we can represent our prior beliefs about mean fish length as approximately normal with mean $\mu_0 = 80$ cm and standard deviation $\sigma_0 = 2$ cm. The mean fish lengths can't, of course, be exactly normal, because then fish could have negative lengths. But it doesn't matter too much here, since there is about a 95% chance that the average fish length $\mu$ in the MPA is within $80 \pm 2 \times 2 = 80 \pm 4$ cm. Next, we collect our fish length data, which we know is normally distributed. Suppose that we took a sample of 100 fish, yielding a sample mean length of $\bar{x}=90$ cm. We already know the standard deviation $\sigma$ of fish lengths (as opposed to the mean) is exactly 30 cm, so we don't have to worry about the sample standard deviation. To summarize, we have the following model, $\mathrm{P}(\mu) \sim \mathrm{Normal}(80,2)$ $\mathrm{P}($length of fish $i = x | \mu) \sim \mathrm{Normal}(\mu,\sigma=30)$ The first line is our prior distribution of mean length $\mu$. The second is the likelihood of observing a fish in our sample with length $x$, given that the mean is some $\mu$ taken from the prior distribution. For our sample of 100 fish, there are 100 such likelihood terms, one for each fish $i$. Let's just write the total likelihood of observing all the fish that we did, given $\mu$, as $\mathrm{P}(D|\mu)$. Now, we find the posterior distribution of mean fish lengths using Bayes' theorem: $$\mathrm{P}(\mu|D) =\frac{P(D|\mu)P(\mu)}{\mathrm{P}(D)},$$ Happily, in this case, the normalizing constant $\mathrm{P}(D)$ can be integrated out analytically.
It turns out that the posterior is also a normal distribution: $$\mathrm{P}(\mu|D) = \mathrm{Normal}(\mu_1,\sigma_1),$$ where, $$\sigma_1^2 = \frac{1}{n/\sigma^2 + 1/\sigma_0^2} = \frac{1}{100/30^2 + 1/2^2}\approx 2.8$$ and, $$\mu_1 = \frac{1}{\frac{1}{\sigma_0^2} + \frac{n}{\sigma^2}}\left(\frac{\mu_0}{\sigma_0^2} + \frac{n \bar{x}}{\sigma^2}\right)$$ $$= \frac{1}{\frac{1}{2^2} + \frac{100}{30^2}}\left(\frac{80}{2^2} + \frac{100 \times 90}{30^2} \right) \approx 83$$ The posterior mean of the fish length mean is a weighted average of the prior mean and the sample mean from the data, with weights determined by the number of observations and by the standard deviations $\sigma_0 = 2$ of the prior and $\sigma = 30$ of the fish lengths. So with this analysis, we now think that the average fish length in the MPA is about 83 cm. Our prior information was that mean fish lengths are close to 80 cm the vast majority of the time. Thus, our observed sample mean of 90 cm was most probably due to happening to observe some large fish by chance from an MPA that probably has somewhat above average fish lengths. The corresponding frequentist estimate is 90 cm (the sample mean). This does not take into account prior information. Naively interpreted, the frequentist estimate jumps to the conclusion that the MPA has an incredibly large average fish length, 90 cm. That is 5 prior standard deviations above the prior mean, which should essentially never happen. We would come to a completely wrong conclusion about the effectiveness of this MPA with the frequentist estimate. The Bayesian result avoids the base-rate fallacy with prior information, following the adage "extraordinary claims require extraordinary evidence." That was a very simple example. In practice, we would also have a prior distribution over the standard deviation of fish lengths, since we don't actually know the standard deviation (there is still an analytical solution in that case).
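The conjugate update above is simple enough to reproduce directly. Here is a minimal sketch in Python (the function and variable names are my own) that recovers the numbers from the example:

```python
def normal_posterior(mu0, sigma0, sigma, n, xbar):
    """Posterior of the mean for a Normal likelihood with known sigma
    and a Normal(mu0, sigma0) prior on the mean (conjugate update)."""
    precision = 1 / sigma0**2 + n / sigma**2   # posterior precision
    var1 = 1 / precision                        # posterior variance
    mu1 = var1 * (mu0 / sigma0**2 + n * xbar / sigma**2)
    return mu1, var1

# Numbers from the rockfish example: prior Normal(80, 2), known sigma = 30,
# n = 100 fish with sample mean 90 cm.
mu1, var1 = normal_posterior(mu0=80, sigma0=2, sigma=30, n=100, xbar=90)
print(round(mu1, 1), round(var1, 1))   # 83.1 2.8
```

Note how the posterior mean lands much closer to the prior mean (80) than to the sample mean (90), because the prior is far more concentrated than the likelihood of a 100-fish sample with $\sigma = 30$.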
In the previous example, the normal prior for the mean fish length is a conjugate prior to the normal likelihood $\mathrm{P}($length of fish $i = x | \mu)$ of observing a particular fish length given the mean. When using a prior conjugate to the likelihood, the posterior can be expressed analytically in the same distributional form as the prior. Here, the prior was normal, so the posterior was also normal. The likelihood was also normal, but that only holds in this particular case. For example, the conjugate prior for the variance $\sigma^2$ of fish lengths (not the variance of the mean) is the inverse gamma distribution, and the posterior is also inverse gamma, rather than normal. In the last example, we assumed we knew that $\sigma^2$ was exactly $30 \times 30=900$. If we weren't sure about $\sigma^2$, we could have encoded our less-than-exact knowledge by using an inverse gamma prior. For many problems, there are no exact analytical solutions for the posterior distribution as we had here. The bugaboo is the normalization constant $\mathrm{P}(D)$, which is often impossible to calculate exactly. Fortunately, there are a number of free and open-source software packages that offer push-button, fast, and accurate numerical approximations of posterior distributions. Popular ones include Stan (standalone modeling language with interfaces in R, Python, and Julia), PyMC3 (my favorite, in Python), and Edward (Python). Notice that we did this example by thinking about our problem domain and our prior knowledge, and specifying a model that follows our knowledge of the problem. We represent anything we don't know exactly as a probability distribution, which we just choose to look like our state of knowledge.
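The packages mentioned above use MCMC and related samplers internally. As a much cruder, self-contained illustration of handling the normalizing constant numerically (my own toy code, not how Stan or PyMC3 actually work), a brute-force grid approximation of the fish-example posterior recovers the analytic answer. Since $\sigma$ is known, the sample mean is a sufficient statistic, with $\bar{x} \sim \mathrm{Normal}(\mu, \sigma^2/n)$:

```python
import math

mu0, sigma0 = 80.0, 2.0        # prior on the mean
sigma, n, xbar = 30.0, 100, 90.0

grid = [60 + 0.01 * i for i in range(4001)]   # candidate mu values in [60, 100]
prior = [math.exp(-(m - mu0) ** 2 / (2 * sigma0**2)) for m in grid]
like  = [math.exp(-n * (xbar - m) ** 2 / (2 * sigma**2)) for m in grid]

unnorm = [pr * li for pr, li in zip(prior, like)]   # prior times likelihood
Z = sum(unnorm)                                     # numerical stand-in for P(D)
post = [u / Z for u in unnorm]                      # normalized posterior weights

post_mean = sum(m * w for m, w in zip(grid, post))
print(round(post_mean, 1))   # matches the analytic answer, about 83.1
```

Grid approximation only scales to a handful of parameters, which is exactly why samplers like those in Stan and PyMC3 exist, but it makes the role of the normalizing constant concrete.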
These characteristics make the Bayesian model more intuitive than typical frequentist null-hypothesis statistical testing, where instead of specifying a model that directly represents our prior knowledge and the experiment, we have to choose from a zoo of statistical tests and techniques with confusing assumptions and unclear relationships to the problem at hand. Due to this unintuitive nature, frequentist inference is often done more on the basis of lore and tradition than on sound mathematics.

Conclusions

Bayesian methods offer the following advantages over frequentist techniques: • They provide a logically consistent way of combining previously known information with new information. Thus, Bayesian statistics are a rigorous way of avoiding the base-rate fallacy. The base-rate fallacy only occurs with frequentist methods because they cannot use prior information in a straightforward way. • Bayesian models are more intuitive to correctly specify than frequentist tests. • Bayesian inference tells us what we want to know. Using Bayes' theorem, we get actual probabilities of competing hypotheses. This is what we need to make rational decisions under uncertainty. Frequentist methods (including P-values and confidence intervals) do not give us this. • People think in a Bayesian way. This is perhaps a reason why frequentist techniques like P-values and confidence intervals are so consistently misinterpreted in a Bayesian way. For example, the inverse-probability fallacy from the previous post is the misinterpretation of a P-value as a Bayesian posterior probability. • Bayesian statistics completely solves the multiple comparisons problem. The MCP is sidestepped if we use prior information about the probability of hypotheses. • Bayesian statistics avoids the HARKing problem. In the frequentist paradigm, we should only test hypotheses that we have a prior reason for suspecting might be true. This knowledge is implicit and ill-defined.
Bayesian statistics makes it explicit and rigorous by encoding the knowledge in the prior distribution. A hypothesis formed after observation of the data can be assigned a prior reflecting the fact that we have no a priori reason to believe it might be true. Bayesian methods are often criticized for being subjective due to their use of priors. However, all statistical methods are subjective. Anyone who thinks they can banish subjectivity from their data analysis is dangerously fooling themselves. All statistical methods require some kind of prior knowledge about the data. Frequentist inference methods require this prior information to be taken into account in ad-hoc and often qualitative ways, either in experimental design or in interpretation of the results. Bayesian methods just make this more explicit and intuitive, by encoding prior information in the prior distribution. In addition, the use of a prior distribution is actually conservative. A prior distribution reduces the amount that the data informs our belief: a noisy or small sample will change our beliefs less than a large sample. One may wonder why frequentist methods are dominant in science despite their shortcomings. The reasons behind this have mostly to do with inertia. Bayesian methods tend to be more computationally intensive. Before the advent of fast computers, frequentist methods were the only tractable ways to perform statistical inference in many cases. But now, computers are very fast and very cheap, so this advantage of frequentist statistics is largely moot. Apart from computational reasons, scientists are usually only trained in frequentist methods. Most scientists are loath to spend time learning abstract things not directly related to their work. Lastly, reviewers are often suspicious of methodologies unfamiliar to them, which further discourages the use of Bayesian statistics. However, this is changing, albeit slowly.
In geophysics, Bayesian methods are seeing increased use for inverse problems such as imaging the Earth's interior with seismic waves, due to their resistance to overfitting with noisy data, as well as the well-calibrated uncertainty estimates they provide. For the same reasons, Bayesian methods (or various approximations) are the norm in artificial intelligence and machine learning. Bayesian methods have long been used for weather forecasting: The parameters of the equations of weather physics are considered random, and their distributions are updated using Bayes' theorem on a nearly continuous basis with new meteorological data to produce weather forecasts. Frequentist statistics are a valid tool in many situations. Most of the problems with P-values that I've laid out can be avoided by properly applying and interpreting frequentist methods. However, doing so tends to be complex, error prone, and highly unintuitive. Bayesian methods and their equivalent frequentist techniques also give very similar answers if the data are highly informative compared to the prior. Many frequentist methods do in fact have some kind of Bayesian interpretation. Often things like confidence intervals are equivalent to Bayesian techniques with certain choices of prior distributions, often uninformative priors which attempt to encode a lack of prior information. However, even these priors contain information, so subjectivity still remains. Bayesian statistics is a large subject that won't fit into this post. In future posts I will give in-depth examples of Bayesian inference, including how it can be done on a computer.
https://socratic.org/questions/how-do-you-simplify-and-make-the-fraction-7-8-into-a-decimal
# How do you simplify and make the fraction 7/8 into a decimal?

Jun 3, 2018

Perform the indicated division operation. $\frac{7}{8} = 0.875$

#### Explanation:

A "fraction" is a representation of a ratio, or relative pairing of values. Sometimes the fractional form is more precise, but in all cases it can be converted into a "decimal" number simply by recognizing the fraction notation as identical to the notation we use for division. $\frac{7}{8}$ = "7" divided by "8", so $\frac{7}{8} = 0.875$. In this case the decimal terminates (there are no repeating digits), so it is exactly equal to the original fraction.
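The division can also be checked mechanically, for instance in Python (just an illustration):

```python
from fractions import Fraction

# 7/8 as a decimal. The decimal terminates because the denominator
# 8 = 2^3 contains only factors of 2 (and/or 5).
print(7 / 8)            # 0.875
print(Fraction(7, 8))   # 7/8, the exact rational value

# Exact even in binary floating point, since 8 is a power of 2:
assert 7 / 8 == 0.875
```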
http://todaynumerically.blogspot.com/2013/02/sunday-17-february-2013.html
## Sunday, 17 February 2013 ### SUNDAY, 17 FEBRUARY 2013 Today is the $48^{th}$ day of the year. $48$ is the double factorial of $6$, written $6!!$ The value of the double factorial of a number $n$ is defined as follows: If $n = -1$ or $n = 0$ then $n!! = 1$ otherwise $n!! = n.(n - 2)!!$ For $n = 6$, $6!! = 6.4!! = 6.4.2!! = 6.4.2.0!! = 6.4.2.1 = 48$ However, if $n = 5$ then $5!! = 5.3!! = 5.3.1!! = 5.3.1 = 15$ Thus, $6!!.5!! = 6.4.2.1.5.3.1 = 6.5.4.3.2.1 = 6!$ or, equivalently: $$n!!.(n - 1)!! = n!$$
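A short recursive implementation of this definition (my own sketch) confirms both computations and the closing identity:

```python
import math

def double_factorial(n: int) -> int:
    """n!! per the definition above: (-1)!! = 0!! = 1, else n * (n-2)!!."""
    if n in (-1, 0):
        return 1
    return n * double_factorial(n - 2)

assert double_factorial(6) == 48
assert double_factorial(5) == 15

# The identity n!! * (n-1)!! == n! holds for every n checked here.
for n in range(1, 10):
    assert double_factorial(n) * double_factorial(n - 1) == math.factorial(n)
print("identity holds")
```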
http://hoggresearch.blogspot.com/2014/03/seeing-whole-universe-in-single-galaxy.html
## 2014-03-06 ### seeing the whole Universe in a single galaxy In the morning, Andrea Maccio (MPIA) gave a nice talk about making star-formation and AGN feedback in numerical simulations of galaxy formation much more realistic. He is very negative about the possibility that AGN can stop star formation: AGN emit jets, which punch through the central part of the ISM and don't really heat all of the necessary volume. He also showed that the dark-energy equation of state can affect galaxy evolution, not because the dark energy has any direct effect on how structures form, but because it changes the timing of gravitational collapse and star-formation episodes. That got the audience all in a tizzy: Can we infer the dark-energy equation of state from the radial distribution of stars in a galaxy? The answer is no: It looks like this effect is strongly degenerate with feedback parameters, but it is super-intriguing. Late in the day, Foreman-Mackey and I checked in with Dawson about the in-transit noise. She has some systems that show a very strong effect of higher noise in transit than out. We suggested improvements to the statistical tests, and Dawson will try to move to smaller and smaller planets tomorrow.
https://www.physicsforums.com/threads/rules-regarding-the-vector-cross-product-and-dot-product.105130/
# Rules regarding the vector cross product and dot product

• #1 (ElDavidas)

hi, I'm currently doing a mechanics module at Uni. The thing is, I'm not very sure about the rules regarding the vector cross product and dot product. For example, it says in my notes for angular momentum:

"Introducing polar coordinates $$\mathbf{r} = r(cos \Phi \mathbf{i} + sin \Phi \mathbf{j})$$ $$\mathbf{\dot{r}} = \dot{r} (cos \Phi \mathbf{i} + sin \Phi \mathbf{j}) + r \dot{\Phi}(-sin \Phi \mathbf{i} + cos \Phi \mathbf{j})$$ The angular momentum about 0 is given by $$\mathbf{h} = m \mathbf{r} \times \mathbf{\dot{r}} = mr^2 \dot{\Phi} \mathbf{k}$$"

How do you get all those sin and cos functions to cancel out?! I mean, what's going on here? How did the k suddenly appear?

• #2 (marlon)

Well, the easy way out for you is just to calculate the vector product between r and r_dot. You have been given the components, so you should be able to do this. Try it and come back with your result... I will keep an eye on you :)

regards marlon

edit: keep in mind that those vectors are 3D. You have a third k unit vector next to i and j, but of course the components in the k-direction are 0 for both vectors.
eg: for r you actually have $$\mathbf{r}= r(\cos \Phi \,\mathbf{i} + \sin \Phi \,\mathbf{j} + 0\,\mathbf{k})$$

• #3 (ElDavidas)

Ok, so the formula for calculating the cross product is

$$r \times \dot{r} = \sin \alpha \, |r| |\dot{r}|$$

So

$$r \times \dot{r} = \sin \alpha \, |r(\cos \Phi \,\mathbf{i} + \sin \Phi \,\mathbf{j})| \, |\dot{r} (\cos \Phi \,\mathbf{i} + \sin \Phi \,\mathbf{j}) + r \dot{\Phi}(-\sin \Phi \,\mathbf{i} + \cos \Phi \,\mathbf{j}) |$$

what do you do from here?

• #4 (marlon)

Well, there are actually two formulas for calculating a vector product:

1) $$r \times \dot{r} = \sin \alpha \, |r| |\dot{r}| \, \vec{k}$$

The version you quoted is not entirely correct, because a vector product will yield another vector, perpendicular to the plane of the two vectors in the cross product. That is why I added the k vector: it is perpendicular to the (i,j)-plane.

2) A determinant with, in the first row, the three unit vectors i, j, k; in the second row, the i,j,k-components of the first vector in the cross product; and in the third row, the i,j,k-components of the second vector in the cross product.

Using method 1) you need to

a) calculate the magnitude of a vector of which you have been given the components. The formula is:

$$| \vec{r}| = |a \vec{i} + b \vec{j} + c \vec{k}| = \sqrt{a^2 + b^2 + c^2}$$

b) determine the angle between the vectors r and r_dot, i.e. what is the alpha?

regards
marlon

ps: try applying both methods.

• #5 (matt grime, Homework Helper)

No, the LHS is a vector and the RHS is a scalar, so try to have a think about what you really mean.

• #6 (Pseudo Statistic)

r x r' = ( |r| |r'| sin a )K, where K is a perpendicular unit vector?
(Perpendicular to r and r'?)

• #7 (marlon)

Pseudo Statistic, marlon

• #8 (Pseudo Statistic)

I know, but I wanted to see exactly what matt grime was hinting at about the vector and scalar forms.

• #9 (marlon)

Well, just the same. If you do not add the unit vector k (as the OP didn't) then you are not describing a vector but an ordinary number (i.e. the scalar). A vector consists of a number (the component) and a unit vector. In the case of a cross product, this unit vector in the RHS of the equation must be directed perpendicular to the plane of r and r_dot.

marlon

• #10 (matt grime, Homework Helper)

Hinting at? I was hinting at nothing, but merely observing that there was a mistake, one that marlon has adequately explained, in my opinion.

• #11 (ElDavidas)

Ok, so if I follow the above formula I get: $$|r| = r$$ which seems right. It then gets a bit messy:

$$|\dot{r}| = \sqrt {\dot{r}^2( \cos^2\Phi + \sin^2\Phi) + r^2 \dot{\Phi}^2 (-\sin^2\Phi + \cos^2\Phi)}$$

• #12 (marlon)

$$|r| = r$$ is correct. The expression for $$|\dot{r}|$$ is incorrect, though: the i and j components each mix both terms. The i component is $$\dot{r} \cos \Phi - r \dot{\Phi} \sin \Phi$$ and the j component is $$\dot{r} \sin \Phi + r \dot{\Phi} \cos \Phi$$. Now, what is the magnitude?

marlon

• #13 (ElDavidas)

marlon said: Now, what is the magnitude?
Ok, I get:

$$|\dot{r}| = \sqrt{\dot{r}^2 + r^2 \dot{\Phi}^2}$$

• #14 (marlon)

That is correct. Now calculate the angle alpha. It is easier to work with determinants, though.

regards
marlon

• #15 (BobG, Homework Helper)

marlon is right. The determinant method for finding the cross product is the best way to solve this. You're just killing yourself doing it your way.

$$\vec{h} = \left|\begin{array}{ccc}\hat{i} & \hat{j} & \hat{k}\\r \cos \phi & r \sin \phi & 0\\ \dot{r}\cos \phi - r \dot{\phi} \sin \phi & \dot{r} \sin \phi + r \dot{\phi} \cos \phi & 0\end{array}\right|$$

• #16 (BobG, Homework Helper)

Dang, you're fast! :rofl: Not only was your correction faster than mine, the correction was gone by the time I could check how much you beat my correction by.

• #17 (Astronuc, Staff Emeritus)

It would be worthwhile to point out that i x j = - j x i = k, and i x i = j x j = k x k = 0.
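The cancellation ElDavidas asked about can also be checked numerically. The sketch below (plain Python, with arbitrary sample values for r, r_dot, Phi, Phi_dot and m) computes m r × ṙ from the components quoted in the thread; the i- and j-components vanish and the k-component comes out as mr²Φ̇:

```python
from math import cos, sin

def cross(a, b):
    """Right-handed cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Arbitrary sample values (not from the thread) for m, r, r_dot, Phi, Phi_dot.
m, r, rdot, Phi, Phidot = 2.0, 1.5, 0.7, 0.4, 3.0

rvec = (r*cos(Phi), r*sin(Phi), 0.0)             # position; k-component is 0
vvec = (rdot*cos(Phi) - r*Phidot*sin(Phi),       # velocity from the notes
        rdot*sin(Phi) + r*Phidot*cos(Phi), 0.0)

h = tuple(m*c for c in cross(rvec, vvec))
print(h)   # i- and j-components vanish; k-component equals m*r**2*Phidot
```

The sin²Φ + cos²Φ = 1 identity does the cancelling in the k-component, exactly as in BobG's determinant.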
http://mathhelpforum.com/number-theory/89014-legendre-problem.html
1. ## Legendre Problem

Let p be an odd prime. Show that the Diophantine equation x^2+py+a=0; gcd(a, p)=1 has an integral solution if and only if (-a/p)=1.

2. ## Is this equation correct?

Hi, I do not know if this is helpful to you, but: solving for y we have: y=-x^2/p-a/p. Now even if (-a/p) is not 1, there is still a solution. Check a=-15, p=11 (x=2, y=1). If this is what you mean...

3. Originally Posted by gdmath
[...]

I am not sure what you are doing. The symbol (-a/p) is not a fraction; it denotes the Legendre symbol.

Originally Posted by cathwelch
[...]

There exist $x$ and $y$ that solve this equation if and only if $x^2 + a = p(-y)$, if and only if $p \mid (x^2+a)$, if and only if $x^2\equiv -a \pmod p$, and this has a solution if and only if $(-a/p)=1$.

4. Sorry, I just misunderstood the question. I am not doing anything suspicious, if that is what you mean.
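The equivalence in post #3 is easy to check by brute force for small primes. The Python sketch below (with hypothetical helper names) tests solvability of x² + py + a = 0 against Euler's criterion for the Legendre symbol; it also shows that gdmath's example a = −15, p = 11 is consistent with the criterion, since (15/11) = 1:

```python
from math import gcd

def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p, via Euler's criterion."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def has_solution(a, p):
    """x^2 + p*y + a = 0 is solvable iff x^2 ≡ -a (mod p) for some x."""
    return any((x * x + a) % p == 0 for x in range(p))

for p in (3, 5, 7, 11, 13):
    for a in range(-20, 21):
        if gcd(a, p) == 1:
            assert has_solution(a, p) == (legendre(-a, p) == 1)
print("checked")
```

Here 15 ≡ 4 = 2² (mod 11), so gdmath's point (x=2, y=1) is not a counterexample but an instance of the theorem.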
http://yutsumura.com/maximal-ideal-in-ring-of-continuous-function-and-quotient-ring/
# A Maximal Ideal in the Ring of Continuous Functions and a Quotient Ring

## Problem 345

Let $R$ be the ring of all continuous functions on the interval $[0, 2]$. Let $I$ be the subset of $R$ defined by $I:=\{ f(x) \in R \mid f(1)=0\}.$ Then prove that $I$ is an ideal of the ring $R$. Moreover, show that $I$ is maximal and determine $R/I$.

## Hint.

Consider the map $\phi:R\to \R$ defined by $\phi(f)=f(1),$ for every $f(x)\in R$.

## Proof.

Let us consider the map $\phi$ from $R$ to the field of real numbers $\R$ defined by $\phi(f)=f(1),$ for each $f(x)\in R$. Namely, the map $\phi$ is the evaluation at $x=1$.

We claim that $\phi:R \to \R$ is a ring homomorphism. In fact we have for any $f(x), g(x)\in R$, \begin{align*} \phi(fg)&=(fg)(1)=f(1)g(1)=\phi(f)\phi(g)\\ \phi(f+g)&=(f+g)(1)=f(1)+g(1)=\phi(f)+\phi(g), \end{align*} hence $\phi$ is a ring homomorphism.

Next, consider the kernel of $\phi$. We have \begin{align*} \ker(\phi)&=\{ f(x)\in R\mid \phi(f)=0\}\\ &=\{f(x) \in R \mid f(1)=0\}=I. \end{align*} Since the kernel of a ring homomorphism is an ideal, it follows that $I=\ker(\phi)$ is an ideal of $R$.

Next, we claim that $\phi$ is surjective. To see this, let $r\in \R$ be an arbitrary real number. Define the constant function $f(x)=r$. Then $f(x)$ is an element in $R$ as it is a continuous function on $[0, 2]$. We have \begin{align*} \phi(f)=f(1)=r, \end{align*} and this proves that $\phi$ is surjective.

Since $\phi: R\to \R$ is a surjective ring homomorphism, the first isomorphism theorem yields that $R/\ker(\phi) \cong \R.$ Since $\ker(\phi)=I$ as we saw above, we have $R/I \cong \R.$ Thus, the quotient ring $R/I$ is isomorphic to the field $\R$.

It follows from this that $I$ is a maximal ideal of $R$. (Recall the fact that an ideal $I$ of a commutative ring $R$ is maximal if and only if $R/I$ is a field.)

## Related Question.

Problem. Let $\Z[x]$ be the ring of polynomials with integer coefficients.
Prove that $I=\{f(x)\in \Z[x] \mid f(-2)=0\}$ is a prime ideal of $\Z[x]$. Is $I$ a maximal ideal of $\Z[x]$?

For a proof, see the post ↴ Polynomial Ring with Integer Coefficients and the Prime Ideal $I=\{f(x) \in \Z[x] \mid f(-2)=0\}$
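As a toy numerical illustration of the proof above (not a substitute for it), one can check the homomorphism properties and the surjectivity of the evaluation map at a few sample functions; the particular functions chosen here are arbitrary:

```python
import math

# Evaluation at x = 1, as a map on continuous functions on [0, 2].
phi = lambda f: f(1)

f = lambda x: x**2 - 1          # f(1) = 0, so f lies in the ideal I
g = lambda x: math.sin(x)       # an arbitrary element of R

# Homomorphism properties, checked at the point x = 1:
add = phi(lambda x: f(x) + g(x)) == phi(f) + phi(g)
mul = phi(lambda x: f(x) * g(x)) == phi(f) * phi(g)

# Surjectivity: any real r is hit by the constant function x -> r.
r = 3.7
onto = phi(lambda x: r) == r

print(add, mul, onto)   # True True True
```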
http://applied-mathematics.net/mythesis/node27.html
# Termination Test.

In other words, if is small, then the reduction in that occurs at the point is close to the greatest reduction that is allowed by the trust region constraint.

Proof for any , we have the identity: ( using : ) ( using : ) (4.23) If we choose such that , we have: (4.24) From 4.27, using the two hypotheses: ( Using Equation 4.25: ) (4.25) Combining 4.28 and 4.29, we obtain finally 4.26.

## is near the boundary of the trust region: normal case

Lemma From the hypothesis: (4.26) Combining 4.31 and 4.27 when reveals that: (4.27) The required inequality 4.30 is immediate from 4.28 and 4.32. We will use this lemma with .

## is inside the trust region: hard case

We will choose as (see paragraph containing Equation 4.15 for the meaning of and ): (4.28) Thus, the condition for ending the trust region calculation simplifies to the inequality: (4.29) We will choose .

Frank Vanden Berghen 2004-04-19
https://www.gap-system.org/Manuals/pkg/polycyclic-2.14/doc/chap2.html
### 2 Introduction to polycyclic presentations

Let G be a polycyclic group and let G = C_1 ⊳ C_2 ⊳ ... ⊳ C_n ⊳ C_{n+1} = 1 be a polycyclic series, that is, a subnormal series of G with non-trivial cyclic factors. For 1 ≤ i ≤ n we choose g_i ∈ C_i such that C_i = ⟨ g_i, C_{i+1} ⟩. Then the sequence (g_1, ..., g_n) is called a polycyclic generating sequence of G. Let I be the set of those i ∈ {1, ..., n} with r_i := [C_i : C_{i+1}] finite. Each element of G can be written uniquely as g_1^{e_1} ⋯ g_n^{e_n} with e_i ∈ ℤ for 1 ≤ i ≤ n and 0 ≤ e_i < r_i for i ∈ I.

Each polycyclic generating sequence of G gives rise to a power-conjugate (pc-) presentation for G with the conjugate relations

g_j^{g_i} = g_{i+1}^{e(i,j,i+1)} \cdots g_n^{e(i,j,n)} \hbox{ for } 1 \leq i < j \leq n,

g_j^{g_i^{-1}} = g_{i+1}^{f(i,j,i+1)} \cdots g_n^{f(i,j,n)} \hbox{ for } 1 \leq i < j \leq n,

and the power relations

g_i^{r_i} = g_{i+1}^{l(i,i+1)} \cdots g_n^{l(i,n)} \hbox{ for } i \in I.

Vice versa, we say that a group G is defined by a pc-presentation if G is given by a presentation of the form above on generators g_1, ..., g_n. These generators are the defining generators of G. Here, I is the set of 1 ≤ i ≤ n such that g_i has a power relation. The positive integer r_i for i ∈ I is called the relative order of g_i.

If G is given by a pc-presentation, then G is polycyclic. The subgroups C_i = ⟨ g_i, ..., g_n ⟩ form a subnormal series G = C_1 ≥ ... ≥ C_{n+1} = 1 with cyclic factors, and we have that g_i^{r_i} ∈ C_{i+1}. However, some of the factors of this series may be smaller than r_i for i ∈ I, or finite for i ∉ I.

If G is defined by a pc-presentation, then each element of G can be described by a word of the form g_1^{e_1} ⋯ g_n^{e_n} in the defining generators, with e_i ∈ ℤ for 1 ≤ i ≤ n and 0 ≤ e_i < r_i for i ∈ I. Such a word is said to be in collected form. In general, an element of the group can be represented by more than one collected word.
If the pc-presentation has the property that each element of G has precisely one word in collected form, then the presentation is called confluent or consistent. If that is the case, the generators with a power relation correspond precisely to the finite factors in the polycyclic series and r_i is the order of C_i/C_{i+1}.

The GAP package Polycyclic is designed for computations with polycyclic groups which are given by consistent pc-presentations. In particular, all the functions described below assume that we compute with a group defined by a consistent pc-presentation. See Chapter Collectors for a routine that checks the consistency of a pc-presentation.

A pc-presentation can be interpreted as a rewriting system in the following way. One needs to add a new generator G_i for each generator g_i together with the relations g_iG_i = 1 and G_ig_i = 1. Any occurrence in a relation of an inverse generator g_i^{-1} is replaced by G_i. In this way one obtains a monoid presentation for the group G. With respect to a particular ordering on the set of monoid words in the generators g_1, ..., g_n, G_1, ..., G_n, the wreath product ordering, this monoid presentation is a rewriting system. If the pc-presentation is consistent, the rewriting system is confluent.

In this package we do not address this aspect of pc-presentations because it is of little relevance for the algorithms implemented here. For the definition of rewriting systems and confluence in this context as well as further details on the connections between pc-presentations and rewriting systems we recommend the book [Sim94].
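To make the idea of collected forms concrete, here is a small Python model (a toy sketch, independent of the GAP package) of one polycyclic group: the discrete Heisenberg group on generators g_1, g_2, g_3, with g_3 central, all relative orders infinite (so I is empty), and conjugate relation g_2^{g_1} = g_2 g_3^{-1}. Elements are identified with their unique collected exponent triples (a, b, c) for g_1^a g_2^b g_3^c, and the multiplication below is the collection rule for this presentation:

```python
# Collected words g1^a * g2^b * g3^c are identified with triples (a, b, c).
# The group is the discrete Heisenberg group (upper unitriangular 3x3
# integer matrices), which is polycyclic with all relative orders infinite.

def mul(u, v):
    """Collected-form product: (a,b,c)*(a',b',c') = (a+a', b+b', c+c'+a*b')."""
    a, b, c = u
    ap, bp, cp = v
    return (a + ap, b + bp, c + cp + a * bp)

def inv(u):
    """Inverse in collected form: (a,b,c)^(-1) = (-a, -b, a*b - c)."""
    a, b, c = u
    return (-a, -b, a * b - c)

g1, g2, g3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# The conjugate relation g2^g1 = g2 * g3^{-1}, computed from collected forms:
conj = mul(mul(inv(g1), g2), g1)
print(conj)   # (0, 1, -1)

# Collection multiplies collected words consistently (the law is associative):
x, y, z = (2, -1, 3), (0, 4, 1), (-5, 2, 2)
assert mul(mul(x, y), z) == mul(x, mul(y, z))
```

Since every element corresponds to exactly one triple, this toy presentation is consistent in the sense defined above.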
https://www.physicsforums.com/threads/the-electric-generator.83626/
# The electric generator

1. Jul 29, 2005 ### mayo2kett

The coil of a generator has a radius of 0.14 m. When this coil is unwound, the wire from which it is made has a length of 5.4 m. The magnetic field of the generator is 0.10 T, and the coil rotates at an angular speed of 35 rad/s. What is the peak emf of this generator?

so i have: r= .14m L= 5.4m B= .10T

now i thought i would do: emf= BLv emf= (.10T)(5.4m)(.049m/s)= .02646 and peak emf= (square root 2)(emf)= .0374... this problem is wrong the way i tried it, but i'm not sure what i should do differently

2. Jul 30, 2005 ### siddharth

The induced EMF (across the ends of the rod) due to the motion of a rod of length 'l' and velocity 'v', in the presence of a magnetic field of strength 'B' is Blv. So this formula is not applicable here, as there is a rotating coil and not a rod.

To solve this problem, go from the definition of Faraday's law. By Faraday's law, Emf induced = -d(Magnetic Flux)/dt

Let the magnetic field make an angle theta with the area vector of the loop at any time 't' such that at t=0, theta=0. So the magnetic flux enclosed by the loop is $n \mathbf{B} \cdot \mathbf{A}$ where n is the number of loops, B is the magnetic field and A is the area of the loop. $$= (n)(B)(A)(\cos\theta)$$ So, the EMF induced will be $$=\frac {-d[(n)(B)(A)(\cos\theta)]}{dt}$$

From this, can you calculate the EMF as a function of time and from that the peak value? (You will have to find the relation between 'theta' and 't' as well as the value of n)

3. Jul 30, 2005 ### KingOfTwilight

The coil is rotating in the field. The flux is thus changing and this causes the electric field in the coil. $$\Phi = AB$$, B is constant but the projected area A is changing. Can you find A as a function of time? $$E = -N \frac{d\Phi}{dt}$$, so you will also need to find N - the number of turns in the coil. Just find $$\frac{dA}{dt}$$ and the biggest problem is probably solved.

4. Jul 30, 2005 ### mayo2kett

thanks guys...
you really helped me
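Following siddharth's hints, the peak EMF works out to N·B·A·ω, with N = L/(2πr) turns and A = πr². A short Python check (a sketch; it leaves N as the non-integer ratio L/(2πr), which makes the product N·A collapse neatly to L·r/2):

```python
from math import pi

# Peak EMF of the rotating-coil generator, from Faraday's law:
# emf(t) = N*B*A*omega*sin(omega*t), so the peak value is N*B*A*omega.
r = 0.14        # coil radius (m)
L = 5.4         # total wire length (m)
B = 0.10        # magnetic field (T)
omega = 35.0    # angular speed (rad/s)

N = L / (2 * pi * r)      # number of turns, left as a non-integer ratio
A = pi * r**2             # area of one turn (m^2)

peak_emf = N * B * A * omega
print(round(peak_emf, 2))   # about 1.32 V
```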
https://andrews.io/blog/so-long-oysters
# So long, Oysters

This is a first, for me. I consider myself very lucky in that I don't have any allergies to foods, medicines or anything else - at least, that I'm aware of. It does seem, however, that I have developed a problem with Oysters.

I had hoped that recent post-consumption effects were just coincidental or even very bad luck, but three identical instances give some certainty to what I can only assume is an intolerance.

It was all quite sad searching through Google Photos and seeing all the times I have shared what was quite possibly my favourite of all delicacies. Anyway, a pointless blog but an event which I felt was markedly worthy of some internet words and of course, a photograph.

So long, Oysters.
https://physics.stackexchange.com/questions/488382/magnetic-field-generated-by-a-wire-crossed-by-current-clarifications
Magnetic field generated by a wire crossed by current: clarifications

We know from Oersted's experiment that the lines of force of the magnetic field generated by a wire crossed by a current $$I$$ are concentric circumferences of (variable) radius $$r$$ whose centre is a point of the wire. To prove that these are actually circumferences I have followed this path. The law for the magnetic field of a current-carrying wire is: $$B(r)=2k_m\frac{I}{r}, \quad \tag{1}$$ where $$k_m=k_e/c^2$$. Now from $$(1)$$ we also have: $$r=2k_m\frac{I}{B(r)}, \quad \tag{2}$$ If we set $$K=2k_m\frac{I}{B(r)}, \quad \tag{3}$$ we have: $$r=K, \quad \tag{4}$$ But if $$r=\sqrt{x^2+y^2}\,$$ ($$2-$$dimensional plane) then from $$(4)$$, $$x^2+y^2=K^2$$, which is exactly a circumference with centre at the origin (a point of the wire where we have fixed a Cartesian orthogonal reference system). If $$r=\sqrt{x^2+y^2+z^2}\,$$ ($$3-$$dimensional space) should I draw spheres with a wire point in the middle, or are they always circumferences with the centre at a point of the wire?

• Well, in this formula, the $r$ is the one we use with the cylindrical coordinates (because of the cylindrical symmetry of the field), so we can't really "switch" to a spherical $r$. – Syrocco Jun 26 at 22:19
• @Syrocco You're right. I had already thought about this but being a distance I thought about finding a relationship that would give me circumferences. I understood your comment. Could you kindly give me an exhaustive answer on how I should proceed mathematically? These are subjects that I dealt with at least 26 years ago. Thank you. – Sebastiano Jun 26 at 22:27
• I'm not sure I understand what you're asking me to do. You've already found the circumference: $x^2+y^2=K^2$. But you can't generalize this to $x^2+y^2+z^2=K^2$ because $r$ isn't a radius (it's the orthogonal projection of $r$ onto the plane orthogonal to the wire). I'm sorry if I misunderstood your question.
– Syrocco Jun 26 at 22:41
• @Syrocco Don't worry about it. Could you give me a detailed mathematical explanation as an answer? Greetings. – Sebastiano Jun 26 at 22:46
• You have written B(r) as a scalar when in reality it is a vector. Also I is not a constant but there is a current distribution. The reason there is no z is that we have reduced dimensionality by exploiting symmetry. Else you would have to sum/integrate B for each current element in the wire. – Paul Childs Jun 26 at 23:59

You said:

If $$r = \sqrt{x^2+y^2+z^2}$$ (3−dimensional space) should I draw spheres with a wire point in the middle, or are they always circumferences with the centre at a point of the wire?

But it simply isn't: in that formula $$r$$ is defined as the distance between the point in which you evaluate the field and the wire, in the same manner as you define the distance between a point and a straight line. I really don't understand what you are doing in your calculation.

In the first place, I want to clarify a point: what I think you are trying to find with your calculation is the set of points at which the field has the same intensity, but this is not the definition of lines of force; maybe there is some confusion. Moreover, even if you want to find the set of points at which the field has the same intensity, let me redo that in a cleaner way. You have to start from the assumption of constant intensity of the field: $$B(r) = C$$ where C is a constant. Then, for the law you mentioned: $$2k\frac{I}{r} = C$$ So: $$r = 2k\frac{I}{C}$$ If one further assumes that I is constant, then r is constant too: $$r = C'$$ Since $$r$$ is defined as $$\sqrt{x^2 +y^2}$$, you then have: $$\sqrt{x^2 + y^2} = C'$$ That is the equation of a circumference.

So, this is a correct process to get the set of points at which the field has the same intensity, but I stress again that these are not the lines of force.
The lines of force associated to a vector field are defined (in an elementary way) as those lines such that the vector field is always tangent to them; in this case these two distinct concepts coincide, but it is a mere coincidence.

• In the meantime, I thank you for your elegant response. I had a doubt simply. If there is something that is not correct you can edit my question. – Sebastiano Jun 27 at 10:49
• Thank you, this is my first answer, and also I'm not a native english speaker, so I apologize if my english is not clear sometimes. – Fabio Di Nocera Jun 27 at 11:47
• Welcome for me and my best regards from Sicily :-) – Sebastiano Jun 27 at 13:41
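The level-set computation in the answer is easy to confirm numerically: sampling points on the circle x² + y² = K² gives the same field strength at every one of them. The current, the radius K, and k_m ≈ 10⁻⁷ (its SI value, since k_e/c² = μ₀/4π) below are illustrative values:

```python
from math import cos, sin, pi, sqrt

k_m = 1e-7          # k_e / c^2 in SI units
I = 2.0             # current (A), assumed constant

def B(x, y):
    """Field magnitude at perpendicular distance sqrt(x^2+y^2) from the wire."""
    return 2 * k_m * I / sqrt(x**2 + y**2)

# Points on the circle x^2 + y^2 = K^2 all see the same field strength:
K = 0.05
values = [B(K * cos(t), K * sin(t)) for t in (0, pi / 3, 1.0, 2.5)]
print(values[0])    # equals 2*k_m*I/K at every sampled point
```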
https://kr.mathworks.com/help/econ/ssm-class.html
ssm class

Superclasses:

Create state-space model

Description

`ssm` creates a standard, linear, state-space model object with independent Gaussian state disturbances and observation innovations. You can:

• Specify a time-invariant or time-varying model.
• Specify whether states are stationary, static, or nonstationary.

You can create the model in either of two ways:

• Explicitly, by providing the matrices
• Implicitly, by providing a function that maps the parameters to the matrices, that is, a parameter-to-matrix mapping function

Construction

`Mdl = ssm(A,B,C)` creates a state-space model (`Mdl`) using state-transition matrix `A`, state-disturbance-loading matrix `B`, and measurement-sensitivity matrix `C`.

`Mdl = ssm(A,B,C,D)` creates a state-space model using state-transition matrix `A`, state-disturbance-loading matrix `B`, measurement-sensitivity matrix `C`, and observation-innovation matrix `D`.

`Mdl = ssm(___,Name,Value)` uses any of the input arguments in the previous syntaxes and additional options that you specify by one or more `Name,Value` pair arguments. `Name` can also be a property name and `Value` is the corresponding value. `Name` must appear inside single quotes (`''`). You can specify several name-value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

`Mdl = ssm(ParamMap)` creates a state-space model using a parameter-to-matrix mapping function (`ParamMap`) that you write. The function maps a vector of parameters to the matrices `A`, `B`, and `C`. Optionally, `ParamMap` can map parameters to `D`, `Mean0`, or `Cov0`. To specify the types of states, the function can return `StateType`. To accommodate a regression component in the observation equation, `ParamMap` can also return deflated observation data.

`Mdl = ssm(DSSMMdl)` converts a diffuse state-space model object (`DSSMMdl`) to a state-space model object (`Mdl`). `ssm` sets all initial variances of diffuse states in `DSSMMdl.Cov0` to `1e07`.
Input Arguments

State-transition coefficient matrix for explicit state-space model creation, specified as a matrix or cell vector of matrices.

The state-transition coefficient matrix, At, specifies how the states, xt, are expected to transition from period t – 1 to t, for all t = 1,...,T. That is, the expected state-transition equation at period t is E(xt|xt–1) = Atxt–1.

For time-invariant state-space models, specify `A` as an m-by-m matrix, where m is the number of states per period. For time-varying state-space models, specify `A` as a T-dimensional cell array, where `A{t}` contains an mt-by-mt–1 state-transition coefficient matrix. If the number of states changes from period t – 1 to t, then mt ≠ mt–1.

`NaN` values in any coefficient matrix indicate unique, unknown parameters in the state-space model. `A` contributes:

• `sum(isnan(A(:)))` unknown parameters to time-invariant state-space models. In other words, if the state-space model is time invariant, then the software uses the same unknown parameters defined in `A` at each period.
• `numParamsA` unknown parameters to time-varying state-space models, where `numParamsA = sum(cell2mat(cellfun(@(x)sum(sum(isnan(x))),A,'UniformOutput',0)))`. In other words, if the state-space model is time varying, then the software assigns a new set of parameters for each matrix in `A`.

You cannot specify `A` and `ParamMap` simultaneously.

Data Types: `double` | `cell`

State-disturbance-loading coefficient matrix for explicit state-space model creation, specified as a matrix or cell vector of matrices.

The state disturbances, ut, are independent Gaussian random variables with mean 0 and standard deviation 1. The state-disturbance-loading coefficient matrix, Bt, specifies the additive error structure in the state-transition equation from period t – 1 to t, for all t = 1,...,T. That is, the state-transition equation at period t is xt = Atxt–1 + Btut.
For time-invariant state-space models, specify `B` as an m-by-k matrix, where m is the number of states and k is the number of state disturbances per period. `B*B'` is the state-disturbance covariance matrix for all periods. For time-varying state-space models, specify `B` as a T-dimensional cell array, where `B{t}` contains an mt-by-kt state-disturbance-loading coefficient matrix. If the number of states or state disturbances changes at period t, then the matrix dimensions between `B{t-1}` and `B{t}` vary. `B{t}*B{t}'` is the state-disturbance covariance matrix for period `t`. `NaN` values in any coefficient matrix indicate unique, unknown parameters in the state-space model. `B` contributes: • `sum(isnan(B(:)))` unknown parameters to time-invariant state-space models. In other words, if the state-space model is time invariant, then the software uses the same unknown parameters defined in `B` at each period. • `numParamsB` unknown parameters to time-varying state-space models, where `numParamsB = sum(cell2mat(cellfun(@(x)sum(sum(isnan(x))),B,'UniformOutput',0)))`. In other words, if the state-space model is time varying, then the software assigns a new set of parameters for each matrix in `B`. You cannot specify `B` and `ParamMap` simultaneously. Data Types: `double` | `cell` Measurement-sensitivity coefficient matrix for explicit state-space model creation, specified as a matrix or cell vector of matrices. The measurement-sensitivity coefficient matrix, Ct, specifies how the states are expected to linearly combine at period t to form the observations, yt, for all t = 1,...,T. That is, the expected observation equation at period t is E(yt|xt) = Ctxt. For time-invariant state-space models, specify `C` as an n-by-m matrix, where n is the number of observations and m is the number of states per period. For time-varying state-space models, specify `C` as a T-dimensional cell array, where `C{t}` contains an nt-by-mt measurement-sensitivity coefficient matrix. 
If the number of states or observations changes at period t, then the matrix dimensions between `C{t-1}` and `C{t}` vary. `NaN` values in any coefficient matrix indicate unique, unknown parameters in the state-space model. `C` contributes: • `sum(isnan(C(:)))` unknown parameters to time-invariant state-space models. In other words, if the state-space model is time invariant, then the software uses the same unknown parameters defined in `C` at each period. • `numParamsC` unknown parameters to time-varying state-space models, where `numParamsC = sum(cell2mat(cellfun(@(x)sum(sum(isnan(x))),C,'UniformOutput',0)))`. In other words, if the state-space model is time varying, then the software assigns a new set of parameters for each matrix in `C`. You cannot specify `C` and `ParamMap` simultaneously. Data Types: `double` | `cell` Observation-innovation coefficient matrix for explicit state-space model creation, specified as a matrix or cell vector of matrices. The observation innovations, εt, are independent Gaussian random variables with mean 0 and standard deviation 1. The observation-innovation coefficient matrix, Dt, specifies the additive error structure in the observation equation at period t, for all t = 1,...,T. That is, the observation equation at period t is yt = Ctxt + Dtεt. For time-invariant state-space models, specify `D` as an n-by-h matrix, where n is the number of observations and h is the number of observation innovations per period. `D*D'` is the observation-innovation covariance matrix for all periods. For time-varying state-space models, specify `D` as a T-dimensional cell array, where `D{t}` contains an nt-by-ht matrix. If the number of observations or observation innovations changes at period t, then the matrix dimensions between `D{t-1}` and `D{t}` vary. `D{t}*D{t}'` is the observation-innovation covariance matrix for period `t`. `NaN` values in any coefficient matrix indicate unique, unknown parameters in the state-space model. 
`D` contributes:

• `sum(isnan(D(:)))` unknown parameters to time-invariant state-space models. In other words, if the state-space model is time invariant, then the software uses the same unknown parameters defined in `D` at each period.
• `numParamsD` unknown parameters to time-varying state-space models, where `numParamsD = sum(cell2mat(cellfun(@(x)sum(sum(isnan(x))),D,'UniformOutput',0)))`. In other words, if the state-space model is time varying, then the software assigns a new set of parameters for each matrix in `D`.

By default, `D` is an empty matrix indicating no observation innovations in the state-space model.

You cannot specify `D` and `ParamMap` simultaneously.

Data Types: `double` | `cell`

Parameter-to-matrix mapping function for implicit state-space model creation, specified as a function handle.

`ParamMap` must be a function that takes at least one input argument and returns at least three output arguments. The requisite input argument is a vector of unknown parameters, and the requisite output arguments correspond to the coefficient matrices `A`, `B`, and `C`, respectively. If your parameter-to-matrix mapping function requires the input parameter vector argument only, then implicitly create a state-space model by entering the following:

`Mdl = ssm(@ParamMap)`

In general, you can write an intermediate function, for example, `ParamFun`, using this syntax:

```
function [A,B,C,D,Mean0,Cov0,StateType,DeflateY] = ...
    ParamFun(params,...otherInputArgs...)
```

In this general case, create the state-space model by entering

`Mdl = ssm(@(params)ParamFun(params,...otherInputArgs...))`

However:

• Follow the order of the output arguments.
• `params` is a vector, and each element corresponds to an unknown parameter.
• `ParamFun` must return `A`, `B`, and `C`, which correspond to the state-transition, state-disturbance-loading, and measurement-sensitivity coefficient matrices, respectively.
• If you specify more input arguments than the parameter vector (`params`), such as observed responses and predictors, then implicitly create the state-space model using the syntax pattern

`Mdl = ssm(@(params)ParamFun(params,y,z))`

• For the optional output arguments `D`, `Mean0`, `Cov0`, `StateType`, and `DeflateY`:
• The optional output arguments correspond to the observation-innovation coefficient matrix `D` and the name-value pair arguments `Mean0`, `Cov0`, and `StateType`.
• To skip specifying an optional output argument, set the argument to `[]` in the function body. For example, to skip specifying `D`, set `D = [];` in the function.
• `DeflateY` is the deflated-observation data, which accommodates a regression component in the observation equation. For example, in this function, which has a linear regression component, `Y` is the vector of observed responses and `Z` is the vector of predictor data.

```
function [A,B,C,D,Mean0,Cov0,StateType,DeflateY] = ParamFun(params,Y,Z)
    ...
    DeflateY = Y - params(9) - params(10)*Z;
    ...
end
```

• For the default values of `Mean0`, `Cov0`, and `StateType`, see Algorithms.
• It is best practice to:
• Load the data to the MATLAB® Workspace before specifying the model.
• Create the parameter-to-matrix mapping function as its own file.

If you specify `ParamMap`, then you cannot specify any name-value pair arguments or any other input arguments.

Data Types: `function_handle`

Diffuse state-space model to convert to a state-space model, specified as a `dssm` model object.

`ssm` sets all initial variances of diffuse states in `DSSMMdl.Cov0` from `Inf` to `1e7`. Any diffuse states with variance other than `Inf` retain their values.

To apply the standard Kalman filter instead of the diffuse Kalman filter for filtering, smoothing, and parameter estimation, convert a diffuse state-space model to a state-space model.

Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments.
`Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

Initial state mean for explicit state-space model creation, specified as the comma-separated pair consisting of `'Mean0'` and a numeric vector with length equal to the number of initial states. For the default values, see Algorithms.

If you specify `ParamMap`, then you cannot specify `Mean0`. Instead, specify the initial state mean in the parameter-to-matrix mapping function.

Data Types: `double`

Initial state covariance matrix for explicit state-space model creation, specified as the comma-separated pair consisting of `'Cov0'` and a square matrix with dimensions equal to the number of initial states. For the default values, see Algorithms.

If you specify `ParamMap`, then you cannot specify `Cov0`. Instead, specify the initial state covariance in the parameter-to-matrix mapping function.

Data Types: `double`

Initial state distribution indicator for explicit state-space model creation, specified as the comma-separated pair consisting of `'StateType'` and a numeric vector with length equal to the number of initial states. This table summarizes the available types of initial state distributions.

| Value | Initial State Distribution Type |
| --- | --- |
| `0` | Stationary (for example, ARMA models) |
| `1` | The constant 1 (that is, the state is 1 with probability 1) |
| `2` | Diffuse or nonstationary (for example, random walk model, seasonal linear time series) or static state |

For example, suppose that the state equation has two state variables: The first state variable is an AR(1) process, and the second state variable is a random walk. Specify the initial distribution types by setting `'StateType',[0; 2]`.

If you specify `ParamMap`, then you cannot specify `StateType`. Instead, specify the initial state distribution indicator in the parameter-to-matrix mapping function.
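As a short sketch of these name-value pairs (the coefficients are illustrative, not from this page), an AR(1) state with known coefficients can be declared stationary and given its stationary moments explicitly:

```matlab
% Hypothetical AR(1): x(t) = 0.5*x(t-1) + u(t), y(t) = x(t).
% Stationary variance of the state is 1/(1 - 0.5^2) = 4/3.
Mdl = ssm(0.5,1,1,'Mean0',0,'Cov0',4/3,'StateType',0);
```

Because `StateType` is `0` and the model has no unknown parameters, `ssm` could also derive these moments itself from the stationary distribution (see Algorithms); specifying them just makes the initialization explicit.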
For the default values, see Algorithms.

Data Types: `double`

Properties

State-transition coefficient matrix for explicitly created state-space models, specified as a matrix, a cell vector of matrices, or an empty array (`[]`). For implicitly created state-space models and before estimation, `A` is `[]` and read only.

The state-transition coefficient matrix, At, specifies how the states, xt, are expected to transition from period t – 1 to t, for all t = 1,...,T. That is, the expected state-transition equation at period t is E(xt|xt–1) = Atxt–1.

For time-invariant state-space models, `A` is an m-by-m matrix, where m is the number of states per period. For time-varying state-space models, `A` is a T-dimensional cell array, where `A{t}` contains an mt-by-mt–1 state-transition coefficient matrix. If the number of states changes from period t – 1 to t, then mt ≠ mt–1.

`NaN` values in any coefficient matrix indicate unknown parameters in the state-space model. `A` contributes:

• `sum(isnan(A(:)))` unknown parameters to time-invariant state-space models. In other words, if the state-space model is time invariant, then the software uses the same unknown parameters defined in `A` at each period.
• `numParamsA` unknown parameters to time-varying state-space models, where `numParamsA = sum(cell2mat(cellfun(@(x)sum(sum(isnan(x))),A,'UniformOutput',0)))`. In other words, if the state-space model is time varying, then the software assigns a new set of parameters for each matrix in `A`.

Data Types: `double` | `cell`

State-disturbance-loading coefficient matrix for explicitly created state-space models, specified as a matrix, a cell vector of matrices, or an empty array (`[]`). For implicitly created state-space models and before estimation, `B` is `[]` and read only.

The state disturbances, ut, are independent Gaussian random variables with mean 0 and standard deviation 1.
The state-disturbance-loading coefficient matrix, Bt, specifies the additive error structure in the state-transition equation from period t – 1 to t, for all t = 1,...,T. That is, the state-transition equation at period t is xt = Atxt–1 + Btut. For time-invariant state-space models, `B` is an m-by-k matrix, where m is the number of states and k is the number of state disturbances per period. `B*B'` is the state-disturbance covariance matrix for all periods. For time-varying state-space models, `B` is a T-dimensional cell array, where `B{t}` contains an mt-by-kt state-disturbance-loading coefficient matrix. If the number of states or state disturbances changes at period t, then the matrix dimensions between `B{t-1}` and `B{t}` vary. `B{t}*B{t}'` is the state-disturbance covariance matrix for period `t`. `NaN` values in any coefficient matrix indicate unknown parameters in the state-space model. `B` contributes: • `sum(isnan(B(:)))` unknown parameters to time-invariant state-space models. In other words, if the state-space model is time invariant, then the software uses the same unknown parameters defined in `B` at each period. • `numParamsB` unknown parameters to time-varying state-space models, where `numParamsB = sum(cell2mat(cellfun(@(x)sum(sum(isnan(x))),B,'UniformOutput',0)))`. In other words, if the state-space model is time varying, then the software assigns a new set of parameters for each matrix in `B`. Data Types: `double` | `cell` Measurement-sensitivity coefficient matrix for explicitly created state-space models, specified as a matrix, a cell vector of matrices, or an empty array (`[]`). For implicitly created state-space models and before estimation, `C` is `[]` and read only. The measurement-sensitivity coefficient matrix, Ct, specifies how the states are expected to combine linearly at period t to form the observations, yt, for all t = 1,...,T. That is, the expected observation equation at period t is E(yt|xt) = Ctxt. 
For time-invariant state-space models, `C` is an n-by-m matrix, where n is the number of observations and m is the number of states per period. For time-varying state-space models, `C` is a T-dimensional cell array, where `C{t}` contains an nt-by-mt measurement-sensitivity coefficient matrix. If the number of states or observations changes at period t, then the matrix dimensions between `C{t-1}` and `C{t}` vary.

`NaN` values in any coefficient matrix indicate unknown parameters in the state-space model. `C` contributes:

• `sum(isnan(C(:)))` unknown parameters to time-invariant state-space models. In other words, if the state-space model is time invariant, then the software uses the same unknown parameters defined in `C` at each period.
• `numParamsC` unknown parameters to time-varying state-space models, where `numParamsC = sum(cell2mat(cellfun(@(x)sum(sum(isnan(x))),C,'UniformOutput',0)))`. In other words, if the state-space model is time varying, then the software assigns a new set of parameters for each matrix in `C`.

Data Types: `double` | `cell`

Observation-innovation coefficient matrix for explicitly created state-space models, specified as a matrix, a cell vector of matrices, or an empty array (`[]`). For implicitly created state-space models and before estimation, `D` is `[]` and read only.

The observation innovations, εt, are independent Gaussian random variables with mean 0 and standard deviation 1. The observation-innovation coefficient matrix, Dt, specifies the additive error structure in the observation equation at period t, for all t = 1,...,T. That is, the observation equation at period t is yt = Ctxt + Dtεt.

For time-invariant state-space models, `D` is an n-by-h matrix, where n is the number of observations and h is the number of observation innovations per period. `D*D'` is the observation-innovation covariance matrix for all periods. For time-varying state-space models, `D` is a T-dimensional cell array, where `D{t}` contains an nt-by-ht matrix.
If the number of observations or observation innovations changes at period t, then the matrix dimensions between `D{t-1}` and `D{t}` vary. `D{t}*D{t}'` is the observation-innovation covariance matrix for period `t`.

`NaN` values in any coefficient matrix indicate unknown parameters in the state-space model. `D` contributes:

• `sum(isnan(D(:)))` unknown parameters to time-invariant state-space models. In other words, if the state-space model is time invariant, then the software uses the same unknown parameters defined in `D` at each period.
• `numParamsD` unknown parameters to time-varying state-space models, where `numParamsD = sum(cell2mat(cellfun(@(x)sum(sum(isnan(x))),D,'UniformOutput',0)))`. In other words, if the state-space model is time varying, then the software assigns a new set of parameters for each matrix in `D`.

Data Types: `double` | `cell`

Initial state mean, specified as a numeric vector or an empty array (`[]`). `Mean0` has length equal to the number of initial states (`size(A,1)` or `size(A{1},1)`). `Mean0` is the mean of the Gaussian distribution of the states at period 0.

For implicitly created state-space models and before estimation, `Mean0` is `[]` and read only. However, `estimate` specifies `Mean0` after estimation.

Data Types: `double`

Initial state covariance matrix, specified as a square matrix or an empty array (`[]`). `Cov0` has dimensions equal to the number of initial states (`size(A,1)` or `size(A{1},1)`). `Cov0` is the covariance of the Gaussian distribution of the states at period 0.

For implicitly created state-space models and before estimation, `Cov0` is `[]` and read only. However, `estimate` specifies `Cov0` after estimation.

Data Types: `double`

Initial state distribution indicator, specified as a numeric vector or empty array (`[]`). `StateType` has length equal to the number of initial states. For implicitly created state-space models or models with unknown parameters, `StateType` is `[]` and read only.
This table summarizes the available types of initial state distributions.

| Value | Initial State Distribution Type |
| --- | --- |
| `0` | Stationary (e.g., ARMA models) |
| `1` | The constant 1 (that is, the state is 1 with probability 1) |
| `2` | Nonstationary (e.g., random walk model, seasonal linear time series) or static state |

For example, suppose that the state equation has two state variables: The first state variable is an AR(1) process, and the second state variable is a random walk. Then, `StateType` is `[0; 2]`.

For nonstationary states, `ssm` sets `Cov0` to `1e7` by default. Subsequently, the software implements the Kalman filter for filtering, smoothing, and parameter estimation. This specification imposes relatively weak knowledge on the initial state values of diffuse states, and uses initial state covariance terms between all states.

Data Types: `double`

Parameter-to-matrix mapping function, specified as a function handle or an empty array (`[]`). `ParamMap` completely specifies the structure of the state-space model. That is, `ParamMap` defines `A`, `B`, `C`, `D`, and, optionally, `Mean0`, `Cov0`, and `StateType`. For explicitly created state-space models, `ParamMap` is `[]` and read only.

Data Types: `function_handle`

Methods

| Method | Description |
| --- | --- |
| `disp` | Display summary information for state-space model |
| `estimate` | Maximum likelihood parameter estimation of state-space models |
| `filter` | Forward recursion of state-space models |
| `forecast` | Forecast states and observations of state-space models |
| `refine` | Refine initial parameters to aid state-space model estimation |
| `simsmooth` | State-space model simulation smoother |
| `simulate` | Monte Carlo simulation of state-space models |
| `smooth` | Backward recursion of state-space models |

Copy Semantics

Value. To learn how value classes affect copy operations, see Copying Objects (MATLAB).

Examples

Create a standard state-space model containing two independent, autoregressive states, where the observations are the deterministic sum of the two states.
Symbolically, the system of equations is `$\left[\begin{array}{c}{x}_{t,1}\\ {x}_{t,2}\end{array}\right]=\left[\begin{array}{cc}{\varphi }_{1}& 0\\ 0& {\varphi }_{2}\end{array}\right]\left[\begin{array}{c}{x}_{t-1,1}\\ {x}_{t-1,2}\end{array}\right]+\left[\begin{array}{cc}{\sigma }_{1}& 0\\ 0& {\sigma }_{2}\end{array}\right]\left[\begin{array}{c}{u}_{t,1}\\ {u}_{t,2}\end{array}\right]$` `${y}_{t}=\left[\begin{array}{cc}1& 1\end{array}\right]\left[\begin{array}{c}{x}_{t,1}\\ {x}_{t,2}\end{array}\right].$` Specify the state-transition matrix. `A = [NaN 0; 0 NaN];` `B = [NaN 0; 0 NaN];` Specify the measurement-sensitivity matrix. `C = [1 1];` Define the state-space model using `ssm`. `Mdl = ssm(A,B,C)` ```Mdl = State-space model type: ssm State vector length: 2 Observation vector length: 1 State disturbance vector length: 2 Observation innovation vector length: 0 Sample size supported by model: Unlimited Unknown parameters for estimation: 4 State variables: x1, x2,... State disturbances: u1, u2,... Observation series: y1, y2,... Observation innovations: e1, e2,... Unknown parameters: c1, c2,... State equations: x1(t) = (c1)x1(t-1) + (c3)u1(t) x2(t) = (c2)x2(t-1) + (c4)u2(t) Observation equation: y1(t) = x1(t) + x2(t) Initial state distribution: Initial state means are not specified. Initial state covariance matrix is not specified. State types are not specified. ``` `Mdl` is an `ssm` model containing unknown parameters. A detailed summary of `Mdl` prints to the Command Window. It is good practice to verify that the state and observation equations are correct. If the equations are not correct, then it might help to expand the state-space equation manually. Create a state-space model containing two independent, autoregressive states, and the observations are the sum of the two states, plus Gaussian error. 
Symbolically, the equation is `$\left[\begin{array}{c}{x}_{t,1}\\ {x}_{t,2}\end{array}\right]=\left[\begin{array}{cc}{\varphi }_{1}& 0\\ 0& {\varphi }_{2}\end{array}\right]\left[\begin{array}{c}{x}_{t-1,1}\\ {x}_{t-1,2}\end{array}\right]+\left[\begin{array}{cc}{\sigma }_{1}& 0\\ 0& {\sigma }_{2}\end{array}\right]\left[\begin{array}{c}{u}_{t,1}\\ {u}_{t,2}\end{array}\right]$` `${y}_{t}=\left[\begin{array}{cc}1& 1\end{array}\right]\left[\begin{array}{c}{x}_{t,1}\\ {x}_{t,2}\end{array}\right]+{\sigma }_{3}{\epsilon }_{t}.$` Define the state-transition matrix. `A = [NaN 0; 0 NaN];` `B = [NaN 0; 0 NaN];` Define the measurement-sensitivity matrix. `C = [1 1];` Define the observation-innovation matrix. `D = NaN;` Create the state-space model using `ssm`. ` Mdl = ssm(A,B,C,D)` ```Mdl = State-space model type: ssm State vector length: 2 Observation vector length: 1 State disturbance vector length: 2 Observation innovation vector length: 1 Sample size supported by model: Unlimited Unknown parameters for estimation: 5 State variables: x1, x2,... State disturbances: u1, u2,... Observation series: y1, y2,... Observation innovations: e1, e2,... Unknown parameters: c1, c2,... State equations: x1(t) = (c1)x1(t-1) + (c3)u1(t) x2(t) = (c2)x2(t-1) + (c4)u2(t) Observation equation: y1(t) = x1(t) + x2(t) + (c5)e1(t) Initial state distribution: Initial state means are not specified. Initial state covariance matrix is not specified. State types are not specified. ``` `Mdl` is an `ssm` model containing unknown parameters. A detailed summary of `Mdl` prints to the Command Window. It is good practice to verify that the state and observations equations are correct. If the equations are not correct, then it might help to expand the state-space equation manually. Pass the data and `Mdl` to `estimate` to estimate the parameters. Create a state-space model, where the state equation is an AR(2) model. The state disturbances are mean zero Gaussian random variables with standard deviation of 0.3. 
The observation equation is the difference between the current and previous state plus a mean zero Gaussian observation innovation with a standard deviation of 0.1. Symbolically, the state-space model is `$\left[\begin{array}{c}{x}_{1,t}\\ {x}_{2,t}\\ {x}_{3,t}\end{array}\right]=\left[\begin{array}{ccc}0.6& 0.2& 0.5\\ 1& 0& 0\\ 0& 0& 1\end{array}\right]\left[\begin{array}{c}{x}_{1,t-1}\\ {x}_{2,t-1}\\ {x}_{3,t-1}\end{array}\right]+\left[\begin{array}{c}0.3\\ 0\\ 0\end{array}\right]{u}_{1,t}$` `${y}_{t}=\left[\begin{array}{ccc}1& -1& 0\end{array}\right]\left[\begin{array}{c}{x}_{1,t}\\ {x}_{2,t}\\ {x}_{3,t}\end{array}\right]+0.1{\epsilon }_{t}.$` There are three states: ${x}_{1,t}$ is the AR(2) process, ${x}_{2,t}$ represents ${x}_{1,t-1}$, and ${x}_{3,t}$ is the AR(2) model constant. Define the state-transition matrix. `A = [0.6 0.2 0.5; 1 0 0; 0 0 1];` `B = [0.3; 0; 0];` Define the measurement-sensitivity matrix. `C = [1 -1 0];` Define the observation-innovation matrix. `D = 0.1;` Use `ssm` to create the state-space model. Set the initial-state mean (`Mean0`) and covariance matrix (`Cov0`). Identify the type of initial state distributions (`StateType`) by noting the following: • ${x}_{1,t}$ is a stationary, AR(2) process. • ${x}_{2,t}$ is also a stationary, AR(2) process. • ${x}_{3,t}$ is the constant 1 for all periods. ```Mean0 = [0; 0; 1]; % The mean of the AR(2) varAR2 = 0.3*(1 - 0.2)/((1 + 0.2)*((1 - 0.2)^2 - 0.6^2)); % The variance of the AR(2) Cov1AR2 = 0.6*0.3/((1 + 0.2)*((1 - 0.2)^2) - 0.6^2); % The covariance of the AR(2) Cov0 = zeros(3); Cov0(1:2,1:2) = varAR2*eye(2) + Cov1AR2*flip(eye(2)); StateType = [0; 0; 1]; Mdl = ssm(A,B,C,D,'Mean0',Mean0,'Cov0',Cov0,'StateType',StateType)``` ```Mdl = State-space model type: ssm State vector length: 3 Observation vector length: 1 State disturbance vector length: 1 Observation innovation vector length: 1 Sample size supported by model: Unlimited State variables: x1, x2,... State disturbances: u1, u2,... 
Observation series: y1, y2,... Observation innovations: e1, e2,... State equations: x1(t) = (0.60)x1(t-1) + (0.20)x2(t-1) + (0.50)x3(t-1) + (0.30)u1(t) x2(t) = x1(t-1) x3(t) = x3(t-1) Observation equation: y1(t) = x1(t) - x2(t) + (0.10)e1(t) Initial state distribution: Initial state means x1 x2 x3 0 0 1 Initial state covariance matrix x1 x2 x3 x1 0.71 0.44 0 x2 0.44 0.71 0 x3 0 0 0 State types x1 x2 x3 Stationary Stationary Constant ``` `Mdl` is an `ssm` model. You can display properties of `Mdl` using dot notation. For example, display the initial state covariance matrix. `Mdl.Cov0` ```ans = 3×3 0.7143 0.4412 0 0.4412 0.7143 0 0 0 0 ``` Use a parameter mapping function to create a time-invariant state-space model, where the state model is an AR(1) model. The states are observed with bias, but without random error. Set the initial state mean and variance, and specify that the state is stationary. Symbolically, the model is `${x}_{t}=\varphi {x}_{t-1}+\sigma {u}_{t}$` `${y}_{t}=a{x}_{t}.$` Write a function that specifies how the parameters in `params` map to the state-space model matrices, the initial state values, and the type of state. ``` % Copyright 2015 The MathWorks, Inc. function [A,B,C,D,Mean0,Cov0,StateType] = timeInvariantParamMap(params) % Time-invariant state-space model parameter mapping function example. This % function maps the vector params to the state-space matrices (A, B, C, and % D), the initial state value and the initial state variance (Mean0 and % Cov0), and the type of state (StateType). The state model is AR(1) % without observation error. varu1 = exp(params(2)); % Positive variance constraint A = params(1); B = sqrt(varu1); C = params(3); D = []; Mean0 = 0.5; Cov0 = 100; StateType = 0; end ``` Save this code as a file named `timeInvariantParamMap.m` to a folder on your MATLAB® path. Create the state-space model by passing the function `timeInvariantParamMap` as a function handle to `ssm`. ```Mdl = ssm(@timeInvariantParamMap); ``` `ssm` implicitly creates the state-space model.
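As a hypothetical next step (not shown on this page), the unknown parameters of the implicitly created model could be fit to observed data with `estimate`, which requires a vector of starting values:

```matlab
% Sketch: estimate the implicit AR(1) model from a data vector y assumed
% to exist in the workspace. params0 holds illustrative starting values
% for [phi; log(state disturbance variance); observation bias].
params0 = [0.5; -1; 1];
EstMdl = estimate(Mdl,y,params0);
```

Because the mapping function applies `exp` to `params(2)`, the variance stays positive for any real starting value.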
Usually, you cannot verify implicitly defined state-space models. If you estimate, filter, or smooth a diffuse state-space model containing at least one diffuse state, then the software uses the diffuse Kalman filter. To use the standard Kalman filter instead, convert the diffuse state-space model to a standard state-space model. `ssm` attributes a large initial state variance (`1e7`) for diffuse states. A standard state-space model treatment results in an approximation to the results of the diffuse Kalman filter. However, `estimate` uses all of the data to fit the model, and `filter` and `smooth` return filtered and smoothed estimates for all periods, respectively. Explicitly create a one-dimensional diffuse state-space model. Specify that the first state equation is ${x}_{t}={x}_{t-1}+{u}_{t}$, and that the observation model is ${y}_{t}={x}_{t}+{\epsilon }_{t}$. ```A = 1; B = 1; C = 1; D = 1; DSSMMdl = dssm(A,B,C,D)``` ```DSSMMdl = State-space model type: dssm State vector length: 1 Observation vector length: 1 State disturbance vector length: 1 Observation innovation vector length: 1 Sample size supported by model: Unlimited State variables: x1, x2,... State disturbances: u1, u2,... Observation series: y1, y2,... Observation innovations: e1, e2,... State equation: x1(t) = x1(t-1) + u1(t) Observation equation: y1(t) = x1(t) + e1(t) Initial state distribution: Initial state means x1 0 Initial state covariance matrix x1 x1 Inf State types x1 Diffuse ``` `DSSMMdl` is a `dssm` model object. Because the model does not contain any unknown parameters, `dssm` infers the initial state distribution and its parameters. In particular, the initial state variance is `Inf` because the nonstationary state has a diffuse distribution by default. Convert `DSSMMdl` to a standard state-space model. 
`Mdl = ssm(DSSMMdl)` ```Mdl = State-space model type: ssm State vector length: 1 Observation vector length: 1 State disturbance vector length: 1 Observation innovation vector length: 1 Sample size supported by model: Unlimited State variables: x1, x2,... State disturbances: u1, u2,... Observation series: y1, y2,... Observation innovations: e1, e2,... State equation: x1(t) = x1(t-1) + u1(t) Observation equation: y1(t) = x1(t) + e1(t) Initial state distribution: Initial state means x1 0 Initial state covariance matrix x1 x1 1e+07 State types x1 Diffuse ``` `Mdl` is an `ssm` model object. The structures of `Mdl` and `DSSMMdl` are equivalent, except that the initial state variance of the state in `Mdl` is `1e7`. To see the difference between the two models, simulate 10 periods of data from a state-space model that is similar to `Mdl`, except it has known initial state mean of 5 and variance 2. ```SimMdl = ssm(A,B,C,D,'Mean0',5,'Cov0',2,'StateType',2); T = 10; rng(1); % For reproducibility y = simulate(SimMdl,T);``` Obtain filtered and smoothed states from `Mdl` and `DSSMMdl` using the simulated data. ```fMdl = filter(Mdl,y); fDSSMMdl = filter(DSSMMdl,y); sMdl = smooth(Mdl,y); sDSSMMdl = smooth(DSSMMdl,y);``` Plot the filtered and smoothed states. ```figure; plot(1:T,y,'-o',1:T,fMdl,'-d',1:T,fDSSMMdl,'-*'); legend('Simulated Data','Filtered States -- Mdl','Filtered States -- DSSMMdl');``` ```figure; plot(1:T,y,'-o',1:T,sMdl,'-d',1:T,sDSSMMdl,'-*'); legend('Simulated Data','Smoothed States -- Mdl','Smoothed States -- DSSMMdl');``` Besides apparent transient behavior, the filtered and smoothed states between the standard and diffuse state-space models appear nearly equivalent. The slight difference occurs because `filter` and `smooth` set all diffuse state estimates in the diffuse state-space model to 0 while they implement the diffuse Kalman filter. 
Once the covariance matrices of the smoothed states attain full rank, `filter` and `smooth` switch to using the standard Kalman filter. In this case, the switch occurs after the first period.

### Tips

• Specify `ParamMap` in a more general or complex setting, where, for example:
  • The initial state values are parameters.
  • In time-varying models, you want to use the same parameters for more than one period.
  • You want to impose parameter constraints.

### Algorithms

• Default values for `Mean0` and `Cov0`:
  • If you explicitly specify the state-space model (that is, you provide the coefficient matrices `A`, `B`, `C`, and optionally `D`), then:
    • For stationary states, the software generates the initial values using the stationary distribution. If you provide all values in the coefficient matrices (that is, your model has no unknown parameters), then `ssm` generates the initial values. Otherwise, the software generates the initial values during estimation.
    • For states that are always the constant 1, `ssm` sets `Mean0` to 1 and `Cov0` to `0`.
    • For diffuse states, the software sets `Mean0` to 0 and `Cov0` to `1e7` by default.
  • If you implicitly create the state-space model (that is, you provide the parameter vector to the coefficient-matrices-mapping function `ParamMap`), then the software generates any initial values during estimation.
• For static states that do not equal 1 throughout the sample, the software cannot assign a value to the degenerate initial state distribution. Therefore, set static states to `2` using the name-value pair argument `StateType`. Subsequently, the software treats static states as nonstationary and assigns the static state a diffuse initial distribution.
• It is best practice to set `StateType` for each state. By default, the software generates `StateType`, but this behavior might not be accurate. For example, the software cannot distinguish between a constant 1 state and a static state.
• The software cannot infer `StateType` from the data, because the data theoretically comes from the observation equation; the realizations of the state equation are unobservable.
• `ssm` models do not store observed responses or predictor data. Supply the data wherever necessary using the appropriate input or name-value pair arguments.
• Suppose that you want to create a state-space model using a parameter-to-matrix mapping function with this signature:

```
[A,B,C,D,Mean0,Cov0,StateType,DeflateY] = paramMap(params,Y,Z)
```

and you specify the model using an anonymous function:

```
Mdl = ssm(@(params)paramMap(params,Y,Z))
```

The observed responses `Y` and predictor data `Z` are not input arguments in the anonymous function. If `Y` and `Z` exist in the MATLAB Workspace before you create `Mdl`, then the software establishes a link to them. Otherwise, if you pass `Mdl` to `estimate`, the software throws an error.

The link to the data established by the anonymous function overrides all other corresponding input argument values of `estimate`. This distinction is particularly important when conducting a rolling-window analysis. For details, see Rolling-Window Analysis of Time-Series Models.

### References

[1] Durbin, J., and S. J. Koopman. *Time Series Analysis by State Space Methods.* 2nd ed. Oxford: Oxford University Press, 2012.
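The idea behind the `1e7` default, treating a diffuse state as a state with a very large but finite initial variance, can be illustrated outside MATLAB. The following is a minimal sketch in Python with NumPy (not MathWorks' implementation) of a scalar Kalman filter for the local-level model x(t) = x(t-1) + u(t), y(t) = x(t) + e(t). With a huge initial variance, the first observation almost completely overrides the prior mean, mimicking diffuse initialization; after a few periods the filter forgets the initialization, which is the transient behavior noted above.

```python
import numpy as np

def kalman_filter_1d(y, a=1.0, q=1.0, r=1.0, m0=0.0, p0=1e7):
    """Scalar Kalman filter for x_t = a*x_{t-1} + u_t, y_t = x_t + e_t,
    with Var(u_t) = q and Var(e_t) = r.  A huge p0 mimics a diffuse
    initial state distribution."""
    m, p = m0, p0
    filtered = []
    for yt in y:
        # Predict step
        m, p = a * m, a * a * p + q
        # Update step
        k = p / (p + r)            # Kalman gain
        m = m + k * (yt - m)
        p = (1.0 - k) * p
        filtered.append(m)
    return np.array(filtered)

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(10))    # random-walk state
y = x + rng.standard_normal(10)           # noisy observations

f_diffuse_like = kalman_filter_1d(y, p0=1e7)        # large-variance "diffuse" start
f_informative  = kalman_filter_1d(y, m0=5.0, p0=2.0)  # known Mean0 = 5, Cov0 = 2

# With p0 = 1e7, the first gain is essentially 1, so the first
# filtered state essentially equals the first observation:
print(f_diffuse_like[0], y[0])
```

Because the filter's memory of its initialization decays geometrically, the two runs agree closely by the end of the sample, just as `Mdl` and `DSSMMdl` agree after the transient.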
https://www.physicsforums.com/threads/instantaneous-action-in-potential.840646/
# Instantaneous action in potential

1. Oct 31, 2015

Suppose we have a particle trapped in a potential V(x). We calculate the bound states of the particle for t<0, so that our most general solution is
$$\Psi(x,t)=\sum_{n}C_{n}\phi_{n}(x)e^{-iE_{n}t/\hbar}$$
Then, at t=0, we change the potential, so that the potential is now another function V'(x). How does the state of the particle change? The energy spectrum changes, but the particle couldn't change its energy levels immediately (note that because $\Psi$ is a linear combination of the (old) eigenstates, the particle is simultaneously in several energy levels, each with a different probability), since no action can take place in zero time. I would say that the Schrödinger equation needs to be solved after t=0 with the old wavefunction as the new initial condition, is that right? Ideas please! Thanks!

2. Oct 31, 2015

Also, because the potential has changed, we would expect the particle to change to a new state as before, but with new energy levels and eigenfunctions. Energy is not conserved here, but how does this change to the new state take place? Any ideas?

3. Oct 31, 2015

### Staff: Mentor

That's correct.

What do you mean by state? The state of the quantum system doesn't change (see above), but its description in terms of the eigenstates of the Hamiltonian changes, since the Hamiltonian is no longer the same.

4. Nov 1, 2015

Yes, it was a bit confusing as I said it. So we solve the Schrödinger equation for t<0, and after t=0 our most general state will have the form of the equation I wrote above, but with different eigenfunctions and different energies $E_n$. The point is, how does this happen, so that for that general solution (but now for t>0), our initial condition is the general solution I wrote before?

5. Nov 1, 2015

### Staff: Mentor

You need to find how the states $\phi(x)$ obtained with $V(x)$ are related to the states $\phi'(x)$ obtained with $V'(x)$.
This is related to the general prescription for the spectral decomposition of a state: given a set of basis functions $\phi_n(x)$ (such as the eigenstates of the Hamiltonian, but any basis set will do), any wave function $\psi(x)$ can be written as
$$\psi(x) = \sum_n c_n \phi_n(x)$$
with the $c_n$ being complex coefficients given by
$$c_n = \int \phi_n(x)^* \psi(x)\, dx$$
In your case, since you know $\psi(x,t=0)$, you simply need to rewrite it in terms of the energy eigenstates of the new Hamiltonian.

6. Nov 2, 2015

Yes, but my point is: at t=0, the second solution has the form
$$\Psi_{2}=\sum_{k} c_{k}\phi_{k}(x)$$
whereas from the first part, we know that the wavefunction (also at t=0) was
$$\Psi_{1}=\sum_{n} c_{n}\phi_{n}(x)$$
and then the condition is
$$\Psi_{1}=\Psi_{2}$$
at t=0, is that correct? Otherwise, there is a discontinuity in time in the wave function; I'm just interested to know how that happens.

7. Nov 2, 2015

Of course both $\phi(x)$ are different; they are just the bound states of each potential, the one at t<0 and the one at t>0.

8. Nov 2, 2015

### Staff: Mentor

What you say isn't incorrect, but I fear that there might be some underlying misunderstanding, so I'll try to clarify things.

First, there is a single wave function describing the quantum system. Therefore, saying $\Psi_1 = \Psi_2$ doesn't make much sense, but your idea that the wave function is single-valued is correct. At any given time, there is a definite $\Psi(t)$.

Second, it doesn't matter which basis set you use to describe the wave function: both basis sets are valid for any time $t$. It is simply that in one basis, calculating the time evolution when $t<0$ is trivial, while in the other basis it is trivial when $t>0$. That's why you want to change basis.
But describing the wave function using the eigenfunctions of the Hamiltonian of $t<0$ will also work after the potential has changed; instead of having $c_n e^{-i E_n t/\hbar}$, you will have $c_n(t)$ with a more complicated time dependence.

To summarize, you can always write
$$\Psi(x;t) = \sum_n c_n(t) \phi_n(x)$$
with the $\phi_n$ forming a complete basis set of functions. Choosing these functions to be the eigenstates of the Hamiltonian, $\hat{H} \phi_n = E_n \phi_n$, the time evolution becomes
\begin{align*}
\Psi(x;t) &= e^{-i \hat{H} t/\hbar} \Psi(x;0) \\
&= e^{-i \hat{H} t/\hbar} \sum_n c_n(0) \phi_n(x) \\
&= \sum_n c_n(0) e^{-i E_n t/\hbar} \phi_n(x)
\end{align*}
The coefficients $c_n(0)$ are obtained from $\Psi(x;0)$ following the prescription I give in post #5.

9. Nov 2, 2015

I see, it is much clearer now, many thanks!

10. Nov 4, 2015

Hi, sorry, but I have to come back one more time. I'm going to describe exactly the example I'm working on. Imagine our particle in a box of size L/2 at t<0. Exactly at t=0, we remove the barrier, and the well is now of length L. Now let's prepare our initial state as a linear combination of the original eigenstates (those of the well of size L/2), that is:
$$\Psi(x,t<0)=\sum_{n}C_{n}\sin(k_{n}x)e^{-iE_{n}t/\hbar}$$
where I am explicitly choosing a very particular case: I want my initial wavefunction to be a sum of all possible eigenstates, with coefficients
$$C_{n}=\frac{\sqrt{6}}{n\pi}$$
It is clear that with that choice
$$\sum_{n}|C_{n}|^{2}=1$$
and this is going to be my initial state. Of course,
$$k_{n}=\frac{2\pi n }{L}$$
since the size of the box is L/2. The initial wavefunction is then
$$\Psi(x,t=0)=\sum_{n}C_{n}\frac{2}{\sqrt{L}}\sin(k_{n}x)e^{-iE_{n}t/\hbar}$$
where the factor of $2/\sqrt{L}$ comes from the normalization of the eigenstates for the L/2 well.
Now, following our previous discussion, we work out the eigenstates for the well of size L (that is, after we remove the barrier at t=0), and we want to express the total wavefunction as a linear combination of the eigenstates for the well of size L:
$$\Psi(x,t>0)=\sum_{m}A_{m}\sqrt{\frac{2}{L}}\sin(k_{m}x)e^{-iE_{m}t/\hbar}$$
where now
$$k_{m}=\frac{\pi m }{L}$$
Following our discussion, we calculate the $A_m$ coefficients:
$$A_{m}=\int_{0}^{L}dx\sum_{n=1}^{+\infty}C_{n}\frac{2\sqrt{2}}{L}\sin(k_{n}x)\sin(k_{m}x)$$
This reads
$$A_{m}=\frac{2\sqrt{12}}{\pi L}\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{L}\sin\left(\frac{2n\pi x}{L}\right)\sin\left(\frac{m\pi x}{L}\right)dx$$
The integral gives
$$\frac{L}{2\pi}\left(\frac{\sin(\pi(m-2n))}{m-2n}-\frac{\sin(\pi(m+2n))}{m+2n}\right)$$
which is nonzero only for a single value of n,
$$n=\frac{m}{2}$$
for which the first term becomes $\pi$. Our coefficients are then
$$A_{m}=\frac{2\sqrt{12}}{m\pi}$$
But now, with that, the sum
$$\sum_{m}|A_{m}|^{2}\neq 1$$
What is going wrong here? This is really annoying me because I can't see what's going on. I would be grateful if you could help or see where I am doing it wrong. Thanks!

11. Nov 4, 2015

### Staff: Mentor

Since $t=0$, you actually have
$$\Psi(x,t=0)=\sum_{n}C_{n}\frac{2}{\sqrt{L}}\sin(k_{n}x)$$
but that's not very important.

This is where your problem lies: $\Psi(x,t=0) = 0$ for $x> L/2$, so the integral actually only runs from $0$ to $L/2$.

12. Nov 4, 2015

Fixed, many thanks!
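The fix identified in post #11 (the overlap integral runs only over the region where $\Psi(x,0)$ is nonzero, i.e. $[0, L/2]$) is easy to check numerically. Below is a minimal sketch in Python with NumPy (not part of the original thread): it builds the truncated initial state, projects it onto the eigenstates of the full well, and verifies that $\sum_m |A_m|^2$ matches $\sum_n |C_n|^2$, both close to 1 up to truncation of the sums.

```python
import numpy as np

L = 1.0
N = 100                       # truncation of the initial-state sum over n
M = 400                       # truncation of the full-well expansion over m
x = np.linspace(0.0, L, 40001)
dx = x[1] - x[0]

def trap(f):
    """Trapezoidal rule on the uniform grid x."""
    return (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * dx

# Initial state: half-well (width L/2) expansion,
# psi(x) = sum_n C_n * (2/sqrt(L)) * sin(2*n*pi*x/L) for x <= L/2, else 0.
ns = np.arange(1, N + 1)
C = np.sqrt(6.0) / (ns * np.pi)
psi = np.zeros_like(x)
left = x <= L / 2
for Cn, n in zip(C, ns):
    psi[left] += Cn * (2.0 / np.sqrt(L)) * np.sin(2 * n * np.pi * x[left] / L)

# Project onto the eigenstates of the full well (width L):
# A_m = integral_0^L phi_m(x) psi(x) dx.  Since psi vanishes on (L/2, L],
# this is the same as integrating over [0, L/2] only -- the point of post #11.
A = np.array([trap(np.sqrt(2.0 / L) * np.sin(m * np.pi * x / L) * psi)
              for m in range(1, M + 1)])

print(np.sum(C**2))   # close to 1 (finite truncation of the sum)
print(np.sum(A**2))   # matches sum(C^2): probability is conserved
```

Note that the $A_m$ obtained this way are nonzero for all odd $m$ as well as for $m = 2n$; the claim that only $m = 2n$ contributes came from incorrectly extending the integral over the full interval $[0, L]$.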